Cartpole Camera System – OpenCV + PS EYE + IR

We tried using colored tape before. It was okay after manual tuning, but it kind of sucked. Commercial motion tracking systems use IR cameras and retroreflectors.

We bought some retroreflective tape and put it on the pole. http://a.co/0A9Otmr

We removed our PS EYE's IR filter. The PS EYE is really cheap (~$7) and has a high framerate mode (100+ fps). People have been using it for computer vision projects for a while.

http://wiki.lofarolabs.com/index.php/Removing_the_IR_Filter_from_the_PS_Eye_Camera

We followed the instructions, except we did not add the floppy disk piece, and we sanded down the base of the lens to bring the image back into focus.

We bought an IR LED ring light, which fits over the camera once the plastic cover is removed, and rubber-banded it in place.

http://a.co/2sGUY08

If you snip out the photoresistor, the ring light is always on: the photoresistor has high resistance in the dark, so an open circuit looks like darkness to it. We powered it from a spare 12V supply that we soldered a connector onto.

We had also bought an IR pass filter on Amazon, but it does not appear to help.

Useful utilities: qv4l2 and v4l2-ctl, from the v4l-utils package. You can change lots of camera settings with them.

qv4l2 -d 1 is very useful for experimentation.

Useful options to v4l2-ctl: -d selects the camera, -p sets the framerate, and -l lists the changeable controls. You have to turn off the automatic modes before the corresponding manual controls become changeable. Counterintuitively, auto-exposure seems to use 1 for off.
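
Roughly, driving it from python looks like the sketch below. The control names exposure_auto and exposure_absolute are placeholders; they vary by driver, so read the real ones off the -l output.

    import subprocess

    DEV = "/dev/video1"  # -d selects which camera

    def v4l2(*args):
        subprocess.run(["v4l2-ctl", "-d", DEV] + list(args), check=True)

    v4l2("-p", "100")  # set the framerate
    v4l2("-l")         # print the list of changeable controls
    # Placeholder control names: check the -l output for your driver.
    # Remember, 1 appears to mean "off" for auto-exposure.
    v4l2("-c", "exposure_auto=1")
    v4l2("-c", "exposure_absolute=50")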

There has been a recent update to opencv that lets the v4l2 buffer size be changed. We're hoping this will really help with our latency issues.
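
Usage should presumably be something like this, assuming a build with that update; whether the set actually takes depends on the opencv version and capture backend.

    import cv2

    cap = cv2.VideoCapture(1)
    # A buffer of 1 means reads hand back the freshest frame, not a stale one.
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)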

A useful blog post on v4l2 capture. We use v4l2-ctl for controlling the exposure programmatically:

http://www.jayrambhia.com/blog/capture-v4l2

Oooh. The contour method + rotated rectangle is working really well for matching the retroreflective tape.

https://docs.opencv.org/3.3.1/dd/d49/tutorial_py_contour_features.html
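
The gist of the loop, as a sketch: threshold the bright IR blob, grab the biggest contour, and fit a rotated rectangle to it. The threshold value and camera index here are guesses for our setup, and the [-2] index papers over the findContours signature difference between opencv 3 and 4.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The retroreflector blooms bright under the IR ring, so a hard
        # threshold isolates it (200 is a guess; tune for your lighting).
        _, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        # [-2] grabs the contour list in both opencv 3 and opencv 4.
        contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        if contours:
            tape = max(contours, key=cv2.contourArea)  # biggest blob = the tape
            rect = cv2.minAreaRect(tape)               # ((cx, cy), (w, h), angle)
            box = cv2.boxPoints(rect).astype(np.int32)
            cv2.drawContours(frame, [box], 0, (0, 255, 0), 2)
        cv2.imshow("track", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()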

You need to reduce the video size to 320×240 if you want to hit the highest framerate of 187 fps.
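
That goes through the usual capture properties, something like the following; it's worth reading the value back to see what the driver actually accepted.

    import cv2

    cap = cv2.VideoCapture(1)
    # The PS EYE only reaches its top framerate at the reduced resolution.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    cap.set(cv2.CAP_PROP_FPS, 187)
    print(cap.get(cv2.CAP_PROP_FPS))  # what the driver actually accepted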

 

Regarding the frame delay problem from before, it's not clear that we're really seeing it. We are trying both the screen timestamp technique and a comparison against our rotary encoder. With the screen timestamp technique, it is not so clear that what we're measuring is actually latency, and if it is, it includes the latency of the monitor itself, which is irrelevant for our purposes.
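
For concreteness, the screen timestamp technique amounts to something like this sketch: draw a millisecond clock on the monitor, point the camera at it, and compare the clock value visible in a captured frame to the time the frame came back. The monitor's display lag is baked right in, which is exactly the confound.

    import time
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(1)
    while True:
        # Draw a millisecond clock; point the camera at this window.
        clock = np.zeros((200, 600, 3), np.uint8)
        cv2.putText(clock, "%.3f" % time.time(), (10, 120),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)
        cv2.imshow("clock", clock)

        ok, frame = cap.read()
        if not ok:
            break
        # The clock readable inside `frame`, minus this grab time, bounds the
        # pipeline latency, but it also includes the monitor's own lag.
        print("grabbed at %.3f" % time.time())
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break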


Aruco in opencv

So there isn't great documentation on the python bindings, as far as I can find; there are docs for the c++ bindings. Trying to do this on a Mac was a hellish uphill battle, and opencv in a virtual machine has been… hmm, actually pretty okay? In the end, I did this on my fresh new triple-boot ubuntu flash drive.

An invaluable trick is to go into the python REPL (with an opencv build that includes the contrib modules, so cv2.aruco exists) and type:
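
    import cv2
    # needs an opencv build with the contrib modules for cv2.aruco
    help(cv2.aruco)   # or dir(cv2.aruco) to just list the names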

Then you can see what all the available functions are. They're more or less self-explanatory, especially since they are described in the opencv c++ tutorials.

http://docs.opencv.org/3.1.0/d9/d6d/tutorial_table_of_content_aruco.html

I believe the python bindings are generated programmatically, and they are fairly systematic, but always a touch different from the c++ function calls. A big difference is that, typically, the python calls don't modify in place; results come back as return values instead.

Anyway, to get you up and running, I cobbled together some really basic code. It can generate a tag and save it.
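
Something along these lines, written against the opencv 3.x aruco bindings; the dictionary and marker id are arbitrary choices.

    import cv2
    import cv2.aruco as aruco

    # Pick a predefined dictionary and draw marker id 1 at 700x700 pixels.
    aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)
    img = aruco.drawMarker(aruco_dict, 1, 700)
    cv2.imwrite("test_marker.jpg", img)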

And this is a basic program to detect the markers.
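
Again a sketch against the 3.x bindings; the camera index and dictionary are assumptions.

    import cv2
    import cv2.aruco as aruco

    cap = cv2.VideoCapture(0)
    aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)
    parameters = aruco.DetectorParameters_create()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, rejected = aruco.detectMarkers(gray, aruco_dict,
                                                     parameters=parameters)
        print(ids)  # the aforementioned print-statement cruft
        frame = aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()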

They are sprinkled with the requisite garbage and cruft of me wiggling around with print statements to figure out what everything is.

It sounds like more of what I want is Aruco boards. They sound good. I'm looking into using them, maybe for robot configuration sensing.

Some opencv testing code

I made a little module for more controlled and programmatic testing of tracking algorithms and such.

I could use real-world data, like a video recording, but I'd like to start here. I think this is smart. I also could have used a more complicated 3d imaging package. vpython makes sense, since it is easy, but getting programmatic access to its images is unsupported somehow, as far as I can tell. Now, something that works here won't necessarily transfer over to real video, even after I add noise and point mismatch, but it should simplify some things. I've been having more trouble than makes sense to me getting good rotations out of a KLT tracker that is clearly doing a pretty bang-up job.
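
The gist of the module is something like this toy version: spin a synthetic 3d point cloud, project it through a pinhole camera, and feed the resulting perfectly-matched 2d tracks to whatever tracker or rotation estimator is under test. All the numbers here are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.standard_normal((50, 3))    # a random rigid point cloud
    f, cx, cy = 300.0, 160.0, 120.0          # rough 320x240 pinhole intrinsics

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def project(pts):
        # Push the cloud out in front of the camera, then pinhole-project.
        pts = pts + np.array([0.0, 0.0, 5.0])
        return np.stack([f * pts[:, 0] / pts[:, 2] + cx,
                         f * pts[:, 1] / pts[:, 2] + cy], axis=1)

    # 30 frames of the cloud turning through 45 degrees: ground-truth 2d tracks.
    tracks = [project(points @ rot_z(t).T) for t in np.linspace(0, np.pi / 4, 30)]
    # Add noise and drop points here to stress the estimator before real video.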