Annihilating My Friend Will with a Python Fluid Simulation, Like the Cur He Is

Will, SUNDER!
A color version

As part of my random walk through topics, I was playing with shaders. I switched over to python because I didn’t feel like hacking out a linear solver.

There are a number of different methods for simulating fluids. I had seen Dan Piponi's talk on youtube, where he describes Jos Stam's stable fluids, and it made it all seem pretty straightforward. Absolutely PHENOMENAL talk. Check it out! We need to:

  • 1. Apply forces. I applied a gravitational force proportional to the total white of the image at that point.
  • 2. Project the velocity to be divergence free. This makes it an incompressible fluid. We may also want to project the velocity to be zero on the boundaries. I’ve done a sketchy job of that. This requires solving a Laplace equation. A sketch:
    • v_{orig} = v_{incomp} + \nabla w
    • \nabla \cdot v_{incomp}=0
    • \nabla ^2 w = \nabla \cdot v_{orig}. Solve for w
    • v_{incomp}=v_{orig} - \nabla w
  • 3. Advect using interpolation. Advect backwards in time: use v(x,t+dt) \approx v(x-v(x)*dt,t) rather than v(x,t+dt) \approx v(x,t)+dv(x,t)*dt. This is intuitively related to the fact that backward Euler is more stable than forward Euler. Scipy has a very convenient function for this step (a sketch of the whole update follows this list): https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.map_coordinates.html#scipy.ndimage.map_coordinates
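Here is a minimal sketch of one whole update along those lines, on a single collocated grid with made-up names (my actual field layout is staggered, as described below):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field, vx, vy, dt):
    # Semi-Lagrangian advection: v(x, t+dt) ~ v(x - v(x)*dt, t).
    # Trace each grid point backwards along the flow and interpolate.
    ny, nx = field.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    coords = np.stack([ys - vy * dt, xs - vx * dt])
    return map_coordinates(field, coords, order=1, mode='nearest')

def step(density, vx, vy, dt=0.1, g=1.0):
    vy = vy + g * density * dt    # 1. gravity proportional to whiteness
    # 2. the incompressibility projection would go here (see below)
    vx, vy = advect(vx, vx, vy, dt), advect(vy, vx, vy, dt)  # 3. advect velocity
    density = advect(density, vx, vy, dt)                    # ...and the dye
    return density, vx, vy
```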

Given those basic ideas, I was flying very much by the seat of my pants. I wasn’t really following any other codes. I made this to look cool. It is not a scientific calculation. I have no idea what the error is like. With a critical eye, I can definitely spot weird oscillatory artifacts. Maybe a small diffusion term would help?

When you solve for the correction necessary to make the velocity incompressible (\nabla \cdot v = 0), add the correction ONLY to the original field. The incompressibility solve itself smooths out the original velocity field some, and you probably don’t want that. By adding only the correction to the original field, you maintain the details in the original.

When you discretize a domain, there are vertices, edges, and faces in your discretization. It is useful to think about which of these your field values (velocity, pressure, electric field, etc.) should live on. I take it as a rule of thumb that if you do the discretization naturally, you are more likely to get a good numerical method. For example, I discretized my velocity field in two ways. A very natural way is on the edges of the graph, because velocity is really a stand-in for material flux. The x component of the velocity belongs on the x oriented edges of the graph and the y component of velocity on the y oriented edges. If you count edges, this means the two components actually live in arrays with different dimensions: along each direction there is one fewer edge than there are vertices.

This grid has 6×4 vertices, but the vx edges are 6×3, the vy edges are 5×4, and the boxes are a 5×3 grid.

For each box, we want to constrain that the sum of velocities coming out = 0. This is the discretization of the \nabla \cdot v = 0 constraint. I’m basing this on my vague recollections of discrete differential geometry and some other things I’ve seen. That fields sometimes live on the edges of the discretization is very important for gauge fields, if that means anything to you. I did not try it another way, so maybe it is an unnecessary complication.

Since I needed velocities at the vertices of the grid, I do have a simple interpolation step from the vertices to the edges. So I have velocities being computed at both places. The one that is maintained between iterations lives on the vertices.
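To make the shapes concrete, here is a sketch under my reading of the figure (quoting sizes as width × height, so the 6×4 vertex grid is an (ny, nx) = (4, 6) array; the helper names are mine, not from the post):

```python
import numpy as np

# Vertices: (ny, nx) = (4, 6). vx lives on the (ny-1, nx) edge set and vy on
# the (ny, nx-1) set, matching the 6x3 and 5x4 counts in the figure.
def vertex_to_edges(vx_vert, vy_vert):
    # Interpolate vertex values onto an edge by averaging its two endpoints.
    vx_edge = 0.5 * (vx_vert[1:, :] + vx_vert[:-1, :])   # (ny-1, nx)
    vy_edge = 0.5 * (vy_vert[:, 1:] + vy_vert[:, :-1])   # (ny, nx-1)
    return vx_edge, vy_edge

def box_divergence(vx_edge, vy_edge):
    # Net flux out of each box: right minus left, plus bottom minus top.
    # One number per box: shape (ny-1, nx-1), i.e. the 5x3 grid of boxes.
    return ((vx_edge[:, 1:] - vx_edge[:, :-1])
          + (vy_edge[1:, :] - vy_edge[:-1, :]))
```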

At small resolutions the code runs in real time. For the videos I made, it is probably running ~10x slower than real time. I guarantee you can make it better. Good enough for me at the moment. An FFT based Laplace solver would be fast. Could also go into GPU land? Multigrid? Me dunno.
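For what it's worth, the FFT version of the Laplace solve would look roughly like this, assuming periodic boundaries (which is not what I actually want at the walls):

```python
import numpy as np

def solve_poisson_fft(rhs):
    # Solve lap(w) = rhs with periodic boundaries: the discrete Laplacian
    # is diagonal in Fourier space, so just divide by its eigenvalues.
    ny, nx = rhs.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    # Eigenvalues of the 1D second-difference stencil, summed over x and y.
    lam = (2 * np.cos(kx)[None, :] - 2) + (2 * np.cos(ky)[:, None] - 2)
    lam[0, 0] = 1.0                # the constant mode is arbitrary
    w_hat = np.fft.fft2(rhs) / lam
    w_hat[0, 0] = 0.0              # pin it to zero instead of dividing by 0
    return np.real(np.fft.ifft2(w_hat))
```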

I tried using cvxpy for the incompressibility solve, which gives a pleasant interface and great power of adding constraints, but I wasn’t getting good results. I may have had a bug.

This is some code just to perform the velocity projection step and plot the field. I performed the projection to zero on the boundaries using an alternating projection method (as discussed in Piponi’s talk), which is very simple and flexible but inefficient. It is probably a lot more appropriate when you have strange changing boundaries. I could have built the K matrix system to do that too.

The input velocity field is spiraling outwards (not divergence free; there is a fluid source in the center).
We extract the divergence free part of that velocity field, and also project it so that the velocity does not point out of the boundary. Lookin good.
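Here is a sketch of that alternating projection on a collocated grid, using the same illustrative names as before (the staggered version differs only in the divergence and gradient stencils):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def laplacian_2d(ny, nx):
    # 2D Laplacian as a Kronecker sum of 1D second-difference operators.
    # For a C-order ravel of an (ny, nx) array the nx operator goes first;
    # getting this backwards is exactly the ordering headache mentioned below.
    def lap1d(n):
        return sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    return sparse.kronsum(lap1d(nx), lap1d(ny)).tocsc()

ny, nx = 64, 64
solve_lap = splu(laplacian_2d(ny, nx)).solve  # factor once, reuse every frame

def project(vx, vy, n_iters=20):
    # Alternate two projections: (1) divergence free, (2) zero normal
    # velocity at the walls. Collocated-grid sketch, not the staggered code.
    for _ in range(n_iters):
        div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
        w = solve_lap(div.ravel()).reshape(ny, nx)
        gy, gx = np.gradient(w)
        # Subtract only the correction from the ORIGINAL field.
        vx, vy = vx - gx, vy - gy
        vx[:, 0] = vx[:, -1] = 0.0
        vy[0, :] = vy[-1, :] = 0.0
    return vx, vy
```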

Presolving (factoring) the Laplacian matrix vastly sped up each iteration. Makes sense.

Why in god’s name does sparse.kronsum have the argument ordering it does? I had a LOT of trouble with x vs y ordering. np.meshgrid wasn’t working like I thought it should. Images might have a weird convention? What a nightmare. I think it’s ok now? Looks good enough anyway.
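For posterity, the meshgrid half of the confusion is the indexing argument:

```python
import numpy as np

# Default 'xy' (Cartesian) indexing swaps the first two axes relative to
# array index order; 'ij' matches arr[row, col] directly.
ys, xs = np.meshgrid(np.arange(4), np.arange(6), indexing='ij')
print(ys.shape)  # (4, 6) with 'ij'; it would be (6, 4) with the default 'xy'
```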

And here is the code to make the video. I converted the image sequence to an mp4 using ffmpeg.

Code to produce the velocity graphs above.

I should give a particle in cell code a try.


Cartpole Camera System – OpenCV + PS EYE + IR

We tried using colored tape before. It was okay after manual tuning, but kind of sucked. Commercial motion tracking systems use IR cameras and retroreflectors.

We bought some retroreflective tape and put it on the pole. http://a.co/0A9Otmr

We removed our PS EYE’s IR filter. The PS EYE is really cheap (~$7) and has a high framerate mode (100+ fps). People have been using it for computer vision projects for a while.

http://wiki.lofarolabs.com/index.php/Removing_the_IR_Filter_from_the_PS_Eye_Camera

We followed the instructions, except we did not add the floppy disk filter, and we sanded down the base of the lens to bring the image back into focus.

We bought an IR LED ring light which fit over the camera with the plastic cover removed and rubber banded it in place.

http://a.co/2sGUY08

If you snip the photoresistor, the light is always on, since the photoresistor is high resistance in the dark. We used a spare 12V power supply that we soldered a connector onto.

We had also bought an IR pass filter on amazon, but it does not appear to help.

Useful utilities: qv4l2, v4l2-ctl and v4l2-utils. You can change lots of stuff.

qv4l2 -d 1 is very useful for experimentation.

Useful options to v4l2-ctl: -d selects the camera, -p sets the framerate, and -l gives a list of changeable options. You have to turn off the automatic stuff before a setting becomes changeable. Counterintuitively, auto-exposure seems to have 1 as off.

There has been a recent update to opencv that lets the v4l2 buffer size be changed. We’re hoping this will really help with our latency issues.
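With a new enough opencv it should just be a property on the capture (device index hypothetical):

```python
import cv2

cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # keep only the freshest frame around
```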

A useful blog. We use v4l2-ctl for controlling the exposure programmatically

http://www.jayrambhia.com/blog/capture-v4l2
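Our programmatic control is just shelling out to v4l2-ctl. A sketch; the control names are the common UVC ones and can vary by camera:

```python
import subprocess

def set_exposure(device, value):
    # Disable auto-exposure first (1 seems to mean "manual" here),
    # then set the absolute exposure.
    subprocess.run(["v4l2-ctl", "-d", device,
                    "-c", "exposure_auto=1"], check=True)
    subprocess.run(["v4l2-ctl", "-d", device,
                    "-c", "exposure_absolute={}".format(value)], check=True)

set_exposure("/dev/video1", 50)
```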

Oooh. The contour method + rotated rectangle is working really well for matching the retroreflective tape.

https://docs.opencv.org/3.3.1/dd/d49/tutorial_py_contour_features.html
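The gist of it, sketched (the threshold value is a placeholder to tune):

```python
import cv2

def find_tape(gray, thresh=200):
    # Bright IR blob -> binary mask -> contours -> rotated rectangle.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # findContours returns (im, contours, hierarchy) in opencv 3.x but
    # (contours, hierarchy) in 4.x; [-2] picks out contours either way.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    return cv2.minAreaRect(biggest)  # ((cx, cy), (w, h), angle)
```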

You need to reduce the video size to 320×240 if you want to go to the highest framerate of 187 fps.
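Something like this, though read the properties back to see what the driver actually accepted:

```python
import cv2

cap = cv2.VideoCapture(1)                 # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_FPS, 187)
print(cap.get(cv2.CAP_PROP_FPS))          # did we actually get 187?
```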

 

In regards to the frame delay problem from before, it’s not clear that we’re really seeing it? We are attempting both the screen timestamp technique and also comparing to our rotary encoder. In the screen timestamp technique, it is not so clear that what we measure there is latency, and if it is, it includes the latency of the monitor itself, which is irrelevant.


 

Aruco in opencv

So there isn’t great documentation on the python bindings as far as I can find. There are docs on the c++ bindings.  Trying to do this on a mac was a hellish uphill battle, and opencv in the virtual machine has been… hmm actually pretty okay? Well, I did this on my fresh new triple boot ubuntu flash drive.

It is invaluable to go into the python REPL and poke at the aruco module directly.
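Either of the standard introspection builtins does the job:

```python
import cv2       # needs the opencv-contrib build for cv2.aruco
help(cv2.aruco)  # long-form docs
dir(cv2.aruco)   # or just list the names
```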

Then you can see what all the available functions are. They’re more or less self explanatory, especially since they are described in the opencv c++ tutorials.

http://docs.opencv.org/3.1.0/d9/d6d/tutorial_table_of_content_aruco.html

I believe the python bindings are generated programmatically, and they are fairly systematic, but always a touch different from the c++ function calls. A big difference is that typically the python calls don’t modify in place.

Anyway, to get you up and running, I cobbled together some really basic code. It can generate a tag and save it.

And this is a basic program to detect the markers.

They are sprinkled with the requisite garbage and cruft of me wiggling around with print statements to figure out what everything is.
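The skeleton of both programs boils down to something like this, using the 3.x-era cv2.aruco contrib API (camera index and marker id are arbitrary):

```python
import cv2
aruco = cv2.aruco

# Generate a tag and save it (dictionary, marker id, and size are arbitrary).
dictionary = aruco.Dictionary_get(aruco.DICT_6X6_250)
cv2.imwrite("tag23.png", aruco.drawMarker(dictionary, 23, 200))

# Detect markers coming off the default camera.
cap = cv2.VideoCapture(0)
params = aruco.DetectorParameters_create()
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = aruco.detectMarkers(gray, dictionary,
                                                 parameters=params)
    aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("aruco", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```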

It sounds like more of what I want is to use Aruco boards. They sound good. I’m looking into using this for maybe robot configuration sensing.


Some opencv testing code

I made a little module to have a more controlled and programmatic testing of tracking algorithms and stuff.

I could use real world data, like a video recording, but I’d like to start here. I think this is smart. I also could have used a more complicated 3d imaging package. vpython makes sense, since it is easy, but getting programmatic access to the images is unsupported somehow, as far as I can tell. Now, there is no way something that works here will necessarily transfer over to real video, even after I add noise and point mismatch, but it should simplify some things. I’ve been having more trouble than makes sense to me getting good rotations off of a KLT tracker that is clearly doing a pretty bang-up job.
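The core of the module is just projecting a known 3d point cloud with a known pose and checking whether the recovered rotation matches. A sketch, with made-up intrinsics and a hypothetical tracker under test:

```python
import cv2
import numpy as np

# Made-up pinhole intrinsics and a random cloud of points in front of the camera.
K = np.array([[500.0,   0.0, 160.0],
              [  0.0, 500.0, 120.0],
              [  0.0,   0.0,   1.0]])
pts3d = np.random.randn(50, 3) + np.array([0.0, 0.0, 5.0])

# Known ground-truth pose.
rvec_true = np.array([0.1, 0.2, 0.0])
tvec_true = np.zeros(3)
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)

# Corrupt the measurements, hand them to the tracker / pose recovery under
# test, and compare whatever rotation it reports against rvec_true.
pts2d_noisy = pts2d + 0.5 * np.random.randn(*pts2d.shape)
```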