## ROS: Robot Operating System

I was following the Mac install instructions, but the pip step failed. I turned off the Anaconda distro I'd recently installed by commenting out its line in my ~/.profile, then tried:

```
brew info assimp
brew install ros/deps/pyassimp
```

I give up on Mac. It sucks. Using an Ubuntu VirtualBox VM instead. Setup is super easy, mostly just apt-get.

http://wiki.ros.org/ROS/Tutorials

So what is ROS? Well, my impression is that its focus is kind of like Unix piping. It's a uniform interface you can use to ram little chunks of programs into one another.
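To make the piping analogy concrete, here's a tiny in-process sketch of the publish/subscribe pattern ROS topics use. This is not the ROS API at all (no rospy), just the shape of the idea: a named topic, with independent publishers and subscribers that don't know about each other.

```python
# A toy topic: callbacks subscribe, messages fan out to all of them.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, msg):
        for cb in self.subscribers:
            cb(msg)

chatter = Topic("/chatter")
received = []
chatter.subscribe(received.append)   # a "listener" node
chatter.subscribe(lambda m: None)    # another independent consumer
chatter.publish("hello")             # a "talker" node
print(received)  # -> ['hello']
```

In real ROS the topics cross process (and machine) boundaries, which is what makes the piping picture useful.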

But the community also has a bunch of robotics programs:

- rviz (3D visualization)
- urdf (a robot model description format)
- gazebo (physics simulation)
- pcl (the Point Cloud Library)

Check this out: http://wiki.ros.org/visp. Depending on how well it works, this could be pretty cool.

You need roscore running in another terminal for anything to work.

Tried the install instructions. Nope.

Okay, the problem I had was that a weirdo Python distro had been installed doing something else (goddamn you, Moose. What gives you the right?), so I removed it and reran all the workspace creation stuff. Now we're good.

## Visual Odometry and Epipolar Stuff

I went on a huge projective geometry kick a few years back. I loved it when I found out about it.

Anyway, I've forgotten a great deal of it now. I'm looking into using cameras to determine the displacement of a robot.

https://en.wikipedia.org/wiki/Essential_matrix

Useful stuff

So if we translate and rotate 3d vectors we have

$x' = Rx+t$

Taking the cross product of both sides with $t$, and using

$t\times t=0$

we get

$t \times x' = t \times R x$

Since whatever comes out of a cross product is perpendicular to what went in,

$x' \cdot (t \times x') = 0 = x' \cdot (t \times Rx)$

Now we can just interpret these 3D vectors as 2D projective plane vectors from two different camera viewpoints (homogeneous coordinates).

This last equation is the epipolar constraint. Writing $E$ for the matrix form of $t \times R$ (the essential matrix), it reads $x'^T E x = 0$. It describes what the ray that projects to the object from camera 1 looks like in camera 2, and that ray is going to be a line. So the essential matrix converts points in one camera to lines in the other camera.

Ok.

Then how do you get $R$ and $t$ back from point correspondences? A cross product can be represented as a skew-symmetric matrix (it's a linear map that takes a vector to another vector):

$t\times v = [t]_x v$

The cross product kills one vector ($t$ itself), so $t$ is an eigenvector of this matrix with eigenvalue 0.

And it flips two other vectors with some minus signs. In an orthonormal basis with $t$ as the third element, it would look something like

$\begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

So when we take the SVD of $E$, written $E = USV^T$, it will have two equal singular values and one zero singular value.

The matrix $V^T$ transforms from the x frame to the t frame.

The matrix U transforms from the t frame to the x’ frame.

We can just unflip the two vectors switched up by the cross product in the $t$ basis (the matrix that does this is called $W$) and compose with $U$ and $V$ to reconstitute $R = UWV^T$ (there's a twofold ambiguity; $UW^TV^T$ also works).

Then we can find $[t]_x=ER^{-1}$ (and $R^{-1} = R^T$, since $R$ is a rotation).
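A quick numerical check of the recipe above, sketched with numpy: build $E = [t]_x R$ from a known rotation and translation, then recover the candidate rotations and the translation direction from the SVD. Variable names are mine; a real pose recovery would also pick among the candidate solutions by checking that triangulated points land in front of both cameras.

```python
import numpy as np

def skew(t):
    # [t]_x: the matrix such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Ground-truth pose: a rotation about z and a unit translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 0.5])
t_true /= np.linalg.norm(t_true)

E = skew(t_true) @ R_true

U, S, Vt = np.linalg.svd(E)   # S is approx (1, 1, 0) since ||t|| = 1

# W unflips the two basis vectors swapped by the cross product
W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

R1 = U @ W @ Vt
R2 = U @ W.T @ Vt
# Fix signs so the candidates are proper rotations (det = +1)
if np.linalg.det(R1) < 0:
    R1 = -R1
if np.linalg.det(R2) < 0:
    R2 = -R2

# t (up to sign and scale) is the left singular vector of the zero singular value
t_hat = U[:, 2]
```

One of R1/R2 matches the true rotation, and t_hat is parallel to the true translation direction.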

Note that the t you find is unknown up to scale. You can only find the direction of translation, not how far.

The problem comes from the fact that you can't tell the difference between images of dollhouses and real houses. There is an unavoidable scale ambiguity. BUT, one would have estimates from all sorts of other places: context maybe, other sensors, current estimates of velocity, estimates of the actual 3D positions of the points you're using.

Makes sense, I think. Now I have to actually try it.

## Getting goddamn wifi on the goddamn orange pi pc

I tried the Lubuntu and Raspbian distros; maybe another one would work out of the box. I've heard rumblings that an OpenELEC distro might work with wifi, and that 8188eu chipsets work out of the box. Can't confirm either.

http://www.armbian.com/orange-pi-pc/

I used the legacy Jessie server 3.4.112 build from there and unpacked it using Keka.

Installed it onto the SD card using dd: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md
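For reference, the flashing recipe from those instructions looks roughly like this on a Mac. The disk number and image filename are placeholders, not my actual values; check `diskutil list` first, because dd to the wrong device destroys it:

```
diskutil unmountDisk /dev/disk2
sudo dd if=Armbian_jessie.img of=/dev/rdisk2 bs=1m
```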

Made the wifi drivers as described here

http://forum.armbian.com/index.php/topic/749-orange-pi-pc-wireless-module-8192cu/

`make scripts` failed at first. After getting the scripts to build, I mostly followed the instructions in the GitHub repo itself:

https://github.com/pvaret/rtl8192cu-fixes

I had to edit the kernel files to comment stuff out, as discussed here:

http://forum.armbian.com/index.php/topic/1121-unable-to-build-kernel-scripts/

Seemed to work.

(Putting the wifi dongle in the lower usb port might be important? I’ve heard reports of that)

`iwlist wlan0 scan`

finds the local router.

My Edimax RTL8188CUS adapter does appear to be close to working now. I got connected to the router by changing the /etc/network/interfaces file, but I can't ping external IPs. TBD if everything is actually fixed.

Edit:

http://www.cyberciti.biz/faq/linux-setup-default-gateway-with-route-command/

The gateway was misconfigured in /etc/network/interfaces. I think until I fixed this it didn't find the wifi unless you logged on over ethernet first.
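For reference, a wpa stanza in /etc/network/interfaces with an explicit gateway looks something like this. The addresses, SSID, and passphrase are placeholders, not my actual config:

```
auto wlan0
iface wlan0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    wpa-ssid "my-network"
    wpa-psk "my-passphrase"
```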

## A Stack Monad in Python

So I saw a stack used as an example of the State monad.

I think I get the Maybe monad and some others, but the State monad is one more level of abstraction that still confuses me. When I try to reason about it, I sometimes fail to see the point. Pure state-to-state functions are immediately composable; also returning a value alongside the new state usually doesn't seem necessary.

The actual monad is the full state-transforming, value-returning function. This is very abstract.

Most functions you'll be writing return a function that takes a state and outputs a (value, state) pair. It's weird.

Thought I'd try doing it in Python. It is sticky. Maybe I need to churn through it a bit more. The fact that I need to keep putting those throwaway functions in there seems strange. Perhaps I need a bit more thought. Although do notation does often expand out to that, I think.

It is neato how the associativity kind of works.

Edit:

I later realized I had forgotten >>, aka then.

Pretty straightforward: just add the throwaway lambda inside a function. You can do this manually or by deferring to bind (which is better, since then basically just contains the logic of bind).
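Here's roughly what I mean, sketched in Python. A stateful computation is a function from a stack to a (value, new stack) pair; bind threads the stack through, and then is just bind with a throwaway lambda:

```python
# A State monad sketch, with a stack (a list) as the state.
# A "computation" is a function: stack -> (value, new_stack).

def unit(x):
    """Wrap a plain value; leaves the stack untouched."""
    return lambda stack: (x, stack)

def bind(m, f):
    """Run m, feed its value to f, thread the stack through."""
    def run(stack):
        val, stack2 = m(stack)
        return f(val)(stack2)
    return run

def then(m, k):
    """>> : sequence two computations, discarding the first value."""
    return bind(m, lambda _: k)

def push(x):
    return lambda stack: (None, [x] + stack)

pop = lambda stack: (stack[0], stack[1:])

# Push 1, push 2, pop twice, add the popped values.
prog = then(push(1), then(push(2),
        bind(pop, lambda a:
        bind(pop, lambda b:
        unit(a + b)))))

print(prog([]))  # -> (3, [])
```

The throwaway lambda in then is exactly the thing that felt strange above; do notation in Haskell expands to the same chain of lambdas.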

## Keras and Learning Sine

One perspective on machine learning is that it is interpolation, curve fitting.

So I decided to try to fit sin(x) using a neural network.

Linear layers with ReLU units will make piecewise linear curves.
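You can see the piecewise linear picture without any training at all: fix the hidden-layer ReLU kinks at evenly spaced knots and solve for the output weights by least squares. This sidesteps the optimizer entirely; it's a sketch of the function class a one-hidden-layer ReLU network lives in, not of what Keras actually does during training:

```python
import numpy as np

# A one-hidden-layer ReLU network computes a piecewise linear function.
# Approximate sin(x) on [0, 2*pi] with features relu(x - c) at fixed
# knots c, fitting only the output layer by least squares.

x = np.linspace(0, 2 * np.pi, 200)
knots = np.linspace(0, 2 * np.pi, 20)

H = np.maximum(0.0, x[:, None] - knots[None, :])   # hidden activations
H = np.hstack([H, np.ones((len(x), 1))])           # output bias column

w, *_ = np.linalg.lstsq(H, np.sin(x), rcond=None)  # output weights
pred = H @ w
err = np.max(np.abs(pred - np.sin(x)))             # well under 0.05
```

With the kinks placed sensibly the piecewise linear fit is already quite good, which suggests the trouble in training is the optimizer finding good kink locations, not the expressiveness of the network.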

It works, just not well.

Deep networks tend to get kind of hung up.

The final result is never all that great. It seems to have a lot of trouble at the minima and maxima of sine, probably because a ReLU is not a good model for what is happening at those points. And it takes a lot of iterations for the rest to look pretty good. Perhaps the optimizer needs more fine tuning; I have not looked inside the box much at all. Maybe a more aggressive optimizer would converge faster for this very simple task.

Here I plotted the result at intermediate steps of convergence. The accuracy tends to propagate out from 0 and sort of wrap around the curves. I wonder if this is due to where the network is initialized.