## USB nRF24L01

I bought a cheap USB-to-nRF24L01 adapter on AliExpress.

An alternative is to use an Arduino Nano or similar with https://github.com/TMRh20/RF24

That would be much more flexible too.

However, the convenience of just being able to pop in the little man is nice.

The module plugs in pointing away from the USB port. Red and blue LEDs turn on.

Documentation is very poor, but apparently it takes AT commands.

I found this command list on a French eBay listing, and it seems to be accurate. The actual seller I got it from suggested AT+ADD to set the address, which does not work.

AT Commands
Baudrate : AT+BAUD=n where n = 1-6 (1:4800, 2:9600, 3:14400, 4:19200, 5:38400, 6:115200) (default 9600 baud)
NRF Rate : AT+RATE=n where n = 1-3 (1:250K, 2:1M, 3:2M) (default 2Mbps)
Local Address : AT+RXA=0Xnn,0Xnn,0Xnn,0Xnn,0Xnn where the nn are the local receiving address bytes (default 0xFF,0xFF,0xFF,0xFF,0xFF)
Operating Freq. : AT+FREQ=2.nnnG where nnn = 400-525 (default 2.400G)
Checksum mode : AT+CRC=n where n = 8/16 (default: 16 bit)
System info : AT?
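If you'd rather script the setup than type into a terminal, these commands are easy to drive from Python. A minimal sketch, assuming the pyserial package and a device path like /dev/ttyUSB0 (the exact path, the CR/LF terminator, and the reply format are all assumptions to check against your adapter):

```python
# Build and send AT commands over the adapter's CH340 serial port.
# The command framing (trailing CR/LF) is an assumption; adjust if the
# firmware wants something else.

def at_command(name, value=None):
    """Build a command like b'AT+BAUD=2\r\n', or b'AT?\r\n' for queries."""
    if value is None:
        return ("AT%s\r\n" % name).encode("ascii")
    return ("AT+%s=%s\r\n" % (name, value)).encode("ascii")

if __name__ == "__main__":
    import serial  # pyserial; pip install pyserial
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
        port.write(at_command("RATE", 1))   # drop air rate to 250K
        print(port.read(64))                # adapter's (Chinese) acknowledgement
        port.write(at_command("?"))         # system info dump
        print(port.read(256))
```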

AT? gets

(The raw output is GBK-encoded Chinese, which comes out as mojibake in a UTF-8 terminal. Decoded and translated, it reads roughly:)

System info:
Baud rate: 9600
Target transmit address: 0xFF,0xFF,0xFF,0xFF,0xFF
Local receive address 0: 0xFF,0xFF,0xFF,0xFF,0xFF
Operating frequency: 2.400GHz
Checksum mode: 16-bit CRC
Transmit power: 0dBm
Air data rate: 2Mbps
Low-noise amplifier gain: on


Probably that is Chinese, but the fields are guessable: the second address is the local RX address and the first is the TX target. I checked by changing them with AT+RXA= commands.

It has an STC chip on it, an STC11L04E. They've included source code that does not accept AT commands, so is there a way to reprogram the chip? After looking at the datasheet it seems plausible, but not worth the energy.

And the USB-to-serial chip is the ubiquitous CH340.

Put one on each of two computers, or both on the same computer.

In two terminal windows type

screen /dev/cu.wchblah blah blah 9600

where the blah blah blah can be found by running ls /dev and looking for the full device name.

Typing in one window just pipes right over to the other man (9600 baud). Pretty good.
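The nRF24L01 radio itself tops out at 32-byte payloads, so if you script the sending instead of typing, it may be worth chunking longer messages yourself. A sketch assuming pyserial and a hypothetical device path (whether the adapter firmware already fragments longer writes on its own, I haven't checked):

```python
# Split a message into nRF24L01-sized payloads (the radio's max is 32 bytes)
# and push them out the serial port with a small gap between chunks.
import time

PAYLOAD = 32

def chunks(data, size=PAYLOAD):
    """Split bytes into size-byte pieces."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def send(port, msg):
    # port is a pyserial Serial object (hypothetical usage)
    for piece in chunks(msg):
        port.write(piece)
        time.sleep(0.01)  # let the radio drain before the next payload

if __name__ == "__main__":
    import serial  # pyserial; pip install pyserial
    with serial.Serial("/dev/cu.wchusbserial", 9600, timeout=1) as p:
        send(p, b"hello from the other computer")
```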

Edit: Check out Pete’s post below for more info. Apparently there are version differences between boards.

## Some Projective Geometry for our laser scanner

We’re building a simple laser scanner: a camera attached to a board, with a line laser hot-glued to a servo on the same board.

The line laser will scan. The laser line defines a plane, and a pixel on the camera corresponds to a ray originating from the camera. Where the ray hits the plane is the 3D location of the scanned point.

Projective geometry is mighty useful here. To take a point to homogeneous coordinates, append a 1 as a fourth coordinate.

Points with 0 in the fourth coordinate are directions, aka points on the plane at infinity (a funky projective geometry concept, but very cool).

Planes are also defined by 4 coordinates. The first 3 coordinates are the normal vector $a$ of the plane $a \cdot x = c$; the fourth coordinate is $-c$, minus the offset from the origin, so that the plane vector dots to zero with every homogeneous point on the plane. We can also find the plane given 3 points that lie on it, which is what I do here. The trick is that the determinant of a matrix with two copies of the same row is zero. Expanding that determinant in terms of its minors (probably the formula they first teach you in school for determinants), the vector of signed minors therefore dots to zero with every point on the plane.

Then finally we find the intersection of the ray with the plane. The line is described as a linear combination of two homogeneous points on it; we just need to find the right coefficients. You can check by dotting the result onto the plane vector that you get zero.
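Concretely, if $C$ is the camera point, $d$ the pixel’s ray direction, and $p$ the plane vector, then $x = (d \cdot p)\,C - (C \cdot p)\,d$ is a combination of two points on the ray, and $p \cdot x = (d \cdot p)(p \cdot C) - (C \cdot p)(p \cdot d) = 0$, so $x$ also lies on the plane. That is exactly the formula used in the code below.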

Then we dehomogenize the coordinates by dividing by the fourth coordinate.

import numpy as np

# Origin is the camera position. z is the direction the camera is looking. x is to the right. y is up.
# Waiiiiiit. That's a left handed coordinate system? Huh. Whatever. May come out mirrored.
PCameraHomog = np.array([0., 0., 0., 1.])
# Baseline distance of the laser from the camera.
# Let's use units of meters.
PLaser = np.array([0.3, 0., 0.])

# I have measured my angle from -x going clockwise. God that is dumb.
laserAngle = 60.

PLaserHomog = np.append(PLaser, [1.])
# The laser line is vertical, so the laser plane contains the up direction (a direction: fourth coord 0)
upDirHomog = np.array([0., 1., 0., 0.])
# Direction the laser fan points, in the x-z plane, measured from -x going clockwise
theta = np.radians(laserAngle)
laserDirHomog = np.array([-np.cos(theta), 0., np.sin(theta), 0.])

planeMat = np.stack((PLaserHomog, upDirHomog, laserDirHomog))

def colminor(mat, j):
    subMat = np.delete(mat, j, axis=1)
    return (-1.)**j * np.linalg.det(subMat)

# The homogeneous vector describing the plane coming off of the line laser. p dot x = 0 if x is on the plane
laserPlaneHomog = np.array([colminor(planeMat, j) for j in range(4)])

# Should all be zero
print(np.dot(laserPlaneHomog, laserDirHomog))
print(np.dot(laserPlaneHomog, upDirHomog))
print(np.dot(laserPlaneHomog, PLaserHomog))

def pixelDir(x, y):
    # pix / f = objsize / objdist
    # f = pix * objdist / objsize
    f = 100.  # focal length in pixels: width of a 1m object at 1m, or an 8m object at 8m
    return np.array([x / f, y / f, 1., 0.])

cameraRay = pixelDir(10, 20)

# pos is on the line between the camera position and the ray direction, and lies on the laser plane.
# Hence pos dot plane = 0, which you can see will happen.
posHomog = np.dot(cameraRay, laserPlaneHomog) * PCameraHomog - np.dot(PCameraHomog, laserPlaneHomog) * cameraRay
print(posHomog)

def removeHomog(x):
    return x[:3] / x[3]

pos3 = removeHomog(posHomog)

print(pos3)
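As a standalone cross-check on the minors construction (the numbers here mirror the setup above, but it is independent of the scanner code): the first three components of the plane vector should be the cross product of the two spanning directions, and the fourth should be minus that normal dotted with a point on the plane.

```python
import numpy as np

def plane_from_minors(mat):
    # plane_j = (-1)^j * det(mat with column j deleted); the rows of mat are
    # homogeneous points/directions spanning the plane
    return np.array([(-1.) ** j * np.linalg.det(np.delete(mat, j, axis=1))
                     for j in range(4)])

P = np.array([0.3, 0., 0., 1.])               # a point on the plane
U = np.array([0., 1., 0., 0.])                # first spanning direction
D = np.array([-0.5, 0., np.sqrt(3) / 2, 0.])  # second spanning direction

plane = plane_from_minors(np.stack((P, U, D)))

n = np.cross(U[:3], D[:3])             # normal via cross product
ref = np.append(n, -np.dot(n, P[:3]))  # (n, -n . p)

print(plane)
print(ref)  # same vector
```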



## Some opencv testing code

I made a little module for more controlled and programmatic testing of tracking algorithms and stuff.

I could use real-world data, like a video recording, but I’d like to start here; I think this is smart. I also could have used a more complicated 3D imaging package. vpython makes sense since it is easy, but programmatic access to its rendered images is unsupported somehow, as far as I can tell. Now, there is no guarantee that something that works here will transfer over to real video, even after I add noise and point mismatch, but it should simplify some things. I’ve been having more trouble than makes sense to me getting good rotations off of a KLT tracker that is clearly doing a pretty bang-up job.

import cv2
import numpy as np

class MyCam():
    def __init__(self, frameSize=(480, 640), focus=600, avgPointPos=np.array([0, 0, 3]), sigma=.5, pointNum=300):
        self.pointCloud = sigma * np.random.randn(pointNum, 3) + avgPointPos
        self.t = np.zeros(3)
        self.R = np.identity(3)
        self.frameSize = frameSize
        self.focus = focus

    def render(self):
        pnts = np.array(self.projectPoints())
        frame = np.zeros(self.frameSize + (3,), dtype=np.uint8)
        for pnt in pnts.astype(int):
            if 0 < pnt[0] < self.frameSize[0] and 0 < pnt[1] < self.frameSize[1]:
                cv2.circle(frame, tuple(pnt), 5, [0, 0, 255], -1)
        return frame

    def transformPoints(self):
        rotated = np.dot(self.pointCloud, self.R.T)
        translated = rotated + self.t
        return translated

    def projectPoints(self):
        transformed = self.transformPoints()
        inFrontofCameraPoints = [pnt for pnt in transformed if pnt[2] > 0]
        return [self.focus * pnt[:2] / pnt[2] + np.array(self.frameSize) / 2 for pnt in inFrontofCameraPoints]

cam = MyCam()

angle = .1

rotateZ = np.array([[np.cos(angle), np.sin(angle), 0],
                    [-np.sin(angle), np.cos(angle), 0],
                    [0, 0, 1]])

while True:
    frame = cam.render()
    cv2.imshow('frame', frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break
    cam.R = np.dot(rotateZ, cam.R)  # spin the camera about z each frame
cv2.destroyAllWindows()
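What I actually want out of the tracker is the rotation, and the synthetic setup makes a nice test bed for that. A sketch of the standard SVD-based (Kabsch) fit on 3D point correspondences; with noiseless synthetic data it should recover the rotation essentially exactly. (Standalone numpy; the variable names here are mine, not from the module above.)

```python
import numpy as np

def recover_rotation(A, B):
    """Least-squares rotation R such that B ~ A @ R.T (Kabsch via SVD)."""
    A0 = A - A.mean(axis=0)                # center both clouds
    B0 = B - B.mean(axis=0)
    U, S, Vt = np.linalg.svd(A0.T @ B0)    # covariance H = U S V^T
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1., 1., d]) @ U.T

np.random.seed(0)
angle = .1
Rtrue = np.array([[np.cos(angle), np.sin(angle), 0],
                  [-np.sin(angle), np.cos(angle), 0],
                  [0, 0, 1]])

cloud = np.random.randn(300, 3)
rotated = cloud @ Rtrue.T   # perfect, noiseless correspondences

print(np.allclose(recover_rotation(cloud, rotated), Rtrue))  # → True
```

Adding noise and outliers to `rotated` should then show how the estimate degrades, which is exactly the experiment the synthetic camera is for.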