Simple GnuRadio Sonar Rangefinding

 

(random_source.grc flowgraph screenshot)

A simple graph where you can see a peak move as you move the microphone closer and farther from the speaker.

Uses the FFT method to compute the cross-correlation between the source signal and the microphone signal.
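The gist in numpy terms (just a sketch of the correlation math with a faked delay, not the actual GRC flowgraph; the sample rate and numbers are made up):

```python
import numpy as np

def xcorr_fft(src, mic):
    # cross-correlation via the FFT: ifft(conj(FFT(src)) * FFT(mic));
    # the index of the peak is the delay of mic relative to src, in samples
    n = len(src) + len(mic) - 1
    S = np.fft.rfft(src, n)
    M = np.fft.rfft(mic, n)
    return np.fft.irfft(np.conj(S) * M, n)

fs = 44100                                                        # assumed audio sample rate
src = np.repeat(np.random.randint(0, 2, 4096) * 2.0 - 1.0, 2)     # random binary, repetition 2
mic = np.roll(src, 100) + 0.1 * np.random.randn(len(src))         # fake a 100-sample delay plus noise
delay = np.argmax(xcorr_fft(src, mic))
print(delay, "samples ->", 343.0 * delay / fs, "m of sound travel")
```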

The source signal is a random binary signal. The repetition of 2 is helpful, I think, because of the window applied in the Fourier transform blocks. Since a binary signal at the sampling rate has a lot of high-frequency components, I hypothesize that even a very sharp low-pass filter might hurt. Repetition ought to push the signal somewhere into the middle of the spectrum.

Suggestion:

Could use averaging to improve the signal-to-noise ratio.

 

The plan is to master the sonic case, where the electronics are much simpler, and then move on to using my LimeSDR attached to IR diodes for a DIY lidar unit. We’ll see where we get.

 

An interesting package that I should have investigated first:

https://github.com/kit-cel/gr-radar

 

Elm, Eikonal, and Sol LeWitt

We saw this cool Sol LeWitt wall at MASS MoCA. It did not escape our attention that it was basically an eikonal equation and that the weird junctures were caustic lines.

It was drawn with alternating colored marker lines, each appearing a cm away from the previous line. This is basically Huygens’ principle.

So I hacked together a demo in Elm. Elm is a Haskell-ish language for the web.

 

 

 

So I made a quick rip-and-run Elm program to do this. This is the output, which I could make more dynamic.

The algorithm is to turn a list of points into their connecting lines, move each line perpendicular to itself, then recompute the new intersection points. It’s somewhat reminiscent of Verlet integration: line coordinates are momentum-like, points are position-like, and we alternate between them. This is a finite difference version of the geometric Huygens’ principle.
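A rough numpy paraphrase of that step (the real code is Elm; the names and the intersection bookkeeping here are mine, and parallel segments aren’t handled):

```python
import numpy as np

def propagate(points, d):
    # One Huygens step: shift each connecting segment by d along its unit normal,
    # then re-intersect consecutive shifted segments to get the new point list.
    # (Parallel adjacent segments would make the solve singular; not handled here.)
    pts = np.asarray(points, float)
    dirs = pts[1:] - pts[:-1]                          # segment direction vectors
    nrm = np.stack([-dirs[:, 1], dirs[:, 0]], axis=1)  # rotate 90 degrees
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
    starts = pts[:-1] + d * nrm                        # shifted segment start points
    new = [starts[0]]
    for i in range(len(dirs) - 1):
        # solve starts[i] + t*dirs[i] == starts[i+1] + s*dirs[i+1]
        A = np.column_stack([dirs[i], -dirs[i + 1]])
        t, _ = np.linalg.solve(A, starts[i + 1] - starts[i])
        new.append(starts[i] + t * dirs[i])
    new.append(pts[-1] + d * nrm[-1])                  # shifted last endpoint
    return np.array(new)

front = np.array([[0.0, 0.0], [1.0, 0.7], [2.0, 0.0]])  # a bent initial wavefront
fronts = [front]
for _ in range(10):
    fronts.append(propagate(fronts[-1], 0.1))           # caustics appear where points cross over
```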

Alternative methods which might work better include the Fast Marching Method, or just using the wave equation and then plotting isosurfaces.

I also had to resample the function to get only the maximum y value for each x value in order to duplicate the LeWitt effect.
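Something like this, I mean (again a Python stand-in for the Elm, with made-up names):

```python
def resample_max_y(points, step=1.0):
    # Keep only the highest y for each x bucket, which is what makes later
    # wavefronts hide behind earlier ones the way the wall drawing does.
    best = {}
    for x, y in points:
        k = round(x / step)
        if k not in best or y > best[k][1]:
            best[k] = (x, y)
    return [best[k] for k in sorted(best)]
```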


These are the helper functions with lots of junk in there

And this is the svg main program.

 

 

Notes on Elm

elm is installed with npm

elm-repl

you import packages (including your own) with

import ThisPackage

and you check types by just writing them and hitting enter rather than :t

elm-live is a very handy thing. A live reloading server that watches for changes in your files.

elm-make myfile.elm

will generate the javascript and html

This is a good tutorial and a good snippet to get you going

 

Differences from Haskell:

elm isn’t lazy, which is probably good.

The composition operator (.) is now <<

elm doesn’t have the multiline pattern matching of Haskell. You need to use case expressions. I miss them.

typeclass facilities are not emphasized.

The list type is List a rather than [a]

 

A couple of interesting deep learning topics

https://hackernoon.com/up-to-speed-on-deep-learning-july-update-4513a5d61b78

Image Segmentation

How to find objects as subimages in an image:

https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4

Basically, use classifying networks on suggested subboxes. Then there are some tricks layered on top of that idea, like using a network to suggest possible subboxes. There exist implementations of these things in TensorFlow, Caffe, and others.

http://blog.qure.ai/notes/semantic-segmentation-deep-learning-review

One shot learning

https://hackernoon.com/one-shot-learning-with-siamese-networks-in-pytorch-8ddaab10340e

Differentiate whether two pictures are of the same object, having seen only one example image of it.

One-Shot Imitation learning

https://arxiv.org/abs/1703.07326

Gstreamer

gstreamer is tinker toys for putting together media applications. Very reminiscent of gnuradio, although it doesn’t have a nice gui editor. You smash together a bunch of blocks.

It keeps coming up so I am looking into it more.
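For flavor, the same block-smashing from Python (a minimal PyGObject sketch; assumes python3-gi and the stock plugins are installed, and videotestsrc/autovideosink are just placeholder elements):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
# parse_launch takes the same "element ! element ! element" strings as gst-launch-1.0
pipeline = Gst.parse_launch("videotestsrc ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()          # run until interrupted
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)
```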

 

https://gstreamer.freedesktop.org/documentation/installing/on-linux.html

sudo apt install libgstreamer1.0-dev

copy example

https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html#

gcc hello_gstream.c $(pkg-config --cflags --libs gstreamer-1.0)

 

v4l2src is the webcam source of the /dev/video0 device

 

apt-get install gstreamer0.10-ffmpeg

gst-launch-1.0 -v \
v4l2src \
! qtdemux \
! h264parse \
! ffdec_h264 \
! ffmpegcolorspace \
! x264enc \
! rtph264pay \
! udpsink host=127.0.0.1 port=5000

helpful idiom

gst-inspect | grep "h264"

This let me view my webcam

gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=1280,height=720 ! xvimagesink

The video/x-raw is a “cap”, a capability, kind of defining the type of video flowing through. It isn’t a conversion step as I understand it. It is telling the graph which of the possible types of video available you’ve picked (your webcam can just be told to give you different stuff).

Ugh. The gstreamer elements are super useful, but where is an organized list of them? The manual just has a big dump. Most of these are probably not useful.

https://gstreamer.freedesktop.org/documentation/plugins.html

videoconvert sounds like a good one

There are some fun opencv and opengl ones, like face detection or wacky effects. handdetect is a curious one.

fakesrc for testing

special sinks for os x –  osxvideosink

playbin for playing from a uri

x264enc – encodes into h264

uvch264 – gets a h264 stream right from the webcam

Using the Logitech C920 webcam with Gstreamer 1.2

Or you can just change the parameter to v4l2src to output h264. Ok this is not working on my webcam. I get

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.

instead

gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc tune=zerolatency !  h264parse ! avdec_h264 ! xvimagesink

encodes h264 and then decodes it. May want to change that zerolatency to another setting option. Maybe?

https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-ugly-plugins/html/gst-plugins-ugly-plugins-x264enc.html

 

Okay, continuing ahead with the streaming. I can’t get h264 to stream. It gives an “ERROR: from element /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data flow error.” when combined with the stock example code.

GARBAGE. DO NOT USE.

 

https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-rtpbin.html

However, using h263 it does work. Needed to change ffenc to avenc from their example, and ffdec to avdec.

Sending

receiving

 

for receiving on my macbook

gst-launch-1.0 -v rtpbin name=rtpbin \

you need to specify a host for the udpsinks to get the video on another computer.

I would estimate the latency at 1/4 second maybe. Much better than other things I’ve tried.

okay default latency on rtpbin is 200ms.

on receiving side set latency=0 as an option to rtpbin (not totally sure if transmitting side should have it too.)

I wonder how badly that will fail in the event of packet loss? It’s probably not a good setting for some circumstances, but for a non-critical application on a LAN it seems pretty good.

I think the latency might have crept up a bit over a minute. Not too bad though.

 

https://github.com/GStreamer/gst-rtsp-server

 

Making a Podcast

I followed a couple tutorial websites

I’m using Zencastr for the moment. It is a browser-based Skype recording thing. Our first episode came out really out of sync between the two tracks.

I’m using audacity to mix the two tracks together into a single file.

Hosting is on Blogspot. I think this was a bad choice; should have used WordPress. Anyway, simple enough. Make a post per episode.

Using Feedburner to collect up the RSS feed from the Blogspot and give it more metadata. This service feels so crusty and ancient. I wonder if it is still best practice.

Domain names on Google Domains.

Google Drive with shared links was used for hosting. This works ok, but is not good enough for iTunes. It is missing byte range requests and maybe nice urls with filenames in them? Google Drive had some abilities, discontinued in 2015, that would’ve been helpful. If you modify the usual shared link to look like

https://drive.google.com/uc?export=download&id=0ByV2UyFOHalnSHdkbTZnLVJUc2M

it works better, replacing the junk at the end with your own file’s id.

Using Amazon S3 for storage. I already had an AWS account and bucket, so no biggy. The cost should be cheap according to what I’m seeing? $0.04 /GB/month for storage and a couple of cents per 1000 requests supposedly. We’ll see. I’ve been burned and confused by AWS pricing before.

Submit podcast on https://podcastsconnect.apple.com/#/ for itunes

 

 

Nerves: Elixir OS packager for Raspberry Pi

I found out about Elixir Nerves on the Functional Geekery podcast. Seems right up my alley.

It builds a minimal Linux image with Erlang running. Runs on the Raspberry Pis and the BeagleBone.

Erlang and Elixir do intrigue me a lot, but I haven’t gotten over the hump yet.

Summary of experience so far: Kind of painful. Docs aren’t great. Being an elixir newbie hurts. Strong suspicion that going outside the prebuilt stuff is gonna be tough.

https://hexdocs.pm/nerves/installation.html#linux

Installation

Getting Started

https://hexdocs.pm/nerves/getting-started.html#content

mix nerves.new hello_nerves

need to export the target variable. Why is this not part of the config file? There probably is a reason.

export MIX_TARGET=rpi0

Building the firmware

mix firmware

writing to sd card

mix firmware.burn

says I need to install fwup

makes sense. Not in apt-get. Downloaded the deb file and installed

https://github.com/fhunleth/fwup

 

Booted up. Shows things on hdmi. Cool

went to

https://github.com/nerves-project/nerves_examples/tree/master/hello_wifi

run the following before building to set wifi stuff

mix deps.get

mix firmware

mix firmware.burn

 

Hmm. Can’t get it to work. I have Erlang 20 and it wants 19. Upon further inspection, this example is completely deprecated. Sigh.

 

Alright.

mix nerves.new wifi_test

https://github.com/nerves-project/nerves_examples/blob/master/hello_network/mix.exs

https://github.com/nerves-project/nerves_network

https://hexdocs.pm/nerves_network/Nerves.Network.html

Add the nerves_network dependency to mix.exs, and add its configuration at the end of the config file.

 

Alright. I admit temporary defeat. The pi zero is an awful thing.

 

Hmmm!

If you plug the usb port of the pi zero into your computer it shows up as a serial device

in my case /dev/ttyACM0

you can open that up with screen or the arduino serial monitor

baud 115200

And you have access to an elixir console.

Interesting.

 

I was able to get a link-local ethernet connection working. You have to set up nerves_network usb0 to use method :linklocal.

I used nerves_init_gadget

https://github.com/fhunleth/nerves_init_gadget

In addition, on Ubuntu you have to go into the network settings and change the IPv4 dropdown in the options to link-local only. Then the pi is available at nerves.local.

 

The edimax wifi dongle does not work by default

https://www.grappendorf.net/tutorials/nerves-pizero-edimax.html

hmm buildroot https://buildroot.org/

This is intriguing. It is a build tool for getting linux on embedded systems

 

 

 

 

Quantum Information and Computation Resources

A playlist I keep in no particular order. In particular I recommend Gottesman’s Quantum Information Course from Perimeter Institute which has tons of other interesting physics courses and colloquia as well.  Also check out the CSSQI lectures. Some of these videos are just links to a whole glut of connected videos from the same people, so look on the sidebar and at other videos from the same users.

 

Preskill’s classic notes. I don’t find them very approachable?

http://www.theory.caltech.edu/people/preskill/ph229/

Quantum Computing since Democritus. Unbelievable.

http://www.scottaaronson.com/democritus/

I liked these

https://people.eecs.berkeley.edu/~vazirani/

The Simons Institute has had some interesting workshops. I coincidentally have been interested in SAT problems recently coming from a logic side.

https://simons.berkeley.edu/workshops/qhc2014-boot-camp

 

Recommended Books:

Nielsen and Chuang – The standard

Kitaev, Shen, and Vyalyi – Classical and Quantum Computation

Mermin

Mark Wilde – Quantum Information Theory. Much less quantum computation. A good, different perspective.

 

Interesting Languages – Both have tutorial videos available

Quipper – Haskell based

Liquid – Microsofty F# based

 

IBM Quantum Experience – Lets you run on real hardware!

https://www.research.ibm.com/ibm-q/

Rigetti has a similar thing going on

http://rigetti.com/

 

 

 

 

 

 

 

Drone Notes

April 2017

Racing firmware mostly

CleanFlight vs betaflight vs iNav. A family of related firmwares

iNav for autonomous, betaflight for latest

Betaflight might be taking over?

 

 

Ardupilot seems to be leader for autonomous drones

pixhawk is premier computer

http://ardupilot.org/dev/docs/raspberry-pi-via-mavlink.html

 

gstreamer for video streaming

Gstreamer basic real time streaming tutorial

https://gstreamer.freedesktop.org/documentation/tutorials/basic/gstreamer-tools.html

 

uv4l?

UV4L

even better for streaming pi?

The only thing we have working is the webrtc browser-based camera.

You need to click call to make it start

 

 

https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4

 

 

get avr branch of ardupilot

go into examples folder

make apm2

make apm2 upload

I am not registering my apm2.6 as a serial device. Ok,  my usb cable was bad. What are the odds?

installing apmplanner from http://ardupilot.org/planner2/docs/installation-for-linux.html

command is missing an underscore

rtl is return to launch

 

 

SITL is the recommended simulator

Installed vagrant to use SITL on mac

http://ardupilot.org/dev/docs/setting-up-sitl-using-vagrant.html

http://sourabhbajaj.com/mac-setup/Vagrant/README.html

I had to make a Vagrantfile to get it to work. By default vagrant was trying to use some nonsense

Make Vagrantfile with

https://www.vagrantup.com/intro/getting-started/boxes.html

 

 

JMavSim for software in the loop on pixhawk 2

https://pixhawk.org/users/hil

https://pixhawk.org/dev/hil/jmavsim

 

 

What is the difference between apm planner and mission planner?

Setup pi as access point. Could use as radio then. Not very long range

https://learn.adafruit.com/setting-up-a-raspberry-pi-as-a-wifi-access-point/overview

 

supposedly the apm2.6 will connect through usb

Dronekit

http://python.dronekit.io/guide/quick_start.html

Mavlink and pymavlink. Evidently dronekit uses pymavlink

pymavlink is a low level python control of MAVlink messages.
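A minimal pymavlink sketch of what that low level looks like (the connection string and message type are just examples):

```python
from pymavlink import mavutil

# listen on a UDP port that mavproxy (or the autopilot) is forwarding to
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()                                  # block until we hear the autopilot
msg = master.recv_match(type="ATTITUDE", blocking=True)  # grab one raw MAVLink message
print(msg.roll, msg.pitch, msg.yaw)
```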

mavproxy – command line ground station software. More feature-packed than apm planner? Has the ability to use multiple linked ground stations.

mavproxy can forward data to a given port. Useful, but I can’t find it documented in the mavproxy docs themselves

 

dronecode is a set of projects

Dronecode Platform

Really nice looking simulator

https://github.com/Microsoft/AirSim/blob/master/docs/linux_build.md

I had to sign up with Epic Games and link my github account to be able to clone the Unreal Engine

We’re using a Turnigy 9x. Got a ppm encoder to be able to attach to pixhawk

 

Setting up the pixhawk 2:

The motors need to be plugged in according to their number

http://ardupilot.org/copter/docs/connect-escs-and-motors.html

Download APM planner 2

Flashed the firmware

Ran through the initial calibration. Followed onscreen instructions.

Not immediately getting all the buttons working

http://ardupilot.org/copter/docs/common-rc-transmitter-flight-mode-configuration.html

Swapped channels 5 and 6 on the controller to have the flight mode switch

Flight modes

Stabilize – self-levels the roll and pitch axes

FS_THR_Value error. Not sure why

Compass is not calibrating. Not sure why.

 

We had lots of problems until we uploaded the latest firmware. It loaded firmware at the beginning, but I guess it wasn’t the latest. We built APM Planner from source and perhaps that reupdating fixed the firmware to 3.5.1

Spinning up, it flew but kept spinning. We had wired up the motors ccw and cw opposite to the wiring diagram but never changed it in the firmware.

 

Drone Code uses QGroundControl. This is sort of an APM Planner alternative.

v.channels gives a dict of the RC channel values

channel 2 was the right stick up/down

channel 3 was the left stick up/down

 

Dronekit Cloud. Web APIs for drone control? This seems like it’s for when you have a ton of drones. Forward looking.

 

In the field we can connect to the drone using the phone as a hotspot.

 

It seems like only guided mode will accept mavlink commands

The controller modes override what the pi says.

Stabilize mode should ignore mavlink commands? In case they get wonky.

RTL.

So we set the controller’s flight mode switch to those three modes (Guided, Stabilize, RTL), in case something goes wrong.
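A dronekit sketch of that arrangement, roughly (the connection string and numbers are placeholders, not our exact setup):

```python
from dronekit import connect, VehicleMode

# e.g. a UDP port that mavproxy is forwarding to
vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)
print(vehicle.channels)               # the dict of RC channel values mentioned above

vehicle.mode = VehicleMode("GUIDED")  # only GUIDED accepts these offboard commands
vehicle.armed = True
vehicle.simple_takeoff(5)             # target altitude in meters

# flipping the transmitter's flight mode switch to STABILIZE or RTL overrides the pi
```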

put this in a dronerun file:

python "$@" &

so that the program won’t stop when the ssh pipe dies.

Need to set the RTL speed and altitude. The default may be alarming.

WPNAV_SPEED (and WPNAV_SPEED_UP / WPNAV_SPEED_DN)

250 cm/s up default

150 cm/s down default

Crash in RTL mode. (Toilet bowl behavior? Seemed to be moving in a circle.) I also felt like the loiter mode responded counterintuitively to my commands.

 

We’d like to use raspberry pi camera for visual odometry

The optical flow MAVLink message is implemented in ardupilot:

https://github.com/PX4/OpticalFlow

http://mavlink.org/messages/common#OPTICAL_FLOW_RAD

http://ardupilot.org/dev/docs/copter-commands-in-guided-mode.html

actual source

https://github.com/ArduPilot/ardupilot/blob/master/ArduCopter/GCS_Mavlink.cpp#L967

 

 

 

Movidius Neural Compute Stick

https://developer.movidius.com/getting-started

Installed VirtualBox and ubuntu 16.04 on my macbook (welcome to the dangerzone). Nice and fresh. sudo apt-get update and upgrade. I ran into problems eventually that are not cleared up on the forum. Switched to using a native 16.04 installation. The setup ran without a hitch. Beautiful.

Get the latest SDK

https://ncs-forum-uploads.s3.amazonaws.com/ncsdk/MvNC_SDK_01_07_07/MvNC_SDK_1.07.07.tgz

following these instructions

https://developer.movidius.com/getting-started/software-setup

I had to restart the terminal before running setup.sh for ncapi. It added something to my bashrc I think. Ok. Actually that is mentioned in the manual. Nice.

Now to test. In the bin folder

also example 01-03

They all seem to run. Excellent.

Looks like ~100ms for one inference for whatever that is worth

“example00 compiles lenet8 prototxt to a binary graph, example01 profiles GoogLeNet, example03 validates lenet8 using a simple inbuilt image.”

https://developer.movidius.com/getting-started/run-inferencing-on-ncs-using-the-api

Go to ncapi/c_examples

make

 

options for ncs-fullcheck are inference count and loglevel

go to py_examples

stream_infer

It really likes oxygen mask.

But was successful on sunglasses and a coffee mug. Although it did oscillate a little.

The README in stream_infer is interesting.

Stat.txt holds the average rgb and std dev values.

I wonder if I could run two sticks?

A lot of the stuff is gstreamer related

The Movidius beef seems to be: you just load the tensor and then get the result back.

There is some recommended preprocessing of the image, and grabbing the label files and stuff, but that is all standard python. Change the mean and std dev to match the network. Also convert to a float16 array. Resize to 227×227.

I’ve never used gstreamer. I wonder if there is a problem using the standard opencv stuff. Doesn’t seem like there should be.
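Roughly what the load-tensor round trip looks like, as far as I can tell from the py_examples (NCSDK v1 API from memory, so treat the exact call names as assumptions; the mean/std/size have to match whatever network the graph file was compiled from):

```python
import cv2
import numpy as np
from mvnc import mvncapi as mvnc

device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()
with open("graph", "rb") as f:                  # the binary output of mvNCCompile
    graph = device.AllocateGraph(f.read())

img = cv2.imread("cat.jpg").astype(np.float32)  # plain opencv preprocessing
img = cv2.resize(img, (227, 227))
img = (img - 127.5) / 127.5                     # substitute the network's real mean/std here

graph.LoadTensor(img.astype(np.float16), "user object")
output, userobj = graph.GetResult()
print(np.argmax(output))                        # index of the top class
```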

 

In the last couple days, they released instructions on how to run on a raspberry pi.

 

object localization would be very useful for us.

Get the script for the faster r-cnn

https://github.com/rbgirshick/py-faster-rcnn/blob/master/data/scripts/fetch_faster_rcnn_models.sh

copy contents

chmod +x that script and run it

 

To take a new network and make it run

you run mvNCCompile on the prototxt (which describes the shape of the network, among other things) and the caffemodel weight file

for example

python3 ./mvNCCompile.pyc ./data/lenet8.prototxt -w ./data/lenet8.caffemodel -s 12 -o ./lenet8_graph

then you can profile and check its correctness. It is unclear at this point how easy it will be to take stock networks and get them to run.

https://huangying-zhan.github.io/2016/09/22/detection-faster-rcnn.html

 

 

 

Blockchain

 

This is a great video.

To summarize:

Digital signatures are a way to verify that you wrote a message. Ordinarily I think of public/private key communication as giving out the public key so that people can encode messages that I decrypt with the private key. This is the opposite: I lock up the message with the private key and people can unlock/verify it with the public key. It is difficult for people to find a way to lock up a message that will decrypt with that same public key.
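To make the sign/verify direction concrete (using Ed25519 from the python cryptography package purely for illustration; bitcoin itself uses ECDSA over secp256k1):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"pay alice 1 coin"
signature = private_key.sign(message)      # "lock up" the message with the private key

try:
    public_key.verify(signature, message)  # anyone holding the public key can check it
    print("signature checks out")
except InvalidSignature:
    print("forged or altered")
```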

Cryptographic hash functions make a short summary (a fixed, small number of bits) of a message, such that it is difficult to find another message that produces the same summary hash.

Your bitcoin address is the public key. I think you have a wallet that can manage multiple public/private pairs. Possibly transferring money between them.

Bitcoin has the miners append a number to the end of the transaction list trying to find a hash that has a long string of zeros. Since you’re just basically randomly trying numbers anyway, it doesn’t hurt you to add in more transactions as they come in I think.

The blocks form kind of a funky linked list, where instead of a pointer you have the hash of the previous block. People trust the longest chain they can find, which was really hard to compute.
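A toy version of the mining loop and the hash-linked blocks (purely illustrative; real bitcoin blocks, difficulty targets, and serialization are more involved):

```python
import hashlib, json

def mine(block, zeros=4):
    # try nonces until the block's hash starts with enough leading zeros
    while True:
        h = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if h.startswith("0" * zeros):
            return h
        block["nonce"] += 1

genesis = {"prev": "0" * 64, "txs": ["coinbase -> miner"], "nonce": 0}
h1 = mine(genesis)
block2 = {"prev": h1, "txs": ["alice -> bob: 1"], "nonce": 0}  # the "pointer" is the previous hash
h2 = mine(block2)
print(h1, h2)
```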

Miners can be incentivized to include transactions into their block. In general it seems like the protocol is a bit extendable by the consensus of the community. You can sort of vote on changes to the protocol by including your vote in the hashed block.