ngspice, huh

Got tipped off to ngspice. Circuit simulation. Not bad. Can't get it to plot properly on my computer though. I vastly prefer specifying the circuit in a netlist file to drawing it in a schematic editor. That part sucked.

http://www.ngspice.com

is a solid online doohickey

Hypothetically this is a copy of that code. Wonder how long that’ll last

http://www.ngspice.com/index.php?public_circuit=T6XqVE

Axebot Lives.

 

Random Potentials and QFT I

There is a remarkable mathematical similarity between systems with random potentials and interacting quantum systems (QFT). I think that systems with random potentials are conceptually more straightforward.

The problem often is that you want to take the average value with respect to the distribution of possible potentials. The choice of where you place this averaging (the integration with respect to the probability distribution) matters, and you can make smart approximations by placing this averaging where it technically shouldn't go.

Let us suppose we have a system with H=H_0+V, where H_0 is an ordinary single-particle Hamiltonian and V is a random, non-time-varying (for simplicity) background potential.

Solving the system with a given instance of V is no big deal. Put it in a computer. It’s doable basically.

The trouble is averaging over all the V.
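Just to make "put it in a computer" concrete, here is a throwaway numerical sketch (my own toy choices: a 1D tight-binding H_0, a box-distributed on-site V, arbitrary sizes): evolve exactly for one draw of V, then brute-force average over many draws.

import numpy as np

# Toy sketch: H = H_0 + V on a 1D lattice with a random on-site potential V.
# One disorder realization is easy; the average over V is done here by brute
# force Monte Carlo, which is exactly the thing we'd like to avoid.

N = 64          # lattice sites
t_hop = 1.0     # hopping amplitude in H_0
W = 0.5         # disorder strength
t = 2.0         # evolution time

H0 = -t_hop * (np.eye(N, k=1) + np.eye(N, k=-1))   # nearest-neighbor hopping

def evolve(psi0, V, t):
    """Exact e^{-iHt} psi0 for one instance of the (diagonal) potential V."""
    E, U = np.linalg.eigh(H0 + np.diag(V))
    return U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))

psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0                                 # start on the middle site

# One given instance of V: no big deal.
V = W * (2 * np.random.rand(N) - 1)
psi_one = evolve(psi0, V, t)

# Averaging over the distribution of V: brute force over many samples.
samples = 2000
psi_bar = np.zeros(N, dtype=complex)
for _ in range(samples):
    V = W * (2 * np.random.rand(N) - 1)
    psi_bar += evolve(psi0, V, t)
psi_bar /= samples      # estimate of \overline{e^{-iHt}} psi0

Note that the averaging here is one full exact solve per sample; nothing clever is going on yet.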

The mother of all questions in quantum mechanics (or mechanics in general) is how to time evolve a system.

The hiccup we run into is the simple fact that

\overline{e^{-iHt}} \ne e^{-i\bar{H}t}

Try expanding the exponentials to see.

1-i \overline{H}t-\overline{HH}t^2/2! + \ldots \ne 1-i\overline{H}t-\overline{H}\,\overline{H}t^2/2! + \ldots

The problem starts with that second-order (t^2) term. On the left side we have the correlations of V. On the right side we never have any correlations.

What are we to do? The right side is easily computed. The average value of V is the only thing that shows up (and it would probably be a reasonable convention to set that average to 0 and fold any bias into H_0). The left side is a hairy mess. Naively, we would have to solve the eigenvalue problem parameterized by V in closed form, then average that answer with respect to the distribution of V, basically an impossible task.
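Spelling out that t^2 term with H=H_0+V makes the issue explicit:

\overline{HH} = H_0 H_0 + H_0\overline{V} + \overline{V}H_0 + \overline{VV}

The first three pieces only involve \overline{V}, but \overline{VV} is a genuine two-point correlation of the disorder, and in general \overline{VV} \ne \overline{V}\,\overline{V}.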

So what are we to do?

Well, what we can do is start working on moments. Then we can do better and better.

Even if your initial \psi is fixed, after any evolution \psi becomes a random variable, inheriting its randomness from the randomness of V. That means we can ask about its average values (a bad idea, or at least you should know what you're asking: the phase of a wavefunction is a fragile and sometimes irrelevant thing, and when you average over it you may get 0. Again, the classic misconception that the averaged object has to look like a characteristic example of the object. The average of 1 and -1 is 0, and 0 is neither 1 nor -1. Pedantic and yet obtuse? You're welcome.).

Sooooo. We can form \psi\otimes\psi and evolve that.

(e^{-iHt}\otimes e^{-iHt})( \psi\otimes\psi)

I assume you can totally see where I’m going with this. Because I see it only hazily. But I know there is something here.

How does this connect to QFT?

 

Some thoughts on many body quantum mechanics

There is a lot going on all at once when you open up a textbook on many-body physics. Also, an insistence (which for practical calculational reasons makes a lot of sense, but leaves a conceptual hole) on using second-quantized language and notation, when the thing you're most familiar with, and have spent years grappling with, is Schrödinger-style quantum mechanics. It is not impossible to do most, if not all, of the things you do in second-quantized form in first-quantized form.

A good question to ask throughout is: what the hell are we talking about? In the sense of, what subspace are we really working in?

Let us suppose that things really are made of “just” electrons. Then the total vector space is clear and the total Hamiltonian is clear. A reasonable wavefunction will be \psi(x_1,x_2,x_3,...), properly antisymmetric in all the variables, one variable for each electron. In a discretized form the vector will look like \psi_{i_1 i_2 i_3 ...}, properly antisymmetrized between indices, one index for each electron. If you have a lattice of N sites and m electrons, then the number of amplitudes you have to give to specify your vector is roughly N^m/m! (the count for independent particles divided by the number of permutations), not including some mild extra savings from the antisymmetrization. This gets enormous quickly. You might equivalently talk about solving PDEs in 1, 2, or 3 dimensions: 3d PDEs are way harder because, for even modest lattice sizes, the total size of the vectors is huge.
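To put a number on "enormous" (my own back-of-the-envelope, not figures from anywhere in particular):

from math import comb, factorial

# Size of the m-electron space on an N-site lattice: exactly comb(N, m),
# roughly N**m / m! as in the estimate above.
N, m = 100, 10
print(comb(N, m))             # ~1.7e13 amplitudes (exact antisymmetric count)
print(N**m // factorial(m))   # ~2.8e13 (the rough N^m/m! estimate)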

So really, it is unacceptable to discuss a vector in the context of the full configuration space of all the particles for most purposes.

Let us say that a power on high has given you the true ground state wavefunction of the system. How would you construct a reasonable space to work in? A simple answer is the single-particle subspace, which can be constructed as \sum_P (-1)^P \psi(x)\psi_0 (x,x,x,x,x...), where we are summing over all permutations of the variables, with that pesky -1 for odd permutations. Perhaps I should stick to distinguishable particles for the moment. It cleans things up a bit and we don't need all these sums for antisymmetrization: \psi(x_{new})\psi_0 (x,x,x,x,x...)

For concreteness, we’ll worry about evolving a given wavefunction forward in time, sort of the canonical quantum mechanics problem. It tends to be that if you can solve that one, you can solve any other question you might ask. Evolving wavefunctions forward in time is related to all the high falutin Green’s function techniques that maybe I’ll get to in another post.

If the extra particle had no interaction with the other particles, let us say it is on Venus or something, this would be a very good wavefunction indeed. The new wavefunction would evolve independently of the ground state wavefunction (which evolves trivially with a phase, since it is an energy eigenstate) under its own Hamiltonian.

\psi(x,t)e^{-i\omega_0 t}\psi_0 (x,x,x,x,x...)

However, if the particle really does interact, then the wavefunction does not maintain this nice factored form, and that is what we really should expect. The factored form implies that really the new particle is not correlated at all in what it is doing to all the other particles. If they interact then of course they get correlated. If we want to be perfectly honest we really need to play in the full unusably huge vector space.

Luckily physics is not honest so ALL HOPE IS NOT LOST.

Here’s the dumbest thing we can do (I don’t mean that disparagingly. The dumbest thing you can do is the first thing you should do. Don’t get fancier than you need to.). Let’s hope that by restricting ourselves to wavefunctions of the form \psi(x)\psi_0 (x,x,x,x,x...) we can do a good enough job of describing the system. Maybe if they aren’t interacting that much those wavefunctions will still be pretty good since they were really good when they didn’t interact at all. Now all we need to specify is the new \psi(x) that we put on there, which has only N entries. Much more swallowable.

Ok. So how can we go about evolving the system in time? Well, we have the Hamiltonian, which probably has messy coulomb interactions or something in it. What about if we just find the matrix elements of the true Hamiltonian in the subspace we're working in (which form an N\times N matrix), then solve the thing by exponentiating it? Sure. Doable. For reasonable lattice sizes, we can totally solve for the eigenvalues of that matrix or whatever. When we find the effective interaction between the ground state particles and the new particle, it is essentially the potential between the new particle and a continuum blob with a density equal to the probability density of the ground state wavefunction, V(x)=\int dx_1 \frac{\rho(x_1)}{|x-x_1|}.
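A tiny sketch of that effective potential on a 1D lattice (my own toy stand-ins: a Gaussian ground-state density and a softened 1/|x-x_1| kernel to dodge the on-site divergence):

import numpy as np

# Effective potential felt by the new particle from the ground-state "blob":
# V_eff(x) = sum_j rho_j / |x - x_j|, with rho = |psi_0|^2 on the lattice.

N = 200
x = np.linspace(-10.0, 10.0, N)
eps = 0.1                              # softening to avoid the x = x_j divergence

psi0 = np.exp(-x**2 / 2)               # stand-in ground-state wavefunction
rho = np.abs(psi0)**2
rho /= rho.sum()                       # normalized density on the lattice

V_eff = np.array([np.sum(rho / np.sqrt((xi - x)**2 + eps**2)) for xi in x])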

Cool. Let’s say we’ve done this. Was this smart? Wellllllllllllllllllllllllllllll…

No.

What do we really want?

What we actually want is the matrix elements between the true correctly evolved wavefunction, whatever that happens to be, and our goofy subspace. Let’s call the projection to our subspace the operator P which is 1 if you’re in the subspace and 0 if you’re outside.

What we kind of would prefer is this

<\psi |Pe^{-iHt}P|\psi >

The evolution is for real and honest, but we only allow our restricted beginning and end states.

What we’ve done is this

<\psi |e^{-iPHPt}|\psi >

Which has dishonest evolution and really can’t be trusted.
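A quick numerical sanity check that the two expressions really differ (my own toy setup: a random Hermitian H and a projector onto the first few basis states):

import numpy as np
from scipy.linalg import expm

# Compare P e^{-iHt} P (honest evolution, restricted ends) with e^{-iPHPt}
# (evolution generated by the projected Hamiltonian) on the k-dim subspace.

rng = np.random.default_rng(0)
N, k, t = 20, 4, 1.0

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2               # random Hermitian "true" Hamiltonian

P = np.zeros((N, N))
P[:k, :k] = np.eye(k)                  # projector onto the first k basis states

honest = P @ expm(-1j * t * H) @ P
dishonest = expm(-1j * t * (P @ H @ P))

print(np.linalg.norm(honest[:k, :k] - dishonest[:k, :k]))   # not small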

Here is where things start to and will continue to be interesting. How do we achieve the former? We are now touching upon one of the many faces of renormalization! Isn’t that exciting!

Let us form a definition of the effective hamiltonian

e^{-iP H_{eff} P t} = P e^{-iHt} P

Ok. Definitions are nice, but to be certain, this does not immediately tell us how to calculate H_{eff}.

We’ll start there next time.

(PS wordpress latex freaks out if you have a double whitespace. Are you kidding me?)

 

 

 

Recording Audio in Python

I was thinking that maybe recording audio might be a convenient way to get reasonable amounts of data into my computer. Probably better than using an arduino for a bunch of reasons. Plus I've already bought some external DACs for recording audio.

A useful Link

http://www.swharden.com/blog/2013-05-09-realtime-fft-audio-visualization-with-python/

To install, I followed these instructions:

https://gist.github.com/jiaaro/9767512210a1d80a8a0d

Here is some code, very slightly modified from the pyaudio example code. You may eventually want to select which input device to use.
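The snippet itself didn't survive the copy onto this page; a minimal record-to-wav sketch in the spirit of the stock pyaudio example (rate, chunk size, and filename are just placeholder choices) looks roughly like this:

import pyaudio
import wave

# Record a few seconds from the default input device and dump it to a wav file.
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
SECONDS = 5

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

frames = []
for _ in range(int(RATE / CHUNK * SECONDS)):
    frames.append(stream.read(CHUNK))

stream.stop_stream()
stream.close()
p.terminate()

wf = wave.open("output.wav", "wb")
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b"".join(frames))
wf.close()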

Maybe I’ll write in a moving fft or some realtime watching would be nice.

Pretty Good!

 

Curses and Ncurses or whatevs.

So I was wondering how to make slightly less wonky command line programs. Some of them aren't wonky, like top or nano or vi and stuff.

I think the answer is curses.

I basically read this

https://docs.python.org/2/howto/curses.html

Here’s a short example program that I think sort of demonstrates enough to get you going.

Press q to quit

z, r, and p do stuff.

Very curious that you have to do this sequence of build up and tear down or else it freaks out. Ctrl-C exits nicely due to that bit of code about signals; Ctrl-C sends a SIGINT signal.
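The example program also didn't make it onto this page intact; here's a sketch along the same lines (the q/z/r/p bindings are mine, standing in for whatever the original did), with the build-up and tear-down bookends and the SIGINT handler:

import curses
import signal
import sys

def teardown(stdscr):
    # Undo everything the build-up did, or the terminal is left in a weird state.
    curses.nocbreak()
    stdscr.keypad(False)
    curses.echo()
    curses.endwin()

def main():
    # Build-up sequence.
    stdscr = curses.initscr()
    curses.noecho()
    curses.cbreak()
    stdscr.keypad(True)

    # Ctrl-C sends SIGINT; catch it so we still tear down cleanly.
    signal.signal(signal.SIGINT, lambda sig, frame: (teardown(stdscr), sys.exit(0)))

    msg = "press z, r, or p to do stuff, q to quit"
    while True:
        stdscr.clear()
        stdscr.addstr(0, 0, msg)
        stdscr.refresh()
        c = stdscr.getch()
        if c == ord('q'):
            break
        elif c in (ord('z'), ord('r'), ord('p')):
            msg = "you pressed " + chr(c)

    teardown(stdscr)

main()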

 

Python Wave File Fun and Audio on Arduino

I had never touched the struct library before. I believe it is the main way to use python to interact with binary formats.

https://docs.python.org/2/library/struct.html

Strings are byte-y things, so I guess struct uses them as a bridge between Python's weirdo types and the bare metal of C types. I think something similar can be done using numpy.

16-bit wave is signed integers, 8-bit is unsigned. That goofed me up. I never got the resaved file to sound unlike a demon though, so there is something wrong in the code. The outputted C file worked like a charm, so I guess who cares. That's all I wanted anyhow; I was just rewriting the wave file for debugging.
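For reference, the signed/unsigned business in struct terms (a sketch of mine, not the original debugging code; the filename is a placeholder): 16-bit samples unpack as 'h', 8-bit as 'B'.

import wave
import struct

# Pull raw samples out of a wav file, minding 16-bit signed vs 8-bit unsigned.
wf = wave.open("input.wav", "rb")      # placeholder filename
width = wf.getsampwidth()              # bytes per sample: 2 -> int16, 1 -> uint8
raw = wf.readframes(wf.getnframes())
wf.close()

if width == 2:
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)   # signed 16-bit
else:
    samples = struct.unpack("%dB" % len(raw), raw)           # unsigned 8-bit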

Tried using an R-2R ladder to make a DAC for the arduino. It worked; we buffered the output with a simple emitter follower circuit.

Way overkill though.

I was thinking of using the analog in in some kind of feedback loop with a capacitor as a DAC when I found this.

http://highlowtech.org/?p=1963

This does the same thing with no parts (well, still need a buffer to get full volume). Clev. Majorly clev.

A speaker can't respond at ~60kHz (nor would you hear it), so the fact that there is a PWM and not a true DAC going on does not matter. Noice.

 

Incidentally, installed crayon for wordpress. Code looks way freaking better.

Havin’ Some Problems with Beaglebone

When I turned on the beaglebone black and saw that it served its own IDE, I was immensely pleased. Also Javascript! Language of the future (not necessarily due to inherent qualities, but due to its adoption as a lingua franca; there is no easier toolchain to get going)!

However things turned to shit and now I’m a little more hesitant to embrace dat bone.

The pin headers are too long and unmarked. It’s a pain to count from the side to find the pin you need. Especially since the pins have different functionality. Not all pins are PWM for example.

A problem that took forever was getting the xbox controller to work really properly. Apparently some kind of EMI was occurring; dmesg | grep usb had some business about babble. Found a forum post that said to use a hub or usb extender and then you'll be fine. Used a usb extender and it worked. Bizarre.

For some unknown reason I had difficulty getting more than 2 PWMs to work. It spat mysterious messages about device trees (which apparently are important. A lot of things in this world are important.).

I went into uEnv.txt in the boot folder and started disabling stuff. My hypothesis was that the pins were being configured for other purposes, which was blocking their use.

First I uncommented the line to disable HDMI. Then, I accidentally bricked the beaglebone by uncommenting a line in uEnv.txt that enables eMMC which is where the beaglebone boots from. I am an idiot.

I unbricked it by loading the debian image onto a microSD. Not a big deal except for the scary factor. I do not like formatting things. Especially not with sudo. Also, I did not have a microSD card handy and had to wait for one.

I essentially followed the instructions here https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

replacing the raspberry pi image with the beaglebone image. dd is an odd duck. I used /dev/disk3 rather than /dev/rdisk3 which might explain why it took so goddamn long. Apparently one is buffered and the other isn’t.

So yeah. I love having the beef of a full linux on board. Really, I think control from an xbox controller would have been unfeasible natively on an arduino (don't quote me on that. Never actually looked. I've heard the Leonardo can do usb stuff maybe). But for simple purposes I think you're better off with an arduino controlled over serial. You could then make some python program on a host computer to accept the xbox controller input. Check out this clever snippet (sketched below) for a simple serial protocol. Then you can type in numbers followed by a letter for various commands. The - '0' thing is because the digit characters come in sequential order in ASCII.

I could type into the serial console

123a

And then it would set servo a to 123 degrees.
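The clever snippet mentioned above was presumably Arduino C and isn't reproduced here; the same protocol idea sketched in Python looks like this, with ord(c) - ord('0') playing the role of the C-side c - '0' trick:

def handle(stream):
    # Accumulate digits until a letter arrives, then dispatch on the letter.
    value = 0
    for c in stream:
        if c.isdigit():
            # Works because '0'..'9' are consecutive in ASCII.
            value = value * 10 + (ord(c) - ord('0'))
        elif c.isalpha():
            print("command %s with value %d" % (c, value))   # e.g. servo 'a' to 123 degrees
            value = 0

handle("123a90b")   # -> command a with value 123, command b with value 90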

 

 

Also,  update on the h-bridge:

It caught on fire.

 

 

 

Installing icestorm on a mac

So I caught a hiccup

You probably need to install Xcode.

First off, you need to install libusb:

brew install libusb1

make and install libftdi from source

http://www.intra2net.com/en/developer/libftdi/download.php

brew install python3

 

brew install gnu-sed

brew install gawk

brew install mercurial

brew install bison

go to icestorm website

Run all the commands like it says. Maybe you'll find more packages that you need that I already had installed. Look at the error and brew install some guesses.

Here's the kicker: go into the Makefile for iceprog and add a 1 to the end of -lftdi so it becomes -lftdi1.

I changed the Makefile in iceprog to have the following stuff. I don't think my libftdi was installed where the Makefile expected by default:

LDLIBS = -L/usr/local/lib -L/usr/lib -lftdi1 -lm
CFLAGS = -MD -O0 -ggdb -Wall -std=c99 -I/usr/include/libftdi1
DESTDIR = /usr/local

I guess cc automatically sticks lib onto the front of whatever you pass to -l. Go fig.

Okay. Cool.

Then I git cloned the repository from the hackaday tutorial to see if I could get it to run

http://hackaday.com/2015/08/27/learning-verilog-for-fpgas-hardware-at-last/

./build.sh demo

I got the error

Can't find iCEstick USB device (vendor_id 0x0403, device_id 0x6010).

D’oh.

Well.

brew install lsusb

Used lsusb; I can see it right there. The computer can see the thing.

Apparently the arduino drivers I had installed previously conflicted with libftdi or something, so I had to unload them.

kextstat | grep FTD

Copy and paste the names you find to unload the ftdi devices

sudo kextunload -b com.FTDI.driver.FTDIUSBSerialDriver

sudo kextunload -b com.apple.driver.AppleUSBFTDI

Finally I got blinking lights.

Jesus.

Xbox Controller BeagleBone

So I got my new beaglebone black in the mail today and decided to take it for a spin.

Go to getting started, install the drivers. Make sure you can write a blinking lights program.

Plug in the beaglebone to an ethernet port somewhere

Pull up your terminal

ssh 192.168.7.2

sudo apt-get install xboxdrv

In your Cloud9 IDE terminal, run this command:

npm install node-xboxdrv

Very similar to arduino

Can just use a node library to attach to the controller

https://github.com/Jabbath/node-xboxdrv

Make a new file and put this in it

var xbox = require('node-xboxdrv');
var controller = new xbox("045e","028e",{});   // vendor and product id of the xbox controller
var b = require('bonescript');

b.pinMode("USR0", b.OUTPUT);
b.pinMode("USR1", b.OUTPUT);
b.pinMode("USR2", b.OUTPUT);
b.pinMode("USR3", b.OUTPUT);

controller.on('a', toggle);   // the A button toggles the USR3 LED

var state = b.LOW;
controller.on('leftY', function(data){ console.log(data); });   // log the left stick Y axis

function toggle() {
  if (state == b.LOW) state = b.HIGH;
  else state = b.LOW;
  b.digitalWrite("USR3", state);
}