Classical Landauer-Büttiker

A simplistic version of the Landauer-Büttiker picture says that the states in a quasi-1-d system are quantized. Either your reservoir fills that pipe or it doesn't, depending on how the energy levels line up. That's how you get quantized conductance. (Alternatively: the current is proportional to the velocity of the electrons times their density, and in 1-d the density of states is inversely proportional to the velocity, so the two factors cancel out.)
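To spell out that cancellation (a standard back-of-envelope argument, nothing specific to this post): the current carried by a 1-d channel with an energy window \Delta\mu between left- and right-movers is

I = e\, v\, \frac{dn}{dE}\, \Delta\mu, \qquad \frac{dn}{dE} = \frac{1}{h v} \text{ per direction per spin}

so the velocity cancels and G = I/V = e^2/h per channel (2e^2/h counting spin).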

Band structures and 1-d systems and Chern insulators all occur in classical wave systems too. However, an important piece of the story in the quantum case is the Pauli exclusion principle. You just plain don't have that classically. The wavefunction is also normalized, since it has a probability interpretation. You don't have that classically either.

But here's a thought: instead of looking at energy or momentum flow (which is more or less the immediate analog of electrical current flow), consider information flow in the sense of the Shannon-Hartley theorem.

I must profess that while I love, and I think others love, talking about information theory, I think applying it in physics mainly produces bullshit.

The Shannon-Hartley theorem gives a way to estimate the maximum bits/s you can transmit through a channel.

The Shannon-Hartley theorem makes sense. There are common physical limitations to channels:

  1. You have finite power, or finite ability to make signal amplitude. Any system you build has a finite voltage it can go up to.
  2. You can't detect arbitrarily weak signals. There will be thermal noise or even quantum noise stopping you at some point. You can't control everything.
  3. You only have a certain bandwidth of sensitivity and production. Designs do not work arbitrarily fast.

Well, by the related Shannon sampling theorem, a band-limited signal is the same as a sampled signal, sampled at roughly the same frequency as the bandwidth. In a sense, a band-limited signal can't change all that fast. You could come up with a similar bound by limiting the time derivative to stay below some value.

Then each sample only holds a finite number of bits. The noise level N chunks up the finite signal range S into S/N distinguishable levels. That's ~ ln(S/N) bits.

Hence you have roughly BW * ln(S/N) bits per second being sent.
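For reference, the exact Shannon-Hartley statement (a standard result; S and N are signal and noise power, B the bandwidth) is

C = B \log_2\left(1 + \frac{S}{N}\right)

bits per second, which matches the rough BW * ln(S/N) estimate above up to the log base and the +1.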

A slow wave has high spatial information density, but it also travels slowly. In 1-d these cancel each other out, just like in the quantum case.

You can only send information through the channels that can actually propagate, i.e. in the band.

The S/N quantization plays a role somewhat like the Pauli exclusion principle: every channel can only be filled up with ~ln(S/N) bits, like every channel can only be filled with 1 electron. I don't think the antisymmetry aspect of fermions is very relevant in this case.

The analog of current would be the difference in information rates between left-to-right and right-to-left. Worrying about a pretty silly extreme, Landauer's principle says this information has to be dissipated out somewhere or stored. Information has to be preserved at this extreme, like electrical charge.

The conductance would be the derivative of information flow with respect to bandwidth, \dot{S} = G\,\Delta\omega? With G = \ln(1 + S/N)?

Perhaps also, if you had thermal reservoirs at the two ends and you were communicating by raising and lowering the temperatures slightly? That would be somewhat analogous to the chemical potential reservoirs.

This isn’t a perfect analogy, but it’s not bad. I can’t imagine why such considerations would be relevant to anyone.

Hmm. Perhaps this is an instance of quantized entropy flow conductance? Now I'm really talking bullcrap. There is a universal quantum of thermal conductance. Maybe this is an approach to that?

https://en.wikipedia.org/wiki/Thermal_conductance_quantum

http://www.nature.com/nature/journal/v404/n6781/full/404974a0.html


Installing OpenCV 3 for Aruco on a Raspberry Pi 3

Basically followed these instructions

Install guide: Raspberry Pi 3 + Raspbian Jessie + OpenCV 3

Except! Don't get his suggested zips. They did not work, since they were too old to have aruco. I did this and there was only the dictionary of constants and none of the functions in cv2.aruco.

Here's a script basically compiled from that page, except git cloning the opencv directories.

Put this into an install_opencv.sh:
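Something like the following (a reconstruction of the usual pyimagesearch procedure rather than my exact script, so the package list and cmake flags may need adjusting):

```bash
#!/bin/bash
# dependencies (per the pyimagesearch guide for Raspbian Jessie)
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y build-essential cmake git pkg-config \
    libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev \
    libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
    libxvidcore-dev libx264-dev libgtk2.0-dev \
    libatlas-base-dev gfortran python2.7-dev python3-dev
sudo pip install numpy

# clone opencv and the contrib modules (contrib is where aruco lives)
cd ~
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

# build
cd ~/opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
      -D BUILD_EXAMPLES=ON ..
make -j4
sudo make install
sudo ldconfig
```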

Then run the script (chmod +x install_opencv.sh && ./install_opencv.sh).

This process takes a while. However, the early stages might still ask you stuff, so get into the cloning before you leave it be for a couple of hours. OpenCV is pretty big. I think it's using 40% of an 8gb SD card, so plan accordingly. I think you can delete the clone directories once it's installed.

Calling a C function from Python

I'm working on piping together numpy and a GPU-accelerated FFT on the raspberry pi.

http://www.aholme.co.uk/GPU_FFT/Main.htm

I've never done anything like this. This is very useful:

http://www.scipy-lectures.org/advanced/interfacing_with_c/interfacing_with_c.html

The official docs are less useful. Maybe they make sense once you already know what's going on.

This is the basic code needed to make a squaring function callable from python. It's in a file square.c. You need that initsquare function, which is called on import. You need to list your available functions in that SquareMethods table. And you need to unpack and rebuild python-style objects using the built-in macros.
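Roughly like this (a minimal sketch from memory of the Python 2 C API, since the original snippet isn't shown here):

```c
#include <Python.h>

/* the squaring function: parse a double, return its square */
static PyObject* square(PyObject* self, PyObject* args) {
    double x;
    if (!PyArg_ParseTuple(args, "d", &x))
        return NULL;
    return Py_BuildValue("d", x * x);
}

/* table of functions the module exposes */
static PyMethodDef SquareMethods[] = {
    {"square", square, METH_VARARGS, "Square a number."},
    {NULL, NULL, 0, NULL}  /* sentinel */
};

/* called when you `import square` */
PyMODINIT_FUNC initsquare(void) {
    (void) Py_InitModule("square", SquareMethods);
}
```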

Put this in setup.py:
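Again, a sketch of the standard distutils incantation:

```python
from distutils.core import setup, Extension

setup(name='square',
      version='0.1',
      ext_modules=[Extension('square', sources=['square.c'])])
```

Build in place with python setup.py build_ext --inplace, then import square from the same directory.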

Next, I gotta figure out how to get arrays.

https://docs.scipy.org/doc/numpy/reference/c-api.dtype.html#c-type-names

https://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html

These actually are pretty useful.

Here is some basic garbage mostly copied from those advanced scipy notes above. I cut out the iterator stuff; that is probably the way you'd usually want to do things. However, ultimately I'll be able to hand off the input and output array pointers to gpu_fft. I think it's in the right format.
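In that spirit, a guess at the shape of the thing (modeled on the scipy-lectures example, without the iterator machinery; it assumes a contiguous float64 input array, which a real version should check):

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* square every element of a numpy array, returning a new array */
static PyObject* square_array(PyObject* self, PyObject* args) {
    PyArrayObject *in_array;
    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &in_array))
        return NULL;
    /* allocate an output array of the same shape/type */
    PyObject *out_array = PyArray_NewLikeArray(in_array, NPY_ANYORDER, NULL, 0);
    if (out_array == NULL)
        return NULL;
    double *in  = (double*) PyArray_DATA(in_array);
    double *out = (double*) PyArray_DATA((PyArrayObject*) out_array);
    npy_intp n = PyArray_SIZE(in_array);
    for (npy_intp i = 0; i < n; i++)
        out[i] = in[i] * in[i];  /* this is where gpu_fft would get the pointers instead */
    return out_array;
}

static PyMethodDef SquareMethods[] = {
    {"square_array", square_array, METH_VARARGS, "Square a numpy array."},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initsquare(void) {
    (void) Py_InitModule("square", SquareMethods);
    import_array();  /* required to use the numpy C API */
}
```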

The way I'm currently structuring things, I'm probably not going to hold your hand. Unless I hide the raw C module behind an intermediate python library module?

But not today. Maybe tomorrow.

 

From Whence Superconductivity?

The self-energy used to be weird to me, but now it makes sense after I found out about the Schur complement. It can even make sense to have a decaying part of the self-energy, making an effectively non-unitary evolution of your single particle wavefunction, if there is a huge space for it to leak into, or if you approximate the Schur complement.
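To spell out the connection (a standard resolvent manipulation, in my own notation): block the Hamiltonian into system and environment pieces,

H = \begin{pmatrix} H_s & V \\ V^\dagger & H_e \end{pmatrix}

Schur complementing the environment block out of (E - H) leaves an effective system-only eigenproblem,

\left(H_s + V (E - H_e)^{-1} V^\dagger\right)\psi_s = E \psi_s

with self-energy \Sigma(E) = V (E - H_e)^{-1} V^\dagger, which picks up an imaginary, decaying part when the environment supplies a continuum to leak into.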

But how do you get the effective superconducting term a^\dagger a^\dagger from Schur complements? And how do you get the ground state to be a superposition of different particle number states?

The story that made sense to me about superconductivity is that if the 2-particle density matrix factorizes well, then that factor is the pairing wavefunction. I don't know how that can fit into my current approach. Some systematic way of factoring the interaction term? That would be nice for plasmons and such, but it is also alarming, because it becomes clear that you're never going to get more out of the computation than you put in. Maybe that is a truism.

It seems to me that since particle number is actually conserved, we need to attach the system to external leads.

So first off, if you had a lead attached with such terms, Schur complementing that lead out will almost certainly induce the term in the effective hamiltonian in the interior of the system. That is the proximity effect, where a superconductor placed next to an ordinary material will infect the material with superconductivity for a small distance (this is related to the coherence length / size of Cooper pairs in the superconductor; I've never really had those straightened out precisely).

Now perhaps, if the interacting term is attractive, we can use this lead as a way to soften the number conservation constraint. Perhaps the lead doesn’t need to be superconducting.

Alternatively, the "lead" could be an identical copy of our system instead. Or maybe an infinite number of copies, leading to a huge homogeneous sample.


Hash Vectors and Interacting Particles

An approach I don't see much is using a hash table for vectors (I have seen key-value pair list vectors). It makes sense: I think it gives you more algorithmic flexibility in the indices. Typical vectors are encoded in contiguous arrays indexed by integers, but encoding things that way feels kind of rigid. Perhaps in some circumstances the flexibility is worth the performance hit?

Here's an implementation:
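The original snippet isn't shown here, so here's a minimal sketch of the idea as described below (the names HashVect and smul are my own):

```python
class HashVect(dict):
    """A sparse vector: keys are arbitrary hashable basis labels,
    values are (complex) amplitudes."""

    def __add__(self, other):
        # vector addition: merge keys, summing amplitudes
        result = HashVect(self)
        for k, v in other.items():
            result[k] = result.get(k, 0) + v
        return result

    def smul(self, scalar):
        # scalar multiplication
        return HashVect({k: scalar * v for k, v in self.items()})

    def bind(self, f):
        # monadic bind: f maps a basis key to a HashVect (a linear
        # operator defined on basis elements); extend it linearly.
        result = HashVect()
        for k, v in self.items():
            for k2, v2 in f(k).items():
                result[k2] = result.get(k2, 0) + v * v2
        return result
```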


Basically it subclasses dict and overrides the plus operator to do vector addition. There's also scalar multiplication, and bind is a monadic approach to linear operators.

Here is the requisite fermionic annihilation and creation operator example, to see where I'm trying to go with this. I want to automate interacting perturbation theory about a fermi surface (particle and hole creation). I think I see how I could derive interesting things like effective single particle Hamiltonians (perturbatively Schur complement out the higher particle number subspaces). I'd like to see how I can automatically or manually do infinite summations like RPA and others, but I don't yet.
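Again, the original code isn't shown, so here's a sketch of how such operators might look on top of HashVect (basis states as sorted tuples of occupied modes, with the usual anticommutation sign; the helper names are hypothetical):

```python
def adag(i):
    # creation operator on mode i, as a function from basis key to HashVect
    def op(state):
        if i in state:
            return HashVect()  # Pauli exclusion: already occupied
        sign = (-1) ** sum(1 for j in state if j < i)  # anticommutation sign
        return HashVect({tuple(sorted(state + (i,))): sign})
    return op

def a(i):
    # annihilation operator on mode i
    def op(state):
        if i not in state:
            return HashVect()  # nothing to annihilate
        sign = (-1) ** sum(1 for j in state if j < i)
        return HashVect({tuple(j for j in state if j != i): sign})
    return op

# example: build a two-particle state from the vacuum
vac = HashVect({(): 1.0})
psi = vac.bind(adag(1)).bind(adag(0))   # a_0^dag a_1^dag |0>
print(psi)  # {(0, 1): 1.0}
```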

I think that mostly I may not want to use the bind interface to do things. A lot of stuff is just index twiddling, and it doesn't make much sense to use the full machinery. Building the Hamiltonian terms directly out of a's and adag's will be ghastly inefficient.

I think to get the noninteracting Green's functions to work I need to build a basis transformer into the noninteracting energy eigenbasis?

I’m working in a particle number emphasizing basis. This may be very bad.

Could I implement a renormalization procedure by Schur complementing out the high frequency subspace and then rescaling (this is loose talk; I'm not sure what I mean yet)?

J a^\dagger terms for injecting particles from external leads? Fully interacting Landauer-Büttiker conductance? Also a connection to generating function techniques in QFT?

Maybe I should be doing this in Haskell? I like Haskell. I like types. I like persistence and non-mutation. I think I'll want numpy facilities down the road, though.

I am trying actively to avoid thinking about how unoptimized and slow this will be. Maybe Pypy or other accelerating compilers will help.

Topologically Non-Trivial Circuit or Making the Haldane Model with a Gyrator

The gyrator is a funny little guy. You don’t hear about him much.

https://en.wikipedia.org/wiki/Gyrator

He's kind of like a transformer: he converts voltage to current and current to voltage in such a way as to dissipate no power. You can build active ones out of op amps, or in the microwave regime you can build them out of pieces with built-in permanent magnets. I saw an interesting article suggesting that people could use a sample in the Quantum Hall Effect as a gyrator. I'm chewing on that.

The gyrator feels very much like the Lorentz force or rotational stuff, hence the name, probably. In ordinary dynamics, if the force is proportional to the velocity, the dynamics is dissipative and lossy, like sliding on mud. However, the key piece is that if the velocity and force point in different directions, the force can be non-dissipative.

The device breaks time reversal symmetry. If you flip time, the currents flip direction but the voltages stay the same. If you look at the equations, one of the current-voltage connecting equations has a minus sign and one doesn't. This is important to keep the thing non-dissipative. Flipping time would flip which equation has the minus sign in it, which isn't the same dynamics anymore. Resistors also aren't time reversal symmetric, but that is not unexpected, since they are dissipative and dissipation relies on entropy production and the arrow of time.
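Concretely (the standard two-port gyrator relations, in the sign convention of the wikipedia article):

v_2 = R i_1, \qquad v_1 = -R i_2

Time reversal sends i \to -i and v \to v, which moves the minus sign to the other equation (equivalently R \to -R). Meanwhile the power is v_1 i_1 + v_2 i_2 = -R i_2 i_1 + R i_1 i_2 = 0, so no dissipation.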

Capacitors and inductors are time reversal symmetric. When you flip the current sign you also flip the sign on the derivative with respect to t that is in their defining equation.

Anyhow, this all makes it plausible that you can get a funky topological character to a 2 dimensional grid built out of these things.

The basic arrangement can be built out of gyrators and capacitors:

[drawing]

The laws of motion come from conservation of current at every node:

\left(j\omega C + \frac{z}{R} - \frac{1}{Rz}\right) V = 0, \qquad z = e^{jka}

This equation gives the dispersion relation \omega R C = 2\sin(ka) (up to sign convention).

The 2-d arrangement is very similar: just place gyrators also going in the y direction. But to get a topologically nontrivial band structure we need to dimerize the unit cell (or work on a hexagonal lattice, where your unit cell is sort of forced to be doubled). In total you have 4 gyrator parameters to play with and 2 capacitance parameters. And you can in fact arrange the parameters such that the circuit is in the topological regime. It's easiest to see this by writing the thing in terms of Pauli matrices, with the pseudospin referring to which point of the unit cell:

j\omega C\, V = \vec{\sigma} \cdot \vec{B}(k_x, k_y)\, V

If you arrange the parameters (not even that carefully, which is kind of the point), you can get it such that the normalized B vector wraps the sphere as you traverse the entire Brillouin zone.
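To check the wrapping numerically, you can add up the solid angle the unit vector \hat{B}(k) sweeps out over the Brillouin zone. A sketch in python, using a stand-in two-band B(k) (a Qi-Wu-Zhang-like model, not the actual circuit parameters):

```python
import numpy as np

def B(kx, ky, m=1.0):
    # stand-in two-band model; swap in the circuit's B(k) here
    return np.array([np.sin(kx), np.sin(ky), m - np.cos(kx) - np.cos(ky)])

def solid_angle(a, b, c):
    # signed solid angle of the spherical triangle (a, b, c) of unit
    # vectors (Van Oosterom-Strackee formula)
    num = np.dot(a, np.cross(b, c))
    den = 1 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2 * np.arctan2(num, den)

def wrapping_number(n=50, m=1.0):
    def bhat(kx, ky):
        v = B(kx, ky, m)
        return v / np.linalg.norm(v)
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dk = 2 * np.pi / n
    total = 0.0
    for kx in ks:
        for ky in ks:
            p00 = bhat(kx, ky)
            p10 = bhat(kx + dk, ky)
            p01 = bhat(kx, ky + dk)
            p11 = bhat(kx + dk, ky + dk)
            # each Brillouin zone plaquette maps to two spherical triangles
            total += solid_angle(p00, p10, p11) + solid_angle(p00, p11, p01)
    return total / (4 * np.pi)  # how many times B wraps the sphere

print(wrapping_number(m=1.0))  # ~ +/-1: topological
print(wrapping_number(m=3.0))  # ~ 0: trivial
```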

This circuit has a band gap. There will be chiral propagating modes within the band gap on the edge.

Food for thought: circuits have to have real values for voltage and current. Hence there always has to be a correspondence between a plus and minus frequency in the system. This is not a general constraint on Schroedinger's equation. Is this similar to particle-hole symmetry? If so, can you construct a nontrivial phase with the analog of a Majorana mode? Would that be a non-oscillating charge or current that must occur at some positions (not particularly exotic)? Not sure how much this line of thought makes any sense.

Pipe Raspberry Pi Video into ffmpeg and opencv: A Failure So Far

Trying to get video off of a raspberry pi in a low-latency way.


Piping raspivid through netcat, as suggested in the raspicam documentation:
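The commands were roughly like this (reconstructed from the raspicam docs rather than my exact invocation; IP, port, and resolution are placeholders):

```bash
# on the pi: dump h264 to stdout and pipe it to the laptop
raspivid -t 0 -w 640 -h 480 -fps 30 -o - | nc 192.168.1.100 5000

# on the laptop: listen and hand the stream to mplayer
nc -l -p 5000 | mplayer -fps 60 -cache 1024 -
```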

Mplayer does a decent job. Maybe 0.1 second latency. Pretty dang good.

VLC did not do so well. Maybe 3 seconds of latency. Perhaps some fiddling would fix it?

Eventually, we want the stream in a program somewhere; hopefully python is acceptably fast. Here is a site that I heavily cribbed from:

http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
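The gist of it (a sketch in the style of that post, not my exact code; the port, resolution, and ffmpeg flags are stand-ins):

```python
import subprocess
import numpy as np
import cv2

W, H = 640, 480

# listen for the h264 stream from the pi and have ffmpeg decode it
# into raw frames on stdout
nc = subprocess.Popen(['nc', '-l', '-p', '5000'], stdout=subprocess.PIPE)
ffmpeg = subprocess.Popen(
    ['ffmpeg', '-i', '-',           # read the stream from stdin
     '-f', 'image2pipe',
     '-pix_fmt', 'bgr24',           # bgr so opencv doesn't need cvtColor
     '-vcodec', 'rawvideo', '-'],
    stdin=nc.stdout, stdout=subprocess.PIPE, bufsize=10**8)

while True:
    raw = ffmpeg.stdout.read(W * H * 3)   # one frame of raw pixels
    if len(raw) < W * H * 3:
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((H, W, 3))
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```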

The colors are screwed up, and this is not fast enough for our purposes. If you want, I believe you can fix the colors with cv2.cvtColor.

You can see that I've tried a bunch of ffmpeg flags, but none seem to help.

It does not appear that python is the speed hangup. I inspected with python -m cProfile.


Python Xbox Controller Mac

I'm working on a raspberry pi based robot car right now and wanted to get an xbox controller to control it.

Tried using the python package inputs. Didn’t work

Had to uninstall and reinstall the latest version of the mac xbox driver:

https://github.com/360Controller/360Controller

PyGame ended up working. Then pickling to serialize and pumping it over UDP. Here are the sending and receiving programs:
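The originals aren't shown here, so this is a hedged reconstruction of the idea (pygame joystick polling, pickle, UDP; the IP, port, and update rate are made up):

```python
# sender.py: poll the xbox controller with pygame, pickle the state,
# and pump it over UDP to the pi
import pickle
import socket
import pygame

UDP_IP, UDP_PORT = '192.168.1.100', 5005  # placeholder address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

pygame.init()
pygame.joystick.init()
joy = pygame.joystick.Joystick(0)
joy.init()

while True:
    pygame.event.pump()  # refresh the joystick state
    state = {'axes': [joy.get_axis(i) for i in range(joy.get_numaxes())],
             'buttons': [joy.get_button(i) for i in range(joy.get_numbuttons())]}
    sock.sendto(pickle.dumps(state), (UDP_IP, UDP_PORT))
    pygame.time.wait(50)  # ~20 updates per second
```

And on the receiving end:

```python
# receiver.py: unpickle controller state as it arrives
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 5005))

while True:
    data, addr = sock.recvfrom(4096)
    print(pickle.loads(data))
```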


Band Structure

I could do band structure by plotting the eigenvalues of a matrix parametrized as a function of k. And I have.

However, a more general method, and one truer to physical reality, is to not assume a perfect lattice: just write down the real space hamiltonian, diagonalize it, and then somehow find the right plot in there.

I think a good way of thinking about it is that density matrices exist. Then you may massage them to get what you want.

In particular, a good plot is the density matrix element \rho_E(k,k). You will want to smooth it out in E; I used a Cauchy distribution. There are reasons to do so. I'll get to them later.

What I’m doing is all ad hoc, although I’ll make it better.

The dispersion relation is plotted in kind of an extended Brillouin zone scheme. I need to think about whether that's what I want.

In order to get bands, we need to plot level sets of this density matrix. Maybe integrate with respect to energy first.
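A sketch of the kind of computation I mean, for a hypothetical SSH-like chain (the model, sizes, and smoothing width here are stand-ins, not the exact parameters behind the plots below):

```python
import numpy as np
import matplotlib.pyplot as plt

# SSH-like chain with alternating hoppings t1, t2 (open boundaries)
N, t1, t2 = 100, 1.0, 0.5
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2
energies, states = np.linalg.eigh(H)

# rho_E(k,k): Fourier transform each eigenstate, then smear its
# energy with a Cauchy/Lorentzian of width eta
ks = np.linspace(-np.pi, np.pi, 200)
Es = np.linspace(-2.5, 2.5, 200)
eta = 0.05
planewaves = np.exp(1j * np.outer(ks, np.arange(N))) / np.sqrt(N)
weight = np.abs(planewaves @ states) ** 2            # |<k|psi_n>|^2
cauchy = eta / np.pi / ((Es[:, None] - energies[None, :])**2 + eta**2)
A = cauchy @ weight.T                                # A(E, k)

plt.pcolormesh(ks, Es, A)
plt.xlabel('k'); plt.ylabel('E')
plt.show()
```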

Another thing that might be fun to do is to try to show the band structure changing as a function of position. Similar problems to windowed Fourier analysis.

[image: ssh]

[image: t2less]

The difference between these two is whether t2 or t1 is larger in magnitude, around the t1 = -t2 point. You can see the inversion of the bands. The more constant-like wavevector flips up to the top band once flip-flopping becomes energetically favorable.

These are two bands. The y axis is energy, the x axis is wavevector. There is a gapped region.

It is interesting to see what happens when you tune the parameters.

t1 = t2:

[image: equal]

t2 = -t1:

[image: opposite]

It is at this point that the flip from clockwise to counterclockwise encirclement of the origin occurs, when the unit cell is expressed as pseudospin. The winding number changes, changing the band topology.

Trivial energy histogram:

[image: trivial]

Nontrivial histogram. 2 edge states in the gap. Both are exactly at zero (well, off by ~10^-15 anyhow):

[image: nontrivial]

Here's a real space plot of one of the degenerate states. It's that huge spike in the corner.

[image: edgestate]

Here's the other edge state:

[image: otheredge]

Pretty neat.