## Peltier Coolers and Thermal Circuits

A great many things in the world follow the paradigm of electric circuits. Electric circuits are god.

The abstraction of many things as one thing is the realm of mathematics. The mathematical reason so many things act like electric circuits is that they are governed by physical laws that take the form of Laplace's equation $\nabla\cdot\epsilon\nabla\phi=0$. The pattern has three pieces:

1. A Potential. The potential is connected to a more physical quantity by the gradient: $-\nabla \phi = \vec{E}$.
2. A Constitutive relation. That vector is connected to a current by a linear relation involving material properties, i.e. Ohm's law or $D=\epsilon E$.
3. A Conservation Law. The divergence of the current vanishes, $\nabla\cdot \vec{J}=0$: what flows in must flow out. Or the divergence matches the sources: $\nabla\cdot \vec{J}=\text{Source}$.

The circuit formulation is

1. $\vec{E}=-\nabla V$
2. $\sigma \vec{E}=\vec{J}$, Ohm's law in its continuous form
3. $\nabla \cdot \vec{J} =0$, conservation of electric current

The regions with different $\sigma$ can be chopped up into an effective discrete circuit element problem.

By analogy we can solve other problems that take the same form, for example heat conduction.

1. $\vec{F}=-\nabla T$. We don't usually give $F$ a name, but it is the local temperature gradient
2. $C\vec{F}=\vec{Q}$, Fourier's law
3. $\nabla \cdot \vec{Q}=0$, conservation of heat current, a.k.a. energy conservation

From this follows the theory of thermal circuits.
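The thermal-circuit analogy can be put to work in a few lines. A minimal sketch, where the heat current and the resistance values are made-up illustrative numbers, not measurements of any real Peltier setup:

```python
# Thermal Ohm's law: delta_T = Q * R_th, in direct analogy to V = I * R.
# All numbers below are made-up illustrative values.

def series(*rs):
    # Thermal resistances in series add, like electrical resistors.
    return sum(rs)

def parallel(*rs):
    # Parallel heat paths add as reciprocals, like parallel resistors.
    return 1.0 / sum(1.0 / r for r in rs)

Q = 30.0         # heat current pumped by the Peltier, in watts
R_paste = 0.05   # K/W, thermal interface paste layer
R_sink = 0.50    # K/W, heatsink to ambient air
R_total = series(R_paste, R_sink)
delta_T = Q * R_total  # hot-side temperature rise above ambient, in kelvin
```

The hot-side rise above ambient falls straight out of the series sum, just as a voltage drop would.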

Ok. So we’ve been trying to build a cloud chamber. We’ve been buying peltier coolers, which cool one side and heat the other side when you power them.

Details on using Peltier coolers have been sparse. Clearly I just don't know where to look.

Best thing I’ve found http://www.housedillon.com/?tag=peltier.

## The Particle Photon: A Cloud Enabled Arduino

Bought one last month and it came in the mail.

Some Notes

Set it up with the particle app on your phone

curl https://api.particle.io/v1/devices/210040000340000009370006/ledToggle -d access_token=ef7c43146253453545ea635435e316445775474 -d "command=on"

The -d flags in curl commands let you send multiple pieces of POST data.
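The same POST can be built from Python with just the standard library. A sketch: the device id, access token, and the `ledToggle` function name below are placeholders (a cloud function only exists if your firmware exposes it):

```python
# Build the same POST request as the curl command, using only the stdlib.
# Device id, token, and function name are placeholders for illustration.
from urllib.parse import urlencode
from urllib.request import Request

def particle_call(device_id, function, arg, token):
    # Same endpoint the curl command hits.
    url = "https://api.particle.io/v1/devices/%s/%s" % (device_id, function)
    data = urlencode({"access_token": token, "arg": arg}).encode()
    return Request(url, data=data)  # hand to urllib.request.urlopen to send

req = particle_call("YOUR_DEVICE_ID", "ledToggle", "on", "YOUR_ACCESS_TOKEN")
```

Attaching a data payload is what makes urllib issue a POST rather than a GET.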

npm install -g particle-cli

particle call my_device_name led on

Got to annoy people with beeps over the internet. Pretty good.

## AWS and Computing Clusters and MPI

Just been curious about parallel computation. Clusters. Gives me a little nerd hard-on.

Working my way up to running some stuff on AWS (Amazon Web Services).

So I’ve been goofing around with MPI. MPI (Message Passing Interface) is sort of an instant messenger for programs to pass data around. It’s got some convenient functions, but it’s mostly pretty low level.

I’ll jot some fast and incomplete notes and examples

Tried to install mpi4py.

sudo pip install mpi4py

but it failed; first I had to install OpenMPI

To install on Mac I had to follow these instructions here. Took about 10 minutes to compile

so mpi4py

give this code a run

```python
# mpirun -np 3 python helloworld.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
name = MPI.Get_processor_name()
print("Hello. This is rank " + str(rank) + " of " + str(size) + " on processor " + name)
```

The command mpirun launches several instances of the program. You know which instance you are by checking the rank number, which in this case runs from 0 through 2.

Typically rank 0 is some kind of master

The lowercase methods in mpi4py work kind of like you’d expect. You can communicate between processes with comm.send and comm.recv.

```python
# mpirun -np 2 python helloworld.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("fred", dest=1)      # rank 0 sends a Python object
else:
    msg = comm.recv(source=0)      # rank 1 receives it
    print(msg)
```

However, I think these are toy methods. Apparently they use pickle (Python’s quick-and-dirty object serialization library) in the background. On the other hand, maybe since you’re writing in Python anyhow, you don’t need the ultimate in performance and just want things to be easy. On the third hand, why are you doing parallel programming if you want things to be easy?
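Roughly, the lowercase send/recv pair does something like this behind the scenes (a sketch of the idea, not mpi4py's actual code):

```python
# Sketch of what lowercase comm.send/comm.recv do under the hood:
# pickle the object into bytes, ship the bytes, unpickle on the other side.
import pickle

payload = pickle.dumps({"rank": 0, "msg": "fred"})  # what send() transmits
obj = pickle.loads(payload)                         # what recv() hands back
```

That generality (any picklable object) is exactly what costs performance relative to sending raw buffers.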

The capital-letter MPI functions are the better ones, but they are not Pythonic. They are direct translations of the C API, which uses no return values; instead you pass in buffers for the results to be filled into.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Capital-letter methods fill a preallocated buffer rather than returning a value.
nprank = np.array([float(rank)])
result = np.zeros(1)
comm.Reduce(nprank, result, op=MPI.SUM, root=0)

if rank == 0:
    print(result)
```

## Haskell IO and Monads

Coming back from a couple weeks of not reading about monads, I find myself mystified once again.

I think they are a way to chain extra data through functions.

And you need them to make basic ghc programs? That sucks.

getArgs gets the command-line arguments as a list of strings

getLine reads a line of input and gives it to the program as a string

foldr1 is a variant of foldr that uses the list’s first element as the initial accumulator value
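For comparison, the closest Python stdlib analogue is `functools.reduce`, which also seeds the fold with the first element when no initializer is given (it folds from the left, foldl1-style, but for an associative operation like `+` the result is the same):

```python
# functools.reduce without an initializer seeds the fold with the first
# element, like foldr1 (though it folds from the left, foldl1-style).
from functools import reduce

total = reduce(lambda a, b: a + b, [1, 2, 3, 4])
```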

putStrLn prints a string followed by a newline

putting statements on separate lines in a do block implies >>

>>= is implied by the <- notation

Both are bind operations, but a little different: >> throws away the result of the first action, while >>= passes it along to the next function.

It all kind of does what you think it should from looking at it, but the monadic backend is deeply puzzling (look at the type definitions). I watched a YouTube video of Brian something explaining how monads are a natural way of achieving composition for functions whose input and output types don’t match, but I can’t really recall how that made so much sense. Monads are slippery.

save this in a file hello.hs and run

ghc hello.hs

./hello

```haskell
module Main where

import System.Environment

main :: IO ()
main = do
  args <- getArgs
  num <- getLine
  putStrLn ("Hello, " ++ show (foldr1 (+) (map read args)))
  putStrLn num
```

## Green’s Functions: Functions Are Vectors

Matrices are the only math problem man actually knows how to solve.

Everything else is just sort of bullshitting our way through the best we can.

Multiplying matrices like $AB$ is a notation that lets us reason about very large, complicated objects at a very simple level.
The convention for multiplying matrices seems kind of arbitrary (rows times columns) but it is simple.

The two main powerful questions about a matrix that we can actually get answers to are: what is its inverse, and what are its eigenvalues?

Hence the emphasis on linear differential equations. Any problem phrased in these terms is just a matrix problem.

Here’s the biggest, most important point about Green’s functions. If the differential operator $L$ is considered to be a matrix, then the Green’s function $G$ is its matrix inverse $L^{-1}$.

Boom.
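To make that concrete in a toy case, here is a sketch using the 1D second-difference operator with Dirichlet boundary conditions, whose inverse happens to have a known closed form:

```python
# The operator -d^2/dx^2, discretized on n interior points with Dirichlet
# boundaries (grid spacing 1), is the tridiagonal matrix L with 2 on the
# diagonal and -1 off it. Its Green's function has the closed form
#   G[i][j] = min(i, j) * (n + 1 - max(i, j)) / (n + 1)   (1-indexed),
# and multiplying L by G gives back the identity matrix.
n = 4
L = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
G = [[min(i + 1, j + 1) * (n + 1 - max(i + 1, j + 1)) / (n + 1)
      for j in range(n)] for i in range(n)]

# Plain-Python matrix product L @ G; should come out as the identity.
LG = [[sum(L[i][k] * G[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
```

The matrix $L$ here plays the role of the differential operator, and $G$ really is its inverse, entry for entry.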

## What Is Functional Programming?

I am a beginner in functional programming.

Over the past couple weeks, I’ve been revisiting Haskell. As I’ve brought it up in conversation, I’ve been asked a couple times to define what functional programming even is. Honestly, I do not yet know how to express it in words. But I think I know how it feels. A very gooey statement to make about a typically precise field.

I’ve been asked if passing callback functions as arguments is functional programming. I think it is an aspect, but that it doesn’t get to the heart of the matter.

A couple things without poisoning my impression by reading the definition from wikipedia:

• Functions are like, really important
• Ordinary programs are like a very explicit list of instructions for the machine. Functional programming doesn’t feel like that.
• If your mind wants to explode, maybe it’s functional programming
• Seems mathy. In that abstract, proofy math kind of way. You build programs kind of like how you’d build a proof: take axioms and tie them together with minimal lemmas and theorems into more complex programs. Also, the goals are mathy. You try to make programs as general as possible, a task aided by focusing on the function.
• Recursion is not required but it’s got the flavor.
• While most programming languages now have functional bits and some functional paradigms, the only ones that really have the taste are the pure ones: Lisp, Scheme, Haskell, etc. The languages that offer alternatives to the functional style basically end up getting programmed in kind of the same way.

Stuff I’ve heard but don’t really get:

• Lazy Evaluation: Stuff doesn’t evaluate until it needs to and then it only evaluates as much as it needs. If you’ve got a thing that builds a list of 1000, but then you only ask for the first 3 elements, it’ll only build the list when you ask for those elements and then also only those three. I think that’s what’s going down.
• The type system is F’ed up.
• How I could ever conceive of the weirdly clever things they do.
• That functional programming is actually good. The boilerplate of wiring thing to thing is 99% of what programs are, and the algorithm is a minimal component. I am disbelieving but hopeful that functional programming could alleviate that.
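The lazy-evaluation bullet can be imitated in Python with generators, which likewise compute nothing until asked, and only as much as asked for:

```python
# An "infinite list" of squares; nothing is computed until you ask,
# and only as many elements as you ask for.
from itertools import islice

def squares():
    n = 0
    while True:        # infinite -- fine, because it's lazy
        yield n * n
        n += 1

first_three = list(islice(squares(), 3))  # only three squares ever computed
```

Haskell takes this behavior as the default for every expression, not just special generator objects.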

## A Start on Green’s Functions

What is a Green’s Function?

While that doesn’t sound like an esoteric question to me anymore, it must to a new initiate.

First off, I think you can achieve immense clarity by understanding that not everything called a Green’s function is exactly the same thing. For problem solving and calculating, the similarity of all the things called Green’s functions is helpful, since knowledge of one situation lets you understand how to manipulate the solution in another.
They do all have a similar flavor.

Conceptually, however, the Green’s function is a many-faced god. And for that purpose, it is best to emphasize the differences between these very conceptually different cases.

To start off, for some slight concreteness, let’s list some problems that will have solutions in the form of something called a Green’s function.

1. You have some weird potato shaped charge and hunks of metal. What is the electric potential everywhere?
2. You have a ball getting randomly kicked around. Given it starts at home plate, what is the probability it gets to first?
3. You have a single quantum particle. What is the amplitude for it to get from one point to another?
4. You have a rumbling tumultuous ocean or rubber sheet. How does the height at one place correlate with the height at another?
Installed Jetpack to piggyback on WordPress.com and get some nice $\LaTeX$ going
$i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$