Peltier Coolers and Thermal Circuits

A great many things in the world follow the paradigm of electric circuits. Electric circuits are god.

The abstraction of many things as one thing is the realm of mathematics. The mathematical reason so many things act like electrical circuits is that they are operating under physical laws that take the form of Laplace's equation \nabla\cdot(\epsilon\nabla\phi)=0. Three ingredients go into it:

  1. A Potential. The potential is connected to a more physical quantity by the gradient -\nabla \phi = \vec{E}.
  2. A Constitutive relation. The first vector is connected to a current by a linear relation involving some material property, e.g. Ohm's law or \vec{D}=\epsilon \vec{E}
  3. A Conservation Law. The divergence of the current vanishes: \nabla\cdot \vec{J}=0. What flows in must flow out. Or, where there are sources, the divergence matches them: \nabla\cdot \vec{J}=\mathrm{Source}.

The circuit formulation is

  1. \vec{E}=-\nabla V
  2. \sigma E=J Ohm’s Law in its continuous form
  3. \nabla \cdot J =0 Conservation of electric current
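
Putting these three together gives back the Laplace-type equation from the top, with \sigma playing the role of \epsilon:

\vec{J} = \sigma \vec{E} = -\sigma \nabla V

\nabla \cdot \vec{J} = 0 \implies \nabla \cdot (\sigma \nabla V) = 0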

The regions with different \sigma can be chopped up into an effective discrete circuit element problem.

By analogy we can solve other problems that take the same form, for example heat conduction.

  1. \vec{F}=-\nabla T. We don't usually call F anything, but it is minus the local temperature gradient
  2. C \vec{F}=\vec{Q} Fourier's Law, with C the thermal conductivity.
  3. \nabla \cdot Q=0 Conservation of heat current, aka energy conservation

From this follows the theory of thermal circuits.
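
As a quick worked example of what a thermal "resistor" looks like (a minimal sketch, assuming a uniform slab of thickness L, cross-sectional area A, and thermal conductivity C): integrating Fourier's law across the slab gives

\Delta T = \dot{Q}\, R_{th}, \qquad R_{th} = \frac{L}{C A}

where \dot{Q} = Q A is the total heat flow through the slab. This is exactly analogous to the electrical resistor \Delta V = I R with R = L/(\sigma A), which is what lets you carry the usual series and parallel circuit rules over to heat flow.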

Ok. So we've been trying to build a cloud chamber. We've been buying Peltier coolers, which cool one side and heat the other side when you power them.

 

Details on actually using Peltier coolers have been sparse. Clearly I just don't know where to look.

The best thing I've found is http://www.housedillon.com/?tag=peltier.

 

 

The Particle Photon: A Cloud-Enabled Arduino

Bought one last month and it came in the mail.

Some Notes

Set it up with the Particle app on your phone.

curl https://api.particle.io/v1/devices/210040000340000009370006/ledToggle -d access_token=ef7c43146253453545ea635435e316445775474 -d "command=on"

The -d flags in a curl command let you send multiple pieces of POST data.
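
For reference, here is roughly the same call from Python rather than curl (a minimal sketch assuming the requests library is installed; the device ID and access token are just the example values from the command above):

# the same POST as the curl command; each -d field becomes one key/value pair
import requests

url = "https://api.particle.io/v1/devices/210040000340000009370006/ledToggle"
data = {
    "access_token": "ef7c43146253453545ea635435e316445775474",
    "command": "on",
}
response = requests.post(url, data=data)
print(response.json())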

 

I recommend not using curl, and instead downloading the command-line interface:

npm install -g particle-cli

particle login

particle call my_device_name led on

Got to annoy people with beeps over the internet. Pretty good.

AWS and Computing Clusters and MPI

Just been curious about parallel computation. Clusters. Gives me a little nerd hard-on.

Working my way up to running some stuff on AWS (Amazon Web Services).

So I've been goofing around with MPI. MPI (Message Passing Interface) is sort of an instant messenger for programs to pass data around. It's got some convenient functions, but it's mostly pretty low level.

I'll jot down some fast and incomplete notes and examples.

Tried to install mpi4py.

sudo pip install mpi4py

but it failed; I first had to install OpenMPI.

To install on Mac I had to follow these instructions here. Took about 10 minutes to compile

So, on to mpi4py.

Give this code a run:

#mpirun -np 3 python helloworld.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()             # which instance of the program am I?
size = comm.Get_size()             # how many instances are running in total?
name = MPI.Get_processor_name()
print("Hello. This is rank " + str(rank) + " of " + str(size) + " on processor " + name)

The command mpirun launches several instances of the program (three here, from -np 3). You know which instance you are by checking the rank number, which in this case runs from 0 through 2.

Typically rank 0 is some kind of master.

The lowercase methods in mpi4py work kind of like you'd expect. You can communicate between processes with comm.send and comm.recv.

#mpirun -np 2 python helloworld.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
name = MPI.Get_processor_name()


if rank == 0:
    comm.send("fred", dest=1)        # rank 0 sends a string to rank 1
else:
    counter = comm.recv(source=0)    # the other rank blocks until it arrives
    print(counter)

However, I think these are toy methods. Apparently they use pickle (Python's quick and dirty serialization library) in the background. On the other hand, maybe since you're writing in Python anyhow, you don't need the ultimate in performance and just want things to be easy. On the third hand, why are you doing parallel programming if you want things to be easy? On the fourth hand, maybe you

The capital-letter mpi4py functions are the ones that perform better, but they are not very Pythonic. They are direct translations of the C API, which uses no return values. Instead you pass in the buffers (e.g. numpy arrays) you want to be filled.
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
name = MPI.Get_processor_name()

nprank = np.array([float(rank)])    # send buffer holding this rank's number
result = np.zeros(1)                # receive buffer, filled in on the root
comm.Reduce(nprank, result, op=MPI.SUM, root=0)

if rank == 0:
    print(result)
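
Run it with mpirun like the earlier examples; with -np 4, rank 0 should print the sum of all the ranks (0 + 1 + 2 + 3 = 6), which Reduce has deposited in the result array.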

Monads are f’ed up

Coming back after a couple of weeks of not reading about them, I find myself mystified once again.

I think they are a way to chain extra data through functions.

And you need them to write even basic GHC programs? That sucks.

getArgs gets the command line arguments in a list of strings

getLine reads a line of input from the user as a string

read converts strings into integers that can be added

foldr1 is a variant of foldr that uses an element of the list as the starting accumulator value (the last one, since it folds from the right) instead of taking one explicitly

putStrLn prints a string followed by a newline

putting stuff on separate lines in a do block implies >>

>>= is implied by <- notation

Both are bind operations, but a little different: >>= feeds the result of one action into the next, while >> discards it.

It all kind of does what you think it should from looking at it, but the monadic backend is deeply puzzling (look at the type definitions). I watched a YouTube video of Brian something explaining how monads are a natural way of achieving function composition for functions whose input and output types don't line up, but I can't really recall how that made so much sense. Monads are slippery.

Save this in a file hello.hs and run

ghc hello.hs

./hello


module Main where
import System.Environment
main :: IO ()
main = do
  args <- getArgs
  num <- getLine
  putStrLn ("Hello, " ++ show (foldr1 (+) (map read args)))
  putStrLn num
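
For example, ./hello 1 2 3 will wait for you to type a line, then print Hello, 6 followed by whatever you typed.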