## Elm, Eikonal, and Sol LeWitt

We saw this cool Sol LeWitt wall at MASS MoCA. It did not escape our attention that it was basically a solution of the eikonal equation and that the weird junctures were caustic lines.

It was drawn with alternating colored marker lines, each a centimeter away from the previous line. This is basically Huygens' principle.

So I hacked together a demo in Elm. Elm is a Haskell-ish language for the web.

So I made a quick rip-and-run Elm program to do this. This is the output, which I could make more dynamic.

The algorithm is to turn a list of points into their connecting lines, move each line perpendicular to itself, then recompute the new intersection points. It's somewhat reminiscent of Verlet integration: line coordinates are momentum-like, points are position-like, and we alternate between them. This is a finite difference version of the geometric Huygens' principle.
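The points-to-lines-to-points step can be sketched as follows (in Python rather than the original Elm; function names are mine, and I drop the endpoints at each step rather than handling boundaries):

```python
def offset_wavefront(points, d):
    """Advance a polyline wavefront by distance d, Huygens-style:
    turn points into segments, push each segment along its unit normal,
    then re-intersect consecutive offset lines to get the new points."""
    # Represent each segment as the line a*x + b*y = c with unit normal (a, b).
    lines = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = (dx * dx + dy * dy) ** 0.5
        a, b = -dy / length, dx / length          # unit normal to the segment
        c = a * x0 + b * y0
        lines.append((a, b, c + d))               # shift the line along its normal
    # New points are intersections of consecutive offset lines.
    new_points = []
    for (a1, b1, c1), (a2, b2, c2) in zip(lines, lines[1:]):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:                      # parallel neighbors: no intersection
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        new_points.append((x, y))
    return new_points
```

Iterating this function is the "alternation" above: lines carry the momentum-like data, intersections give back positions.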

Alternative methods that might work better include the Fast Marching Method, or just solving the wave equation and plotting isosurfaces.

I also had to resample the function to keep only the maximum y value for each x value in order to duplicate the LeWitt effect.
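That resampling step is just an upper envelope. A minimal sketch (not the Elm original; the grid-binning and names are my own choices):

```python
def upper_envelope(points, bin_width=1.0):
    """For each x (snapped to a grid of bin_width), keep only the point
    with the maximum y, so lower wavefront pieces are hidden, as in the
    LeWitt drawing."""
    best = {}
    for x, y in points:
        k = round(x / bin_width)              # snap x to a grid cell
        if k not in best or y > best[k][1]:
            best[k] = (x, y)
    return [best[k] for k in sorted(best)]    # left-to-right order
```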

These are the helper functions, with lots of junk in there.

And this is the SVG main program.

### Notes on Elm

- Elm is installed with npm.
- The REPL is `elm-repl`. You import packages (including your own) with `import ThisPackage`, and you check types by just entering an expression and hitting enter, rather than Haskell's `:t`.
- `elm-live` is a very handy thing: a live-reloading server that watches for changes in your files.
- `elm-make myfile.elm` will generate the JavaScript and HTML.
- This is a good tutorial and a good snippet to get you going.
- Elm isn't lazy, which is probably good.
- The composition operator `(.)` is now `<<`.
- Elm doesn't have the multi-line pattern matching of Haskell function definitions; you need to use case expressions. I miss them.
- Typeclass facilities are not emphasized.
- The list type is `List a` rather than `[a]`.

## A whole new world. Propositions as types. Idris. Rule-rewriting

I’ve been going down a rabbit hole lately. I suppose it started with wanting to solve the annihilation-creation operator algebra in a new way (and, longer term, to understand how to program up anyons in an elegant way). So I rebooted my Haskell.

I was hoping (through a misunderstanding of what pattern matching and lazy evaluation achieve) I’d somehow be able to write

This is screwy from a functional programming perspective. I’m no guru, but I think the problem is that you really want f to take a function and return a new function on line 2, whereas you want f to take an integer and return an integer on line 3.

You can hack this together, but I feel it is not an elegant approach.

Anyhow, this problem is solvable in Mathematica. Mathematica (or is it called Wolfram now? I’m a tad confused there) is at its core a rule-rewriting system. This is a different flavor than a functional programming language, I think. All the currently applicable rules float around somewhere in the environment. When you make an = expression, you add a new rule. This is similar to functional programming having all the functions floating around and letting you define new functions to put into the environment. The equals sign and rule replacement /.{x->y} are very, very related.
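The core of such a system is small enough to sketch. Here's a toy rewriter in the spirit of Mathematica's ReplaceRepeated (`//.`): rules are pattern-replacement pairs applied anywhere in a nested-tuple expression until nothing matches. All the representation choices (tuples for terms, `"?x"` strings for pattern variables) are mine:

```python
def match(pattern, term, env):
    """Try to match pattern against term, binding ?-variables in env."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in env:
            return env[pattern] == term
        env[pattern] = term
        return True
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        return all(match(p, t, env) for p, t in zip(pattern, term))
    return pattern == term

def substitute(pattern, env):
    """Build the replacement, filling in bound ?-variables."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        return env[pattern]
    if isinstance(pattern, tuple):
        return tuple(substitute(p, env) for p in pattern)
    return pattern

def rewrite_once(rules, term):
    """Apply the first rule that matches at the root or in any subterm."""
    for lhs, rhs in rules:
        env = {}
        if match(lhs, term, env):
            return substitute(rhs, env), True
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            new, changed = rewrite_once(rules, sub)
            if changed:
                return term[:i] + (new,) + term[i + 1:], True
    return term, False

def rewrite(rules, term):
    """Keep rewriting until a fixed point, like Mathematica's //."""
    changed = True
    while changed:
        term, changed = rewrite_once(rules, term)
    return term

# Two algebra-simplification rules: x + 0 -> x and x * 1 -> x.
rules = [(("plus", "?x", 0), "?x"), (("times", "?x", 1), "?x")]
```

Note this sketch commits to a fixed search order (root first, then subterms left to right) with no backtracking, which is exactly the design question raised below.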

However, I think there is an element of nondeterminism in the rule rewriting. It can try to apply a rule, fail, and then un-try that rule. I think? Mathematica is pretty vague, as far as I can tell, about what it is trying. Functional programs, I think, go more one way: you apply a function but never unapply it. There is no try; only do. You could definitely build up some kind of history system to achieve this, but it feels much less intrinsic to the basic evaluation procedure. The order of applications is free to be chosen: top down, bottom up, whatever. Some sort of procedure.

So this got me curious about how I could implement such a system in a cute way or what else I could do.

Then I came across the Strange Loop talks.

This one in particular intrigues me.

He mentions session types, a typing system for communication that is analogous in some way to linear logic. Linear logic has kind of a no-copy rule, where you use up propositions when you use them, rather than them being usable as much as you like. Like the no-cloning theorem in quantum mechanics. This brought me back to take a peek at Baez’s Rosetta Stone paper. Interesting and cryptic. I guess this is good for communicating: messages get sent and then used up when received. Sounds neato. Looking at the concrete specification Scribble makes this a bit clearer.

Maybe this would be a good paradigm for a quantum computing programming language? Sort of let the quantum information transfer between gates be treated as a network call. It makes sense in that you would want to run different parts of quantum algorithms on quantum and classical computers. The interface will probably be network-like: quantum-quantum sockets and quantum-classical sockets. Wait for a measurement signal on the classical side, then send a classical-valued configuration signal to the gates. The gates send quantum signals to each other. The classical side also needs to set up the gate connections too.
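The "used up when received" flavor can be illustrated with a toy use-once wrapper (this is just my sketch of the linear-logic idea, not Scribble or any real session-types library; the class and method names are invented):

```python
class Linear:
    """A value that may be consumed exactly once, mimicking the
    no-copy rule of linear logic / no-cloning of quantum states."""

    def __init__(self, value):
        self._value = value
        self._used = False

    def take(self):
        if self._used:
            raise RuntimeError("linear value already consumed")
        self._used = True
        return self._value

msg = Linear("payload")
msg.take()        # first receive: fine
# msg.take()      # a second receive would raise: no copying allowed
```

A real linear type system would reject the second `take` at compile time rather than at runtime, which is the whole appeal.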

Smart people freak out about types. I don’t understand why yet. They seem important in the very roots of mathematics, circumventing paradoxes and such. Types allow the compiler to save you from yourself (and others), I think, is the main point. But they also make you figure out what you mean. I have found Haskell’s type system to be useful for sketching out what I mean by writing down data structures. It can be a useful part of the thinking process, like writing function declarations without writing the inner code. “Oh yeah, I’ll need a vectorizing function. Let’s write that down but code it up later.” I’ve also found myself yearning for this in Python sometimes. Stronger typing would let me pass stuff in without whatever it is I’m passing in being implied within the code of the function itself. “Error? On what? Oh, the dict should have a ‘foo’ key? Huh. Whoopsy.” I can and should and sometimes do put that stuff in comments, but you have to read those. Which is not guaranteed.
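Python actually has a partial answer to this wish in type hints: a `TypedDict` turns the "dict should have a 'foo' key" assumption into a declared interface that a checker like mypy can enforce before anything runs (the names here are invented for illustration):

```python
from typing import TypedDict

class Config(TypedDict):
    foo: str        # the key the function would otherwise silently assume
    count: int

def describe(cfg: Config) -> str:
    # a type checker flags call sites missing 'foo' up front,
    # instead of a KeyError surfacing deep inside the body
    return f"{cfg['foo']} x{cfg['count']}"
```

It's opt-in and only checked statically, so it's weaker than Haskell's guarantees, but it does move the contract out of comments and into the signature.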

Also, Lambda cube? Huh.

Then I saw some more talks mentioning Coq, Agda, and Idris: languages that are related somehow, languages that touch on or are intended for theorem proving. This is a subject I’ve seen mentioned before without understanding what it could mean. Idris is a very close variant of Haskell. This seems like a good road to understanding more. Coq has a long history and a difficult reputation.

Maybe the next thing I need is to learn some basics of logic to be a better programmer.

Propositions = Types.

What does that mean?

https://en.wikipedia.org/wiki/Sequent_calculus

This stuff is very evocative of the lambda calculus. How have I somehow not seen this before? I guess I just sort of glazed over when the Curry-Howard isomorphism was mentioned, assuming it was a very high-falutin thing not worth checking out.
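One concrete reading of the correspondence, sketched in Python type hints rather than a proof assistant: a proposition A → B is the type of functions from A to B, and a proof is any program of that type. Then modus ponens is function application, and transitivity of implication is function composition (the function names here are mine):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_ab: Callable[[A], B], a: A) -> B:
    # from a proof of A -> B and a proof of A, get a proof of B:
    # just apply the function
    return proof_ab(a)

def transitivity(ab: Callable[[A], B], bc: Callable[[B], C]) -> Callable[[A], C]:
    # proofs of A -> B and B -> C yield a proof of A -> C: compose them
    return lambda a: bc(ab(a))
```

In Idris or Coq the type checker actually verifies these as theorems; Python's hints only gesture at the idea, but the shape is the same.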

I’m still in the middle of it all. We’ll see.