The really cute part of it is using electrical conduit as rails, which are shockingly inexpensive. Like a couple bucks for 4 feet! Holy shnikes!

We’ve been printing up a storm for the last couple weeks. A ton of parts!

We already had a lot of motors and stuff lying around. Declan bought a lot of stuff too just for this. Assorted bearings and bolts. The plans have a bill of materials.

Repetier Host seemed to work pretty well for controlling the board.

Used the RAMPS branch of the MPCNC Marlin repo.

Edited the header files as described in this post so that we could use both extruder outputs as extra X and Y motor drivers. Driving two motors from the same driver board did not seem acceptable. Our bearings grip the rails a little too tightly, so it is tough to move.

This is an interesting web-based g-code maker: http://jscut.org/jscut.html . Ultimately a little too janky, but it works well enough to get started. It's not entirely clear what pocket vs. interior vs. whatever means; engrave sort of seemed like what I wanted. I went into Inkscape with a reasonable png, used Trace Bitmap, then Object to Path. It's also nice to just find an svg on the internet.

The following code was needed to zero Repetier and the RAMPS at the touch-off point. We added it as a macro. Its behavior is a little confusing, though.

G92 X0 Y0 Z0
@isathome

pycam is the best thing I can find for 3D machining. Haven't actually tried it yet.

Just got back from Chile from a vacation visiting Will. Nice place.

We took a leisurely amount of time getting to Torres del Paine, which was the main feature of our trip. We travelled through Santiago, Punta Arenas and Puerto Natales. We spent a very tired day in the children’s science museum and rode the funicular. There wasn’t that much to do in the latter two cities, maybe we could have shaved some time from them. Our hostel in Punta Arenas was notably ramshackle. We spent 5 days backpacking in the park. Absolutely gorgeous. The wind on the second day was like nothing I’ve ever experienced. I was a little concerned about staying on my feet. Hiking poles for the win. Day 4 was cold and wet and miserable, but it ended up ok in the end. We were able to get a spot in a refugio when it just was too overwhelming to try to set up our tents in a flooded muddy campsite on that day. I think Beth in particular was at the end of her rope. I basically didn’t poop the entire first week I was there, but one glorious day on the mountain the heavens parted for me, and I was fine from then on. I didn’t quite pack food right. There ended up being camp stores at most of the places we stayed, but if I hadn’t been able to re up on cookies it would have been a lean couple days food wise. Ramen Bombs for the win. We drank right from the streams, which is unusual for us. Usually we filter.

All told we did ok on food. I really like the a lo pobre thing. What isn't to like about steak, onions, and eggs on fries? Chileans seem a little eager on the mayo. Nobody does breakfast right except the US. The street food was good. I love the fried tortilla thing that you can just slather salsa on. It was like 30 cents. The empanadas were also pretty great cheap food. Ceviche was also very tasty. They toss out avocado like it's nuthin down there. Sandwiches were kind of shitty. Don't know if that is entirely fair, but that is how it felt. Highlight meal of the trip was at Cafe Artimana in Puerto Natales. Yeah, I got some a lo pobre. But also a basil lemonade thing.

After the hiking, we scheduled an early return to Santiago rather than busting our asses to a glacier viewpoint. In the airport at Punta Arenas, we got the southernmost Domino's in the entire world. Ben, Will, and Declan went on a taxi quest to go get it. Wandered around Santiago, saw some churches and cathedrals, a fort, ate churros, etc. Declan was on a quest for a neck pillow. We did a prison-themed escape room. People felt like we got a little cheated because some of the puzzles felt like bullshit? I think they really expected to break room records. I suck at escape rooms. We were able to spend a day in Valparaiso, which had a super awesome street art scene.

I spent the last day puking my guts out. So it goes. Not sure how exactly. The street sausage may have put me over the top. I guess I’m a sensitive fellow? I get pretty consistently unwell on trips.

Chile has tons of fluffy street dogs. They’re pretty friendly, although they do chase cars and motorcycles. Idiots.

Chile has a way lower English quotient than other trips I've made. I've been surprised how common at least some English has been in Europe and Asia, and was now equally surprised how little there was in Chile. It makes sense; most of the continent is Spanish speaking. It was really useful to have Will around, who has gotten shockingly good at Spanish from an outsider's perspective.

Yesterday morning, I was struck with awe at how amazingly cool the dependently typed approach (Agda, Idris, Coq) to proving programs is. It is great that I can still feel that after tinkering around with it for a couple years. It could feel old hat.

In celebration of that emotion, I decided to write a little introductory blog post about how to do some proving like that in Haskell. Haskell can replicate this ability to prove with varying amounts of pain. For the record, there isn’t anything novel in what follows.

One methodology for replicating the feel of dependent types is to use the singletons pattern. The singleton pattern is a way of building a data type so that there is an exact copy of the value of a variable in its type.

For future reference, I have on the following extensions, some of which I’ll mention when they come up.

{-# LANGUAGE GADTs, DataKinds, TypeFamilies, RankNTypes, UndecidableInstances, PolyKinds #-}

Here’s how the pattern goes.

Step 1: define your ordinary data type. Bool is already defined in the Prelude, but here is what it looks like:

data Bool = True | False

Step 2: turn on the DataKinds extension. This automatically promotes data constructors like True or False or Just into types, written with apostrophes in front of them: 'True, 'False, 'Just. This is mostly a convenience. You could have manually defined something similar like so

data True
data False
data Just a

Step 3: Define your singleton datatype. The singleton datatype is a GADT (generalized algebraic data type). GADT declarations take on a new syntax. It helps to realize that ordinary data constructors like Just are just functions. You can use them anywhere you can use functions. Just has the type a -> Maybe a. It might help to show that you can define a lower case synonym.

just :: a -> Maybe a
just = Just

just is clearly just a regular function. What makes constructors a special thing (not quite “just functions”) is that you can also pattern match on them. Data constructors are functions that “do nothing”. They hold the data until eventually you get it back again via pattern matching.

So why don’t you write the type signature of the constructors when you define them? Why does using a data statement look so different than a function definition? The GADT syntax brings the two concepts visually closer.

Letting you write the type signature for the constructor lets you give a constrained type signature, rather than the inferred most general one. It's similar to the following situation. If you define an identity function id, it has the polymorphic type a -> a. However, you can explicitly constrain a function with an identical implementation. If you try to use boolid on an Int, it is a type error.

id x = x

boolid :: Bool -> Bool
boolid x = x

The GADT syntax lets you constrain what the type signature of the constructor is, but also, very interestingly, lets the type checker infer information when you pattern match into the GADT.

With all that spiel, here is the singleton type for Bool as a GADT.

data SBool s where
  STrue :: SBool 'True
  SFalse :: SBool 'False

You have made an exact copy of the value at the type level. When you pattern match into a variable x of type SBool s in the STrue branch, it knows that s ~ 'True and in the SFalse branch it knows that s ~ 'False.
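To see that refinement actually do something, here is a standard type-level `if` built on the singleton (a self-contained sketch; the `If` and `choose` names are mine, not from the post):

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}

data SBool s where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

-- A type family that picks one of two types based on a type-level Bool.
type family If (b :: Bool) t e where
  If 'True  t e = t
  If 'False t e = e

-- In the STrue branch the checker learns b ~ 'True, so If b t e reduces
-- to t and we are allowed to return the first argument. Same story for SFalse.
choose :: SBool b -> t -> e -> If b t e
choose STrue  t _ = t
choose SFalse _ e = e
```

Note that `choose STrue "yes" (42 :: Int)` comes out with type String, while swapping in SFalse gives an Int: the return type depends on the value-level argument via its singleton.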

Here’s the analogous construction for a Peano natural number

data Nat = Zero | Succ Nat

data SNat s where
  SZero :: SNat 'Zero
  SSucc :: SNat n -> SNat ('Succ n)

Now let’s talk about programming.

Addition is straightforward to define for our Nat.

nplus :: Nat -> Nat -> Nat
nplus Zero x = x
nplus (Succ x) y = Succ (nplus x y)

Let's replicate this definition at the type level. The extension we'll want is TypeFamilies. Type families enable a syntax for defining functions on types very similar to the way you define regular functions.

type family NPlus x y where
  NPlus 'Zero x = x
  NPlus ('Succ x) y = 'Succ (NPlus x y)

Now finally, we can exactly mirror this definition on singletons

snplus :: SNat n -> SNat n' -> SNat (NPlus n n')
snplus SZero x = x
snplus (SSucc x) y = SSucc (snplus x y)

In the type signature, SNat acts kind of like a weirdo forall. It is a binding form that generates a new type variable you need to express the type-level connections you want. The type variable n is a type-level thing that represents the value.

Now let's talk about proving. Basically, if your intent is proving things, I think it is simplest to forget that the original data type ever existed. It is just a vestigial appendix that makes the DataKinds you need. Only work with singletons. You will then need to make a safe layer translating into and out of the singletons if you want to interface with non-singleton code.
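That safe layer is typically an existential wrapper plus a pair of conversion functions. A self-contained sketch of the standard construction (the names `SomeNat`, `toSNat`, and `fromSNat` are mine):

```haskell
{-# LANGUAGE GADTs, DataKinds #-}

data Nat = Zero | Succ Nat deriving (Show, Eq)

data SNat s where
  SZero :: SNat 'Zero
  SSucc :: SNat n -> SNat ('Succ n)

-- An existential wrapper that hides the type index.
data SomeNat where
  SomeNat :: SNat n -> SomeNat

-- In: an ordinary Nat becomes a singleton with a statically unknown index.
toSNat :: Nat -> SomeNat
toSNat Zero = SomeNat SZero
toSNat (Succ n) = case toSNat n of
  SomeNat s -> SomeNat (SSucc s)

-- Out: forget the index and recover the ordinary value.
fromSNat :: SNat n -> Nat
fromSNat SZero = Zero
fromSNat (SSucc n) = Succ (fromSNat n)
```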

We’re going to want to prove something about equality. The standard definition of equality is
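The definition itself is elided above. It is the standard propositional equality GADT, essentially `:~:` from Data.Type.Equality, here named Eq1 to match the proofs below:

```haskell
{-# LANGUAGE GADTs, PolyKinds #-}

-- Refl can only be constructed when the two type indices are the same type.
data Eq1 a b where
  Refl :: Eq1 a a
```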

Why is this a reasonable equality? You can construct Refl only when you are showing that a equals itself. When you pattern match on Refl, the type checker learns that a ~ b. It's confusing. Let's just try using it.

We can prove a couple facts about equality. First off, that it is a symmetric relation: if a ~ b then b ~ a.

symm :: Eq1 a b -> Eq1 b a
symm Refl = _

When we pattern match and expose the incoming Refl, the type checker learns that a ~ b in this branch of the pattern match. Now we need to return an Eq1 b a. But we know that a ~ b so this is the same as an Eq1 a a. Well, we can easily do that with a Refl.

symm :: Eq1 a b -> Eq1 b a
symm Refl = Refl

Similarly we can prove the transitivity of equality.

trans :: Eq1 a b -> Eq1 b c -> Eq1 a c
trans Refl Refl = Refl

Pattern matching on the first equality tells the type checker that a ~ b, the second that b ~ c. Now we need to return a Eq1 a c but this is the same as Eq1 a a because of the ~ we have, so Refl suffices.

It’s all damn clever. I wouldn’t have come up with it.

Now let's start proving things about our addition operator. Can we prove that 'Zero equals 'Zero?

proof1 :: Eq1 'Zero 'Zero
proof1 = Refl

This one is straightforward since obviously 'Zero is 'Zero. How about something slightly more complicated, like 1 + 1 = 2?

The typechecker can evaluate addition on concrete numbers to confirm this all works out.
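For instance, here is 1 + 1 = 2 checked by the compiler (my illustration, restating the definitions so the block stands alone; the proof name is mine):

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}

data Nat = Zero | Succ Nat

type family NPlus x y where
  NPlus 'Zero x = x
  NPlus ('Succ x) y = 'Succ (NPlus x y)

data Eq1 a b where
  Refl :: Eq1 a a

-- The type checker evaluates NPlus on the concrete arguments, reducing
-- both sides to 'Succ ('Succ 'Zero), so Refl is accepted.
proofOnePlusOne :: Eq1 (NPlus ('Succ 'Zero) ('Succ 'Zero)) ('Succ ('Succ 'Zero))
proofOnePlusOne = Refl
```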

Here’s a much more interesting property

proof2 :: SNat x -> Eq1 (NPlus 'Zero x) x
proof2 x = Refl

This one is also straightforward to prove. Looking at the definition of NPlus knowing that the first argument is 'Zero is enough to evaluate forward.

Here's our first toughy: the mirror-image fact, that NPlus x 'Zero equals x. This may seem silly, but our definition of NPlus did not treat the two arguments symmetrically. It only pattern matches on the first. Until we know more about x, we can't continue. So how do we learn more? By pattern matching and looking at the possible cases of x.

proof3 :: SNat x -> Eq1 (NPlus x 'Zero) x
proof3 SZero = Refl
proof3 (SSucc x) | Refl <- proof3 x = Refl

The first case is very concrete and easy to prove. The second case is more subtle. We learn that x ~ 'Succ x1 for some x1 when we expose the SSucc constructor. Hence we now need to prove Eq1 (NPlus ('Succ x1) 'Zero) ('Succ x1). The system now has enough information to evaluate NPlus a bit, so actually we need Eq1 ('Succ (NPlus x1 'Zero)) ('Succ x1). The term (NPlus x1 'Zero) looks very similar to what we were trying to prove in the first case. We can use a recursive call to get an equality proof that we pattern match to a Refl, learning that (NPlus x1 'Zero) ~ x1, which then makes the required result Eq1 ('Succ x1) ('Succ x1), which is obviously true via Refl. I learned a neat-o syntax for this second pattern match, called pattern guards. Another way of writing the same thing is

proof3 (SSucc x) = case proof3 x of Refl -> Refl

Follow all that? Haskell is not as friendly a place to do this as Idris or Agda is.

Now finally, the pièce de résistance: the commutativity of addition, which works in a similar but more complicated way.
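The proof itself is elided here. One standard way it can go, as a self-contained sketch (the lemma names are mine; the extra lemma needed says NPlus x ('Succ y) equals 'Succ (NPlus x y)):

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}

data Nat = Zero | Succ Nat

data SNat s where
  SZero :: SNat 'Zero
  SSucc :: SNat n -> SNat ('Succ n)

type family NPlus x y where
  NPlus 'Zero x = x
  NPlus ('Succ x) y = 'Succ (NPlus x y)

data Eq1 a b where
  Refl :: Eq1 a a

-- NPlus x 'Zero = x (the proof3 from above)
plusZero :: SNat x -> Eq1 (NPlus x 'Zero) x
plusZero SZero = Refl
plusZero (SSucc x) | Refl <- plusZero x = Refl

-- NPlus x ('Succ y) = 'Succ (NPlus x y), again by induction on x
plusSucc :: SNat x -> SNat y -> Eq1 (NPlus x ('Succ y)) ('Succ (NPlus x y))
plusSucc SZero _ = Refl
plusSucc (SSucc x) y | Refl <- plusSucc x y = Refl

-- commutativity, by induction on the first argument, using both lemmas
plusComm :: SNat x -> SNat y -> Eq1 (NPlus x y) (NPlus y x)
plusComm SZero y | Refl <- plusZero y = Refl
plusComm (SSucc x) y
  | Refl <- plusComm x y
  , Refl <- plusSucc y x = Refl
```

In the SSucc case, the goal reduces to Eq1 ('Succ (NPlus x y)) (NPlus y ('Succ x)); the inductive call rewrites NPlus x y to NPlus y x, and plusSucc rewrites the right-hand side to 'Succ (NPlus y x), after which Refl closes it.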

A question: to what degree does this prove that nplus or snplus are commutative? The linkage is not perfect. snplus is type-constrained to return the same result as NPlus, which feels ok. nplus is syntactically identical to the implementation of the other two, but that is the only link; there is nothing compiler-enforced.

The existence of non-termination in Haskell also makes everything done here much less fool-proof. It wouldn't be all that hard to accidentally make a non-terminating recursive call in one of our proofs, and the compiler would accept it and say nothing.

I recommend you check out the links below for more.

Casadi is something I've been aware of and not really explored much. It is a C++/Python/MATLAB library for modelling optimization problems for optimal control, with bindings to Ipopt and other solvers. It can produce C code and has differentiation stuff. See below for some examples after I ramble.

I’ve enjoyed cvxpy, but cvxpy is designed specifically for convex problems, of which many control problems are not.

Casadi gives you a nonlinear modelling language and easy access to Ipopt, an interior point solver that works pretty well (along with some other solvers, many of which are proprietary, however).

While the documentation looks very slick visually, I actually found its contents rather confusing at first. I'm not sure why. Something is off.

It also has a bunch of helper classes for DAE building and other things. They honestly really put me off. The documentation is confusing enough that I am not convinced they give you much.

The integrator classes give you access to external smart ODE solvers from the Sundials suite. They give you good methods for difficult ODEs and DAEs (differential algebraic equations, which are ODEs with weird constraints like x^2 + y^2 == 1). It's not clear to me whether you can plug those into an optimization, other than by a shooting method.

Casadi can also output C which is pretty cool.

I kind of wondered about Casadi vs Sympy. Sympy has lots of general purpose symbolic abilities: symbolic solving, polynomial smarts, even some differential equation understanding. There might be big dividends to using it. But it is a little harder to get going. I feel like there is an empty space for a mathematical modelling language that uses sympy as its underlying representation. I guess monkey patching sympy expressions into casadi expressions might not be so hard. Sympy can also output fast C code. Sympy doesn't really have any support for sparseness that I know of.

As a side note, it can be useful to pull these other languages' objects into numpy if you need extended reshaping abilities. The other languages often stop at matrices, which is odd to me.

Hmm. Casadi actually does have access to mixed integer programs via Bonmin (and commercial solvers). That's interesting. Check out the Lotka-Volterra MINLP example.

The Opti interface makes some of this look better: opti.minimize and subject_to. Yeah, this is more similar to the interfaces I'm used to. It avoids the manual unpacking of the solution and the funky feel of making everything into implicit == 0 expressions.

https://web.casadi.org/blog/opti/

Here is a simple harmonic oscillator example using the more raw casadi interface. x is position, v is velocity, u is a control force. I'm using a very basic leapfrog integration. You tend to have to stack things into a single vector with vertcat when building the final problem.

from casadi import *
import matplotlib.pyplot as plt

g = 9.8
N = 100
x = SX.sym('x', N)
v = SX.sym('v', N)
u = SX.sym('u', N-1)
#theta = SX('theta', N)
#thdot = SX('thetadot', N)
dt = 0.1
constraints = [x[0]-1, v[0]]  # expressions that must be zero

Very fast. Very impressive. Relatively readable code. I busted this out in like 15 minutes. Ipopt solves the thing in the blink of an eye (about 0.05s self-reported). It might be even faster if I warm started it with a good solution, as I would in online control (which may be feasible at this speed). You can add the initial condition as a parameter to the problem.

I'm a guy who is somewhat familiar with Haskell who is trying to learn Rust. So I thought I'd try to replicate some cool Haskell functionality in Rust. I would love to hear comments, because I'm trying to learn. I have no sense of Rust aesthetics yet, and in particular I have no idea how this interacts with the borrow system. What follows is a pretty rough brain dump.

GADTs (Generalized algebraic data types) are an extension in Haskell that allows you to write constrained type signatures for your data constructors. They also change how the type checking of pattern matching is processed.

GADTs are sometimes described/faked by building data types that hold equality/unification constraints. Equality constraints in Haskell like a ~ Int are fairly magical, and the Rust compiler does not support them in an obvious way. Maybe this is the next project: figure out how to fake 'em if one can. I don't think this is promising though, because faking them will be a little wonky, and then GADTs are a little wonky on top of that. See https://docs.rs/refl/0.1.2/refl/ So we'll go another (related) road.

This is roughly what GADTs look like in Haskell.

data MyGadt a where
  TBool :: Bool -> MyGadt Bool
  TInt :: Int -> MyGadt Int

And here is one style of encoding using smart constructors and a typeclass for elimination (pattern matching is replicated as a function that takes callbacks for the data held in the different cases). Regular functions can have a more restricted type signature than the most general one their implementation implies. The reason to use a typeclass is so that we can write the eliminator as returning the same type that the GADT supplies. There isn't an explicit equality constraint; it's a kind of Leibniz equality.

elimMyGadt _ _ _ = error "How did TBool2 get type MyGadt2 Int?"

instance MyGadtElim Bool where
  elimMyGadt (TBool2 x) fb fi = fb x
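Only fragments of the encoding survive above. A self-contained reconstruction of the idea might look like this (the exact names and instance bodies in the original may differ):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A plain data type standing in for the GADT; the smart constructors
-- below are what pin down the type parameter.
data MyGadt2 a = TBool2 Bool | TInt2 Int

tBool :: Bool -> MyGadt2 Bool
tBool = TBool2

tInt :: Int -> MyGadt2 Int
tInt = TInt2

-- Elimination as a typeclass: instance selection on the index plays the
-- role of the type refinement that GADT pattern matching would give us.
class MyGadtElim a where
  elimMyGadt :: forall f. MyGadt2 a -> (Bool -> f Bool) -> (Int -> f Int) -> f a

instance MyGadtElim Bool where
  elimMyGadt (TBool2 x) fb _ = fb x
  elimMyGadt _ _ _ = error "How did TInt2 get type MyGadt2 Bool?"

instance MyGadtElim Int where
  elimMyGadt (TInt2 x) _ fi = fi x
  elimMyGadt _ _ _ = error "How did TBool2 get type MyGadt2 Int?"
```

For example, `elimMyGadt (tInt 3) Just (\i -> Just (i + 1))` picks the Int instance and yields `Just 4`, with the result type `f a` tracking the GADT's index.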

The

forall f :: * -> *

is a problem for Rust. Rust does not have higher kinded types, although they can be faked to some degree. https://gist.github.com/CMCDragonkai/a5638f50c87d49f815b8 There are murmurs of Associated Type Constructors / GATs, whatever those are, that help ease the pain, but I'm pretty sure they are not implemented anywhere yet.

I'm going to do something related: a defunctionalization of the higher kinded types. We make an application trait that will apply the given type function tag to the argument. What I'm doing is very similar to what happens in the singletons library, so we may be getting some things for free.

Then, in order to define a new typelevel function, rip out a quick tag type and an App impl.

struct F1 {}
impl<A> App1<A> for F1 {
    type T = Vec<Vec<A>>;
}

It might be possible to sugar this up with a macro. It may also be possible to write typelevel functions in a point-free style without defining new function tag names. The combinators Id, Comp, Par, Fst, Snd, Dup, and Const are all reasonably definable and fairly clear for small functions. Also the combinator S, if you want to talk SKI combinatory calculus, which is unfit for humans. https://en.wikipedia.org/wiki/SKI_combinator_calculus For currying, I used a number for how many arguments are left to be applied (I'm not sure I've been entirely consistent with these numbers). You need to do currying quite manually. It may be better to work with tuplized arguments.

struct Vec1 {} // number is the number of applications left.
impl<A> App1<A> for Vec1 {
    type T = Vec<A>;
}

struct Id();
impl<A> App1<A> for Id {
    type T = A;
}

// partially applied const.
// struct Const()
// call this next one const1
struct Const1();
struct Const<B>(PhantomData<B>); // should be Const1

Anyway, the following is a translation of the above Haskell (well, I didn't wrap an actual i64 or bool in there, but I could have, I think). You need to hide the actual constructors labeled INTERNAL in a user-inaccessible module.

enum Gadt<A> {
    TIntINTERNAL(PhantomData<A>),
    TBoolINTERNAL(PhantomData<A>)
}

// then build the specialized constructors
fn TBool() -> Gadt<bool> {
    Gadt::TBoolINTERNAL(PhantomData)
}

fn TInt() -> Gadt<i64> {
    Gadt::TIntINTERNAL(PhantomData)
}

The smart constructors put the right type in the parameter spot.

Then pattern matching is a custom trait per gadtified type. Is it possible to unify the different elimination traits that will come up into a single Elim trait? I'm 50-50 about whether this is possible. What we're doing is a kind of fancy map_or_else, if that helps you.

            Gadt::TIntINTERNAL(PhantomData) => panic!("Somehow TInt has type Gadt<bool>"),
            Gadt::TBoolINTERNAL(PhantomData) => case1 // Will never be reached though. god willing
        }
    }
}

Usage. You have to explicitly pass the return type function to the eliminator. No inference is done for you. It’s like Coq’s match but worse. BTW the dbg! macro is the greatest thing on earth. Well done, Rust.

let z = TInt().gadtElim::<Id>(true, 34);
let z2 = TBool().gadtElim::<Id>(true, 34);
dbg!(z);
dbg!(z2);

struct F7 {} // You need to do this. Kind of sucks. macroify? App!(F<A> = Vec<A>)

One could also make an Eq a b type with Refl similarly. Then we need typelevel function tags that take two type parameters. Which, with currying or tupling, we may already have.

Questions:

Is this even good? Or is it a road of nightmares? Is this even emulating GADTs or am I just playing phantom type games?

We aren’t at full gadt. We don’t have existential types. Rust has some kind of existential story evolving (already there?), but the state of it is confusing to me. Something to play with. Higher rank functions would help?

Are overlapping typeclasses a problem in Rust?

Again, I have given nearly zero thought to borrowing and how it interacts with this. I'm a Rust n00b. I should think about it. Different eliminators based on whether you own or are borrowing?

Networkx also has its own flow solvers, but cvxpy gives you some interesting flexibility, like making the problem mixed integer, adding quadratic terms, and other goodies. Plus it is very easy to get going, as you'll see.

So here’s a basic example of putting these two together. Very straightforward and cool.

import networkx as nx
import cvxpy as cvx
import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import lil_matrix

# graph is a networkx graph from somewhere
# print(edgedict)
nEdges = len(graph.edges)
nNodes = len(graph.nodes)

posflow = cvx.Variable(nEdges)
negflow = cvx.Variable(nEdges)
# split flow into positive and negative parts so we can talk about absolute value.
# Perhaps I should let cvxpy do it for me
constraints = [0 <= posflow, 0 <= negflow]
absflow = posflow + negflow
flow = posflow - negflow

L = nx.incidence_matrix(graph, oriented=True)
source = np.zeros(nNodes)  # lil_matrix(n_nodes)
# just some random source placement.
source[7] = 1
source[25] = -1

# cvxpy needs sparse matrices wrapped.
Lcvx = cvx.Constant(L)
# sourcecvx = cvx.Constant(source)

# flow conservation
constraints.append(Lcvx * flow == source)
# can put other funky inequality constraints on things.

I’ve found looking at my statistics that short, to the point, no bull crap posts are the most read and probably most useful.

I've been tinkering around with typed Template Haskell, which a cursory internet search doesn't make obvious even exists. The first thing to come up is a GHC implementation wiki that seems like it might be stalled in some development branch. No, typed Template Haskell has been in GHC since 7.8. And the first thing that comes up on a Template Haskell google search is the style of Template Haskell where you get deep into the guts of the syntax tree, which is much less fun.

Here’s a basic addition interpreter example.

{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH
import Language.Haskell.TH.Syntax

data Expr = Val Int | Add Expr Expr

eval' :: Expr -> Int
eval' (Val n) = n
eval' (Add e1 e2) = (eval' e1) + (eval' e2)

eval :: Expr -> TExpQ Int
eval (Val n) = [|| n ||]
eval (Add e1 e2) = [|| $$(eval e1) + $$(eval e2) ||]

The typed splice $$ takes an a out of TExpQ a. The typed quote [|| ||] puts it in. I find that you tend to be able to follow the types to figure out what goes where: if you're returning a TExpQ, you probably need to start a quote; if you need to recurse, you often need to splice; etc.

You tend to have to put the actual use of the splice in a different file. GHC will complain otherwise.

ex1 :: Int
ex1 = $$(eval (Add (Val 1) (Val 1)))

At the top of your file put this to have the template haskell splices dumped into a file

{-# OPTIONS_GHC -ddump-splices -ddump-to-file #-}

Or have your package.yaml look something like this

executables:
  staged-exe:
    main: Main.hs
    source-dirs: app
    ghc-options:
    - -threaded
    - -rtsopts
    - -with-rtsopts=-N
    - -ddump-splices
    - -ddump-to-file
    dependencies:
    - staged

If you’re using stack, you need to dive into .stack/dist/x86/Cabal/build and then /src or into the executable folder something-exe-tmp/app to find .dump-splices files.

app/Main.hs:11:10-35: Splicing expression
    eval (Add (Val 1) (Val 1)) ======> (1 + 1)

Nice. I don’t know whether GHC might have optimized this thing down anyhow or not, but we have helped the process along.

Some more examples: unrolling the recursion on a power function (a classic)

-- ordinary version
power 0 x = 1
power n x = x * (power (n-1) x)

-- templated up
power' :: Int -> TExpQ (Int -> Int)
power' 0 = [|| const 1 ||]
power' n = [|| \x -> x * $$(power' (n-1)) x ||]

You can unroll a fibonacci calculation

-- unrolled fib without sharing
fib 0 = 1
fib 1 = 1
fib n = (fib (n-1)) + (fib (n-2))

-- we always need [|| ||] whenever there is a Code
fib' :: Int -> TExpQ Int
fib' 0 = [|| 1 ||]
fib' 1 = [|| 1 ||]
fib' n = [|| $$(fib' (n-1)) + $$(fib' (n-2)) ||]

This is blowing up in calculation though (no memoization, you silly head). We can implement sharing in the code using let expressions (adapted from https://wiki.haskell.org/The_Fibonacci_sequence). Something to think about.

fib4 :: Int -> TExpQ Int
fib4 n = go n [|| (0, 1) ||]
  where
    go :: Int -> TExpQ (Int, Int) -> TExpQ Int
    go n z | n == 0 = [|| let (a, b) = $$(z) in a ||]
           | otherwise = go (n-1) [|| let (a, b) = $$(z) in (b, a+b) ||]

Tinkering around, you'll occasionally find GHC can't splice and quote certain things. This is related to cross-stage persistence and lifting, which are confusing to me. You should look in the links below for more info. I hope I'll eventually get a feel for it.

If you want to see more of my fiddling in context to figure out how to get stuff to compile here’s my github that I’m playing around in https://github.com/philzook58/staged-fun

Oleg Kiselyov: I’m still trying to unpack the many things that are going on here. Oleg’s site and papers are a very rich resource.

I don’t think that staged metaprogramming requires finally tagless style, as I first thought upon reading some of his articles. It is just very useful.

You can write things in a finally tagless style such that you can write it once and have it interpreted as staged or in other ways. There is a way to also have a subclass of effects (Applicatives basically?) less powerful than Q that is sound.

In the last two posts, I described the basics of how to build and manipulate the Fibonacci anyon vector space in Haskell.

As a personal anecdote, trying to understand the category theory behind the theory of anyons is one of the reasons I started learning Haskell. These spaces are typically described using the terminology of category theory. I found it very frustrating that anyons were described in an abstract and confusing terminology. I really wondered if people were just making things harder than they have to be. I think Haskell is a perfect playground to clarify these constructions. While the category theory stuff isn’t strictly necessary, it is interesting and useful once you get past the frustration.

Unfortunately, I can't claim that this article is going to be enough to take you from zero to categorical godhood, but I hope everyone can get something out of it. Give it a shot if you're interested, and don't sweat the details.

The Aroma of Categories

I think Steve Awodey gives an excellent nutshell of category theory in the introductory section to his book:

“What is category theory? As a first approximation, one could say that category theory is the mathematical study of (abstract) algebras of functions. Just as group theory is the abstraction of the idea of a system of permutations of a set or symmetries of a geometric object, so category theory arises from the idea of a system of functions among some objects.”

For my intuition, a category is any “things” that plug together. The “in” of a thing has to match the “out” of another thing in order to hook them together. In other words, the requirement for something to be a category is having a notion of composition. The things you plug together are called the morphisms of the category and the matching ports are the objects of the category. The additional requirement of always having an identity morphism (a do-nothing connection wire) is usually there once you have composition, although it is good to take especial note of it.

Category theory is an elegant framework for how to think about these composing things in a mathematical way. In my experience, thinking in these terms leads to good abstractions, and useful analogies between disparate things.

It is helpful for any abstract concept to list some examples to expose the threads that connect them. Category theory in particular has a ton of examples connecting to many other fields because it is a science of analogy. These are the examples of categories I usually reach for. Which one feels the most comfortable to you will depend on your background.

Hask. Objects are types. Morphisms are functions between those types

Vect. Objects are vector spaces, morphisms are linear maps (roughly matrices).

Preorders. Objects are values. Morphisms are the inequalities between those values.

Sets. Objects are Sets. Morphisms are functions between sets.

Cat. Objects are categories, Morphisms are functors. This is a pretty cool one, although complete categorical narcissism.

Systems and Processes.

The Free Category of a directed graphs. Objects are vertices. Morphisms are paths between vertices

Generic Programming and Typeclasses

The goal of generic programming is to run programs that you write once in many ways.

There are many ways to approach this generic programming goal, but one way this is achieved in Haskell is by using typeclasses. Typeclasses allow you to overload names so that they mean different things based upon the types involved. Adding a vector is different than adding a float or an int, but there are programs that can be written that reasonably apply in both situations.
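A tiny illustration of that overloading (my own example, not from the post): a toy vector type gets its own Num instance, and a function written once against Num then runs on ints, doubles, and vectors alike.

```haskell
-- A toy 2D vector with its own notion of addition.
data V2 = V2 Double Double deriving (Show, Eq)

instance Num V2 where
  V2 a b + V2 c d = V2 (a + c) (b + d)
  V2 a b * V2 c d = V2 (a * c) (b * d)  -- componentwise, just to satisfy the class
  abs (V2 a b) = V2 (abs a) (abs b)
  signum (V2 a b) = V2 (signum a) (signum b)
  negate (V2 a b) = V2 (negate a) (negate b)
  fromInteger n = V2 (fromInteger n) (fromInteger n)

-- Written once against the Num class; + is resolved per type.
double :: Num a => a -> a
double x = x + x
```

Here `double (3 :: Int)` uses machine addition while `double (V2 1 2)` uses the componentwise instance, with no change to `double` itself.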

Writing your program in a way that it applies to disparate objects requires abstract ways of talking about things. Mathematics is an excellent place to mine for good abstractions. In particular, the category theory abstraction has demonstrated itself to be a very useful unified vocabulary for mathematical topics. I, and others, find it also to be a beautiful aesthetic by which to structure programs.

The Haskell base library defines a Category typeclass in Control.Category. In order to use it, you need to import the Prelude in an unusual way.

{-# LANGUAGE NoImplicitPrelude #-}

import Prelude hiding ((.), id)

The Category typeclass is defined on the type that corresponds to the morphisms of the category. This type has a slot for the input type and a slot for the output type. In order for something to be a category, it has to have an identity morphism and a notion of composition.

class Category cat where
  id :: cat a a
  (.) :: cat b c -> cat a b -> cat a c

The most obvious example of this Category typeclass is the instance for the ordinary Haskell function (->). The identity corresponds to the standard Haskell identity function, and composition to ordinary Haskell function composition.

instance Category (->) where
  id = \x -> x
  f . g = \x -> f (g x)
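As a quick sanity check, here is a small usage sketch (double, inc, and pipeline are throwaway names I'm inventing for illustration); for (->), the generalized id and (.) coincide with their Prelude counterparts.

```haskell
import Prelude hiding ((.), id)
import Control.Category ((.), id)

double :: Int -> Int
double x = x * 2

inc :: Int -> Int
inc x = x + 1

-- Composing through the Category interface; for (->) this is
-- just ordinary function composition with an identity thrown in.
pipeline :: Int -> Int
pipeline = double . inc . id
```

Here pipeline 3 first increments and then doubles, giving 8.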

Another example of a category that we’ve already encountered is that of linear operators, which we’ll call LinOp. LinOp is an example of a Kleisli arrow, a category built using monadic composition rather than regular function composition. In this case, the monad Q from my first post takes care of the linear pipework that happens between every application of a LinOp. The fish operator <=< is monadic composition from Control.Monad.

newtype LinOp a b = LinOp { runLin :: a -> Q b }

instance Category LinOp where
  id = LinOp pure
  (LinOp f) . (LinOp g) = LinOp (f <=< g)

A related category is the FibOp category. This is the category of operations on Fibonacci anyons, which are also linear operations. It is LinOp specialized to the Fibonacci anyon space. All the operations we've previously discussed (F-moves, braiding) are in this category.

The "feel" of category theory takes focus away from the objects and tries to place focus on the morphisms. There is a style of functional programming called "point-free" where you avoid ever giving variables explicit names and instead use pipe-work combinators like (.), fst, snd, or (***). This also has a feel of de-emphasizing objects. Many of the combinators that get used in this style have categorical analogs. In order to generically use categorical typeclasses, you have to write your program in this point free style.
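To make the point-free idea concrete, here is a hedged side-by-side sketch (swapAndScale is a made-up example, not from the post):

```haskell
import Control.Arrow ((***), (&&&))

-- Pointful: we name the tuple's contents explicitly.
swapAndScale :: (Int, Int) -> (Int, Int)
swapAndScale (x, y) = (y * 2, x)

-- Point-free: the same function assembled purely from plumbing
-- combinators, never naming any variables.
swapAndScale' :: (Int, Int) -> (Int, Int)
swapAndScale' = ((* 2) *** id) . (snd &&& fst)
```

Both compute the same function; the second is the style the categorical typeclasses force on you.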

It is possible for a program written in the categorical style to be reinterpreted as a program, a linear algebra operation, a circuit, or a diagram, all without changing the actual text of the program. For more on this, I highly recommend Conal Elliott's Compiling to Categories, which also puts forth a methodology to avoid the somewhat unpleasant point-free style using a compiler plugin. This might be an interesting place to mine for a good quantum programming language. YMMV.

Monoidal Categories.

Putting two processes in parallel can be considered a kind of product. A category is monoidal if it has a product of this flavor, plus isomorphisms for reassociating objects and producing or consuming a unit object. This will make more sense when you see the examples.

We can sketch out this monoidal category concept as a typeclass, where we use () as the unit object.

class Category k => Monoidal k where
  parC :: k a c -> k b d -> k (a,b) (c,d)
  assoc :: k ((a,b),c) (a,(b,c))
  assoc' :: k (a,(b,c)) ((a,b),c)
  leftUnitor :: k ((),a) a
  leftUnitor' :: k a ((),a)
  rightUnitor :: k (a,()) a
  rightUnitor' :: k a (a,())

Instances

In Haskell, the standard monoidal product for regular Haskell functions is (***) from Control.Arrow. It takes two functions and turns them into a single function that does the same stuff, but on a tuple of the original inputs. The associators and unitors are fairly straightforward. We can freely dump unit () and get it back because there is only one possible value for it.

(***) :: (a -> c) -> (b -> d) -> ((a,b) -> (c,d))
f *** g = \(x,y) -> (f x, g y)

instance Monoidal (->) where
  parC f g = f *** g
  assoc ((x,y),z) = (x,(y,z))
  assoc' (x,(y,z)) = ((x,y),z)
  leftUnitor (_, x) = x
  leftUnitor' x = ((), x)
  rightUnitor (x,_) = x
  rightUnitor' x = (x, ())

The monoidal product we'll choose for LinOp is the tensor/outer/Kronecker product.

Otherwise, LinOp is basically a monadically lifted version of (->). The one dimensional vector space Q () is completely isomorphic to just a number. Taking the Kronecker product with it is basically the same thing as scalar multiplying (up to some shuffling).

instance Monoidal LinOp where
  parC (LinOp f) (LinOp g) = LinOp $ \(a,b) -> kron (f a) (g b)
  assoc = LinOp (pure . assoc)
  assoc' = LinOp (pure . unassoc)
  leftUnitor = LinOp (pure . leftUnitor)
  leftUnitor' = LinOp (pure . leftUnitor')
  rightUnitor = LinOp (pure . rightUnitor)
  rightUnitor' = LinOp (pure . rightUnitor')

Now for a confession. I made a misstep in my first post. In order to make our Fibonacci anyons jive nicely with our current definitions, I should have defined our identity particle using type Id = () rather than data Id. We'll do that now. In addition, we need some new primitive operations for absorbing and emitting identity particles that did not feel relevant at that time.

rightUnit :: FibTree e (a, Id) -> Q (FibTree e a)
rightUnit (TTI t _) = pure t
rightUnit (III t _) = pure t

rightUnit' :: FibTree e a -> Q (FibTree e (a, Id))
rightUnit' t@(TTT _ _) = pure (TTI t ILeaf)
rightUnit' t@(TTI _ _) = pure (TTI t ILeaf)
rightUnit' t@(TIT _ _) = pure (TTI t ILeaf)
rightUnit' t@(III _ _) = pure (III t ILeaf)
rightUnit' t@(ITT _ _) = pure (III t ILeaf)
rightUnit' t@(ILeaf) = pure (III t ILeaf)
rightUnit' t@(TLeaf) = pure (TTI t ILeaf)

leftUnit :: FibTree e (Id, a) -> Q (FibTree e a)
leftUnit = rightUnit <=< braid
-- braid vs braid' doesn't matter, but it has a nice symmetry.

leftUnit' :: FibTree e a -> Q (FibTree e (Id, a))
leftUnit' = braid' <=< rightUnit'

With these in place, we can define a monoidal instance for FibOp. The extremely important and intriguing F-move operations are the assoc operators for the category. While other categories have assoc operators that feel nearly trivial, these F-moves don't feel so trivial.

The parC operation is extremely useful to explicitly note in a program. It is an opportunity for optimization. It is possible to inefficiently implement parC in terms of other primitives, but it is very worthwhile to implement it in new primitives (although I haven't here). In the case of (->), parC is an explicit location where actual computational parallelism is available. Once you perform parC, it is no longer apparent that the left and right sides of the tuple share no data during the computation. In the case of LinOp and FibOp, parC is a location where you can perform factored linear computations. The matrix-vector product can be performed on each factor individually: densifying the Kronecker product and then multiplying costs far more than multiplying by the factors separately, a significant savings. Applied category theory indeed.
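As a rough illustration of the factored-computation claim, here is a toy dense-list sketch (kron, matVec, and kronMat are helper names I'm inventing here, not the post's Q-monad machinery). The mixed-product property (A ⊗ B)(x ⊗ y) = (Ax) ⊗ (By) is what lets parC act on the small factors instead of the big densified matrix:

```haskell
-- Kronecker product of two vectors represented as lists.
kron :: Num a => [a] -> [a] -> [a]
kron xs ys = [x * y | x <- xs, y <- ys]

-- Dense matrix-vector product.
matVec :: Num a => [[a]] -> [a] -> [a]
matVec m v = [sum (zipWith (*) row v) | row <- m]

-- Kronecker product of two matrices: each pair of rows gives
-- one row of pairwise element products.
kronMat :: Num a => [[a]] -> [[a]] -> [[a]]
kronMat a b = [[p * q | p <- ra, q <- rb] | ra <- a, rb <- b]
```

For two n-by-n factors, the densified product touches on the order of n⁴ entries while the factored route touches about 2n², which is the savings alluded to above.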

Laws

Like many typeclasses, these monoidal morphisms are assumed to follow certain laws. Here is a sketch (for a more thorough discussion check out the wikipedia page):

Functions with a tick at the end, like assoc', should be the inverses of the functions without the tick, like assoc, e.g. assoc . assoc' = id

The parC operation is (bi)functorial, meaning it obeys the commutation law parC (f . f') (g . g') = (parC f g) . (parC f' g') i.e. it doesn't matter if we perform composition before or after the parC.

The pentagon law for assoc: Applying leftbottom is the same as applying topright

leftbottom :: (((a,b),c),d) -> (a,(b,(c,d)))
leftbottom = assoc . assoc

topright :: (((a,b),c),d) -> (a,(b,(c,d)))
topright = (id *** assoc) . assoc . (assoc *** id)

The triangle law for the unitors:

topright' :: ((a,()),b) -> (a,b)
topright' = (id *** leftUnitor) . assoc

leftside :: ((a,()),b) -> (a,b)
leftside = rightUnitor *** id
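Since (->) instantiates all of this, the laws can be spot-checked directly on concrete tuples. A hedged, self-contained sketch (local copies of assoc and the unitors, plus throwaway names like triTop and bifunL for the two routes of each law):

```haskell
import Control.Arrow ((***))

assoc :: ((a, b), c) -> (a, (b, c))
assoc ((x, y), z) = (x, (y, z))

leftUnitor :: ((), a) -> a
leftUnitor (_, x) = x

rightUnitor :: (a, ()) -> a
rightUnitor (x, _) = x

-- Pentagon: both routes fully right-associate a 4-tuple.
leftbottom :: (((a, b), c), d) -> (a, (b, (c, d)))
leftbottom = assoc . assoc

topright :: (((a, b), c), d) -> (a, (b, (c, d)))
topright = (id *** assoc) . assoc . (assoc *** id)

-- Triangle: both routes delete the inner unit.
triTop :: ((a, ()), b) -> (a, b)
triTop = (id *** leftUnitor) . assoc

triLeft :: ((a, ()), b) -> (a, b)
triLeft = rightUnitor *** id

-- Bifunctoriality: compose-then-pair equals pair-then-compose.
bifunL, bifunR :: (Int, Int) -> (Int, Int)
bifunL = ((+ 1) . (* 2)) *** ((+ 3) . (* 4))
bifunR = ((+ 1) *** (+ 3)) . ((* 2) *** (* 4))
```

For (->) these are equalities of ordinary functions, so feeding in sample tuples checks them pointwise.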

String Diagrams

String diagrams are a diagrammatic notation for monoidal categories. Morphisms are represented by boxes with lines.

Composition g . f is made by connecting lines.

The identity id is a raw arrow.

The monoidal product of morphisms is represented by placing lines next to each other.

The diagrammatic notation is so powerful because the laws of monoidal categories are built so deeply into it that they can go unnoticed. Identities can be put in or taken away. Association doesn't even appear in the diagram. The boxes in the notation can naturally be pushed around and commuted past each other.

This corresponds to the bifunctoriality property of parC.

What expression does the following diagram represent?

Is it (in Haskell notation) parC (f . f') (g . g')?

Or is it (in Haskell notation) (parC f g) . (parC f' g')?

Answer: It doesn't matter because the functorial requirement of parC means the two expressions are identical.

There are a number of notations you might meet in the world that can be interpreted as string diagrams. Three that seem particularly pertinent are:

Braided and Symmetric Monoidal Categories: Categories That Braid and Swap

Some monoidal categories have a notion of being able to braid morphisms. If so, it is called a braided monoidal category (go figure).

class Monoidal k => Braided k where
  over :: k (a,b) (b,a)
  under :: k (a,b) (b,a)

The over and under morphisms are inverses of each other: over . under = id. The over morphism pulls the left strand over the right, whereas under pulls the left strand under the right. The diagram definitely helps in understanding this definition.

These over and under morphisms need to play nice with the associator of the monoidal category. These are laws that a valid instance of the typeclass should follow. We actually already met them in the very first post.

If the over and under of the braiding are the same the category is a symmetric monoidal category. This typeclass needs no extra functions, but it is now intended that the law over . over = id is obeyed.

class Braided k => Symmetric k where

When we draw a braid in a symmetric monoidal category, we don't have to be careful with which one is over and under, because they are the same thing.

The examples that come soonest to mind have this symmetric property; for example, (->) is a symmetric monoidal category.

swap :: (a,b) -> (b,a)
swap (x,y) = (y,x)

instance Braided (->) where
  over = swap
  under = swap

instance Symmetric (->)

Similarly, LinOp has a notion of swapping that is just a lifting of swap.

instance Braided LinOp where
  over = LinOp (pure . swap)
  under = LinOp (pure . swap)

instance Symmetric LinOp

However, FibOp is not symmetric! This is perhaps at the core of what makes FibOp so interesting.

instance Braided FibOp where
  over = FibOp braid
  under = FibOp braid'

Automating Association

Last time, we spent a lot of time doing weird typelevel programming to automate the pain of manual association moves. We can do something quite similar to make the categorical reassociation less painful, and more like the carefree ideal of the string diagram, if we replace composition (.) with a slightly different operator.

(...) :: ReAssoc b b' => FibOp b' c -> FibOp a b -> FibOp a c
(FibOp f) ... (FibOp g) = FibOp $ f <=< reassoc <=< g

Before defining reassoc, let's define a helper LeftCollect typeclass. Given a typelevel integer n, it will reassociate the tree using a binary search procedure to make sure the left branch l at the root has Count l = n.

class LeftCollect n gte a b | n gte a -> b where
  leftcollect' :: FibTree e a -> Q (FibTree e b)

-- dispatch by comparing n with the leaf count of the left branch
leftcollect :: forall n gte a b e. (gte ~ CmpNat n (LeftCount a), LeftCollect n gte a b) =>
  FibTree e a -> Q (FibTree e b)
leftcollect x = leftcollect' @n @gte x

instance (k ~ Count l, -- n is larger than the left branch, so collect the rest from the right
          r ~ (l',r'),
          n' ~ (n - k),
          gte ~ CmpNat n' (Count l'),
          LeftCollect n' gte r (l'',r'')) => LeftCollect n 'GT (l,r) ((l,l''),r'') where
  leftcollect' x = do
    x' <- rmap (leftcollect @n') x -- (l,(l'',r''))
    fmove x' -- ((l,l''),r'') -- size of (l,l'') = k + (n-k) = n

instance (l ~ (l',r'),
          gte ~ CmpNat n (Count l'),
          LeftCollect n gte l (l'',r'')) => LeftCollect n 'LT (l,r) (l'',(r'',r)) where
  leftcollect' x = do
    x' <- lmap (leftcollect @n) x -- ((l'',r''),r) -- l'' is of size n
    fmove' x' -- (l'',(r'',r))

instance LeftCollect n 'EQ (l,r) (l,r) where
  leftcollect' = pure

Once we have LeftCollect, the typeclass ReAssoc is relatively simple to define. Given a pattern tree, we can count the elements in its left branch and LeftCollect the source tree to match that number. Then we recursively apply reassoc in the left and right branches of the tree. This means that every corresponding node has the same number of leaves under it, hence the trees will end up in an identical shape (modulo me mucking something up).

class ReAssoc a b where
  reassoc :: FibTree e a -> Q (FibTree e b)

instance (n ~ Count l',
          gte ~ CmpNat n (Count l),
          LeftCollect n gte (l,r) (l'',r''),
          ReAssoc l'' l',
          ReAssoc r'' r') => ReAssoc (l,r) (l',r') where
  reassoc x = do
    x' <- leftcollect @n x
    x'' <- rmap reassoc x'
    lmap reassoc x''

-- instance {-# OVERLAPS #-} ReAssoc a a where
--   reassoc = pure

instance ReAssoc Tau Tau where
  reassoc = pure

instance ReAssoc Id Id where
  reassoc = pure

It seems likely that one could write equivalent instances that would work for an arbitrary monoidal category with a bit more work. We are aided somewhat by the fact that FibOp has a finite universe of possible leaf types to work with.

Closing Thoughts

While our categorical typeclasses are helpful and nice, I should point out that they are not going to cover all the things that can be described as categories, even in Haskell, just like the Functor typeclass does not describe all the conceptual functors you might meet. One beautiful monoidal category is that of Haskell Functors under the monoidal product of functor composition. More on this to come, I think. https://parametricity.com/posts/2015-07-18-braids.html

We never even touched the dot product in this post. This corresponds to another doodle in a string diagram, and another power to add to your category. It is somewhat trickier to work with cleanly in familiar Haskell terms, I think because (->) is at least not super obviously a dagger category?

The Rosetta Stone paper by Baez and Stay is probably the conceptual daddy of this entire post (and more).

Bartosz Milewski's Category Theory for Programmers blog (online book, really) and youtube series are where I learned most of what I know about category theory. I highly recommend them (huge Bartosz fanboy).

There are fancier embeddings of category theory and monoidal categories than I've shown here. Often you want constrained categories and the ability to choose unit objects. I took a rather simplistic approach here.

Applicative bidirectional programming (PDF), by Kazutaka Matsuda and Meng Wang

In it, they use a couple of interesting tricks to make lens programming more palatable. Lenses often need to be programmed in a point-free style, which is rough, but by using some combinators, they are able to program lenses in a pointful style (with some pain points still left over). It is a really interesting, well-written paper. Lots ‘o’ Yoneda and laws. I’m not doing it justice. Check it out!

A while back I noted that reverse mode auto-differentiation has a type very similar to a lens and in fact you can build a working reverse mode automatic differentiation DSL out of lenses and lens-combinators. Many things about lenses, but not all, transfer over to automatic differentiation. The techniques of Matsuda and Wang do appear to transfer fairly well.

This is interesting to me for another reason. Their lift2 and unlift2 functions remind me very much of my recent approach to compiling to categories. The lift2 function is fanning a lens pair. This is basically what my FanOutput typeclass automated. unlift2 is building the input for a function by supplying a tuple of projection lenses. This is what my BuildInput typeclass did. I think their style may extend to many cartesian monoidal categories, not just lenses.

One can use a function b -> a in many of the situations one can use an a in. You can do elegant things by making a Num typeclass for b -> a, for example. This little fact seems to extend to other categories as well. By making a Num typeclass for Lens s a when a is a Num, we can use reasonable looking notation for arithmetic.

t1 :: Num a => Lens (a,a) a
t1 = unlift2 $ \(x,y) -> x + y*y + y*7
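The function-space half of this claim can be sketched concretely. This is a hedged, self-contained toy (an orphan-style Num instance for a -> b that I'm writing for illustration; it is not code from the paper):

```haskell
-- Lift Num pointwise to the function space a -> b, so that
-- arithmetic notation builds functions directly.
instance Num b => Num (a -> b) where
  f + g = \x -> f x + g x
  f - g = \x -> f x - g x
  f * g = \x -> f x * g x
  negate f = negate . f
  abs f = abs . f
  signum f = signum . f
  fromInteger n = const (fromInteger n)

-- x*x + 7*x + 3, written without ever naming the argument:
-- the literals 7 and 3 become constant functions via fromInteger.
poly :: Integer -> Integer
poly = id * id + 7 * id + 3
```

Here poly 2 evaluates to 4 + 14 + 3 = 21; the Lens instance plays the same game one categorical level up.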

They spend some time discussing the necessity of a Poset typeclass. For actual lawful lenses, the dup implementation needs a way to recombine multiple adjustments to the same object. In the AD-lens case, dup takes care of this by adding together the differentials. This means that everywhere they needed an Eq typeclass, we can use a Num typeclass. There may be usefulness to building a wrapped type data NZ a = Zero | NonZero a like their Tag type to accelerate the many 0 values that may be propagating through the system.

Unfortunately, as is, the performance of this is abysmal. Maybe there is a way to fix it? Unlifting and lifting destroy a lot of sharing and often necessitate adding many redundant zeros. Why are you doing reverse mode differentiation unless you care about performance? Forward mode is simpler to implement. In the intended use case of Matsuda and Wang, they are working with actual lawful lenses, which have far less computational content than AD-lenses. Good lawful lenses should just be shuffling stuff around a little. Maybe one can hope GHC is insanely intelligent and can optimize these zeros away. One point in favor of that is that our differentiation is completely pure (no mutation). Nevertheless, I suspect it will not do so without help. Being careful and unlifting and lifting manually may also help. In principle, I think the lensy approach could be pretty fast (since all it does is pack together exactly what you need to differentiate into a data type), but how to make it fast while still being easily programmable? It is also nice that it is pretty simple to implement. It is the simplest method that I know of if you needed to port operable reverse mode differentiation to a new library (Massiv?) or another functional language (Futhark?). And a smart compiler really does have a shot at finding optimizations/fusions.

While I was at it, unrelated to the paper above, I think I made a working generic auto differentiable fold lens combinator. Pretty cool. mapAccumL is a hot tip.

For practical Haskell purposes, all of this is a little silly with the good Haskell AD packages around, the most prominent being

It is somewhat interesting to note the similarity of the type forall s. Lens s appearing in the Matsuda and Wang approach to that of the forall s. BVar s monad appearing in the backprop package. In this case I believe that the s type variable plays the same role it does in the ST monad, protecting a mutating Wengert tape state held in the monad, but I haven’t dug much into it. I don’t know enough about backprop to know what to make of this similarity.

Last time we built the basic pieces we need to describe anyons in Haskell. Anyon models describe interesting physical systems where a set of particles (Tau and Id in our case) have certain splitting rules and peculiar quantum properties. The existence of anyons in a system are the core physics necessary to support topological quantum computation. In topological quantum computing, quantum gates are applied by braiding the anyons and measurements performed by fusing anyons together and seeing what particle comes out. Applying gates in this way has inherent error correcting properties.

The tree of particle production with particle-labelled leaves picks a basis for the anyon quantum vector space. An individual basis vector from this basis is specified by labelling the internal edges of the tree. We built a Haskell data type for a basic free vector space and functions for the basic R-moves for braiding two anyons and reassociating the tree into a new basis with F-moves. In addition, you can move around your focus within the tree by using the functions lmap and rmap. The github repo with that and what follows below is here.

Pain Points

We’ve built the atomic operations we need, but they work very locally and are quite manual. You can apply many lmap and rmap to zoom in to the leaves you actually wish to braid, and you can manually perform all the F-moves necessary to bring nodes under the same parent, but it will be rather painful.

The standard paper-and-pencil graphical notation for anyons is really awesome. You get to draw little knotty squiggles to calculate. It does not feel as laborious. The human eye and hand are great at applying a sequence of reasonably optimal moves to untangle the diagram efficiently. Our eye can take the whole thing in and our hand can zip around anywhere.

To try and bridge this gap, we need to build functions that work in some reasonable way on the global anyon tree and that automate simple tasks.

A Couple Useful Functions

Our first useful operation is pullLeftLeaf. This operation will rearrange the tree using F-moves to get the leftmost leaf associated all the way to the root. The leftmost leaf will then have the root as a parent.

Because the tree structure is in the FibTree a b data type, we need the tuple tree type of the pulled tree. This is a slightly non-trivial type computation.

In order to do this, we’ll use a bit of typelevel programming. If this is strange and alarming stuff for you, don’t sweat it too much. I am not the most elegant user of these techniques, but I hope that alongside my prose description you can get the gist of what we’re going for.

(Sandy Maguire has a new book on typelevel programming in Haskell out. Good stuff. Support your fellow Haskeller and toss him some buckos.)

class PullLeftLeaf a b | a -> b where
  pullLeftLeaf :: FibTree c a -> Q (FibTree c b)

instance PullLeftLeaf (Tau,c) (Tau,c) where
  pullLeftLeaf = pure

instance PullLeftLeaf (Id,c) (Id,c) where
  pullLeftLeaf = pure

instance PullLeftLeaf Tau Tau where
  pullLeftLeaf = pure

instance PullLeftLeaf Id Id where
  pullLeftLeaf = pure

instance (PullLeftLeaf (a,b) (a',b'),
          r ~ (a',(b',c))) => PullLeftLeaf ((a,b),c) r where
  pullLeftLeaf t = do
    t' <- lmap pullLeftLeaf t
    fmove' t'

The resulting tree type b is an easily computable function of the starting tree type a. That is what the “functional dependency” notation | a -> b in the typeclass definition tells the compiler.

The first 4 instances are base cases. If you’re all the way at the leaf, you basically want to do nothing. pure is the function that injects the classical tree description into a quantum state vector with coefficient 1.

The meat is in the last instance. In the case that the tree type matches ((a,b),c), we recursively call PullLeftLeaf on (a,b) which returns a new result (a',b'). Because of the recursion, this a' is the leftmost leaf. We can then construct the return type by doing a single reassociation step. The notation ~ forces two types to unify. We can use this conceptually as an assignment statement at the type level. This is very useful for building intermediate names for large expressions, as assert statements to ensure the types are as expected, and also occasionally to force unification of previously unknown types. It’s an interesting operator for sure.

The recursion at the type level is completely reflected in the actual function definition. We focus on the piece (a,b) inside t by using lmap. We do a recursive call to pullLeftLeaf, and finally fmove' performs the final reassociation move. It is all rather verbose, but straightforward I hope.

You can also build a completely similar PullRightLeaf.

A Canonical Right Associated Basis

One common way of dealing with larger trees is to pick a canonical basis of fully right associated trees. The fully right associated tree is a list-like structure. Its uniformity makes it easier to work with.

By recursively applying pullLeftLeaf, we can fully right associate any tree.

class RightAssoc a b | a -> b where
  rightAssoc :: FibTree c a -> Q (FibTree c b)

instance RightAssoc Tau Tau where
  rightAssoc = pure

instance RightAssoc Id Id where
  rightAssoc = pure

instance (PullLeftLeaf (a,b) (a',b'),
          RightAssoc b' b'',
          r ~ (a',b'')) => RightAssoc (a,b) r where
  rightAssoc t = do
    t' <- pullLeftLeaf t
    rmap rightAssoc t'

This looks quite similar to the implementation of pullLeftLeaf. It doesn’t actually have much logic to it. We apply pullLeftLeaf, then we recursively apply rightAssoc in the right branch of the tree.

B-Moves: Braiding in the Right Associated Basis

Now we have the means to convert any structure to its right associated canonical basis. In this basis, one can apply braiding to neighboring anyons using B-moves, which can be derived from the braiding R-moves and F-moves.

The B-move applies one F-move so that the two neighboring leaves share a parent, uses the regular braiding R-move, then applies the inverse F-move to return back to the canonical basis. Similarly, bmove' is the same thing except it applies the under braiding braid' rather than the over braiding braid.

bmove :: FibTree e (a,(b,c)) -> Q (FibTree e (b,(a,c)))
bmove t = do
  t' <- fmove t -- ((a,b),c)
  t'' <- lmap braid t' -- ((b,a),c)
  fmove' t'' -- (b,(a,c))

bmove' = fmove' <=< (lmap braid') <=< fmove -- point-free style for funzies. equivalent to the above except for braid'

Indexing to Leaves

We also may desire just specifying the integer index of where we wish to perform a braid. This can be achieved with another typeclass for iterated rmaping. When the tree is in canonical form, this will enable us to braid two neighboring leaves by an integer index. This index has to be a typelevel number because the output type depends on it.

In fact there is quite a bit of type computation. Given a total tree type s and an index n, this function will zoom into the subpart a of the tree at which we want to apply our function. The subpart a is replaced by b, and then the tree is reconstructed into t. t is s with the subpart a mapped into b. I have intentionally made this reminiscent of the type variables of the lens type Lens s t a b.

rmapN :: forall n gte s t a b e. (RMapN n gte s t a b, gte ~ (CmpNat n 0)) =>
  (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)
rmapN f t = rmapN' @n @gte f t

class RMapN n gte s t a b | n gte s b -> a t where
  rmapN' :: (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)

instance (a ~ s, b ~ t) => RMapN 0 'EQ s t a b where
  rmapN' f t = f t

instance (RMapN (n-1) gte r r' a b,
          gte ~ (CmpNat (n-1) 0),
          t ~ (l,r')) => RMapN n 'GT (l,r) t a b where
  rmapN' f t = rmap (rmapN @(n-1) f) t

This looks much noisier than it has to because we need to work around some of the unfortunate realities of using the typeclass system to compute types. We can’t just match on the number n in order to pick which instance to use because the patterns 0 and n are overlapping. The pattern n can match the number 0 if n ~ 0. The pattern matching in the type instance is not quite the same as the regular Haskell pattern matching we use to define functions. The order of the definitions does not matter, so you can’t have default cases. The patterns you use cannot be unifiable. In order to fix this, we make the condition of whether n is greater than 0 an explicit type variable gte. Now the different cases cannot unify. It is a very common trick to need a variable representing some branching condition.

For later convenience, we define rmapN, which lets us avoid supplying the necessary comparison type gte.
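Stripped of the anyon machinery, the trick looks like this minimal, self-contained sketch (Apply and applyN are names I made up; it applies a function n times, dispatching on CmpNat n 0 instead of overlapping patterns):

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
import GHC.TypeLits (CmpNat, Nat, type (-))

-- gte is an explicit Ordering index so the 0 and n cases
-- cannot unify: 'EQ selects the base case, 'GT the recursion.
class Apply (n :: Nat) (gte :: Ordering) where
  applyN' :: (a -> a) -> a -> a

instance Apply 0 'EQ where
  applyN' _ x = x

instance (gte ~ CmpNat (n - 1) 0, Apply (n - 1) gte) => Apply n 'GT where
  applyN' f x = f (applyN' @(n - 1) @gte f x)

-- Wrapper computing the comparison type, like rmapN does for gte.
applyN :: forall n gte a. (gte ~ CmpNat n 0, Apply n gte) => (a -> a) -> a -> a
applyN = applyN' @n @gte
```

With this, applyN @3 (+ 1) iterates the increment three times.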

Parentifying Leaves Lazily

While it is convenient to describe anyon computations in a canonical basis, it can be quite inefficient. Converting an arbitrary anyon tree into the standard basis will often result in a dense vector. A natural thing to do for the sake of economy is only do reassociation on demand.

The algorithm for braiding two neighboring leaves is pretty straightforward. We need to reassociate these leaves so that they have the same parent. First we need the ability to map into the least common ancestor of the two leaves. To reassociate these two leaves to have a common parent, we pullRightLeaf the left subtree and then pullLeftLeaf the right subtree. Finally, there is a bit of extra shuffling to actually get them to be neighbors.

As a first piece, we need a type level function to count the number of leaves in a tree. In this case, I am inclined to use type families rather than multi parameter typeclasses as before, since I don’t need value level stuff coming along for the ride.

type family Count a where
  Count Tau = 1
  Count Id = 1
  Count (a,b) = (Count a) + (Count b)

type family LeftCount a where
  LeftCount (a,b) = Count a
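A self-contained version of Count can be spot-checked by reflecting the typelevel number back down with natVal (the empty Tau and Id declarations here are stand-ins for the repo's real particle types, and leafCount is a helper I'm adding for the check):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits

-- Stand-ins for the particle types.
data Tau
data Id

-- Count the leaves of a tuple tree at the type level.
type family Count a :: Nat where
  Count Tau = 1
  Count Id = 1
  Count (a, b) = Count a + Count b

-- Reflect the typelevel leaf count to an ordinary Integer.
leafCount :: forall a. KnownNat (Count a) => Proxy a -> Integer
leafCount _ = natVal (Proxy @(Count a))
```

For example, leafCount on the tree type ((Tau, Id), Tau) gives 3.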

Next, we make a typeclass for mapping into the least common ancestor position.

lcamap :: forall n s t a b e gte.
  (gte ~ CmpNat (LeftCount s) n,
   LCAMap n gte s t a b)
  => (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)
lcamap f t = lcamap' @n @gte f t

class LCAMap n gte s t a b | n gte s b -> t a where
  lcamap' :: (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)

instance (n' ~ (n - Count l), -- we're heading right, so the left branch's leaves no longer count
          lc ~ (LeftCount r), -- dip one level down to order which way we have to go next
          gte ~ (CmpNat lc n'), -- Do we go left, right or have we arrived in the next layer?
          LCAMap n' gte r r' a b, -- recursive call
          t ~ (l,r') -- reconstruct total return type from recursive return type. Left tree is unaffected by lcamapping
          ) => LCAMap n 'LT (l,r) t a b where
  lcamap' f x = rmap (lcamap @n' f) x

instance (lc ~ (LeftCount l),
          gte ~ (CmpNat lc n),
          LCAMap n gte l l' a b,
          t ~ (l',r)
          ) => LCAMap n 'GT (l,r) t a b where
  lcamap' f x = lmap (lcamap @n f) x

instance (t ~ b, a ~ s) => LCAMap n 'EQ s t a b where -- Base case
  lcamap' f x = f x

We find the least common ancestor position by doing a binary search on the size of the left subtrees at each node. Once the size of the left subtree equals n, we’ve found the common ancestor of leaf n and leaf n+1.

Again, this LCAMap typeclass has a typelevel argument gte that directs it which direction to go down the tree.

class Twiddle s t a b | s b -> t a where
  twiddle :: (forall r. FibTree r a -> Q (FibTree r b)) -> FibTree e s -> Q (FibTree e t)

instance Twiddle ((l,x),(y,r)) ((l,c),r) (x,y) c where
  twiddle f x = do
    x' <- fmove x -- (((l',x),y),r')
    x'' <- lmap fmove' x' -- ((l',(x,y)),r')
    lmap (rmap f) x''

instance Twiddle (Tau, (y,r)) (c,r) (Tau, y) c where
  twiddle f x = fmove x >>= lmap f

instance Twiddle (Id, (y,r)) (c,r) (Id, y) c where
  twiddle f x = fmove x >>= lmap f

instance Twiddle ((l,x), Tau) (l,c) (x,Tau) c where
  twiddle f x = fmove' x >>= rmap f

instance Twiddle ((l,x), Id) (l,c) (x,Id) c where
  twiddle f x = fmove' x >>= rmap f

instance Twiddle (Tau,Tau) c (Tau,Tau) c where
  twiddle f x = f x

instance Twiddle (Id,Id) c (Id,Id) c where
  twiddle f x = f x

instance Twiddle (Tau,Id) c (Tau,Id) c where
  twiddle f x = f x

instance Twiddle (Id,Tau) c (Id,Tau) c where
  twiddle f x = f x

The Twiddle typeclass will perform some final cleanup after we’ve done all the leaf pulling. At that point, the leaves still do not have the same parent. They are somewhere between 0 and 2 F-moves off depending on whether the left or right subtrees may be just a leaf or larger trees. twiddle is not a recursive function.

Putting this all together we get the nmap function that can apply a function after parentifying two leaves. By far the hardest part is writing out that type signature.

nmap :: forall (n :: Nat) s t a b a' b' l l' r r' e gte.
  (gte ~ CmpNat (LeftCount s) n,
   LCAMap n gte s t a' b',
   a' ~ (l,r),
   PullRightLeaf l l',
   PullLeftLeaf r r',
   Twiddle (l',r') b' a b) =>
  (forall r. FibTree r a -> Q (FibTree r b)) -> FibTree e s -> Q (FibTree e t)

Note that rmapN is 0-indexed but nmap is 1-indexed. This is somewhat horrifying, but that is what was natural in the implementation.

Here is a more extended example showing how to fuse some particles.

ttt = TTT TLeaf TLeaf

example = starttree >>=
          nmap @1 braid >>=
          nmap @2 braid >>=
          nmap @1 (dot ttt) >>=
          nmap @2 braid' >>=
          nmap @2 (dot ttt) >>=
          nmap @1 (dot ttt) where
            starttree = pure (TTT (TTT TLeaf
                                       (TTT TLeaf
                                            TLeaf))
                                  TLeaf)

I started with the tree at the top and traversed downward, implementing each braid and fusion. Implicitly all the particles shown in the diagram are Tau particles. The indices refer to particle position, not to the particles' “identity” as you would trace it by eye on the page. Since these are identical quantum particles, the particles don’t have identity as we classically think of it anyhow.

The particle pairs are indexed by the number on the left particle. First braid 1 over 2, then 2 over 3, fuse 1 and 2, braid 2 under 3, fuse 2 and 3, and then fuse 1 and 2. I got an amplitude for the process of -0.618, corresponding to a probability of 0.382. I would give myself 70% confidence that I implemented all my signs and conventions correctly. The hexagon and pentagon equations from last time being correct gives me some peace of mind.

Syntax could use a little spit polish, but it is usable. With some readjustment, one could use the Haskell do notation removing the need for explicit >>=.

Next Time

Anyons are often described in categorical terminology. Haskell has a category culture as well. Let’s explore how those mix!