## A Touch of Topological Quantum Computation 3: Categorical Interlude

Welcome back, friend.

In the last two posts, I described the basics of how to build and manipulate the Fibonacci anyon vector space in Haskell.

As a personal anecdote, trying to understand the category theory behind the theory of anyons is one of the reasons I started learning Haskell. These spaces are typically described using the terminology of category theory. I found it very frustrating that anyons were described in abstract and confusing terminology. I really wondered if people were just making things harder than they have to be. I think Haskell is a perfect playground to clarify these constructions. While the category theory stuff isn't strictly necessary, it is interesting and useful once you get past the frustration.

This post is more abstract than the previous two, but I hope everyone can get something out of it. Give it a shot if you're interested, and don't sweat the details.

#### The Aroma of Categories

I think Steve Awodey gives an excellent nutshell summary of category theory in the introductory section of his book:

“What is category theory? As a first approximation, one could say that category theory is the mathematical study of (abstract) algebras of functions. Just as group theory is the abstraction of the idea of a system of permutations of a set or symmetries of a geometric object, so category theory arises from the idea of a system of functions among some objects.”

For my intuition, a category is any collection of "things" that plug together. The "in" of a thing has to match the "out" of another thing in order to hook them together. In other words, the requirement for something to be a category is having a notion of composition. The things you plug together are called the morphisms of the category and the matching ports are the objects of the category. The additional requirement of always having an identity morphism (a do-nothing connection wire) is usually there once you have composition, although it is good to take special note of it.

Category theory is an elegant framework for how to think about these composing things in a mathematical way. In my experience, thinking in these terms leads to good abstractions, and useful analogies between disparate things.

It is helpful for any abstract concept to list some examples to expose the threads that connect them. Category theory in particular has a ton of examples connecting to many other fields because it is a science of analogy. These are the examples of categories I usually reach for. Which one feels the most comfortable to you will depend on your background.

• Hask. Objects are types. Morphisms are functions between those types.
• Vect. Objects are vector spaces. Morphisms are linear maps (roughly, matrices).
• Preorders. Objects are values. Morphisms are the inequalities between those values.
• Sets. Objects are sets. Morphisms are functions between sets.
• Cat. Objects are categories. Morphisms are functors. This is a pretty cool one, although complete categorical narcissism.
• Systems and Processes.
• The free category of a directed graph. Objects are vertices. Morphisms are paths between vertices.
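The last bullet can be made concrete in Haskell. Here is a toy sketch of my own (not from the anyon code): the free category of a tiny directed graph with vertices A, B, C and edges A→B and B→C. A morphism is a path, and composition appends paths.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Prelude hiding ((.), id)
import Control.Category
import Data.Kind (Type)

-- Vertices of the graph are the objects.
data V = A | B | C

-- Edges of the graph.
data Edge :: V -> V -> Type where
  AB :: Edge 'A 'B
  BC :: Edge 'B 'C

-- Morphisms are paths: sequences of composable edges.
data Path :: V -> V -> Type where
  Nil  :: Path v v
  Cons :: Edge u v -> Path v w -> Path u w

-- The identity is the empty path; composition appends paths.
instance Category Path where
  id = Nil
  q . Nil      = q
  q . Cons e p = Cons e (q . p)

pathLength :: Path u v -> Int
pathLength Nil        = 0
pathLength (Cons _ p) = 1 + pathLength p
```

For example, `Cons BC Nil . Cons AB Nil` is the two-edge path from A to C; a path that doesn't typecheck (like composing A→B with A→B) simply can't be written.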

#### Generic Programming and Typeclasses

The goal of generic programming is to write programs once and run them in many ways.

There are many ways to approach this generic programming goal, but one way it is achieved in Haskell is by using typeclasses. Typeclasses allow you to overload names, so that they mean different things based upon the types involved. Adding a vector is different from adding a float or an int, but there are programs that can reasonably be written to apply in both situations.
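As a tiny illustration of that idea (the Add class here is hypothetical, invented just for this example): one name, add, means different things at different types, and a generic program can be written once against the class.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- One overloaded name for "addition".
class Add a where
  add :: a -> a -> a

-- Adding ints is ordinary addition.
instance Add Int where
  add = (+)

-- "Vectors" here are just lists of doubles, added componentwise.
instance Add [Double] where
  add = zipWith (+)

-- One generic program, usable at both types.
double :: Add a => a -> a
double x = add x x
```

Here double (3 :: Int) gives 6, and double [1.0, 2.5] gives [2.0, 5.0], without the text of double changing at all.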

Writing your program in a way that it applies to disparate objects requires abstract ways of talking about things. Mathematics is an excellent place to mine for good abstractions. In particular, the category theory abstraction has demonstrated itself to be a very useful unified vocabulary for mathematical topics. I, and others, find it also to be a beautiful aesthetic by which to structure programs.

The Haskell base library defines a Category typeclass. In order to use it, you need to import the Prelude in an unusual way, hiding the default (.) and id.

{-# LANGUAGE NoImplicitPrelude #-}
import Prelude hiding ((.), id) 

The Category typeclass is defined on the type that corresponds to the morphisms of the category. This type has a slot for the input type and a slot for the output type. In order for something to be a category, it has to have an identity morphism and a notion of composition.

class Category cat where
  id :: cat a a
  (.) :: cat b c -> cat a b -> cat a c

The most obvious example of this Category typeclass is the instance for the ordinary Haskell function (->). The identity corresponds to the standard Haskell identity function, and composition to ordinary Haskell function composition.

instance Category (->) where
  id = \x -> x
  f . g = \x -> f (g x)

Another example of a category that we've already encountered is that of linear operators, which we'll call LinOp. LinOp is an example of a Kleisli arrow, a category built using monadic composition rather than regular function composition. In this case, the monad Q from my first post takes care of the linear pipework that happens between every application of a LinOp. The fish <=< operator is monadic composition from Control.Monad.
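Here is a standalone taste of Kleisli composition in the Maybe monad (my example, not from the anyon code): the fish <=< glues together functions of type a -> m b, and the monad handles the plumbing in between.

```haskell
import Control.Monad ((<=<))

-- Partial functions modeled as Kleisli arrows in Maybe.
safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

-- Kleisli composition: any Nothing short-circuits the whole pipeline.
recipSqrt :: Double -> Maybe Double
recipSqrt = safeRecip <=< safeSqrt
```

So recipSqrt 4 is Just 0.5, while recipSqrt (-1) and recipSqrt 0 are both Nothing. LinOp works the same way, with Q playing the role Maybe plays here.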

newtype LinOp a b = LinOp {runLin :: a -> Q b}

instance Category LinOp where
  id = LinOp pure
  (LinOp f) . (LinOp g) = LinOp (f <=< g)

A related category is the FibOp category. This is the category of operations on Fibonacci anyons, which are also linear operations. It is LinOp specialized to the Fibonacci anyon space. All the operations we've previously discussed (F-moves, braiding) are in this category.

newtype FibOp a b = FibOp {runFib :: (forall c. FibTree c a -> Q (FibTree c b))}

instance Category FibOp where
  id = FibOp pure
  (FibOp f) . (FibOp g) = FibOp (f <=< g)

The "feel" of category theory takes focus away from the objects and tries to place focus on the morphisms. There is a style of functional programming called "point-free" where you avoid ever giving variables explicit names and instead use pipe-work combinators like (.), fst, snd, or (***). This also has a feel of de-emphasizing objects. Many of the combinators that get used in this style have categorical analogs. In order to generically use categorical typeclasses, you have to write your program in this point-free style.

It is possible for a program written in the categorical style to be reinterpreted as an ordinary program, a linear algebra operation, a circuit, or a diagram, all without changing the actual text of the program. For more on this, I highly recommend Conal Elliott's Compiling to Categories, which also puts forth a methodology to avoid the somewhat unpleasant point-free style using a compiler plugin. This might be an interesting place to mine for a good quantum programming language. YMMV.
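Here is a small, self-contained demonstration of that reinterpretation idea (my example): a point-free program written once against the Category interface, then run both as an ordinary function and as a Kleisli arrow in Maybe.

```haskell
import Prelude hiding ((.), id)
import Control.Category
import Control.Arrow (Kleisli(..))

-- One categorical program: apply an endomorphism twice.
twice :: Category k => k a a -> k a a
twice f = f . f

-- Interpretation 1: the (->) category, plain functions.
plainTwo :: Int -> Int
plainTwo = twice (+ 1)

-- Interpretation 2: the Kleisli category of Maybe, with failure
-- plumbed through between the two applications.
boundedTwo :: Int -> Maybe Int
boundedTwo = runKleisli (twice (Kleisli step))
  where step x = if x < 10 then Just (x + 1) else Nothing
```

The text of twice never changes: plainTwo 3 is 5, boundedTwo 3 is Just 5, and boundedTwo 9 is Nothing because the second step overflows the bound.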

### Monoidal Categories

Putting two processes in parallel can be considered a kind of product. A category is monoidal if it has a product of this flavor, together with isomorphisms for reassociating objects and for producing or consuming a unit object. This will make more sense when you see the examples.

We can sketch out this monoidal category concept as a typeclass, where we use () as the unit object.

class Category k => Monoidal k where
  parC :: k a c -> k b d -> k (a,b) (c,d)
  assoc :: k ((a,b),c) (a,(b,c))
  assoc' :: k (a,(b,c)) ((a,b),c)
  leftUnitor :: k ((),a) a
  leftUnitor' :: k a ((),a)
  rightUnitor :: k (a,()) a
  rightUnitor' :: k a (a,())

#### Instances

In Haskell, the standard monoidal product for regular Haskell functions is (***) from Control.Arrow. It takes two functions and turns them into a function that does the same stuff, but on a tuple of the original inputs. The associators and unitors are fairly straightforward. We can freely discard the unit () and get it back, because there is only one possible value of it.

(***) :: (a -> c) -> (b -> d) -> ((a,b) -> (c,d))
f *** g = \(x,y) -> (f x, g y)

instance Monoidal (->) where
  parC f g = f *** g
  assoc ((x,y),z) = (x,(y,z))
  assoc' (x,(y,z)) = ((x,y),z)
  leftUnitor (_, x) = x
  leftUnitor' x = ((),x)
  rightUnitor (x, _) = x
  rightUnitor' x = (x,())
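For a quick sanity check of how these pieces combine, here are standalone copies of parC and assoc for (->) (so the snippet runs on its own), used to reassociate a nested tuple and then run three functions in parallel on its three slots.

```haskell
-- Standalone versions of the (->) monoidal combinators.
parF :: (a -> c) -> (b -> d) -> ((a, b) -> (c, d))
parF f g = \(x, y) -> (f x, g y)

assocF :: ((a, b), c) -> (a, (b, c))
assocF ((x, y), z) = (x, (y, z))

-- Reassociate, then act on each slot independently.
pipelineF :: ((Int, Int), Int) -> (Int, (Int, Int))
pipelineF = parF (+ 1) (parF (* 2) (subtract 3)) . assocF
```

For example, pipelineF ((1, 2), 3) first reassociates to (1, (2, 3)) and then gives (2, (4, 0)).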

The monoidal product we'll choose for LinOp is the tensor/outer/Kronecker product.

kron :: Num b => W b a -> W b c -> W b (a,c)
kron (W x) (W y) = W [((a,c), r1 * r2) | (a,r1) <- x , (c,r2) <- y ]
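For a self-contained check of kron, here is roughly the W type from the first post (a formal linear combination of basis elements, i.e. a list of basis-element/amplitude pairs); the real definition there may differ in details.

```haskell
-- A free vector space: a list of (basis element, amplitude) pairs.
newtype W b a = W { runW :: [(a, b)] }

-- Kronecker/tensor product: basis elements pair up, amplitudes multiply.
kron :: Num b => W b a -> W b c -> W b (a, c)
kron (W x) (W y) = W [((a, c), r1 * r2) | (a, r1) <- x, (c, r2) <- y]
```

For example, kron of the singleton vectors 2|x⟩ and 3|y⟩ is the vector 6|(x,y)⟩, and a two-element vector tensored with a one-element vector has two basis terms.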

Otherwise, LinOp is basically a monadically lifted version of (->). The one dimensional vector space Q () is completely isomorphic to just a number. Taking the Kronecker product with it is basically the same thing as scalar multiplying (up to some shuffling).

instance Monoidal LinOp where
  parC (LinOp f) (LinOp g) = LinOp $ \(a,b) -> kron (f a) (g b)
  assoc = LinOp (pure . assoc)
  assoc' = LinOp (pure . unassoc)
  leftUnitor = LinOp (pure . leftUnitor)
  leftUnitor' = LinOp (pure . leftUnitor')
  rightUnitor = LinOp (pure . rightUnitor)
  rightUnitor' = LinOp (pure . rightUnitor')

Now for a confession. I made a misstep in my first post. In order to make our Fibonacci anyons jive nicely with our current definitions, I should have defined our identity particle using type Id = () rather than data Id. We'll do that now. In addition, we need some new primitive operations for absorbing and emitting identity particles that did not feel relevant at that time.

rightUnit :: FibTree e (a,Id) -> Q (FibTree e a)
rightUnit (TTI t _) = pure t
rightUnit (III t _) = pure t

rightUnit' :: FibTree e a -> Q (FibTree e (a,Id))
rightUnit' t@(TTT _ _) = pure (TTI t ILeaf)
rightUnit' t@(TTI _ _) = pure (TTI t ILeaf)
rightUnit' t@(TIT _ _) = pure (TTI t ILeaf)
rightUnit' t@(III _ _) = pure (III t ILeaf)
rightUnit' t@(ITT _ _) = pure (III t ILeaf)
rightUnit' t@(ILeaf) = pure (III t ILeaf)
rightUnit' t@(TLeaf) = pure (TTI t ILeaf)

leftUnit :: FibTree e (Id,a) -> Q (FibTree e a)
leftUnit = rightUnit <=< braid -- braid vs braid' doesn't matter, but it has a nice symmetry.

leftUnit' :: FibTree e a -> Q (FibTree e (Id,a))
leftUnit' = braid' <=< rightUnit'

With these in place, we can define a monoidal instance for FibOp. The extremely important and intriguing F-move operations are the assoc operators of the category. While the assoc of other categories feels nearly trivial, these F-moves do not.

instance Monoidal FibOp where
  parC (FibOp f) (FibOp g) = (FibOp (lmap f)) . (FibOp (rmap g))
  assoc = FibOp fmove'
  assoc' = FibOp fmove
  leftUnitor = FibOp leftUnit
  leftUnitor' = FibOp leftUnit'
  rightUnitor = FibOp rightUnit
  rightUnitor' = FibOp rightUnit'

#### This is actually useful

The parC operation is extremely useful to explicitly note in a program.
It is an opportunity for optimization. It is possible to inefficiently implement parC in terms of other primitives, but it is very worthwhile to implement it in new primitives (although I haven't done so here). In the case of (->), parC is an explicit location where actual computational parallelism is available. Once you perform parC, it is no longer apparent that the left and right sides of the tuple share no data during the computation. In the case of LinOp and FibOp, parC is a location where you can perform factored linear computations. The matrix-vector product $(A \otimes B)(v \otimes w)$ can instead be performed on the factors individually as $(Av)\otimes (Bw)$. In the first case, where we densify $A \otimes B$ and then perform the multiplication, it costs $O((N_A N_B)^2)$ time, whereas performing the products individually on the factors costs $O(N_A^2 + N_B^2)$ time, a significant savings. Applied category theory indeed.

#### Laws

(Judge Dredd image courtesy of David)

Like many typeclasses, these monoidal morphisms are assumed to follow certain laws. Here is a sketch (for a more thorough discussion, check out the wikipedia page):

• Functions with a tick at the end like assoc' should be the inverses of the functions without the tick like assoc, e.g. assoc . assoc' = id.
• The parC operation is (bi)functorial, meaning it obeys the commutation law parC (f . f') (g . g') = (parC f g) . (parC f' g'), i.e. it doesn't matter if we perform composition before or after the parC.
• The pentagon law for assoc: applying leftbottom is the same as applying topright.

leftbottom :: (((a,b),c),d) -> (a,(b,(c,d)))
leftbottom = assoc . assoc

topright :: (((a,b),c),d) -> (a,(b,(c,d)))
topright = (id *** assoc) . assoc . (assoc *** id)

• The triangle law for the unitors: topright' should equal leftside.

topright' :: ((a,()),b) -> (a,b)
topright' = (id *** leftUnitor) . assoc

leftside :: ((a,()),b) -> (a,b)
leftside = rightUnitor *** id

#### String Diagrams

String diagrams are a diagrammatic notation for monoidal categories.
Morphisms are represented by boxes with incoming and outgoing lines. Composition g . f is made by connecting lines. The identity id is a bare line. The monoidal product of morphisms $f \otimes g$ is represented by placing lines next to each other.

The diagrammatic notation is so powerful because the laws of monoidal categories are built so deeply into it that they can go unnoticed. Identities can be put in or taken away. Association doesn't even appear in the diagram. The boxes in the notation can naturally be pushed around and commuted past each other. This corresponds to the property

$(id \otimes g) \circ (f \otimes id) = (f \otimes id) \circ (id \otimes g)$

What expression does the following diagram represent? Is it $(f \circ f') \otimes (g \circ g')$ (in Haskell notation, parC (f . f') (g . g'))? Or is it $(f \otimes g) \circ (f' \otimes g')$ (in Haskell notation, (parC f g) . (parC f' g'))? Answer: it doesn't matter, because the functoriality of parC means the two expressions are identical.

There are a number of notations you might meet in the world that can be interpreted as string diagrams. Three that seem particularly pertinent are:

• Quantum circuits
• Anyon diagrams!

#### Braided and Symmetric Monoidal Categories: Categories That Braid and Swap

Some monoidal categories have a notion of being able to braid morphisms. If so, it is called a braided monoidal category (go figure).

class Monoidal k => Braided k where
  over :: k (a,b) (b,a)
  under :: k (a,b) (b,a)

The over and under morphisms are inverses of each other: over . under = id. The over morphism pulls the left strand over the right, whereas under pulls the left strand under the right. The diagram definitely helps to understand this definition.

These over and under morphisms need to play nicely with the associator of the monoidal category. These are laws that a valid instance of the typeclass should follow. We actually already met them in the very first post.
If the over and under of the braiding are the same, the category is a symmetric monoidal category. This typeclass needs no extra functions, but it is now intended that the law over . over = id is obeyed.

class Braided k => Symmetric k where

When we draw a braid in a symmetric monoidal category, we don't have to be careful with which strand is over and which is under, because they are the same thing. The examples that come soonest to mind have this symmetric property; for example, (->) is a symmetric monoidal category.

swap :: (a, b) -> (b, a)
swap (x,y) = (y,x)

instance Braided (->) where
  over = swap
  under = swap
instance Symmetric (->)

Similarly, LinOp has a notion of swapping that is just a lifting of swap.

instance Braided LinOp where
  over = LinOp (pure . swap)
  under = LinOp (pure . swap)
instance Symmetric LinOp

However, FibOp is not symmetric! This is perhaps at the core of what makes FibOp so interesting.

instance Braided FibOp where
  over = FibOp braid
  under = FibOp braid'

#### Automating Association

Last time, we spent a lot of time doing weird typelevel programming to automate away the pain of manual association moves. We can do something quite similar to make the categorical reassociation less painful, and more like the carefree ideal of the string diagram, by replacing composition (.) with a slightly different operator.

(...) :: ReAssoc b b' => FibOp b' c -> FibOp a b -> FibOp a c
(FibOp f) ... (FibOp g) = FibOp $ f <=< reassoc <=< g

Before defining reassoc, let's define a helper LeftCollect typeclass. Given a typelevel integer n, it will reassociate the tree using a binary search procedure to make sure the left branch l at the root has Count l = n.

leftcollect :: forall n gte l r o e. (gte ~ CmpNat n (Count l), LeftCollect n gte (l,r) o) => FibTree e (l,r) -> Q (FibTree e o)
leftcollect x = leftcollect' @n @gte x

class LeftCollect n gte a b | n gte a -> b where
  leftcollect' :: FibTree e a -> Q (FibTree e b)

-- The process is like a binary search.
-- LeftCollect pulls n leaves into the left branch of the tuple.

-- If n is greater than the size of l, we recurse into the right branch with a new number of leaves to collect,
-- then we do a final reshuffle to put those all into the left tree.
instance (
  k ~ Count l,
  r ~ (l',r'),
  n' ~ (n - k),
  gte ~ CmpNat n' (Count l'),
  LeftCollect n' gte r (l'',r'')) => LeftCollect n 'GT (l,r) ((l,l''),r'') where
  leftcollect' x = do
    x' <- rmap (leftcollect @n') x -- (l,(l'',r'')) -- l'' is size n - k
    fmove x'  -- ((l,l''),r'') -- size of (l,l'') = k + (n-k) = n

instance (
  l ~ (l',r'),
  gte ~ CmpNat n (Count l'),
  LeftCollect n gte l (l'',r'')) => LeftCollect n 'LT (l,r) (l'',(r'',r)) where
  leftcollect' x = do
    x' <- lmap (leftcollect @n) x -- ((l'',r''),r) -- l'' is of size n
    fmove' x'  -- (l'',(r'',r))

instance LeftCollect n 'EQ (l,r) (l,r) where
  leftcollect' = pure

Once we have LeftCollect, the typeclass ReAssoc is relatively simple to define. Given a pattern tree, we can count the elements in its left branch and LeftCollect the source tree to match that number. Then we recursively apply reassoc in the left and right branches of the tree. This means that every corresponding node in the two trees ends up with the same number of leaves underneath it, hence the trees will end up in an identical shape (modulo me mucking something up).

class ReAssoc a b where
  reassoc :: FibTree e a -> Q (FibTree e b)

instance (n ~ Count l',
  gte ~ CmpNat n (Count l),
  LeftCollect n gte (l,r) (l'',r''),
  ReAssoc l'' l',
  ReAssoc r'' r') => ReAssoc (l,r) (l',r') where
  reassoc x = do
    x' <- leftcollect @n x
    x'' <- rmap reassoc x'
    lmap reassoc x''

--instance {-# OVERLAPS #-} ReAssoc a a where
--   reassoc = pure

instance ReAssoc Tau Tau where
  reassoc = pure
instance ReAssoc Id Id where
  reassoc = pure

It seems likely that one could write equivalent instances that would work for an arbitrary monoidal category with a bit more work. We are aided somewhat by the fact that FibOp has a finite universe of possible leaf types to work with.

### Closing Thoughts

While our categorical typeclasses are helpful and nice, I should point out that they are not going to cover all the things that can be described as categories, even in Haskell, just like the Functor typeclass does not capture all the conceptual functors you might meet. One beautiful monoidal category is that of Haskell Functors under the monoidal product of functor composition. More on this to come, I think.

We never even touched the dot product in this post. It corresponds to another doodle in a string diagram, and another power to add to your category. It is somewhat trickier to work with cleanly in familiar Haskell terms, I think because (->) is not so obviously a dagger category.

You can find a hopefully compiling version of all my snippets and more in my chaotic mutating Github repo https://github.com/philzook58/fib-anyon

See you next time.

#### References

The Rosetta Stone paper by Baez and Stay is probably the conceptual daddy of this entire post (and more).

Bartosz Milewski's Category Theory for Programmers blog (online book, really) and YouTube series are where I learned most of what I know about category theory. I highly recommend them (huge Bartosz fanboy).

https://www.math3ma.com/blog/what-is-category-theory-anyway

There are fancier embeddings of category theory and monoidal categories than I've shown here. Often you want constrained categories and the ability to choose unit objects. I took a rather simplistic approach here.

https://parametricity.com/posts/2015-07-18-braids.html

## A Touch of Topological Quantum Computation in Haskell Pt. II: Automating Drudgery

Last time we built the basic pieces we need to describe anyons in Haskell. Anyon models describe interesting physical systems where a set of particles (Tau and Id in our case) have certain splitting rules and peculiar quantum properties. The existence of anyons in a system is the core physics necessary to support topological quantum computation. In topological quantum computing, quantum gates are applied by braiding the anyons, and measurements are performed by fusing anyons together and seeing what particle comes out. Applying gates in this way has inherent error-correcting properties.

The tree of particle production with particle labelled leaves picks a basis (think the collection $\{\hat{x}, \hat{y}, \hat{z}\}$ ) for the anyon quantum vector space. An individual basis vector (think $\hat{x}$ ) from this basis is specified by labelling the internal edges of the tree. We built a Haskell data type for a basic free vector space and functions for the basic R-moves for braiding two anyons and reassociating the tree into a new basis with F-moves. In addition, you can move around your focus within the tree by using the function lmap and rmap. The github repo with that and what follows below is here.

#### Pain Points

We’ve built the atomic operations we need, but they work very locally and are quite manual. You can apply many lmap and rmap to zoom in to the leaves you actually wish to braid, and you can manually perform all the F-moves necessary to bring nodes under the same parent, but it will be rather painful.

The standard paper-and-pencil graphical notation for anyons is really awesome. You get to draw little knotty squiggles to calculate. It does not feel as laborious. The human eye and hand are great at applying a sequence of reasonably optimal moves to untangle the diagram efficiently. Our eye can take the whole thing in and our hand can zip around anywhere.

To try and bridge this gap, we need to build functions that work in some reasonable way on the global anyon tree and that automate simple tasks.

#### A Couple Useful Functions

Our first useful operation is pullLeftLeaf. This operation will rearrange the tree using F-moves to get the leftmost leaf associated all the way up to the root, so that the leftmost leaf has the root as its parent. Because the tree structure is encoded in the FibTree a b data type, we need to compute the tuple tree type of the pulled tree. This is a slightly non-trivial type computation.

In order to do this, we’ll use a bit of typelevel programming. If this is strange and alarming stuff for you, don’t sweat it too much. I am not the most elegant user of these techniques, but I hope that alongside my prose description you can get the gist of what we’re going for.

(Sandy Maguire has a new book on typelevel programming in Haskell out. Good stuff. Support your fellow Haskeller and toss him some buckos.)

class PullLeftLeaf a b | a -> b where
  pullLeftLeaf :: FibTree c a -> Q (FibTree c b)
instance PullLeftLeaf (Tau,c) (Tau,c) where
  pullLeftLeaf = pure
instance PullLeftLeaf (Id,c) (Id,c) where
  pullLeftLeaf = pure
instance PullLeftLeaf Tau Tau where
  pullLeftLeaf = pure
instance PullLeftLeaf Id Id where
  pullLeftLeaf = pure
instance (PullLeftLeaf (a,b) (a',b'),
  r ~ (a',(b',c))) => PullLeftLeaf ((a, b),c) r where
  pullLeftLeaf t = do
    t' <- lmap pullLeftLeaf t
    fmove' t'

The resulting tree type b is an easily computable function of the starting tree type a. That is what the “functional dependency” notation | a -> b in the typeclass definition tells the compiler.

The first 4 instances are base cases. If you’re all the way at the leaf, you basically want to do nothing. pure is the function that injects the classical tree description into a quantum state vector with coefficient 1.

The meat is in the last instance. In the case that the tree type matches ((a,b),c), we recursively call PullLeftLeaf on (a,b) which returns a new result (a',b'). Because of the recursion, this a' is the leftmost leaf. We can then construct the return type by doing a single reassociation step. The notation ~ forces two types to unify. We can use this conceptually as an assignment statement at the type level. This is very useful for building intermediate names for large expressions, as assert statements to ensure the types are as expected, and also occasionally to force unification of previously unknown types. It’s an interesting operator for sure.

The recursion at the type level is completely reflected in the actual function definition. We focus on the piece (a,b) inside t by using lmap. We do a recursive call to pullLeftLeaf, and finally fmove' performs the final reassociation move. It is all rather verbose, but straightforward I hope.

You can also build a completely similar PullRightLeaf.

#### A Canonical Right Associated Basis

One common way of dealing with larger trees is to pick a canonical basis of fully right associated trees. The fully right associated tree is a list-like structure. Its uniformity makes it easier to work with.

By recursively applying pullLeftLeaf, we can fully right associate any tree.

class RightAssoc a b | a -> b where
  rightAssoc :: FibTree c a -> Q (FibTree c b)
instance RightAssoc Tau Tau where
  rightAssoc = pure
instance RightAssoc Id Id where
  rightAssoc = pure
instance (PullLeftLeaf (a,b) (a',b'),
  RightAssoc b' b'',
  r ~ (a', b'')) => RightAssoc (a,b) r where
  rightAssoc t = do
    t' <- pullLeftLeaf t
    rmap rightAssoc t'

This looks quite similar to the implementation of pullLeftLeaf. It doesn’t actually have much logic to it. We apply pullLeftLeaf, then we recursively apply rightAssoc in the right branch of the tree.
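A value-level shadow of the same algorithm (my own sketch, on plain untyped trees) can help make the typeclass recursion visible: pull the leftmost leaf to the root with rotations, then recurse into the right branch.

```haskell
-- A plain binary tree standing in for the tuple-tree shape.
data T = L | N T T deriving (Eq, Show)

-- Fully right-associate a tree into a list-like shape.
rightAssocT :: T -> T
rightAssocT L = L
rightAssocT (N l r) = go l r
  where
    -- go rotates the leftmost leaf up to the root (the fmove' analog:
    -- ((a,b),c) -> (a,(b,c))), then recurses on the right branch.
    go L r'       = N L (rightAssocT r')
    go (N a b) r' = go a (N b r')
```

For example, the balanced four-leaf tree N (N L L) (N L L) becomes the right-nested N L (N L (N L L)).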

#### B-Moves: Braiding in the Right Associated Basis

Now we have the means to convert any structure to its right associated canonical basis. In this basis, one can apply braiding to neighboring anyons using B-moves, which can be derived from the braiding R-moves and F-moves.

The B-move applies one F-move so that the two neighboring leaves share a parent, uses the regular braiding R-move, then applies the inverse F-move to return to the canonical basis. Similarly, bmove' is the same thing except it applies the under braiding braid' rather than the over braiding braid. (Image source: Preskill's notes)

bmove :: forall b c d a. FibTree a (b,(c,d)) -> Q (FibTree a (c,(b,d)))
bmove t = do
  t'  :: FibTree a ((b,c),d) <- fmove t
  t'' :: FibTree a ((c,b),d) <- lmap braid t'
  fmove' t''

bmove' :: forall b c d a. FibTree a (b,(c,d)) -> Q (FibTree a (c,(b,d)))
bmove' = fmove' <=< (lmap braid') <=< fmove -- point-free style for funzies. Equivalent to the above, except with braid'.

#### Indexing to Leaves

We also may desire just specifying the integer index of where we wish to perform a braid. This can be achieved with another typeclass for iterated rmaping. When the tree is in canonical form, this will enable us to braid two neighboring leaves by an integer index. This index has to be a typelevel number because the output type depends on it.

In fact there is quite a bit of type computation. Given a total tree type s and an index n this function will zoom into the subpart a of the tree at which we want to apply our function. The subpart a is replaced by b, and then the tree is reconstructed into t. t is s with the subpart a mapped into b.  I have intentionally made this reminiscent of the type variables of the lens type Lens s t a b .

rmapN :: forall n gte s t a b e. (RMapN n gte s t a b, gte ~ (CmpNat n 0)) => (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)
rmapN f t = rmapN' @n @gte f t

class RMapN n gte s t a b | n gte s b -> a t where
  rmapN' :: (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)

instance (a ~ s, b ~ t) => RMapN 0 'EQ s t a b where
  rmapN' f t = f t
instance (RMapN (n-1) gte r r' a b,
  gte ~ (CmpNat (n-1) 0),
  t ~ (l,r')) => RMapN n 'GT (l,r) t a b where
  rmapN' f t = rmap (rmapN @(n-1) f) t

This looks much noisier than it has to because we need to work around some of the unfortunate realities of using the typeclass system to compute types. We can't just match on the number n in order to pick which instance to use because the patterns 0 and n are overlapping. The pattern n can match the number 0 if n ~ 0. The pattern matching in the type instance is not quite the same as the regular Haskell pattern matching we use to define functions. The order of the definitions does not matter, so you can't have default cases, and the patterns you use cannot be unifiable. In order to fix this, we make the condition of whether n is greater than 0 an explicit type variable gte. Now the different cases cannot unify. It is a very common trick to need a variable representing some branching condition.

For later convenience, we define rmapN, which lets us avoid manually supplying the necessary comparison type gte.
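The gte trick can be isolated into a minimal standalone example (mine, with hypothetical names): instance heads on 0 and n would overlap, so instead the caller computes the comparison once and the instances branch on the resulting Ordering.

```haskell
{-# LANGUAGE AllowAmbiguousTypes, DataKinds, KindSignatures,
             ScopedTypeVariables, TypeApplications #-}
import GHC.TypeLits (CmpNat, Nat)

-- Instances are selected by the comparison result, which never overlaps.
class Sign (gte :: Ordering) where
  sign :: String

instance Sign 'EQ where sign = "zero"
instance Sign 'GT where sign = "positive"

-- The wrapper computes CmpNat n 0 once and hands it to the class,
-- just like rmapN does for RMapN.
signOf :: forall (n :: Nat). Sign (CmpNat n 0) => String
signOf = sign @(CmpNat n 0)
```

Here signOf @0 is "zero" and signOf @7 is "positive"; trying signOf at a comparison with no instance (there is deliberately no 'LT case) is a compile-time error rather than a runtime one.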

#### Parentifying Leaves Lazily

While it is convenient to describe anyon computations in a canonical basis, it can be quite inefficient. Converting an arbitrary  anyon tree into the standard basis will often result in a dense vector. A natural thing to do for the sake of economy is only do reassociation on demand.

The algorithm for braiding two neighboring leaves is pretty straightforward. We need to reassociate these leaves so that they have the same parent. First we need the ability to map into the least common ancestor of the two leaves. To reassociate the two leaves to have a common parent, we pullRightLeaf the left subtree and then pullLeftLeaf the right subtree. Finally, there is a bit of extra shuffling to actually get them to be neighbors.

As a first piece, we need a type level function to count the number of leaves in a tree. In this case, I am inclined to use type families rather than multi parameter typeclasses as before, since I don’t need value level stuff coming along for the ride.

type family Count a where
  Count Tau = 1
  Count Id = 1
  Count (a,b) = (Count a) + (Count b)

type family LeftCount a where
  LeftCount (a,b) = Count a
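Since Count reduces to an ordinary type-level Nat, we can reflect it down to a value with natVal for a quick check. Here are standalone copies of the leaf types and the Count family so the snippet runs on its own.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits

-- Standalone stand-ins for the anyon leaf types.
data Tau
data Id

-- Count the leaves of a tuple-tree type.
type family Count a where
  Count Tau = 1
  Count Id = 1
  Count (a, b) = Count a + Count b

-- Reflect the type-level count to a runtime Integer.
fourLeaves :: Integer
fourLeaves = natVal (Proxy :: Proxy (Count ((Tau, Id), (Tau, Tau))))
```

The tree type ((Tau,Id),(Tau,Tau)) has four leaves, so fourLeaves is 4, computed entirely by the type checker.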

Next, we make a typeclass for mapping into the least common ancestor position.

lcamap :: forall n s t a b e gte.
  (gte ~ CmpNat (LeftCount s) n,
  LCAMap n gte s t a b)
  => (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)
lcamap f t = lcamap' @n @gte f t

class LCAMap n gte s t a b | n gte s b -> t a where
  lcamap' :: (forall r. FibTree r a -> Q (FibTree r b)) -> (FibTree e s) -> Q (FibTree e t)

instance (n' ~ (n - Count l), -- We're searching in the right subtree. Subtract the leaf count of the left subtree.
  lc ~ (LeftCount r), -- Dip one level down to see which way we have to go next.
  gte ~ (CmpNat lc n'), -- Do we go left, go right, or have we arrived in the next layer?
  LCAMap n' gte r r' a b, -- Recursive call.
  t ~ (l,r') -- Reconstruct the total return type from the recursive return type. The left tree is unaffected by lcamapping.
  ) => LCAMap n 'LT (l,r) t a b where
  lcamap' f x = rmap (lcamap @n' f) x

instance (lc ~ (LeftCount l),
  gte ~ (CmpNat lc n),
  LCAMap n gte l l' a b,
  t ~ (l',r)
  ) => LCAMap n 'GT (l,r) t a b where
  lcamap' f x = lmap (lcamap @n f) x

instance (t ~ b, a ~ s) => LCAMap n 'EQ s t a b where -- Base case.
  lcamap' f x = f x

We find the least common ancestor position by doing a binary search on the size of the left subtrees at each node. Once the size of the left subtree equals n, we’ve found the common ancestor of leaf n and leaf n+1.

Again, this LCAMap typeclass has a typelevel argument gte that directs it which direction to go down the tree.
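The same search can be rendered at the value level (my sketch, on plain untyped trees): walk down from the root, comparing n with the size of each left subtree, exactly as the 'LT, 'GT, and 'EQ instances do.

```haskell
-- A plain binary tree standing in for the tuple-tree shape.
data T = L | N T T

size :: T -> Int
size L = 1
size (N l r) = size l + size r

-- Directions from the root to the least common ancestor of
-- leaves n and n+1 (counting leaves from 1), mirroring LCAMap.
lcaPath :: Int -> T -> String
lcaPath n (N l r)
  | k == n    = ""                       -- 'EQ: this node is the LCA
  | k > n     = 'L' : lcaPath n l        -- 'GT: both leaves are in the left subtree
  | otherwise = 'R' : lcaPath (n - k) r  -- 'LT: subtract the left count, go right
  where k = size l
lcaPath _ L = error "a bare leaf has no such ancestor"
```

For the balanced four-leaf tree N (N L L) (N L L), leaves 2 and 3 meet at the root (path ""), leaves 1 and 2 meet one step left ("L"), and leaves 3 and 4 one step right ("R").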

class Twiddle s t a b | s b -> t a where
  twiddle :: (forall r. FibTree r a -> Q (FibTree r b)) -> FibTree e s -> Q (FibTree e t)

instance Twiddle ((l,x),(y,r)) ((l,c),r) (x,y) c where
  twiddle f x = do
    x'  <- fmove x -- (((l',x),y),r')
    x'' <- lmap fmove' x' -- ((l',(x,y)),r')
    lmap (rmap f) x''
instance Twiddle (Tau, (y,r)) (c,r) (Tau, y) c where
  twiddle f x = fmove x >>= lmap f
instance Twiddle (Id, (y,r)) (c,r) (Id, y) c where
  twiddle f x = fmove x >>= lmap f
instance Twiddle ((l,x), Tau) (l,c) (x,Tau) c where
  twiddle f x = fmove' x >>= rmap f
instance Twiddle ((l,x), Id) (l,c) (x,Id) c where
  twiddle f x = fmove' x >>= rmap f
instance Twiddle (Tau, Tau) c (Tau,Tau) c where
  twiddle f x = f x
instance Twiddle (Id, Id) c (Id,Id) c where
  twiddle f x = f x
instance Twiddle (Tau, Id) c (Tau,Id) c where
  twiddle f x = f x
instance Twiddle (Id, Tau) c (Id,Tau) c where
  twiddle f x = f x

The Twiddle typeclass performs some final cleanup after we’ve done all the leaf pulling. At that point, the two leaves still do not have the same parent. They are somewhere between 0 and 2 F-moves off, depending on whether the left and right subtrees are single leaves or larger trees. twiddle is not a recursive function.

Putting this all together we get the nmap function that can apply a function after parentifying two leaves. By far the hardest part is writing out that type signature.

nmap :: forall (n :: Nat) s t a b a' b' l l' r r' e gte.
        (gte ~ CmpNat (LeftCount s) n,
         LCAMap n gte s t a' b',
         a' ~ (l,r),
         PullRightLeaf l l',
         PullLeftLeaf r r',
         Twiddle (l',r') b' a b) =>
        (forall r. FibTree r a -> Q (FibTree r b)) -> FibTree e s -> Q (FibTree e t)
nmap f z = lcamap @n @s @t @a' @b' (\x -> do
    x'  <- lmap pullRightLeaf x
    x'' <- rmap pullLeftLeaf x'
    twiddle f x'') z

#### Usage Example

Here’s some simple usage:

t1 = nmap @2 braid (TTT (TTI TLeaf ILeaf) (TTT TLeaf TLeaf))
t5 = nmap @2 pure (TTT (TTI TLeaf ILeaf) (TTT TLeaf TLeaf)) >>= nmap @3 pure
t2 = nmap @1 braid (TTT (TTI TLeaf ILeaf) (TTT TLeaf TLeaf))
t4 = nmap @1 braid (TTT TLeaf (TTT TLeaf TLeaf))
t3 = nmap @2 braid (TTT (TTT (TTT TLeaf TLeaf) TLeaf) (TTT TLeaf TLeaf))
t6 = rightAssoc (TTT (TTT (TTT TLeaf TLeaf) TLeaf) (TTT TLeaf TLeaf))
t7 = t6 >>= bmove
t8 = t6 >>= rmapN @0 bmove

Note that rmapN is 0-indexed but nmap is 1-indexed. This is somewhat horrifying, but that is what was natural in the implementation.

Here is a more extended example showing how to fuse some particles.

ttt = TTT TLeaf TLeaf

example = starttree >>=
          nmap @1 braid >>=
          nmap @2 braid >>=
          nmap @1 (dot ttt) >>=
          nmap @2 braid' >>=
          nmap @2 (dot ttt) >>=
          nmap @1 (dot ttt) where
    starttree = pure (TTT (TTT TLeaf
                               (TTT TLeaf
                                    TLeaf))
                          TLeaf)

I started with the tree at the top and traversed downward, implementing each braid and fusion. Implicitly, all the particles shown in the diagram are Tau particles. The indices refer to particle position, not to the particle’s “identity” as you would trace it by eye on the page. Since these are identical quantum particles, they don’t have identity as we classically think of it anyhow.

The particle pairs are indexed by the number on the left particle. First braid 1 over 2, then 2 over 3, fuse 1 and 2, braid 2 under 3, fuse 2 and 3, and then fuse 1 and 2. I got an amplitude for the process of -0.618, corresponding to a probability of 0.382. I would give myself 70% confidence that I implemented all my signs and conventions correctly. The hexagon and pentagon equations from last time being correct gives me some peace of mind.
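As a quick sanity check on those numbers (my aside, not part of the original calculation): the probability is the squared magnitude of the amplitude, and the values are the golden-ratio quantities you would expect to see pop out of Fibonacci anyon data.

```python
import math

amp = -0.618                  # amplitude reported for the whole process
prob = abs(amp) ** 2          # Born rule: probability = |amplitude|^2
phi = (1 + math.sqrt(5)) / 2  # golden ratio, ubiquitous in Fibonacci anyon data
assert round(prob, 3) == 0.382
assert abs(amp + 1 / phi) < 1e-3        # -0.618 is -1/phi to three digits
assert abs(prob - 1 / phi ** 2) < 1e-3  # 0.382 is 1/phi^2 to three digits
```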

Syntax could use a little spit polish, but it is usable. With some readjustment, one could use Haskell do notation, removing the need for explicit >>=.

#### Next Time

Anyons are often described in categorical terminology. Haskell has a category culture as well. Let’s explore how those mix!

## A Touch of Topological Quantum Computation in Haskell Pt. I

Quantum computing exploits the massive vector spaces nature uses to describe quantum phenomena.

The evolution of a quantum system is described by the application of matrices on a vector describing the quantum state of the system. The vector has one entry for every possible state of the system, so the number of entries can get very, very large. Every time you add a new degree of freedom to a system, the size of the total state space gets multiplied by the size of the new DOF, so you have a vector exponentially sized in the number of degrees of freedom.
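To put a number on that exponential growth, here is a trivial sketch (mine; `state_vector_size` is a made-up helper):

```python
def state_vector_size(dof_sizes):
    # one amplitude per joint configuration: multiply the sizes together
    size = 1
    for d in dof_sizes:
        size *= d
    return size

assert state_vector_size([2] * 10) == 1024       # 10 two-level systems: manageable
assert state_vector_size([2] * 50) == 2 ** 50    # 50 of them: ~10^15 amplitudes
```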

Now, a couple of caveats. We could have described probabilistic dynamics similarly, with a probability associated with each state. The subtle difference is that quantum amplitudes are complex numbers, whereas probabilities are positive real numbers. This allows for interference. Another caveat is that when you perform a measurement, you only get a single state, so you are hamstrung by the tiny amount of information you can actually extract out of this huge vector. Nevertheless, there are a handful of situations where, to everyone’s best guess, you get a genuine quantum advantage over classical or probabilistic computation.
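The interference point can be made in a couple of lines (my illustration): two indistinguishable paths with equal and opposite amplitudes cancel completely, even though their individual probabilities would add to 1.

```python
a1, a2 = 1 / 2**0.5, -1 / 2**0.5  # amplitudes for two indistinguishable paths
p1, p2 = abs(a1)**2, abs(a2)**2   # each path on its own has probability 1/2
classical = p1 + p2               # probabilities add: total 1.0
quantum = abs(a1 + a2)**2         # amplitudes add first: destructive interference
assert abs(classical - 1.0) < 1e-12
assert quantum < 1e-12            # the two paths cancel exactly
```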

Topological quantum computing is based around the braiding of particles called anyons. These particles have a peculiar vector space associated with them, and braiding applies a matrix to this space. In fact, the space associated with the particles can basically only be manipulated by braiding; other states require more energy or very large-scale perturbations to access. Computing using anyons has a robustness compared to traditional quantum computing systems. It can be made extremely unlikely that unwanted states are accessed or unwanted gates applied. The physical nature of the topological quantum system has an intrinsic error-correcting power. This situation is schematically similar in some ways to classical error correction on a magnetic hard disk. Suppose some cosmic ray comes down and flips a spin in your hard disk. The physics of magnets makes the spin tend to realign with its neighbors, so the physics supplies an intrinsic classical error correction in this case.

The typical descriptions of how the vector spaces associated with anyons work I have found rather confusing. What we’re going to do is implement these vector spaces in the functional programming language Haskell for concreteness and play around with them a bit.

###### Anyons

In many systems, the splitting and joining of particles obey rules. Charge has to be conserved. In chemistry, the total number of each individual atom on each side of a reaction must be the same. Or in particle accelerators, lepton number and other junk has to be conserved.

Anyonic particles have their own system of combination rules. Particle A can combine with B to make C or D. Particle B combined with C always make A. That kind of stuff. These rules are called fusion rules and there are many choices, although they are not arbitrary. They can be described by a table $N_{ab}^{c}$ that holds counts of the ways to combine a and b into c. This table has to be consistent with some algebraic conditions, the hexagon and pentagon equations, which we’ll get to later.

We need to describe particle production trees following these rules in order to describe the anyon vector space.

Fibonacci anyons are one of the simplest anyon systems, and yet sufficiently complicated to support universal quantum computation. There are only two particle types in the Fibonacci system, the $I$ particle and the $\tau$  particle. The $I$ particle is an identity particle (kind of like an electrically neutral particle). It combines with $\tau$ to make a $\tau$. However, two $\tau$ particles can combine in two different ways, to make another $\tau$ particle or to make an $I$ particle.
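As an illustrative encoding of these rules (mine, not from the post), the table $N_{ab}^{c}$ for Fibonacci anyons is tiny. Counting the labelings it allows along a fusion chain also hints at the name: the dimensions grow as Fibonacci numbers.

```python
# Fibonacci fusion rules as a table (a, b) -> set of allowed outcomes c
N = {
    ("I", "I"): {"I"},
    ("I", "tau"): {"tau"},
    ("tau", "I"): {"tau"},
    ("tau", "tau"): {"I", "tau"},  # the one nontrivial rule: two outcomes
}

def fusion_dims(n):
    # number of fusion trees for n tau particles, grouped by total charge,
    # fusing left to right along a chain
    dims = {"I": 0, "tau": 1}  # a single tau particle
    for _ in range(n - 1):
        dims = {"I": dims["tau"], "tau": dims["I"] + dims["tau"]}
    return dims

# dimension of the tau-total-charge space for n = 1..6: Fibonacci numbers
assert [fusion_dims(n)["tau"] for n in range(1, 7)] == [1, 1, 2, 3, 5, 8]
```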

So we make a datatype for the tree structure that has one constructor for each possible particle split and one constructor (TLeaf, ILeaf) for each final particle type. We can use GADTs (Generalized Algebraic Data Types) to make only good particle production history trees constructible. The type has two type parameters carried along with it, the particle at the root of the tree and the leaf-labelled tree structure, represented with nested tuples.

data Tau
data Id
data FibTree root leaves where
    TTT :: FibTree Tau l -> FibTree Tau r -> FibTree Tau (l,r)
    ITT :: FibTree Tau l -> FibTree Tau r -> FibTree Id (l,r)
    TIT :: FibTree Id l -> FibTree Tau r -> FibTree Tau (l,r)
    TTI :: FibTree Tau l -> FibTree Id r -> FibTree Tau (l,r)
    III :: FibTree Id l -> FibTree Id r -> FibTree Id (l,r)
    TLeaf :: FibTree Tau Tau
    ILeaf :: FibTree Id Id
###### Free Vector Spaces

We need to describe quantum superpositions of these anyon trees. We’ll consider the particles at the leaves of the tree to be the set of particles that you have at the current moment in time. This is a classical quantity. You will not have a superposition of these leaf particles. However, there are some quantum remnants of the history of how these particles were made. The exact history can never be determined, kind of like how the exact history of a particle going through a double slit cannot be determined. However, there is still a quantum interference effect left over. When you bring particles together to combine them, depending on the quantum connections, you can have different possible resulting particles left over with different probabilities. Recombining anyons and seeing what results is a measurement of the system.

Vectors can be described in different basis sets. The bases for anyon trees are labelled by both a tree structure and what particles are at the root and leaves. Different tree associations are the analog of using some basis x vs some other rotated basis x’. The way we’ve built the type level tags in the FibTree reflects this perspective. The labelling of inner edges of the tree with particles varies depending on which basis vector we’re talking about. A different inner particle is the analog of $\hat{x}$ vs $\hat{y}$. To work with these bases we need to break out of the mindset that a vector put on a computer is the same as an array. While for big iron purposes this is close to true, there are more flexible options. The array style forces you to use integers to index your space, but what if your basis does not very naturally map to integers?

A free vector space over some objects is the linear combination of those objects. This doesn’t have to make any sense yet. We can form the formal sum (3.7💋+2.3i👩‍🎨) over the emoji basis, for example. Until we attach more meaning to it, all it really means is a mapping between emojis and numerical coefficients. We’re also implying by the word vector that we can add two of the combinations coefficient-wise and multiply scalars onto them.

We are going to use free vectors as described by the legendary Dan Piponi here: http://blog.sigfpe.com/2007/03/monads-vector-spaces-and-quantum.html

What he does is implement the free vector space pretty directly. We represent a vector using a list of tuples [(a,b)]. The a are the basis objects and the b are the coefficients attached to them.

data W b a = W { runW :: [(a,b)] } deriving (Eq,Show,Ord)

instance Semigroup (W b a) where
    (W x) <> (W y) = W (x <> y)

instance Monoid (W b a) where
    mempty = W mempty

mapW f (W l) = W $ map (\(a,b) -> (a, f b)) l

instance Functor (W b) where
    fmap f (W a) = W $ map (\(a,p) -> (f a, p)) a

instance Num b => Applicative (W b) where
    pure x = W [(x,1)]
    (W fs) <*> (W xs) = W [(f x, a * b) | (f, a) <- fs, (x, b) <- xs]

instance Num b => Monad (W b) where
    return x = W [(x,1)]
    l >>= f = W $ concatMap (\(W d, p) -> map (\(x,q) -> (x, p*q)) d) (runW $ fmap f l)

a .* b = mapW (a*) b

instance (Eq a,Show a,Num b) => Num (W b a) where
    W a + W b = W $ (a ++ b)
    a - b = a + (-1) .* b
    _ * _ = error "Num is annoying"
    abs _ = error "Num is annoying"
    signum _ = error "Num is annoying"
    fromInteger a = if a == 0 then W [] else error "fromInteger can only take zero argument"

collect :: (Ord a, Num b) => W b a -> W b a
collect = W . Map.toList . Map.fromListWith (+) . runW

trimZero = W . filter (\(k,v) -> not $ nearZero v) . runW
simplify = trimZero . collect
-- filter (not . nearZero . snd)

type P a = W Double a

type Q a = W (Complex Double) a

The vector monad factors out the linear piece of a computation. Because of this factoring, the type constrains the mapping to be linear, in a similar way that monads in other contexts might guarantee no leaking of impure computations. This is pretty handy. The function you give to bind corresponds to the columns of the matrix; bind itself performs the matrix application.
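Here is a rough sketch of the same idea in Python (my illustration, with made-up names): a vector is a list of (basis, coefficient) pairs, and bind extends a basis-to-vector function linearly, which is exactly applying a matrix specified column by column.

```python
def bind(vec, column):
    # vec: [(basis, coeff)]. column(b) is the vector that basis element b maps
    # to, i.e. one column of the matrix. Linearity is automatic: each input
    # coefficient multiplies every coefficient in its column.
    return [(b2, c1 * c2) for (b1, c1) in vec for (b2, c2) in column(b1)]

# an example "matrix": swap up/down with a sign flip on one column
def column(b):
    return [("down", 1)] if b == "up" else [("up", -1)]

v = [("up", 2), ("down", 3)]
totals = {}
for b, c in bind(v, column):
    totals[b] = totals.get(b, 0) + c  # collect like terms, as simplify does
assert totals == {"down": 2, "up": -3}
```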

We need some way to zoom into subtrees and then apply operations there. We define the operations lmap and rmap.

lmap :: (forall a. FibTree a b -> Q (FibTree a c)) -> (FibTree e (b,d) -> Q (FibTree e (c,d)))
lmap f (ITT l r) = fmap (\l' -> ITT l' r) (f l)
lmap f (TTI l r) = fmap (\l' -> TTI l' r) (f l)
lmap f (TIT l r) = fmap (\l' -> TIT l' r) (f l)
lmap f (TTT l r) = fmap (\l' -> TTT l' r) (f l)
lmap f (III l r) = fmap (\l' -> III l' r) (f l)

rmap :: (forall a. FibTree a b -> Q (FibTree a c)) -> (FibTree e (d,b) -> Q (FibTree e (d,c)))
rmap f (ITT l r) = fmap (\r' -> ITT l r') (f r)
rmap f (TTI l r) = fmap (\r' -> TTI l r') (f r)
rmap f (TIT l r) = fmap (\r' -> TIT l r') (f r)
rmap f (TTT l r) = fmap (\r' -> TTT l r') (f r)
rmap f (III l r) = fmap (\r' -> III l r') (f r)

You reference a node by the path it takes to get there from the root. For example,  (rmap . lmap . rmap) f applies f at the node that is at the right-left-right position down from the root.
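The same addressing-by-composition trick works on ordinary nested tuples; here is a small Python analog (my illustration) of how (rmap . lmap . rmap) f reaches the right-left-right node:

```python
def lmap(f):
    # apply f to the left element of a pair, leaving the right alone
    return lambda t: (f(t[0]), t[1])

def rmap(f):
    # apply f to the right element of a pair, leaving the left alone
    return lambda t: (t[0], f(t[1]))

tree = (0, ((1, (2, 3)), 4))        # the right-left-right node is (2, 3)
mark = lambda node: ("HERE", node)  # tag the node we land on
path = rmap(lmap(rmap(mark)))       # compose: right, then left, then right
assert path(tree) == (0, ((1, ("HERE", (2, 3))), 4))
```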

###### Braiding

For Fibonacci anyons, the only two nontrivial braidings happen when you braid two $\tau$ particles.

braid :: FibTree a (l,r) -> Q (FibTree a (r,l))
braid (ITT l r) = W [(ITT r l, cis $ 4 * pi / 5)]
braid (TTT l r) = W [(TTT r l, cis $ (- 3) * pi / 5)]
braid (TTI l r) = pure $ TIT r l
braid (TIT l r) = pure $ TTI r l
braid (III l r) = pure $ III r l

-- The inverse of braid
braid' :: FibTree a (l,r) -> Q (FibTree a (r,l))
braid' = star . braid

We have only defined how to braid two particles that were split directly from the same particle. How do we describe the braiding for the other cases? We need to give the linear transformation for how to change basis into other tree structures. With that, braiding is defined for particles that do not share an immediate parent as well.

###### F-Moves

We can transform to a new basis where the histories differ by association. We can braid two particles by reassociating the tree until they sit under a common parent. An association move does not change any of the outgoing leaf positions. It can, however, change a particle in an interior position. We can apply an F-move anywhere inside the tree, not only at the final leaves.

fmove :: FibTree a (c,(d,e)) -> Q (FibTree a ((c,d),e))
fmove (ITT  a  (TIT b c)) = pure $ ITT ( TTI  a b) c
fmove (ITT  a  (TTT b c)) = pure $ ITT ( TTT  a b) c
fmove (ITT  a  (TTI b c)) = pure $ III ( ITT  a b) c

fmove (TIT  a  (TTT b c)) = pure $ TTT ( TIT  a b) c
fmove (TIT  a  (TTI b c)) = pure $ TTI ( TIT  a b) c
fmove (TIT  a  (TIT b c)) = pure $ TIT ( III  a b) c

fmove (TTI  a  (III b c)) = pure $ TTI ( TTI  a b) c
fmove (TTI  a  (ITT b c)) = W [(TIT ( ITT  a b) c, tau)      , (TTT ( TTT  a b) c, sqrt tau)]

fmove (TTT  a  (TTT b c)) = W [(TIT ( ITT  a b) c, sqrt tau) , (TTT ( TTT  a b) c, - tau)]
fmove (TTT  a  (TTI b c)) = pure $ TTI ( TTT  a b) c
fmove (TTT  a  (TIT b c)) = pure $ TTT ( TTI  a b) c

fmove (III  a  (ITT b c)) = pure $ ITT ( TIT  a b) c
fmove (III  a  (III b c)) = pure $ III ( III  a b) c

fmove' :: FibTree a ((c,d),e) -> Q (FibTree a (c,(d,e)))
fmove' (ITT ( TTI  a b) c) = pure $ ITT  a  (TIT b c)
fmove' (ITT ( TTT  a b) c) = pure $ ITT  a  (TTT b c)
fmove' (ITT ( TIT  a b) c) = pure $ III  a  (ITT b c)

fmove' (TTI ( TTT  a b) c) = pure $ TTT  a  (TTI b c)
fmove' (TTI ( TTI  a b) c) = pure $ TTI  a  (III b c)
fmove' (TTI ( TIT  a b) c) = pure $ TIT  a  (TTI b c)

fmove' (TTT ( TTI  a b) c) = pure $ TTT  a  (TIT b c)
fmove' (TTT ( TIT  a b) c) = pure $ TIT  a  (TTT b c)
fmove' (TTT ( TTT  a b) c) = W [(TTI  a  (ITT b c), sqrt tau) , (TTT  a  (TTT b c), - tau)]

fmove' (TIT ( ITT  a b) c) = W [(TTI  a  (ITT b c), tau)      , (TTT  a  (TTT b c), sqrt tau)]
fmove' (TIT ( III  a b) c) = pure $ TIT  a  (TIT b c)

fmove' (III ( III  a b) c) = pure $ III  a  (III b c)
fmove' (III ( ITT  a b) c) = pure $ ITT  a  (TTI b c) -- completes the truncated last case: the inverse of fmove (ITT a (TTI b c))

###### Fusion / Dot product

Two particles that split can only fuse back into themselves. So the definition is pretty trivial. This is like $\hat{e}_i \cdot \hat{e}_j = \delta_{ij}$.

dot :: FibTree a (b, c) -> FibTree a' (b, c) -> Q (FibTree a' a)
dot x@(TTI _ _) y@(TTI _ _) | x == y = pure TLeaf
                            | otherwise = mempty
dot x@(TIT _ _) y@(TIT _ _) | x == y = pure TLeaf
                            | otherwise = mempty
dot x@(TTT _ _) y@(TTT _ _) | x == y = pure TLeaf
                            | otherwise = mempty
dot x@(III _ _) y@(III _ _) | x == y = pure ILeaf
                            | otherwise = mempty
dot x@(ITT _ _) y@(ITT _ _) | x == y = pure ILeaf
                            | otherwise = mempty
dot _ _ = mempty

###### Hexagon and Pentagon equations

The F and R matrices and the fusion rules need to obey consistency conditions called the hexagon and pentagon equations. Certain simple rearrangements have alternate ways of being achieved. The alternative paths need to agree.

pentagon1 :: FibTree a (e,(d,(c,b))) -> Q (FibTree a (((e,d),c),b))
pentagon1 v = do
    v1 <- fmove v
    fmove v1

pentagon2 :: FibTree a (b,(c,(d,e))) -> Q (FibTree a (((b,c),d),e))
pentagon2 v = do
    v1 :: FibTree a (b,((c,d),e)) <- rmap fmove v
    v2 :: FibTree a ((b,(c,d)),e) <- fmove v1
    lmap fmove v2

ex1 = TTT TLeaf (TTT TLeaf (TTT TLeaf TLeaf))

-- returns empty
pentagon = simplify $ (pentagon1 ex1) - (pentagon2 ex1)

hexagon1 :: FibTree a (b,(c,d)) -> Q (FibTree a ((d,b),c))
hexagon1 v = do
    v1 :: FibTree a ((b,c),d) <- fmove v
    v2 :: FibTree a (d,(b,c)) <- braid v1
    fmove v2

hexagon2 :: FibTree a (b,(c,d)) -> Q (FibTree a ((d,b),c))
hexagon2 v = do
    v1 :: FibTree a (b,(d,c)) <- rmap braid v
    v2 :: FibTree a ((b,d),c) <- fmove v1
    lmap braid v2

ex2 = TTT TLeaf (TTT TLeaf TLeaf)

-- returns empty
hexagon = simplify $ (hexagon1 ex2) - (hexagon2 ex2)
###### Next Time:

With this, we have the rudiments of what we need to describe manipulation of anyon spaces. However, applying F-moves manually is rather laborious. Next time we’ll look into automating this using arcane type-level programming. You can take a peek at my trash WIP repo here.

###### References:
A big ole review on topological quantum computation: https://arxiv.org/abs/0707.1889
Ady Stern on The fractional quantum hall effect and anyons: https://www.sciencedirect.com/science/article/pii/S0003491607001674

Another good anyon tutorial: https://arxiv.org/abs/0902.3275

Mathematica program that I still don’t get, but is very interesting: http://www.cs.ox.ac.uk/people/jamie.vicary/twovect/guide.pdf

Kitaev huge Paper: https://arxiv.org/abs/cond-mat/0506438

Bonderson thesis: https://thesis.library.caltech.edu/2447/2/thesis.pdf

Bernevig review: https://arxiv.org/abs/1506.05805

More food for thought:

The Rosetta Stone: http://math.ucr.edu/home/baez/rosetta.pdf

## Variational Method of the Quantum Simple Harmonic Oscillator using PyTorch

A fun (and useful!) method for solving for the ground state of the Schrodinger equation is to minimize the energy integral $\int dx\, \psi^\dagger H \psi$ while keeping the total probability 1. PyTorch is a big ole optimization library, so let’s give it a go.

I’ve tried two versions: using a stock neural network with ReLUs, and making it a bit easier by using a Gaussian with variable width and shift.

We can mimic the probability constraint by dividing by the total normalization $\int dx\, \psi^\dagger \psi$. A Lagrange multiplier or penalty method might allow us to access higher wavefunctions.

SGD seems to do a better job getting a rounder gaussian, while Adam is less finicky but makes a harsh triangular wavefunction.

The ground state solution of $-\frac{d^2\psi}{dx^2} + x^2\psi=E\psi$ is $e^{-x^2/2}$, with an energy of 1 (with this scaling of the Hamiltonian, the conventional factors of 1/2 are absent). We may not get it, because we’re not sampling a very good total domain. Something for further investigation.
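A quick numerical check of that head math (my addition, not from the post), using a finite-difference Rayleigh quotient on a grid: the Gaussian gives $E = 1$ for this scaling of the Hamiltonian.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)

# central-difference second derivative
d2psi = np.gradient(np.gradient(psi, dx), dx)

# Rayleigh quotient <psi|H|psi>/<psi|psi> for H = -d^2/dx^2 + x^2
# (the dx factors in the two Riemann sums cancel in the ratio)
E = np.sum(-psi * d2psi + x**2 * psi**2) / np.sum(psi**2)
assert abs(E - 1.0) < 1e-2
```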

Very intriguing is that pytorch has a determinant in it, I believe. That opens up the possibility of doing a Hartree-Fock style variational solution.

Here is my garbage

import torch
import matplotlib.pyplot as plt
import numpy as np
import torch.optim
from scipy import linalg
import time

import torch.nn as nn
import torch.nn.functional as F

class Psi(nn.Module):
    def __init__(self):
        super(Psi, self).__init__()
        # an affine operation: y = Wx + b
        self.lin1 = nn.Linear(1, 10) # takes x to the 10 different hats
        self.lin2 = nn.Linear(10, 1) # add up the hats
        #self.lin1 = nn.Linear(1, 1)
        #self.lin2 = nn.Linear(2, 1)

    def forward(self, x):
        shifts = self.lin1(x)
        hats = F.relu(shifts)
        y = self.lin2(hats)
        #y = torch.exp(- shifts ** 2 / 4)
        return y

#z = torch.linspace(0, 1, steps=10)

# batch variable for monte carlo integration
x = torch.randn(10000, 1, requires_grad=True) # 10000 points sampled from a standard normal

psi = Psi()
y = psi(x)
import torch.optim as optim

optimizer = optim.SGD(psi.parameters(), lr=0.0001, momentum=0.9, nesterov=True)
#y2 = torch.sin(np.pi*x)
#print(y)
#x2 = x.clone()
plt.scatter(x.detach().numpy(), y.detach().numpy(), label="original")
scalar = torch.ones(1,1)
scalar = torch.ones(1, 1)
for i in range(4000):
    x = torch.randn(1000, 1, requires_grad=True) # fresh batch of sample points each iteration

    y = psi(x)
    y.backward(torch.ones(1000, 1), create_graph=True) # fills x.grad with dpsi/dx

    E = torch.sum(x.grad ** 2 + x**2 * y**2) #+ 10*(psi(scalar*0)**2 + psi(scalar)**2)
    N = torch.sum(y ** 2)
    L = E / N
    print(L)
    optimizer.zero_grad() # added: clear parameter gradients accumulated by y.backward above
    L.backward()
    optimizer.step()

for param in psi.parameters():
    print(param)

plt.scatter(x.detach().numpy(), y.detach().numpy(), label="new")
plt.legend()
plt.show()

# may want to use the current wavefunction for gibbs style sampling
# we need to differentiate with respect to x for the kinetic energy


Edit: Hmm, I didn’t compensate for the fact that I was using randn sampling. That was a goof. I started using uniform sampling, which doesn’t need compensation.

## Attaching the Jordan Wigner String in Numpy

Just a fast (fast to write, not fast to run) little Jordan-Wigner string code.

import numpy as np
from numpy import kron, identity
import numpy.linalg as la

# sigma functions

sigma_x = np.array([[0, 1],[1, 0]])
sigma_y = np.array([[0, -1j],[1j, 0]])
sigma_z = np.array([[1, 0],[0, -1]])

# standard basis

spin_up = np.array([[1],[0]])   # standard basis column vectors
spin_down = np.array([[0],[1]])

sigma_plus = sigma_x + 1j * sigma_y
sigma_minus = sigma_x - 1j * sigma_y

# pauli spin

N = 3
def chainify(mat, pos):
    # place mat at site pos in a tensor-product chain of N sites, identity elsewhere
    if pos == 0:
        newmat = mat
    else:
        newmat = identity(2)
    for j in range(1, N):
        if j == pos:
            newmat = kron(newmat, mat)
        else:
            newmat = kron(newmat, identity(2))
    return newmat

def sx(i):
    return chainify(sigma_x, i)
def sy(i):
    return chainify(sigma_y, i)
def sz(i):
    return chainify(sigma_z, i)
def sp(i):
    return chainify(sigma_plus, i)
def sm(i):
    return chainify(sigma_minus, i)

#print(sz(0))
#print(sz(1))
#print(sz(2))

#print(np.dot(sp(0),sp(0)))
# sp sm = 2 + 2 sz
#print(np.dot(sp(0),sm(0)) - 2*identity(2**N) - 2*sz(0))

I = identity(2**N)

fdag = lambda i: sp(i)/2
f = lambda i: sm(i)/2

def stringify(mat, pos):
    # like chainify, but with sigma_z (the Jordan-Wigner string) on every site before pos
    if pos == 0:
        newmat = mat
    else:
        newmat = sigma_z
    for j in range(1, N):
        if j == pos:
            newmat = kron(newmat, mat)
        elif j < pos:
            newmat = kron(newmat, sigma_z)
        else:
            newmat = kron(newmat, identity(2))
    return newmat

def cdag(i):
    return np.mat(stringify(sigma_plus/2, i))

def c(i):
    return np.mat(stringify(sigma_minus/2, i))

#print(np.dot(cdag(1),c(1)) + np.dot(c(1),cdag(1))) # This is 1
#print(np.dot(cdag(1),c(2)) + np.dot(c(2),cdag(1))) # This is 0

# It does appear to work.

print(cdag(1)*c(1) + c(1)*cdag(1)) # the identity: the anticommutator {c†, c} on one site is 1
print(cdag(1)*c(2) + c(2)*cdag(1)) # the zero matrix: operators on different sites anticommute


What fun!