Categorical LQR Control with Linear Relations

Last time, I described and built an interesting extension of modeling systems via linear functions: linear relations. We remove the requirement that mappings be functional (exactly one output vector for any given input vector), which extends our descriptive capability. This is useful for describing “open” pieces of a linear system like electric circuits. This blog post is about another application: linear dynamical systems and control.

Linear relations are an excellent tool for reachability/controllability. In a control system, it is important to know which states the system can be driven to. With linear dynamics $x_{t+1} = Ax_{t} + B u_{t}$, an easy controllability question is “can my system reach into some subspace”. This isn’t quite the most natural question physically, but it is natural mathematically. My first impulse would be to ask “given the state starts in this little region over here, can it get to this little region over there”, but the subspace question is a bit easier to answer. It’s a little funky but it isn’t useless. It can detect if the control parameter only touches 1 of 2 independent dynamical systems, for example. We can write the equations of motion as a linear relation and iterate composition on them. See Erbele for more.
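To make that "only touches 1 of 2 independent systems" claim concrete, here is a quick numpy sketch (plain matrices rather than the relational machinery of this post, with made-up numbers): iterating the dynamics gives the controllability matrix, whose column span is the reachable subspace.

```python
import numpy as np

# Made-up example: two independent 1D subsystems stacked; the control drives only the first.
A = np.array([[0.5, 0.0],
              [0.0, 0.9]])
B = np.array([[1.0],
              [0.0]])

# The reachable subspace is the column span of [B, AB, A^2 B, ...].
n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
reachable_dim = np.linalg.matrix_rank(ctrb)

print(reachable_dim)  # 1: the control never touches the second subsystem
```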

There is a different set of things (and kind of more interesting things) under the purview of linear control theory: LQR controllers and Kalman filters. The LQR controller and Kalman filter are roughly (exactly?) the same thing mathematically. At an abstract mathematical level, they both rely on the fact that optimization of a quadratic objective $x^T Q x + r^T x + c$ with linear constraints $Ax=b$ is solvable in closed form via linear algebra. The cost of the LQR controller could be the squared distance to some goal position, for example. When you optimize a function, you set the derivative to 0. For a quadratic objective, this is a linear equation. It is a useful framework because it has such powerful computational teeth.

The Kalman filter does a similar thing, except for the problem of state estimation. You have measurements linearly related to the state, which you want to reconcile with prior knowledge of the linear dynamics of the system and the expected errors of measurement and of environmental perturbations to the dynamics.
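As a concrete (if scalar and entirely made-up) instance of that reconciling, here is a minimal sketch of a single Kalman predict/update step:

```python
# 1D Kalman filter step: combine a dynamics prediction with a noisy measurement.
a, w = 1.0, 0.5       # dynamics x' = a x + process noise of variance w (made-up numbers)
h, v = 1.0, 0.2       # measurement y = h x + noise of variance v

x, p = 0.0, 1.0       # prior state estimate and its variance

# Predict through the dynamics
x_pred = a * x
p_pred = a * p * a + w

# Update against a measurement y
y = 1.3
k = p_pred * h / (h * p_pred * h + v)   # Kalman gain
x_post = x_pred + k * (y - h * x_pred)
p_post = (1 - k * h) * p_pred

print(p_post < p_pred)  # True: the measurement always shrinks the variance
```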

There are a couple of different related species of these filters. We could consider discrete time or continuous time. We can also consider infinite horizon or finite horizon. I feel that discrete time and finite horizon are the simplest and most fundamental, so that is what we’ll stick with. The infinite horizon and continuous time are limiting cases.

We can also consider the dynamics to be fixed for all time steps, or varying with time. Varying with time is useful for the approximation of nonlinear systems, where different good linear approximations (Taylor expansions of the dynamics) apply depending on the current state.

There are a couple of ways to solve a constrained quadratic optimization problem. In some sense, the most conceptually straightforward is to solve for a spanning set of the constraint subspace (the “V-Rep” of my previous post), plug that into the quadratic objective, and optimize the now unconstrained problem. However, it is commonplace and elegant in many respects to use the Lagrange multiplier method. You introduce a new parameter for every linear equality and add a term to the objective $\lambda^T (A x - b)$. Hypothetically this term is 0 for the constrained solution, so we haven’t changed the objective in a sense. It’s a funny sleight of hand. If you optimize over x and $\lambda$, the equality constraint comes out as your gradient condition on $\lambda$. Via this method, we convert a linearly constrained quadratic optimization problem into an unconstrained quadratic optimization problem with more variables.
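Here is a small numpy sketch of both routes on a made-up two-variable problem, checking that the Lagrange/KKT route and the substitute-in-a-parametrization (“V-Rep”) route agree:

```python
import numpy as np

# minimize x^T Q x  subject to  A x = b, two ways.
Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])
A = np.array([[1.0, 1.0]])   # one constraint: x1 + x2 = 1
b = np.array([1.0])

# Way 1: Lagrange multipliers. Stationarity of x^T Q x + lam^T (A x - b)
# gives the KKT system [[2Q, A^T], [A, 0]] [x; lam] = [0; b].
kkt = np.block([[2 * Q, A.T],
                [A, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([np.zeros(2), b]))
x_kkt = sol[:2]

# Way 2: parametrize the constraint (the "V-Rep" route): x = x0 + N t,
# with A x0 = b and N spanning the nullspace of A, then minimize freely over t.
x0 = np.linalg.lstsq(A, b, rcond=None)[0]
N = np.array([[1.0], [-1.0]])          # nullspace of [1 1]
t = np.linalg.solve(N.T @ (2 * Q) @ N, -N.T @ (2 * Q) @ x0)
x_vrep = x0 + (N @ t)

print(np.allclose(x_kkt, x_vrep))  # True: both give the same optimum
```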

Despite feeling very abstract, the values these Lagrangian variables take on have an interpretation. They are the “cost” or “price” of the equality they correspond to. If you moved the constraint by an amount $Ax - b + \epsilon$, you would change the optimal cost by an amount $\lambda^T \epsilon$. (Pretty sure I have that right. Units check out. See Boyd https://www.youtube.com/watch?v=FJVmflArCXc)
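We can check that price interpretation numerically (with made-up numbers). With the multiplier term written as $\lambda^T(Ax - b)$, the sign comes out as $dV/db = -\lambda$; with the opposite convention it flips. A finite difference confirms it:

```python
import numpy as np

def qp_solve(Q, A, b):
    # KKT system for min x^T Q x s.t. A x = b, multiplier term lam^T (A x - b)
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[2 * Q, A.T],
                    [A, np.zeros((m, m))]])
    sol = np.linalg.solve(kkt, np.concatenate([np.zeros(n), b]))
    x, lam = sol[:n], sol[n:]
    return x, lam, x @ Q @ x

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, lam, cost = qp_solve(Q, A, b)
eps = 1e-5
_, _, cost_eps = qp_solve(Q, A, b + eps)

# With this sign convention, dV/db = -lam
print(abs((cost_eps - cost) / eps - (-lam[0])) < 1e-3)  # True
```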

The Lagrange multipliers enforcing the linear dynamics are called the co-state variables. They represent the “price” that the dynamics cost, and are derivatives of the optimal value function V(s) (The best value that can be achieved from state s) that may be familiar to you from dynamic programming or reinforcement learning. See references below for more.
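Eliminating the costate from these conditions gives the familiar backward Riccati recursion for the coefficient of the quadratic value function $V_t(x) = x^T P_t x$. A scalar sketch (made-up numbers; the one cute fact is that for $a=b=q=r=1$ the infinite-horizon value coefficient is the golden ratio, which makes a nice check):

```python
# Backward value recursion for scalar LQR: V_t(x) = p_t x^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

p = q  # final condition: V_N(x) = q x^2
for _ in range(100):
    # discrete-time Riccati backward step
    p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)

# For a=b=q=r=1 the fixed point solves p = 1 + p - p^2/(1+p), i.e. p^2 = p + 1:
# the golden ratio.
print(abs(p - (1 + 5 ** 0.5) / 2) < 1e-9)  # True
```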

Let’s get down to some brass tacks. I’ve suppressed some terms for simplicity. You can also add offsets to the dynamics and costs.

A quadratic cost function with lagrange multipliers. Q is a cost matrix associated with the x state variables, R is a cost matrix for the u control variables.

$C = \lambda_0^T (x_0 - \tilde{x}_0) + \sum_t \left( x_t^T Q x_t + u_t^T R u_t + \lambda_{t+1}^T (x_{t+1} - A x_t - B u_t ) \right)$

The equations of motion result from optimizing with respect to $\lambda$, by design.

$\nabla_{\lambda_{t+1}} C = x_{t+1} - A x_t - B u_t = 0$.

The initial conditions are enforced by the zeroth multiplier.

$\nabla_{\lambda_0} C = x_0 - \tilde{x}_{0} = 0$

Differentiation with respect to the state x has the interpretation of a backwards iteration of the value derivative, somewhat related to what one finds in the Bellman equation.

$\nabla_{x_t} C = Q x_{t} + A^T \lambda_{t+1} - \lambda_{t} = 0 \Longrightarrow \lambda_{t} = A^T \lambda_{t+1} + Q x_{t}$

The final condition on value derivative is the one that makes it the most clear that the Lagrange multiplier has the interpretation of the derivative of the value function, as it sets it to that.

$\nabla_{x_N} C = Q x_N - \lambda_{N} = 0$

Finally, differentiation with respect to the control picks the best action given knowledge of the value function at that time step.

$\nabla_{u_t} C = R u_{t} - B^T \lambda_{t+1} = 0$

Ok. Let’s code it up using linear relations. Each of these conditions is a separate little conceptual box. We can build the optimal control step relation by combining these updates.

The following is a bit sketchy. I am confident it makes sense, but I’m not confident that I have all of the signs and other details correct. I also still need to add the linear terms to the objective and offset terms to the dynamics. These details are all kind of actually important. Still, I think it’s nice to set down in preliminary blog form.
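In the meantime, here is a numpy sanity check of the setup (made-up system, and not the relational machinery): stack all states and controls for a small finite horizon into one vector, write the dynamics and initial condition as one big equality constraint, and solve the single KKT system. This is the flattened version of what composing the step relations is supposed to reproduce.

```python
import numpy as np

# Tiny finite-horizon LQR as one stacked KKT system (all numbers made up).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)
x0 = np.array([1.0, 0.0])
T = 5  # horizon

nx, nu = 2, 1
nz = (T + 1) * nx + T * nu       # variables: x_0..x_T and u_0..u_{T-1}
H = np.zeros((nz, nz))           # quadratic cost
C = np.zeros(((T + 1) * nx, nz)) # constraints: initial condition + dynamics
c = np.zeros((T + 1) * nx)

def xi(t): return slice(t * nx, (t + 1) * nx)                         # x_t slots
def ui(t): return slice((T + 1) * nx + t * nu, (T + 1) * nx + (t + 1) * nu)

for t in range(T + 1):
    H[xi(t), xi(t)] = Q
for t in range(T):
    H[ui(t), ui(t)] = R

C[0:nx, xi(0)] = np.eye(nx); c[0:nx] = x0       # x_0 = given
for t in range(T):                              # x_{t+1} = A x_t + B u_t
    row = slice((t + 1) * nx, (t + 2) * nx)
    C[row, xi(t + 1)] = np.eye(nx)
    C[row, xi(t)] = -A
    C[row, ui(t)] = -B

kkt = np.block([[2 * H, C.T], [C, np.zeros((C.shape[0], C.shape[0]))]])
sol = np.linalg.solve(kkt, np.concatenate([np.zeros(nz), c]))
z = sol[:nz]

# the initial condition and dynamics constraints hold at the optimum
print(np.allclose(C @ z, c))  # True
```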

Bits and Bobbles

• The code in context https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs
• Some of the juicier stuff is nonlinear control. Gotta walk before we can run though. I have some suspicions that a streaming library may be useful here, or a lens-related approach. Also ADMM.
• Reminiscent of Keldysh contour. Probably a meaningless coincidence.
• In some sense H-Rep is the analog of (a -> b -> Bool) and V-rep [(a,b)]
• Note that the equations of motion are relational rather than functional for a control system. The control parameters $u$ describe undetermined variables under our control.
• Loopback (trace) the value derivative for infinite horizon control.
• Can solve the Laplace equation with a Poincare Steklov operator. That should be my next post.
• There is something a touch unsatisfying even with the reduced goals of this post. Why did I have to write down the quadratic optimization problem and then hand translate it to linear relations? It’s kind of unacceptable. The quadratic optimization problem is really the most natural statement, but I haven’t figured out how to make it compositional. The last chapter of Rockafellar?

References

https://arxiv.org/abs/1405.6881 Baez and Erbele – Categories in Control

http://www.scholarpedia.org/article/Optimal_control

http://www.argmin.net/2016/05/18/mates-of-costate/ – interesting analogy to backpropagation. Lens connection?

https://courses.cs.washington.edu/courses/cse528/09sp/optimality_chapter.pdf

Linear Relation Algebra of Circuits with HMatrix

Oooh this is a fun one.

I’ve talked before about relation algebra and I think it is pretty neat. http://www.philipzucker.com/a-short-skinny-on-relations-towards-the-algebra-of-programming/. In that blog post, I used finite relations. In principle, they are simple to work with. We can perform relation algebra operations like composition, meet, and join by brute force enumeration.

Unfortunately, brute force may not always be an option. First off, the finite relations grow so enormous as to make this infeasible. Secondly, it is not insane to talk about relations or regions with an infinite number of elements, such as some continuous blob in 2D space. In that case, we can’t even in principle enumerate all the points in the region. What are we to do? We need to develop some kind of finite parametrization of regions to manipulate. This parametrization basically can’t possibly be complete in some sense, and we may choose more or less powerful systems of description for computational reasons.

In this post, we are going to be talking about linear or affine subspaces of a continuous space. These are flat planes of arbitrary dimension. Linear subspaces have to go through the origin, while affine subspaces can have an offset from the origin.

In the previous post, I mentioned that the finite relations formed a lattice, with operations meet and join. These operations were the same as set intersection and union, so the introduction of the extra terminology meet and join felt a bit unwarranted. Now the meet and join aren’t union and intersection anymore. We have chosen not to have the capability to represent the union of two vectors; instead we can only represent the smallest subspace that contains them both, which is their union closed under linear combination. For example, the join of a line and a point will be the plane that goes through both.
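A quick numerical illustration of that affine join (in numpy rather than Haskell): combine the generators of both subspaces plus the difference of their offsets, and count dimensions.

```python
import numpy as np

# Affine join in V-rep terms: {A l + b} joined with {A' l + b'} is generated
# by the generators of both plus the difference of offsets (b - b').
p = np.array([0.0, 0.0, 1.0])        # a point (no generators, just an offset)
d = np.array([[1.0, 0.0, 0.0]]).T    # direction of a line
q = np.array([0.0, 0.0, 0.0])        # a point on the line

gens = np.hstack([d, (p - q).reshape(-1, 1)])  # generators of the join
dim = np.linalg.matrix_rank(gens)

print(dim)  # 2: the join of a point and a line is a plane
```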

Linear/affine stuff is great because it is so computational. Most questions you want to ask are answerable by readily available numerical linear algebra packages. In this case, we’ll use the Haskell package HMatrix, which is something like a numpy/scipy equivalent for Haskell. We’re going to use type-level indices to denote the sizes and partitioning of these spaces, so we’ll need some helper functions.

In case I miss any extensions, make typos, etc, you can find a complete compiling version here https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs

type BEnum a = (Enum a, Bounded a)

-- cardinality. size was already taken by HMatrix :(
card :: forall a. (BEnum a) => Int
card = (fromEnum (maxBound @a)) - (fromEnum (minBound @a)) + 1

In analogy with sets of tuples for defining finite relations, we partition the components of the linear spaces to be “input” and “output” indices/variables $\begin{bmatrix} x_1 & x_2 & x_3 & ... & y_1 & y_2 & y_3 & ... \end{bmatrix}$. This partition is somewhat arbitrary and easily moved around, but the weakening of strict notions of input and output as compared to functions is the source of the greater descriptive power of relations.

Relations are extensions of functions, so linear relations are an extension of linear maps. A linear map has the form $y = Ax$. A linear relation has the form $Ax + By = 0$. An affine map has the form $y = Ax + b$ and an affine relation has the form $Ax + By = b$.

There are at least two useful concrete representations for subspaces.

1. We can write a matrix $A$ and vector $b$ down that corresponds to affine constraints. $Ax = b$. The subspace described is the nullspace of $A$ plus a solution of the equation. The rows of A are orthogonal to the space.
2. We can hold onto generators of subspace. $x = A' l+b$ where l parametrizes the subspace. In other words, the subspace is generated by / is the span of the columns of $A'$. It is the range of $A'$.

We’ll call these two representations the H-Rep and V-Rep, borrowing terminology from similar representations in polytopes (describing a polytope by the inequalities that define its faces or as the convex combination of its vertices). https://inf.ethz.ch/personal/fukudak/lect/pclect/notes2015/PolyComp2015.pdf These two representations are dual in many respects.
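The same two representations and their interconversion are easy to sketch in numpy (an illustration of the idea, not the HMatrix code below): going H-to-V finds a particular solution plus the nullspace, and V-to-H finds the orthogonal complement of the span.

```python
import numpy as np

def nullspace(M, tol=1e-10):
    # columns spanning the nullspace of M, via SVD (what HMatrix does under the hood)
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def h2v(A, b):
    # H-Rep {x : A x = b}  ->  V-Rep {A2 l + b2}
    b2 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular (least-squares) solution
    return nullspace(A), b2

def v2h(A2, b2):
    # V-Rep {A2 l + b2}  ->  H-Rep, with rows orthogonal to the range of A2
    A = nullspace(A2.T).T
    return A, A @ b2

# Round trip on the plane x + y + z = 1
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
A2, b2 = h2v(A, b)
Ar, br = v2h(A2, b2)

# any V-Rep point lands in the recovered H-Rep
pt = A2 @ np.array([1.0, 2.0]) + b2
print(np.allclose(Ar @ pt, br))  # True
```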

-- HLinRel holds A x = b constraint
data HLinRel a b = HLinRel (Matrix Double) (Vector Double) deriving Show

-- x = A l + b. Generator constraint.
data VLinRel a b = VLinRel (Matrix Double) (Vector Double) deriving Show

It is useful to have both reps and interconversion routines, because different operations are easy in the two representations. Any operations defined on one can be defined on the other by sandwiching between these conversion functions. Hence, we basically only need to define operations for one of the reps (if we don’t care too much about efficiency loss which, fair warning, is out the window for today). The bulk of computation will actually be performed by these interconversion routines. The HMatrix function nullspace performs an SVD under the hood and gathers up the space with 0 singular values.

-- if A x = b then x is in the nullspace + a vector b' solves the equation
h2v :: HLinRel a b -> VLinRel a b
h2v (HLinRel a b) = VLinRel a' b' where
b' = a <\> b -- least squares solution
a' = nullspace a

-- if x = A l + b, then A' . x = A' A l + A' b = A' b because A' A = 0
v2h :: VLinRel a b -> HLinRel a b
v2h (VLinRel a' b') = HLinRel a b where
b = a #> b' -- matrix multiply
a = tr $ nullspace (tr a') -- orthogonal space to range of a'. tr is transpose and not trace? A little bit odd, HMatrix.

These linear relations form a category. I’m not using the Category typeclass because I need BEnum constraints hanging around. The identity relation is $x = y$ aka $Ix - Iy = 0$.

hid :: forall a. BEnum a => HLinRel a a
hid = HLinRel (i ||| (- i)) (vzero s) where
    s = card @a
    i = ident s

Composing relations is done by combining the constraints of the two relations and then projecting out the interior variables. Taking the conjunction of constraints is easiest in the H-Rep, where we just need to vertically stack the individual constraints. Projection is easily done in the V-Rep, where you just need to drop the appropriate section of the generator vectors. So we implement this operation by flipping between the two.

hcompose :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c
hcompose (HLinRel m b) (HLinRel m' b') =
    let a'' = fromBlocks [[ ma', mb', 0 ],
                          [ 0  , mb , mc ]] in
    let b'' = vjoin [b', b] in
    let (VLinRel q p) = h2v (HLinRel a'' b'') in -- kind of a misuse
    let q' = (takeRows ca q) -- drop rows belonging to @b
             ===
             (dropRows (ca + cb) q) in
    let [x,y,z] = takesV [ca,cb,cc] p in
    let p' = vjoin [x,z] in -- rebuild without rows for @b
    v2h (VLinRel q' p') -- reconstruct HLinRel
  where
    ca = card @a
    cb = card @b
    cc = card @c
    sb = size b -- number of constraints in first relation
    sb' = size b' -- number of constraints in second relation
    ma' = takeColumns ca m'
    mb' = dropColumns ca m'
    mb = takeColumns cb m
    mc = dropColumns cb m

(<<<) :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c
(<<<) = hcompose

We can implement the general cadre of relation operators: meet, join, converse. I feel the converse is the most relational thing of all. It makes inverting a function nearly a no-op.

hjoin :: HLinRel a b -> HLinRel a b -> HLinRel a b
hjoin v w = v2h $ vjoin' (h2v v) (h2v w)

-- hmatrix took vjoin from me :(
-- joining means combining generators and adding a new generator
-- Closed under affine combination l * x1 + (1 - l) * x2
vjoin' :: VLinRel a b -> VLinRel a b -> VLinRel a b
vjoin' (VLinRel a b) (VLinRel a' b') = VLinRel (a ||| a' ||| (asColumn (b - b'))) b

-- no constraints, everything
-- trivially true
htop :: forall a b. (BEnum a, BEnum b) => HLinRel a b
htop = HLinRel (vzero (1,ca + cb)) (konst 0 1) where
ca = card @a
cb = card @b

-- hbottom?

hconverse :: forall a b. (BEnum a, BEnum b) => HLinRel a b -> HLinRel b a
hconverse (HLinRel a b) = HLinRel ( (dropColumns ca a) |||  (takeColumns ca a)) b where
ca = card @a
cb = card @b



Relational inclusion is the question of subspace inclusion. It is fairly easy to check if a V-Rep is in an H-Rep (just plug the generators into the constraints and see if they obey them) and by using the conversion functions we can define it for arbitrary combos of H and V.


-- forall l. A' ( A l + b) == b'
-- is this numerically ok? I'm open to suggestions.
vhsub :: VLinRel a b -> HLinRel a b -> Bool
vhsub (VLinRel a b) (HLinRel a' b') = (naa' <=  1e-10 * (norm_2 a') * (norm_2 a)  ) && ((norm_2 ((a' #> b) - b')) <= 1e-10 * (norm_2 b')  ) where
naa' = norm_2 (a' <> a)

hsub :: HLinRel a b -> HLinRel a b -> Bool
hsub h1 h2 = vhsub (h2v h1) h2

heq :: HLinRel a b -> HLinRel a b -> Bool
heq a b = (hsub a b) && (hsub b a)

instance Ord (HLinRel a b) where
(<=) = hsub
(>=) = flip hsub

instance Eq (HLinRel a b) where
(==) = heq

It is useful to use the direct sum of the spaces as a monoidal product.

hpar :: HLinRel a b -> HLinRel c d -> HLinRel (Either a c) (Either b d)
hpar (HLinRel mab v) (HLinRel mcd v') = HLinRel (fromBlocks [ [mab, 0], [0 , mcd]]) (vjoin [v, v']) where

hleft :: forall a b. (BEnum a, BEnum b) => HLinRel a (Either a b)
hleft = HLinRel ( i ||| (- i) ||| (konst 0 (ca,cb))) (konst 0 ca) where
ca = card @a
cb = card @b
i = ident ca

hright :: forall a b. (BEnum a, BEnum b) => HLinRel b (Either a b)
hright = HLinRel ( i ||| (konst 0 (cb,ca)) ||| (- i) ) (konst 0 cb) where
ca = card @a
cb = card @b
i = ident cb

htrans :: HLinRel a (Either b c) -> HLinRel (Either a b) c
htrans (HLinRel m v) = HLinRel m v

hswap :: forall a b. (BEnum a, BEnum b) => HLinRel (Either a b) (Either b a)
hswap = HLinRel (fromBlocks [[ia ,0,0 ,-ia], [0, ib,-ib,0]]) (konst 0 (ca + cb)) where
ca = card @a
cb = card @b
ia = ident ca
ib = ident cb

hsum :: forall a. BEnum a => HLinRel (Either a a) a
hsum = HLinRel ( i ||| i ||| - i ) (konst 0 ca)  where
ca = card @a
i= ident ca

hdup :: forall a. BEnum a => HLinRel a (Either a a)
hdup = HLinRel (fromBlocks [[i, -i,0 ], [i, 0, -i]]) (konst 0 (ca + ca))  where
ca = card @a
i= ident ca

hdump :: HLinRel a Void
hdump = HLinRel 0 0

hlabsorb ::forall a. BEnum a => HLinRel (Either Void a) a
hlabsorb = HLinRel m v where (HLinRel m v) = hid @a 

A side note: Void causes some consternation. Void is the type with no elements and is the index type of a 0 dimensional space. It is the unit object of the monoidal product. Unfortunately by an accident of the standard Haskell definitions, actual Void is not a BEnum. So, I did a disgusting hack. Let us not discuss it more.

Circuits

Baez and Fong have an interesting paper where they describe building circuits using a categorical graphical calculus. We have the pieces to go about something similar. What we have here is a precise way in which circuit diagrams can be thought of as string diagrams in a monoidal category of linear relations.

An idealized wire has two quantities associated with it, the current flowing through it and the voltage it is at.

-- a 2d space at every wire of current and voltage.
data IV = I | V deriving (Show, Enum, Bounded, Eq, Ord)

When we connect wires, the currents must be conserved and the voltages must be equal. hid and hcompose from above still achieve that. Composing two independent circuits in parallel is achieved by hpar.

We will want some basic tinker toys to work with.

A resistor in series has the same current at both ends and a voltage drop proportional to the current.

resistor :: Double -> HLinRel IV IV
resistor r = HLinRel ( (2><4)  [ 1,0,-1,   0,
r, 1, 0, -1]) (konst 0 2) 

Composing two resistors in series adds the resistance: (resistor r1) <<< (resistor r2) == resistor (r1 + r2)

A bridging resistor allows current to flow between the two branches

bridge :: Double -> HLinRel (Either IV IV) (Either IV IV)
bridge r = HLinRel (  (4><8) [ 1,0, 1,  0, -1, 0, -1,  0, -- current conservation
0, 1, 0, 0, 0, -1 , 0,  0, --voltage maintained left
0, 0, 0, 1, 0,  0,  0, -1, -- voltage maintained right
r, 1, 0,-1, -r,  0,  0, 0  ]) (konst 0 4)


Composing two bridge circuits puts the bridge resistors in parallel. The conductance $G=\frac{1}{R}$ of resistors in parallel adds: hcompose (bridge r1) (bridge r2) == bridge (1 / (1/r1 + 1/r2)).

An open circuit allows no current to flow and ends a wire. open ~ resistor infinity

open :: HLinRel IV Void
open = HLinRel (fromLists [[1,0]]) (konst 0 1)

At branching points, the voltage is maintained, but the current splits.

cmerge :: HLinRel (Either IV IV) IV
cmerge = HLinRel (fromLists [[1, 0, 1, 0, -1, 0],
                             [0, 1, 0, 0, 0, -1],
                             [0, 0, 0, 1, 0, -1]]) (konst 0 3)

This cmerge combinator could also be built using a short == bridge 0 , composing a branch with open, and then absorbing the Void away.

We can bend wires up or down by using a composition of cmerge and open.

cap :: HLinRel  (Either IV IV) Void
cap  = hcompose open cmerge

cup :: HLinRel Void (Either IV IV)
cup = hconverse cap

ground :: HLinRel IV Void
ground = HLinRel ( (1><2) [ 0 , 1 ]) (vzero 1) 

Voltage and current sources enforce current and voltage to be certain values

vsource :: Double -> HLinRel IV IV
vsource v = HLinRel ( (2><4) [ 1,0,-1,   0,
0, 1, 0, -1]) (fromList [0,v])

isource :: Double -> HLinRel IV IV
isource i = HLinRel (fromLists [[1, 0, -1, 0], -- current conservation
                                [1, 0, 0, 0]]) (fromList [0,i])

Measurements of circuits proceed by probes.

type VProbe = ()
vprobe :: HLinRel IV VProbe
vprobe = HLinRel ( (2><3)  [1,0,0,
0,1,-1]) (konst 0 2)  

Inductors and capacitors could be included easily, but would require the entries of the HMatrix values to be polynomials in the frequency $\omega$, which it does not support (but it could!). We'll leave those off for another day.

We can actually verify by computation that the rules suggested above are being followed.

r20 :: HLinRel IV IV
r20 = resistor 20

main :: IO ()
main = do
    print (r20 == (hid <<< r20))
    print (r20 == r20 <<< hid)
    print (r20 == (hmeet r20 r20))
    print $ resistor 50 == r20 <<< (resistor 30)
    print $ (bridge 10) <<< (bridge 10) == (bridge 5)
    print $ v2h (h2v r20) == r20
    print $ r20 <= htop
    print $ hconverse (hconverse r20) == r20
    print $ (open <<< r20) == open

Bits and Bobbles

• Homogeneous systems are usually a bit more elegant to deal with, although a bit more unfamiliar and abstract.
• Could make a pandas like interface for linear relations that uses numpy/scipy.sparse for the computation. All the swapping and associating is kind of fun to design, not so much to use. Labelled n-way relations are nice for users.
• Implicit/Lazy evaluation. We should let the good solvers do the work when possible. We implemented our operations eagerly. We don't have to. By allowing hidden variables inside our relations, we can avoid the expensive linear operations until it is useful to actually compute on them.
• Relational division = quotient spaces?
• DSL. One of the beauties of the pointfree/categorical approach is that you avoid the need for binding forms. This makes for a very easily manipulated DSL. The transformations feel like those of ordinary algebra and you don't have to worry about the subtleties of index renaming or substitution under binders.
• Sparse is probably really good. We have lots of identity matrices and simple rearrangements. It is very wasteful to use dense operations on these.
• Schur complements https://en.wikipedia.org/wiki/Schur_complement are the name of the game for projecting out pieces of linear problems. We have some overlap.
• Linear relations -> Polyhedral relations -> Convex Relations. Linear is super computable, polyhedral can blow up. Rearrange a DSL to abuse Linear programming as much as possible for queries.
• Network circuits. There is an interesting subclass of circuits that is designed to be pretty composable.

https://en.wikipedia.org/wiki/Two-port_network Two-port networks are a very useful subclass of electrical circuits. They model transmission lines fairly well, and are easily composable for filter construction.

It is standard to describe these networks by giving a linear function between two of the variables and the other two. Depending on your choice of which variables depend on which, these are called the z-parameters, y-parameters, h-parameters, scattering parameters, or abcd parameters. There are tables of formulas for converting from one form to the others. The different parameters suit different use cases for composition and combining in parallel or series. From the perspective of linear relations this all seems rather silly. The necessity for so many descriptions, and the confusing relationship between them, comes from the unnecessary and overly rigid requirement of having a linear function-like relationship rather than just a general relation, which depending on the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A functional relationship is always a lie (although a sometimes useful one), as there is always back-reaction of new connections.
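To give the parametrizations their due: the ABCD (transmission) parameters are the ones designed so that cascading two-ports is plain matrix multiplication. A quick numpy check that two series resistors cascade to their sum:

```python
import numpy as np

# ABCD (transmission) matrix of a series resistor: [v1; i1] = M [v2; i2]
def series_r(r):
    return np.array([[1.0, r],
                     [0.0, 1.0]])

# Cascading two-ports is matrix multiplication,
# and two series resistors compose to their sum.
cascade = series_r(10.0) @ series_r(20.0)
print(np.allclose(cascade, series_r(30.0)))  # True
```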

-- voltage divider
divider :: Double -> Double -> HLinRel (Either IV IV) (Either IV IV)
divider r1 r2 = hcompose (bridge r2) (hpar (resistor r1) hid) 

The relation model also makes clearer how to build lumped models out of continuous ones. https://en.wikipedia.org/wiki/Lumped-element_model

• Because the type indices have no connection to the actual data types (they are phantom) it is a wise idea to use smart constructors that check that the sizes of the matrices makes sense.

-- smart constructors
hLinRel :: forall a b. (BEnum a, BEnum b) => Matrix Double -> Vector Double -> Maybe (HLinRel a b)
hLinRel m v | cols m == (ca + cb) &&  (size v == rows m)  = Just (HLinRel m v)
|  otherwise = Nothing  where
ca = card @a
cb = card @b

• Nonlinear circuits. Grobner Bases and polynomial relations?
• Quadratic optimization under linear constraints. Can't get it to come out right yet. Clutch for Kalman filters. Nice for many formulations like least power, least action, minimum energy principles. Edit: I did more in this direction here http://www.philipzucker.com/categorical-lqr-control-with-linear-relations/
• Quadratic Operators -> Convex operators. See last chapter of Rockafellar.
• Duality of controllers and filters. It is well known (I think) that for every controller algorithm there is a filter algorithm that is basically the same thing.
• LQR - Kalman
• Viterbi filter - Value function table
• particle filter - Monte Carlo control
• Extended Kalman - iLQR-ish? Use local approximation of dynamics
• unscented kalman - ?

Functors, Vectors, and Quantum Circuits

Vectors are dang useful things, and any energy you put into them seems to pay off massive dividends.

Vectors and Linear Algebra are useful for:

• 2D, 3D, 4D geometry stuff. Computer graphics, physics etc.
• Least Squares Fitting
• Solving discretized PDEs
• Quantum Mechanics
• Analysis of Linear Dynamical Systems
• Probabilistic Transition Matrices

There are certain analogies between Haskell Functors and Vectors that correspond to a style of computational vector mathematics that I think is pretty cool and don’t see talked about much.

Due to the expressivity of its type system, Haskell has a first class notion of container that many other languages don’t. In particular, I’m referring to the fact that Haskell has higher kinded types * -> * (types parametrized on other types) that you can refer to directly without filling them first. Examples in the standard library include Maybe, [], Identity, Const b, and Either b. Much more vector-y feeling examples can be found in Kmett’s linear package V0, V1, V2, V3, V4. For example, the 4 dimensional vector type V4

data V4 a = V4 a a a a

This really isn’t such a strange, esoteric thing as it may appear. You wouldn’t blink an eye at the type

struct V4 {
double x, y, z, w;
}

in some other language. What makes Haskell special is how compositional and generic it is. We can build thousand element structs with ease via composition. What we have here is an alternative to the paradigm of computational vectors ~ arrays. Instead we have computational vectors ~ structs. In principle, I see no reason why this couldn’t be as fast as arrays, although with current compiler expectations it probably isn’t.

Monoidal categories are a mathematical structure that models this analogy well. It has been designed by mathematicians for aesthetic elegance, and it seems plausible that following its example leads us to interesting, useful, and pleasant vector combinators. And I personally find something that tickles me about category theory.

So to get started, let’s talk a bit about functors.

The Algebra of Functors

Functors in Haskell are a typeclass for containers. They allow you to map functions over all the items in the container. They are related to the categorical notion of functor, which is a mapping between categories.

type Container = * -> * -- Note: This is actually a kind signature.
-- Kinds and types are the same thing in Haskell.

You can lift the product and sum of types to the product and sum of Functors which you may find in Data.Functor.Product and Data.Functor.Sum. This is analogous to the lifting of ordinary addition and multiplication to the addition and multiplication of polynomials, which are kind of like numbers with a “hole”.

newtype Product f g a = Pair (f a , g a)
newtype Sum f g a = Sum (Either (f a) (g a))

Functors also compose. A container of containers of a is still a container of a. We can form composite containers by using the Compose newtype wrapper.

newtype Compose f g a = Compose (f (g a))

When you use this Compose newtype, instead of having to address the individual elements by using fmap twice, a single application of fmap will teleport you through both layers of the container.

Product, Sum, and Compose are all binary operators on functors. The type constructors have kind

{- Enter into ghci -}
:kind Compose ==> (* -> *) -> (* -> *) -> (* -> *)
:kind Product ==> (* -> *) -> (* -> *) -> (* -> *)
:kind Sum ==> (* -> *) -> (* -> *) -> (* -> *)

Some important other functors from the algebra of types perspective are Const Void a, Const () a, and Identity a. These are identity elements for Sum, Product, and Compose respectively.

You can define mappings between containers that don’t depend on the specifics of their contents. These mappings can only rearrange, copy and forget items of their contained type. This can be enforced at the type level by the polymorphic type signature

type f ~> g = forall a. f a -> g a

These mappings correspond in categorical terminology to natural transformations between the functors f and g. There is a category where objects are Functors and morphisms are natural transformations. Sum, Product, and Compose all obey the laws necessary to be a monoidal product on this category.

How the lifting of functions works for Compose is kind of neat.

mon_prod :: Functor f' => (f ~> f') -> (g ~> g') -> (Compose f g ~> Compose f' g')
mon_prod ntf ntg (Compose fg) = Compose (fmap ntg (ntf fg))
-- or equivalently Compose (ntf (fmap ntg fg)) with a (Functor f) typeclass requirement.

Because the natural transformations require polymorphic types, when you apply ntf to fg the polymorphic variable a in the type of ntf restricts to a ~ g a'.

Product and Sum have a straightforward notion of commutativity ((a,b) is isomorphic to (b,a)). Compose is more subtle. sequenceA from the Traversable typeclass can swap the ordering of composition. sequenceA . sequenceA may or may not be the identity depending on the functors in question, so it has some flavor of a braiding operation. This is an interesting post on that topic https://parametricity.com/posts/2015-07-18-braids.html

Combinators of these sorts arise in at least the following contexts:

• Data types a la carte – A systematic way of building extensible data types
• GHC Generics – A system for building generic functions that operate on data types that can be described with sums, products, recursion, and holes.
• In and around the Lens ecosystem

Also see the interesting post by Russell O’Connor on functor oriented programming: http://r6.ca/blog/20171010T001746Z.html. I think the above is part of what he is referring to.

Vector Spaces as Shape

Vector spaces are made of two parts, the shape (dimension) of the vector space and the scalar.

Just as a type of kind * -> * can be thought of as a container modulo its held type, it can also be a vector modulo its held scalar type. The higher kinded type for vector gives an explicit slot to place the scalar type.

type Vect = * -> *

The standard Haskell typeclass hierarchy gives you some of the natural operations on vectors if you so choose to abuse it in that way.

• Functor ~> Scalar Multiplication: smul s = fmap (* s)
• Applicative ~> Vector Addition: vadd x y = (+) <$> x <*> y
• Traversable ~> Transposition. sequenceA has the type of transposition and works correctly for the linear style containers like V4.

The linear library does use Functor for scalar multiplication, but defines a special typeclass for addition, Additive. I think this is largely for the purpose of bringing Map-like vectors into the fold, but I’m not sure.

Once we’ve got the basics of addition and scalar multiplication down, the next thing I want is operations for combining vector spaces. Two important ones are the Kronecker product and the direct sum. In terms of indices, the Kronecker product is a space that is indexed by the cartesian product (,) of its input space indices, and the direct sum is a space indexed by the Either of its input space indices. Both are very useful constructs. I use the Kronecker product all the time when I want to work on 2D or 3D grids, for example. If you’ll excuse my python, here is a toy 2-D finite difference Laplace equation example. We can lift the 1D second derivative matrix $K = \partial_x^2$ using the Kronecker product: $K2 = K \otimes I + I \otimes K$. The direct sum is useful as a notion of stacking matrices.

import numpy as np
from scipy.linalg import toeplitz # toeplitz makes constant diagonal matrices

N = 10 # We're making a 10x10 grid
row = np.zeros(N)
row[0] = -2
row[1] = 1
K = toeplitz(row,row)
I = np.eye(N) # identity matrix
K2 = np.kron(K,I) + np.kron(I,K)

The following is perhaps the most important point of the entire post.

type Kron = Compose
type DSum = Product

Compose of vector functors gives the Kronecker product, and Product gives the direct sum (this can be confusing, but it’s right. Remember, the sum in direct sum refers to the indices).

We can form the Kronecker product of vectors given a Functor constraint.

kron :: (Num a, Functor f, Functor g) => f a -> g a -> Kron f g a
kron f g = Compose $ fmap (\s1 -> fmap (\s2 -> s1 * s2) g) f

dsum :: f a -> g a -> DSum f g a
dsum f g = Pair f g

Notice we have two distinct but related things called kron: Kron and kron. One operates on vector spaces and the other operates on vector values.
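To make this concrete with only base (using plain lists as the functors rather than linear's V2/V4, so this is my adaptation of the post's definitions): kron multiplies every pair of coordinates, dsum stacks the two vectors.

```haskell
import Data.Functor.Compose (Compose(..))
import Data.Functor.Product (Product(..))

type Kron = Compose
type DSum = Product

-- Kronecker product of vector values: every pairwise product of coordinates.
kron :: (Num a, Functor f, Functor g) => f a -> g a -> Kron f g a
kron f g = Compose $ fmap (\s1 -> fmap (\s2 -> s1 * s2) g) f

-- Direct sum of vector values: stack the two vectors.
dsum :: f a -> g a -> DSum f g a
dsum = Pair

main :: IO ()
main = do
  print (getCompose (kron [1, 2] [10, 20 :: Int]))  -- [[10,20],[20,40]]
  case dsum [1, 2] [3 :: Int] of
    Pair x y -> print (x, y)                        -- ([1,2],[3])
```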

Building vector spaces out of small combinators like V2, V4, DSum, Kron is interesting for a number of reasons.

• It is well typed. Similar to Nat indexed vectors, the types specify the size of the vector space. We can easily describe vector spaces as powers of 2 as V16 = Kron V2 (Kron V2 (Kron V2 (Kron V2 V1))), similarly in terms of its prime factors, or we can do a binary expansion (least significant bit first) V5 = DSum V1 (Kron V2 (DSum V0 (Kron V2 V1))) or other things. We do this without going into quasi-dependently typed land or GADTs.
• It often has better semantic meaning. It is nice to say Measurements, or XPosition or something rather than just denote the size of a vector space in terms of a nat. It is better to say a vector space is the Kron of two meaningful vector spaces than to just say it is a space of size m*n. I find it pleasant to think of the naturals as a free Semiring rather than as the Peano Naturals and I like the size of my vector space defined similarly.
• Interesting opportunities for parallelism. See Conal Elliott’s paper on scans and FFT: http://conal.net/papers/generic-parallel-functional/

What do linear operators look like?

In the Vectors as shape methodology, Vectors look very much like Functors.

I have been tempted to lift the natural transformation type above to the following for linear operators.

type LinOp f g = forall a. (Num a) => f a -> g a

In a sense this works: we could implement kron because many of the container types (V1, V2, V3, etc.) in the linear package implement Num. However, choosing Num is a problem. Why not Fractional? Why not Floating? Sometimes we want those. Why not just specifically Double?

We don’t really want to lock away the scalar in a higher rank polymorphic type. We want to ensure that everyone is working in the same scalar type before allowing things to proceed.

type LinOp f g a = f a -> g a

Note also that this type does not constrain us to linearity. Can we form the Kronecker product of linear operators? Yes, but I’m not in love with it. This is not nearly so beautiful as the little natural transformation dance.

kron'' :: (Applicative f', Applicative g', Traversable f, Traversable g') =>
    (f a -> f' a) -> (g a -> g' a) -> (Kron f g a -> Kron f' g' a)
kron'' lf lg (Compose fga) = Compose $ sequenceA $ fmap lf $ sequenceA $ fmap lg fga

This was a nice little head scratcher for me. Follow the types, my friend! I find this particularly true for uses of sequenceA: if I want the containers swapped in ordering, sequenceA is usually the right call. It could be called transpose.

Giving the vector direct access to the scalar feels a bit off to me. I feel like it doesn’t leave enough “room” for compositionality. However, there is another possible definition of morphisms that I think is rather elegant.

type LinOp1 f g a = forall k. Additive k => Kron f k a -> Kron g k a

Does this form actually enforce linearity? You may still rearrange objects. Great. You can also now add and scalar multiply them with the Additive k constraint. We also expose the scalar, so it can be enforced to be consistent.

One other interesting thing to note is that these forms allow nonlinear operations. fmap, liftU2 and liftI2 are powerful operations, but I think if we restricted Additive to just a correctly implemented scalar multiply and vector addition operation, and zero, we’d be good.

class Additive' f where
smul :: Num a => a -> f a -> f a
vadd :: Num a => f a -> f a -> f a
zero :: Num a => f a

We can recover the previous form by instantiating k to V1. V1, the 1-d vector space, is almost a scalar and can play the scalar’s role in many situations. V1 is the unit object with respect to the monoidal product Kron.

There seems to be a missing instance of Additive that is useful. There is probably a good reason it isn’t there, but I need it.

instance (Additive g, Additive f) => Additive (Compose f g) where
(Compose v) ^+^ (Compose w) = Compose (liftU2 (^+^) v w)
zero = zero
liftU2 f (Compose x) (Compose y) = Compose $ liftU2 (liftU2 f) x y
liftI2 f (Compose x) (Compose y) = Compose $ liftI2 (liftI2 f) x y

Monoidal Categories

The above analogy can be put into mathematical terms by noting that both vectors and functors form monoidal categories. I talked quite a bit about monoidal categories in a previous post http://www.philipzucker.com/a-touch-of-topological-computation-3-categorical-interlude/ .

Categories are the combo of a collection of objects and arrows between the objects. The arrows can compose as long as the head of one is on the same object as the tail of the other. On every object, there is always an identity arrow, which when composed will do nothing.

We need a little extra spice to turn categories into monoidal categories. One way of thinking about it is that monoidal categories have ordinary category composition and some kind of horizontal composition, putting things side to side. Ordinary composition is often doing something kind of sequentially, applying a sequence of functions, or a sequence of matrices. The horizontal composition is often something parallel feeling, somehow applying the two arrows separately to separate pieces of the system.

Why are they called Monoidal?

There is a funny game category people play where they want to lift ideas from other fields and replace the bits and pieces in such a way that the entire thing is defined in terms of categorical terminology. This is one such example.

A monoid is a binary operation that is associative and has an identity.

Sometimes people are more familiar with the concept of a group. If not, ignore the next sentence. Monoids are like groups without requiring an inverse.

Numbers are separately monoids under addition, multiplication, and minimization (and more), all of which are associative operations with an identity (0, 1, and infinity respectively).

Exponentiation is a binary operation that is not a monoid, as it isn’t associative.

class Semigroup a => Monoid a where
mempty  :: a
mappend :: a -> a -> a

A common example of a monoid is list, where mempty is the empty list and mappend appends the lists.
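For instance, checking the list monoid operations and laws directly (a small base-only sketch):

```haskell
main :: IO ()
main = do
  print (mempty :: [Int])              -- []
  print (mappend [1, 2] [3] :: [Int])  -- [1,2,3]
  -- associativity and identity laws
  print (mappend (mappend [1] [2]) [3] == mappend [1] (mappend [2] [3 :: Int]))  -- True
  print (mappend mempty [1, 2 :: Int] == [1, 2])                                 -- True
```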

There are different set-like intuitions for categories. One is that the objects in the category are big opaque sets. This is the case for Hask, Rel and Vect.

A different intuition is that the category itself is like a set, and the objects are the elements of that set. There just so happens to be some extra structure knocking around in there: the morphisms. This is more often the feel for examples like preorders or graphs. The word “monoidal” means that we have a binary operation on the objects. But in the category theory aesthetic, you also need that binary operation to “play nice” with the morphisms that are hanging around too.

Functors are the first thing that has something like this. A Functor is a map that takes objects to objects and arrows to arrows in a nice way. A binary functor takes two objects to an object, and two arrows to one arrow, in a way that plays nice (commutes) with arrow composition.

String diagrams

String diagrams are a graphical notation for monoidal categories. Again, I discussed this more here.

Morphisms are denoted by boxes. Regular composition is shown by plugging arrows together vertically. Monoidal product is denoted by putting the arrows side to side.

When I was even trying to describe what a monoidal category was, I was already using language evocative of string diagrams.

You can see string diagrams in the documentation for the Arrow library. Many diagrams that people use in various fields can be formalized as the string diagrams for some monoidal category. This is a big chunk of Applied Category Theory.

This is the connection to quantum circuits, which are after all a graphical notation for very Kroneckery linear operations.

type Qubit = V2
type C = Complex Double

assoc :: Functor f => (Kron (Kron f g) h) ~> (Kron f (Kron g h))
assoc = Compose . (fmap Compose) . getCompose . getCompose

assoc' :: Functor f =>  (Kron f (Kron g h)) ~> (Kron (Kron f g) h)
assoc' (Compose x) = Compose $ Compose $ fmap getCompose x

kron'' :: LinOp1 f f' a -> LinOp1 g g' a -> Kron (Kron f g) k a -> Kron (Kron f' g') k a
kron'' lf lg fgk = let v = assoc fgk in assoc' (Compose $ fmap lg $ getCompose (lf v))

sigx' :: LinOp1 Qubit Qubit C
sigx' (Compose (V2 up down)) = Compose $ V2 down up

sigz' :: LinOp1 Qubit Qubit C
sigz' (Compose (V2 up down)) = Compose $ V2 up ((-1) *^ down)

sigy' :: LinOp1 Qubit Qubit C
sigy' (Compose (V2 up down)) = Compose $ V2 ((-i) *^ down) (i *^ up) where i = 0 :+ 1

swap' :: (Traversable f, Applicative g) => LinOp1 (Kron f g) (Kron g f) a
swap' (Compose (Compose fgk)) = Compose $ Compose $ sequenceA fgk

cnot :: LinOp1 (Kron Qubit Qubit) (Kron Qubit Qubit) a
cnot (Compose (Compose (V2 (V2 up1 down1) v))) = Compose $ Compose $ V2 (V2 down1 up1) v

phase :: Double -> LinOp1 Qubit Qubit C
phase phi (Compose (V2 up down)) = Compose $ V2 up ((cis phi) *^ down)

lefting :: (Additive f, Additive k, Additive g) => LinOp1 f g a -> LinOp1 (Kron f k) (Kron g k) a
lefting l = kron'' l id -- Qubit.assoc' . l . Qubit.assoc

righting :: (Additive k, Additive f, Additive g) => LinOp1 f g a -> LinOp1 (Kron k f) (Kron k g) a
righting l = kron'' id l -- or: righting l (Compose (Compose fkk)) = Compose $ Compose $ fmap (getCompose . l . Compose) fkk

example :: LinOp1 (Kron Qubit Qubit) (Kron Qubit Qubit) C
example = (lefting sigx') . (lefting sigy') . (righting sigz') . swap'


There is an annoying amount of repetitive bookkeeping with the associative structure of Kron. This can hopefully largely be avoided with coerce, but I’m not sure. I was having trouble with roles when doing it generically.

Bits and Bobbles

• Woof. This post was more draining to write than I expected. I think there is still a lot left to say. Sorry about the editing everyone! Bits and pieces of this post are scattered in this repo
• The discussion of Vect = * -> * is useful for discussion of 2-Vect, coming up next. What if we make vectors of Vect? Wacky shit.
• Metrics and dual vectors: type Dual f a = f a -> a. type Dual1 f a = forall k. Additive k => Kron f k a -> k a
• Adjunction diagrams have cups and caps. Since we have been using representable functors, they actually have a right adjunction that is tupling with the vector space index type. This gives us something that almost feels like a metric but a weirdly constrained metric.
• LinOp1 form is yoneda? CPS? Universally quantified k is evocative of forall c. (a -> c) -> (b -> c)

Appendix

Representable/Naperian Functors

Containers that are basically big product types are also known as representable, Naperian, or logarithmic. “Representable” places emphasis on the isomorphism between such a container type f and the function type (->) i for some index type i, so that f a is isomorphic to (i -> a), which by the algebra of types corresponds to $a^i$ (i copies of a). The arrow type is written as an exponential $a^i$ because if you have finite enumerable types i and a, that is the number of possible tabulations of i -> a. They are called Naperian/Logarithmic because the index type plays the role of a logarithm of the container type. If you take Product f g, this container is indexed by the sum Either i j; Compose f g is indexed by the product (i, j), mirroring the identities $a^{i+j} = a^i a^j$ and $(a^j)^i = a^{ij}$. The Sum of two representable functors is no longer representable, just as logarithms of sums $\log(f + g)$ do not have good identities associated with them.

See Gibbons article. There is a good argument to be made that representable functors are a good match for vectors/well typed tensor programming.

But note that there is a reasonable interpretation for container types with sum types in them. These can be thought of as subspaces, different bases, or as choices of sparsity patterns. When you define addition, you’ll need to say how these subspaces reconcile with each other.
-- two bases at 45 degrees to each other.
data V2_45 a = XY a a | XY' a a
data Maybe a = Just a | Nothing -- a 1-d vector space with a special marker for the zero vector.
data Maybe2 a = Just2 a a | Nothing2 -- a 2d vector space with a special zero marker

Hask is a name for the category that has objects as Haskell types and morphisms as Haskell functions.

Note that it’s a curious mixing of type/value layers of Haskell. The objects are types whereas the function morphisms are Haskell values. Composition is given by (.) and the identity morphisms are given by id.

For Haskell, you can compose functions, but you can also smash functions together side by side. These combinators are held in Control.Arrow.

You can smash together types with tuple (,) or with Either. Both of these are binary operators on types. The corresponding mapping on morphisms are given by

(***) :: a b c -> a b' c' -> a (b, b') (c, c')
(+++) :: a b c -> a b' c' -> a (Either b b') (Either c c') 

These are binary operators on morphisms that play nice with the composition structure of Haskell.
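Specialized to ordinary functions (the (->) instance of Arrow), these behave as follows:

```haskell
import Control.Arrow ((***), (+++))

main :: IO ()
main = do
  -- (***) applies two functions to the two halves of a tuple.
  print (((+ 1) *** show) (1 :: Int, 2 :: Int))            -- (2,"2")
  -- (+++) applies one function or the other depending on the Either tag.
  print (((+ 1) +++ not) (Left 1 :: Either Int Bool))      -- Left 2
  print (((+ 1) +++ not) (Right True :: Either Int Bool))  -- Right False
```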

Monoidal Combinators of Functors

A monoidal category also has unit objects. This is given by the Identity functor

rightUnitor :: Functor f => Compose f Identity ~> f
rightUnitor (Compose f) = fmap runIdentity f

rightUnitor' :: Functor f => f ~> Compose f Identity
rightUnitor' = Compose . fmap Identity

leftUnitor' :: f ~> Compose Identity f
leftUnitor' = Compose . Identity

leftUnitor :: Compose Identity f ~> f
leftUnitor = runIdentity . getCompose

There is also a sense of associativity. It is just newtype rearrangement, so it can also be achieved with a coerce (although not polymorphically?).

assoc :: Functor f => Compose (Compose f g) h ~> Compose f (Compose g h)
assoc = Compose . fmap Compose . getCompose . getCompose

assoc' :: Functor f => Compose f (Compose g h) ~> Compose (Compose f g) h
assoc' (Compose x) = Compose $ Compose $ fmap getCompose x

Similarly, we can define a monoidal category structure using Product or Sum instead of Compose.

These are all actually just newtype rearrangements, so they should all be implementable with coerce, but I couldn’t get the roles to go through generically.

A Short Skinny on Relations & the Algebra of Programming

I’ve been reading about the Algebra of Programming lately and lovin’ it. See J.N. Oliveira’s draft text in particular and the links in the references. I’ve started exploring the stuff from this post and more over here: https://github.com/philzook58/rel

Why and What?

Relations can expand the power of functional programming for the purpose of specification.

The point of a specification is to be able to write down in a very compact and clear way your intent for a program, more clearly and compactly than a full implementation could be written. It therefore makes sense to add to your specification language constructs that are not necessarily executable or efficient for the sake of compactness and clarity. When one needs executability or efficiency, one writes an implementation whose behavior you can connect to the spec via a formal or informal process.

Functional programming, with its focus on the abstraction of the mathematical function, is a good trade-off between executability, efficiency, and expressibility. It lies in a reasonable location between the ideas amenable to reasoning by a human mind and the command-driven requirements of the machines.

Functions are a specialization of relations. Relations extend the mathematical notion of functions with constructs like nondeterministic choice, failure and converse. These constructs are not always obviously executable or efficient. However, they greatly extend the abilities of reasoning and the clarity of expression of a specification.

The point-free style of reasoning about functions extends to a point-free style reasoning about relations, which is known as relation algebra. There are rich analogies with databases, category theory, linear algebra, and other topics.

Plus, I think it is very neato for some reason. If anyone ever thinks something is really neato, isn’t it worth giving it a listen?

A Simple Representation of Relations in Haskell

The simplest description of relations is as a set of tuples. So first let’s talk a bit about the options for sets in Haskell.

There are a couple different reasonable ways to represent sets in Haskell.

• [a] or Vector a
• a -> Bool
• Set a — a tree based Set from the containers package.

These have different performance characteristics and different power. The list [a] is very simple and has specialized pleasant syntax available. The indicator function a -> Bool gives you no ability to produce values of type a, but can easily denote very sophisticated spaces. Set a is a good general purpose data structure with fast lookup. You might also choose to mix and match combinations of these. Interconversion is often possible, but expensive. This is not a complete list of possibilities for sets, for example you may want a representation with a stronger possibility for search.

We can directly use the definition of relations as a set of tuples with the above

type Rel a b = [(a,b)]
type SetRel a b = Set (a,b)
type FunRel a b = (a,b) -> Bool

But we also have the option to “curry” our relation representations, sort of mixing and matching properties of these representations.

type List a b = a -> [b] -- Commonly seen type  in List monad/logic programming
type MapRel a b = Map a (Set b)

You might also choose to package up multiples of these representations, choosing the most appropriate as the situation requires, see for example the relation package, whose type holds both Map a (Set b) and Map b (Set a).

Despite fiendishly poor performance, for simplicity and list comprehension syntax we are going to be using type Rel a b = [(a,b)] for the remainder of the post.

I’m also taking the restriction that we’re working in bounded enumerable spaces for ease. I assume such a requirement can be lifted for many purposes, but finite spaces like these are especially well tamed. The following constraint synonym and definition are very useful in this case.

type BEnum a = (Enum a, Bounded a)

enumAll :: BEnum a => [a]
enumAll = [minBound .. maxBound]
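With the ConstraintKinds extension this compiles as written; for example:

```haskell
{-# LANGUAGE ConstraintKinds #-}

type BEnum a = (Enum a, Bounded a)

enumAll :: BEnum a => [a]
enumAll = [minBound .. maxBound]

main :: IO ()
main = do
  print (enumAll :: [Bool])      -- [False,True]
  print (enumAll :: [Ordering])  -- [LT,EQ,GT]
```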

Functions and Relations

Functions can be thought of as relations with the special property that for each left part of the tuple, there is exactly one right side and every possible left side appears. The relation corresponding to a function $f$ looks like $F = \{(x,y) | x \in X, y \in Y, y = f (x)\}$.

tabulate :: (BEnum a) => (a -> b) -> Rel a b
tabulate f = [(x, f x) | x <- enumAll]

There is a natural and slightly clever lifting of function composition to relations. We now check whether there exists a value that is in the right side of one and the left side of the other.

rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
rcompose xs ys = [ (a,c)  | (a, b) <- ys, (b', c) <- xs, b' == b]

rid :: (Enum a, Bounded a) => Rel a a
rid = tabulate id
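Putting these together on a tiny self-contained example: tabulating not and composing it with itself gives back the identity relation on Bool.

```haskell
type Rel a b = [(a, b)]

enumAll :: (Enum a, Bounded a) => [a]
enumAll = [minBound .. maxBound]

tabulate :: (Enum a, Bounded a) => (a -> b) -> Rel a b
tabulate f = [(x, f x) | x <- enumAll]

rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
rcompose xs ys = [(a, c) | (a, b) <- ys, (b', c) <- xs, b' == b]

rid :: (Enum a, Bounded a) => Rel a a
rid = tabulate id

main :: IO ()
main = do
  let notR = tabulate not :: Rel Bool Bool
  print notR                         -- [(False,True),(True,False)]
  print (rcompose notR notR == rid)  -- True
```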

Because of these two operations (and their associativity and identity properties), Rel is a category. We do however need the Eq b restriction to make Rel an instance of the category typeclass, so it does not quite fit the definition of Category in base. It is a constrained category.

We can lift the common arrow/categorical combinators up to relations for example.

-- arrow/category combinators
rfan :: Eq a => Rel a b -> Rel a c -> Rel a (b,c)
rfan f g = [ (a, (b,c)) | (a,b) <- f, (a',c) <- g, a == a']

rfst :: BEnum (a,b) => Rel (a,b) a
rfst = tabulate fst

rsnd :: BEnum (a,b) => Rel (a,b) b
rsnd = tabulate snd

rleft :: (Enum a, Bounded a) => Rel a (Either a b)
rleft = tabulate Left

rright :: BEnum b => Rel b (Either a b)
rright = tabulate Right

reither :: Eq a => Rel a c -> Rel b c -> Rel (Either a b) c
reither f g = [(Left a, c) | (a,c) <- f] ++ [(Right b, c) | (b,c) <- g]


With these combinators, you have access to many functions on basic non-recursive algebraic data types. By combining them in a point free style, you can build some other useful combinators.

--- goofy inefficient definitions
dup :: (Eq a, Eq b, BEnum a, BEnum b) => Rel a (a,a)
dup = rfan rid rid
swap :: (Eq a, Eq b, BEnum (a,b)) => Rel (a,b) (b,a)
swap = rfan rsnd rfst
par :: (Eq a, Eq c, BEnum a, BEnum c) => Rel a b -> Rel c d -> Rel (a,c) (b,d)
par f g =  rfan (rcompose f rfst) (rcompose g rsnd)

An Aside: Relations, Linear Algebra, Databases

The composition operation described above is not so unfamiliar as it may first appear.

Relation algebra has a great similarity to linear algebra. This connection can be made more clear by considering sparsity patterns of matrices and tensors. Sparsity patterns are a useful abstraction of linear algebraic operations. Instead of considering matrices of numbers, instead the entries are "zero" and "possibly nonzero" or, if you prefer, a matrix of boolean values corresponding to those same questions.

The ordinary row times column matrix multiplication corresponds to relation composition. Replace * with AND and + with OR. If any of the numbers is zero, then multiplying them will result in zero. In summing two numbers, if either is possibly nonzero, then the result is possibly nonzero.

Another interesting way of looking at it is that we are replacing the summation binding form $\sum_i$ with the logical quantifier $\exists_i$. Both introduce a scoped "dummy variable" i and have a lot of syntactic similarity. Other related forms include $\lambda i$, $\forall i$, $\int di$, $\max_i$ .
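A sketch of that correspondence in code (the helper names toMatrix and bmul are mine): encode a finite relation as a boolean matrix and check that AND/OR matrix multiplication agrees with relational composition.

```haskell
import Data.List (transpose)

type Rel a b = [(a, b)]

enumAll :: (Enum a, Bounded a) => [a]
enumAll = [minBound .. maxBound]

-- A relation as a boolean "matrix": rows indexed by a, columns by b.
toMatrix :: (Enum a, Bounded a, Enum b, Bounded b, Eq a, Eq b) => Rel a b -> [[Bool]]
toMatrix r = [[(x, y) `elem` r | y <- enumAll] | x <- enumAll]

-- Matrix "multiplication" with (*) = AND and (+) = OR.
bmul :: [[Bool]] -> [[Bool]] -> [[Bool]]
bmul m n = [[or (zipWith (&&) row col) | col <- transpose n] | row <- m]

rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
rcompose xs ys = [(a, c) | (a, b) <- ys, (b', c) <- xs, b' == b]

main :: IO ()
main = do
  let r = [(False, LT), (True, GT)] :: Rel Bool Ordering
      s = [(LT, True), (GT, True)]  :: Rel Ordering Bool
  print (toMatrix (rcompose s r) == bmul (toMatrix r) (toMatrix s))  -- True
```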

There is also an analog of the point free relation algebra in linear algebra. Linear algebra has the most widely used point free notation in the world, matrix notation. Consider the expressions $Ax=b$ and $X = ABC$ as compared to $\sum_j A_{ij} x_j = b_i$ and $X_{il} = \sum_{jk} A_{ij} B_{jk} C_{kl}$. Matrix notation is SO much better for certain calculations. Other pieces of the matrix notation include transpose, inverse, Kronecker product, the Khatri-Rao product, and Hadamard product. Their properties are more clear in the index free form in my opinion. I believe even massive tensor expressions can be written index free using these operators. There are also analogies to be drawn between the graphical notations in these different fields.

Databases can be thought of very similarly to sparse matrices. In principle, you could enumerate all the possible values for a column of a database. So you could think of a database as a giant matrix with a 1 if the item is in the database and 0 if not. Databases are very very sparse from this perspective, and you would never store them this way. The join operation is a relative of relational composition, however join usually operates via looking at the column names, whereas our join is position based.

Query optimization in databases has interesting analogs in sparse linear algebra. For example, the Taco compiler http://tensor-compiler.org/ is doing something very akin to a query optimizer.

Inverting Relations

Unlike functions, Relations are always "invertible". We call this the converse of a relation. When a function is invertible, it corresponds to the converse. In terms of the tuples underlying our representation, it just swaps them. Relations also possess operations trans and untrans that may be thought of as a kind of currying or as a partial inverse on a single parameter.

converse :: Rel a b -> Rel b a
converse r = [(b,a) | (a,b) <- r]

untrans :: Rel a (b,c) -> Rel (a,b) c
untrans r = [((a,b),c)  | (a, (b,c)) <- r]

trans :: Rel (a,b) c -> Rel a (b, c)
trans r = [(a, (b,c))| ((a,b),c)  <- r]
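A quick sanity check that converse is involutive and trans/untrans are mutually inverse (self-contained copy of the definitions):

```haskell
type Rel a b = [(a, b)]

converse :: Rel a b -> Rel b a
converse r = [(b, a) | (a, b) <- r]

untrans :: Rel a (b, c) -> Rel (a, b) c
untrans r = [((a, b), c) | (a, (b, c)) <- r]

trans :: Rel (a, b) c -> Rel a (b, c)
trans r = [(a, (b, c)) | ((a, b), c) <- r]

main :: IO ()
main = do
  let r = [((1, 2), 3), ((1, 4), 5)] :: Rel (Int, Int) Int
  print (converse (converse r) == r)  -- True
  print (untrans (trans r) == r)      -- True
```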

Orderings can also be lifted to relations $(\leq) = \{(a,b) | a \leq b\}$. The composition of relations also respects the usual composition of ordering.

reflectOrd :: (Ord a, BEnum a) => Rel a a
reflectOrd = [(x,y) | x <- enumAll, y <- enumAll, x <= y]

Nondeterministic choice is sometimes represented in Haskell using list-returning functions a -> [b]. You may recall this from the context of the List monad. In fact in this case, we have an isomorphism, as evidenced by tabulateSearch and searchRel.

tabulateSearch :: BEnum a => (a -> [b]) -> Rel a b
tabulateSearch f = [(a,b) | a <- enumAll, b <- f a]

searchRel :: Eq a => Rel a b -> (a -> [b])
searchRel r a = [b | (a', b) <- r, a == a']

Similarly partial functions can be reflected into relations

tabulatePartial :: BEnum a => (a -> Maybe b) -> Rel a b
tabulatePartial f = [(a,b) | a <- enumAll, b <- toList (f a)]

A useful trick is to lift sets/subsets to relations as a diagonal relation, $\{(a,a) | a \in S \}$. Projection onto the set can be achieved by composing with this relation. The identity results if you are talking about the entire set S.

diagRel :: [a] -> Rel a a
diagRel = map dup where dup x = (x,x)

leftSet :: Eq a => Rel a b -> [a]
leftSet = nub . (map fst)

rightSet :: Eq b => Rel a b -> [b]
rightSet = nub . (map snd)
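For example, composing with diagRel projects a relation onto a subset of its left side:

```haskell
type Rel a b = [(a, b)]

rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
rcompose xs ys = [(a, c) | (a, b) <- ys, (b', c) <- xs, b' == b]

diagRel :: [a] -> Rel a a
diagRel = map (\x -> (x, x))

main :: IO ()
main = do
  let r = [(1, 'a'), (2, 'b'), (3, 'c')] :: Rel Int Char
  -- keep only the tuples whose left element is in {1,3}
  print (rcompose r (diagRel [1, 3]))  -- [(1,'a'),(3,'c')]
```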

Comparing Relations

We can compare sets by asking if one is a subset of the other $A\subseteq B$ . Relations can also be compared by this operation, which we call relational inclusion.

rSub :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
rSub xs ys = and [x `elem` ys | x <- xs]

x <~ y = rSub x y

A notion derived from this is relational equality.


rEq :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
rEq xs ys = (xs `rSub` ys) && (ys `rSub` xs)

x ~~ y = rEq x y

Relational algebra is chockful of inequality style reasoning, which is richer and slightly more complicated than equality style reasoning. This is one of the benefits of moving from functional descriptions to a relational description.

Relations also form a lattice with respect to these comparisons. What the hell are lattices? In the context of finite relations, lattices may be overpowered mathematical machinery, but it really is useful down the line. They give you binary operators that play nice with some kind of ordering, in our case relational inclusion. These two operations are the meet and the join, which find the greatest lower bound and least upper bound of the operands respectively. For our relations, these correspond to the more familiar notions of set intersection and union. The intersection of two sets is the biggest set that is in both of them. The union is the smallest set for which both sets are a subset of it.

meet' :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
meet' xs ys = [x | x <- xs, x `elem` ys] -- intersection

join' :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
join' p q = nub (p ++ q) -- union

instance (Eq a, Eq b) => Lattice (Rel a b) where
x /\ y = meet' x y
x \/ y = join' x y
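Concretely, on two small relations (a self-contained sketch, without the lattices package typeclass):

```haskell
import Data.List (nub)

type Rel a b = [(a, b)]

meet' :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
meet' xs ys = [x | x <- xs, x `elem` ys]  -- intersection

join' :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
join' p q = nub (p ++ q)                  -- union

rSub :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
rSub xs ys = and [x `elem` ys | x <- xs]

main :: IO ()
main = do
  let p = [(1, 'a'), (2, 'b')] :: Rel Int Char
      q = [(2, 'b'), (3, 'c')] :: Rel Int Char
  print (meet' p q)  -- [(2,'b')]
  print (join' p q)  -- [(1,'a'),(2,'b'),(3,'c')]
  -- the meet is below both arguments; both arguments are below the join
  print (rSub (meet' p q) p && rSub p (join' p q))  -- True
```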


Using meet/join vs intersection/union becomes more interesting when the domain is fancier than relations over finite domains. Issues of infinity can make this interesting, or when using a representation that can't explicitly represent arbitrary unions or intersections, but that instead has to approximate them. My favorite example is polyhedra. Polyhedra are not closed under unions. So in this case the join and union do not coincide. You need to take a convex hull of the union instead, which is the best approximation. Concretely, polyhedra can be represented as a list of their vertices, which generate the polyhedron. There is no way to express a union in this representation. Concatenating the lists represents taking the convex hull of the union.

An additional property that a lattice may possess is a largest and smallest element, called top ($\top$) and bottom ($\bot$). Our finite domain relations do have these.

instance (Eq a, Eq b, BEnum a, BEnum b) => BoundedMeetSemiLattice (Rel a b) where
    top = [(x,y) | x <- enumAll, y <- enumAll] -- all possible tuples

-- because of superclass constraints :(
instance (Eq a, Eq b) => BoundedJoinSemiLattice (Rel a b) where
    bottom = [] -- no tuples

Relational Division

And now finally we get to one of the most interesting, powerful, and confusing operations: relational division. Relational division is a kind of pseudo inverse to relational composition. In linear algebra, the pseudo inverse is a matrix that does the best job it can to invert another matrix in a least squares sense. If the matrix is actually invertible, it equals the inverse. Relational division does the best job it can to invert a relational composition. Instead of taking least squares as a criteria, it ensures that the result doesn't over approximate. If you had the inequality $X \cdot Y \subseteq Z$ and you want to solve for X, relational division is the thing that does that. The right division $Q = Z/Y$ is the largest relation such that $Q \cdot Y \subseteq Z$.

A helpful example is the similar operation of division in database tables.

And here is an implementation that I think is correct. I've goofed it up a couple times, it is a rather confusing construct.

rdiv :: (BEnum a, BEnum b, Eq a, Eq b, Eq c) => Rel a c -> Rel b c -> Rel a b
rdiv x y = [ (a,b) | a <- enumAll, b <- enumAll, all (\c -> ((b,c) `elem` y) `implies` ((a,c) `elem` x)) (rightSet y)]
    where implies p q = not p || q
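Here is a tiny worked example on Bool and Ordering (made-up data): (a,b) ends up in rdiv x y exactly when a is x-related to every c that b is y-related to.

```haskell
type Rel a b = [(a, b)]

enumAll :: (Enum a, Bounded a) => [a]
enumAll = [minBound .. maxBound]

rightSet :: Eq b => Rel a b -> [b]
rightSet = map snd  -- duplicates are harmless here, so nub is omitted

rdiv :: (Enum a, Bounded a, Enum b, Bounded b, Eq a, Eq b, Eq c)
     => Rel a c -> Rel b c -> Rel a b
rdiv x y = [ (a, b) | a <- enumAll, b <- enumAll
           , all (\c -> ((b, c) `elem` y) `implies` ((a, c) `elem` x)) (rightSet y) ]
  where implies p q = not p || q

main :: IO ()
main = do
  let x = [(False, LT), (False, EQ), (True, EQ)] :: Rel Bool Ordering
      y = [(False, EQ), (True, LT), (True, EQ)]  :: Rel Bool Ordering
  -- (True,True) is excluded: True is y-related to LT, but (True,LT) is not in x.
  print (rdiv x y)  -- [(False,False),(False,True),(True,False)]
```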

There also exists a very similar operation of ldiv.
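Here's a sketch of what ldiv might look like, mirroring rdiv (this is my guess at it, not checked against the texts; the throwaway local definitions of BEnum, Rel, and leftSet stand in for the post's helpers so the snippet stands alone):

```haskell
-- throwaway local versions of the post's helpers
class BEnum a where enumAll :: [a]
instance BEnum Bool where enumAll = [False, True]

type Rel a b = [(a, b)]

leftSet :: Rel a b -> [a]
leftSet r = [ a | (a, _) <- r ]

-- ldiv y x: the largest q with (q composed after x) contained in y.
-- (b,c) is kept iff every a with (a,b) in x also has (a,c) in y.
ldiv :: (BEnum b, BEnum c, Eq a, Eq b, Eq c) => Rel a c -> Rel a b -> Rel b c
ldiv y x = [ (b, c) | b <- enumAll, c <- enumAll
           , and [ (a, c) `elem` y | a <- leftSet x, (a, b) `elem` x ] ]
```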

Relational division encapsulates many notions of searching or optimizing. I invite you to read more about it in J.N. Oliveira's text or the Bird & de Moor text.

Properties and QuickCheck

Relation algebra is chock full of properties. This is a perfect opportunity for some QuickCheck, a randomized property testing framework. There are many more properties to test than I have here; I need to dig through and collect up all the identities.

type R1 = Rel Bool Ordering

prop_ridleft :: Rel Bool Ordering -> Bool
prop_ridleft x = (rid <<< x) ~~ x

prop_ridright :: Rel Bool Ordering  -> Bool
prop_ridright x = (x <<< rid) ~~ x

prop_meet :: R1 -> R1  -> Bool
prop_meet x y = (x /\ y) <~ x

prop_meet' :: R1 -> R1  -> Bool
prop_meet' x y = (x /\ y) <~ y

prop_join_univ :: R1 -> R1 -> R1 -> Bool
prop_join_univ x y z = ((x \/ y) <~ z) == ((x <~ z) && (y <~ z))

prop_join :: R1 -> R1  -> Bool
prop_join x y = y <~ (x \/ y)

prop_meet_univ :: R1 -> R1 -> R1 -> Bool
prop_meet_univ x y z = (z <~ (x /\ y)) == ((z <~ x) && (z <~ y))

prop_top :: R1 -> Bool
prop_top x = x <~ top

prop_bottom :: R1 -> Bool
prop_bottom x = bottom <~ x

prop_bottom' :: R1 -> Bool
prop_bottom' x = (x <<< bottom) ~~ (bottom :: R1)

prop_trans_iso :: Rel (Bool, Ordering) Word8 -> Bool
prop_trans_iso x = untrans (trans x) == x

prop_rdiv :: Rel Bool Ordering -> Rel Word8 Ordering -> Bool
prop_rdiv g j = (j <<< (rdiv g j)) <~ g

prop_con :: R1 -> Bool
prop_con x = con (con x) ~~ x

prop_rdiv' :: Rel Bool Word8 -> Rel Bool Ordering -> Rel Word8 Ordering -> Bool
prop_rdiv' x g j = (x <~ (rdiv g j)) == ((j <<< x) <~ g)
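Since these domains are finite, exhaustive checking also works as a complement to QuickCheck. Here is a self-contained sketch (with throwaway local definitions of the identity, composition, and equivalence, since the real rid, (<<<), and (~~) are defined earlier) that verifies the left-identity law over all 16 relations in Rel Bool Bool:

```haskell
import Data.List (nub, sort, subsequences)

type Rel a b = [(a, b)]

ridB :: Rel Bool Bool
ridB = [ (False, False), (True, True) ]

compR :: Eq b => Rel b c -> Rel a b -> Rel a c  -- plays the role of (<<<)
compR y x = nub [ (a, c) | (a, b) <- x, (b', c) <- y, b == b' ]

eqR :: (Ord a, Ord b) => Rel a b -> Rel a b -> Bool  -- plays the role of (~~)
eqR p q = sort (nub p) == sort (nub q)

-- every subset of the full relation, i.e. every relation on Bool x Bool
allRels :: [Rel Bool Bool]
allRels = subsequences [ (a, b) | a <- [False, True], b <- [False, True] ]

checkRidLeft :: Bool
checkRidLeft = all (\x -> (ridB `compR` x) `eqR` x) allRels
```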


Bits and Bobbles

• Relations over continuous spaces. Vector subspaces (Linear Relations), Polyhedra (Linear inequality relations).
• Non Bool-valued Relations. Replace $\exists_x$ with $\max_x$. The weighted edgelist of a graph is a natural relation. By using composition we can ask about paths. We still have a comparison operator $\subseteq$ which now respects the ordering of weights.
• Galois connections are cool.
• Relations combined with recursion schemes. Recursion schemes are the point free way of describing recursion.
• Moving into infinite spaces. How do we cope?
• Faster search. Some relations are best specified by functions, others by Maps, and others by mixing and matching representations.
• If you "singletonize" relations a la the Agda project https://github.com/scmu/aopa, you get very interesting interconnections with profunctors, which people say are a categorical generalization of relations.
• Point-free DSLs are interesting and pleasant. Many worries about alpha renaming are gone, at the expense of point-free pain. A DSL like this may be necessary to choose good relational query plans
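As a taste of the non-Bool-valued bullet above (all names here invented): composing weighted relations with a minimum over the intermediate element, in the min-plus style, computes best two-step paths.

```haskell
-- weights on pairs; missing pairs mean "unrelated"
type WRel a b = [((a, b), Double)]

-- composition: the exists becomes a minimum, conjunction becomes addition
compW :: (Eq a, Eq b, Eq c) => WRel b c -> WRel a b -> WRel a c
compW y x =
  [ ((a, c), minimum ws)
  | a <- dedup [ a' | ((a', _), _) <- x ]
  , c <- dedup [ c' | ((_, c'), _) <- y ]
  , let ws = [ w1 + w2 | ((a', b), w1) <- x, ((b', c'), w2) <- y
             , a' == a, b == b', c' == c ]
  , not (null ws) ]
  where dedup = foldr (\e acc -> if e `elem` acc then acc else e : acc) []
```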

Edit: A follow up post on that type level angle here http://www.philipzucker.com/relational-algebra-with-fancy-types/

References

Edit : A math exchange question about a -> [b] relational type. https://math.stackexchange.com/questions/3360026/can-division-be-expressed-intensionally-in-relation-algebra/3361351#3361351

Edit: An interesting comment and related library from /u/stevana

A Touch of Topological Quantum Computation 3: Categorical Interlude

Welcome back, friend.

In the last two posts, I described the basics of how to build and manipulate the Fibonacci anyon vector space in Haskell.

As a personal anecdote, trying to understand the category theory behind the theory of anyons is one of the reasons I started learning Haskell. These spaces are typically described using the terminology of category theory. I found it very frustrating that anyons were described in an abstract and confusing terminology. I really wondered if people were just making things harder than they have to be. I think Haskell is a perfect playground to clarify these constructions. While the category theory stuff isn’t strictly necessary, it is interesting and useful once you get past the frustration.

But I hope everyone can get something out of it. Give it a shot if you’re interested, and don’t sweat the details.

The Aroma of Categories

I think Steve Awodey gives an excellent nutshell of category theory in the introductory section to his book:

“What is category theory? As a first approximation, one could say that category theory is the mathematical study of (abstract) algebras of functions. Just as group theory is the abstraction of the idea of a system of permutations of a set or symmetries of a geometric object, so category theory arises from the idea of a system of functions among some objects.”

For my intuition, a category is any collection of “things” that plug together. The “in” of a thing has to match the “out” of another thing in order to hook them together. In other words, the requirement for something to be a category is having a notion of composition. The things you plug together are called the morphisms of the category and the matching ports are the objects of the category. The additional requirement of always having an identity morphism (a do-nothing connection wire) is usually there once you have composition, although it is good to take special note of it.

Category theory is an elegant framework for how to think about these composing things in a mathematical way. In my experience, thinking in these terms leads to good abstractions, and useful analogies between disparate things.

It is helpful for any abstract concept to list some examples to expose the threads that connect them. Category theory in particular has a ton of examples connecting to many other fields because it is a science of analogy. These are the examples of categories I usually reach for. Which one feels the most comfortable to you will depend on your background.

• Hask. Objects are types. Morphisms are functions between those types
• Vect. Objects are vector spaces, morphisms are linear maps (roughly matrices).
• Preorders. Objects are values. Morphisms are the inequalities between those values.
• Sets. Objects are Sets. Morphisms are functions between sets.
• Cat. Objects are categories, Morphisms are functors. This is a pretty cool one, although complete categorical narcissism.
• Systems and Processes.
• The free category of a directed graph. Objects are vertices. Morphisms are paths between vertices.
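As a toy rendering of that last example (the representation here is invented for illustration): a morphism in the free category of a graph is a path, i.e. a list of edges whose endpoints chain up. Composition is concatenation, and the empty path acts as the identity.

```haskell
type Edge v = (v, v)
type Path v = [Edge v]  -- consecutive edges; target of one = source of next

-- compose q after p (categorical order), checking the endpoints match
composePath :: Eq v => Path v -> Path v -> Maybe (Path v)
composePath q p
  | null p || null q             = Just (p ++ q)  -- empty path acts as id
  | snd (last p) == fst (head q) = Just (p ++ q)
  | otherwise                    = Nothing        -- objects don't match
```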

Generic Programming and Typeclasses

The goal of generic programming is to write programs once and run them in many ways.

There are many ways to approach this generic programming goal, but one way this is achieved in Haskell is by using Typeclasses. Typeclasses allow you to overload names, so that they mean different things based upon the types involved. Adding a vector is different than adding a float or int, but there are programs that can be written that reasonably apply in both situations.

Writing your program in a way that it applies to disparate objects requires abstract ways of talking about things. Mathematics is an excellent place to mine for good abstractions. In particular, the category theory abstraction has demonstrated itself to be a very useful unified vocabulary for mathematical topics. I, and others, find it also to be a beautiful aesthetic by which to structure programs.

There is a Category typeclass defined in the Haskell base library. In order to use this, you need to import the Prelude in an unusual way.

{-# LANGUAGE NoImplicitPrelude #-}
import Prelude hiding ((.), id) 

The Category typeclass is defined on the type that corresponds to the morphisms of the category. This type has a slot for the input type and a slot for the output type. In order for something to be a category, it has to have an identity morphism and a notion of composition.

class Category cat where
id :: cat a a
(.) :: cat b c -> cat a b -> cat a c

The most obvious example of this Category typeclass is the instance for the ordinary Haskell function (->). The identity corresponds to the standard Haskell identity function, and composition to ordinary Haskell function composition.

instance Category (->) where
id = \x -> x
f . g = \x -> f (g x)

Another example of a category that we’ve already encountered is that of linear operators, which we’ll call LinOp. LinOp is an example of a Kleisli arrow, a category built using monadic composition rather than regular function composition. In this case, the monad Q from my first post takes care of the linear pipework that happens between every application of a LinOp. The fish operator <=< is monadic composition from Control.Monad.

newtype LinOp a b = LinOp {runLin :: a -> Q b}
instance Category LinOp where
id = LinOp pure
(LinOp f) . (LinOp g) = LinOp (f <=< g) 

A related category is the FibOp category. This is the category of operations on Fibonacci anyons, which are also linear operations. It is LinOp specialized to the Fibonacci anyon space. All the operations we've previously discussed (F-moves, braiding) are in this category.

newtype FibOp a b = FibOp {runFib :: (forall c. FibTree c a -> Q (FibTree c b))}
instance Category (FibOp) where
id = FibOp pure
(FibOp f) . (FibOp g) = FibOp (f <=< g)

The "feel" of category theory takes focus away from the objects and tries to place focus on the morphisms. There is a style of functional programming called "point-free" where you avoid ever giving variables explicit names and instead use pipe-work combinators like (.), fst, snd, or (***). This also has a feel of de-emphasizing objects. Many of the combinators that get used in this style have categorical analogs. In order to generically use categorical typeclasses, you have to write your program in this point free style.
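A tiny illustration of the contrast (my own example): the same function written pointfully, and then point-free using only pipework combinators.

```haskell
import Control.Arrow ((***), (>>>))

-- pointful: the tuple components get explicit names
pointful :: (Int, Int) -> Int
pointful (x, y) = (x + 1) * (y + 1)

-- point-free: no variable names, just plumbing
pointfree :: (Int, Int) -> Int
pointfree = ((+ 1) *** (+ 1)) >>> uncurry (*)
```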

It is possible for a program written in the categorical style to be reinterpreted as a program, a linear algebra operation, a circuit, or a diagram, all without changing the actual text of the program. For more on this, I highly recommend Conal Elliott's Compiling to Categories, which also puts forth a methodology to avoid the somewhat unpleasant point-free style using a compiler plugin. This might be an interesting place to mine for a good quantum programming language. YMMV.

Monoidal Categories.

Putting two processes in parallel can be considered a kind of product. A category is monoidal if it has a product of this flavor, along with isomorphisms for reassociating objects and producing or consuming a unit object. This will make more sense when you see the examples.

We can sketch out this monoidal category concept as a typeclass, where we use () as the unit object.

class Category k => Monoidal k where
parC :: k a c -> k b d -> k (a,b) (c,d)
assoc :: k ((a,b),c) (a,(b,c))
assoc' :: k (a,(b,c)) ((a,b),c)
leftUnitor :: k ((),a) a
leftUnitor' :: k a ((),a)
rightUnitor :: k (a,()) a
rightUnitor' :: k a (a,()) 

Instances

In Haskell, the standard monoidal product for regular Haskell functions is (***) from Control.Arrow. It takes two functions and turns them into a function that does the same stuff, but on a tuple of the original inputs. The associators and unitors are fairly straightforward. We can freely dump the unit () and get it back because there is only one possible value for it.

(***) :: (a -> c) -> (b -> d) -> ((a,b) -> (c,d))
f *** g = \(x,y) -> (f x, g y)
instance Monoidal (->) where
parC f g = f *** g
assoc ((x,y),z) = (x,(y,z))
assoc' (x,(y,z)) = ((x,y),z)
leftUnitor (_, x) = x
leftUnitor' x = ((),x)
rightUnitor (x, _) = x
rightUnitor' x = (x,()) 
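For instance, a quick usage check of (***):

```haskell
import Control.Arrow ((***))

-- run two independent functions on the two slots of a tuple
demo :: (Int, Bool)
demo = ((+ 1) *** not) (41, False)  -- (42, True)
```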

The monoidal product we'll choose for LinOp is the tensor/outer/Kronecker product.

kron :: Num b => W b a -> W b c -> W b (a,c)
kron (W x) (W y) = W [((a,c), r1 * r2) | (a,r1) <- x , (c,r2) <- y ]
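A quick sanity check of kron's dimension bookkeeping, with a local stand-in for the W type from the first post (I'm assuming its shape is a weighted list of basis elements):

```haskell
-- assumed shape of W from the earlier post
newtype W b a = W { runW :: [(a, b)] }

kron :: Num b => W b a -> W b c -> W b (a, c)
kron (W x) (W y) = W [ ((a, c), r1 * r2) | (a, r1) <- x, (c, r2) <- y ]

v, w :: W Double Bool
v = W [ (False, 1.0), (True, 2.0) ]
w = W [ (False, 3.0), (True, 4.0) ]

-- kron v w is 4-dimensional, with amplitudes the pairwise products
```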

Otherwise, LinOp is basically a monadically lifted version of (->). The one dimensional vector space Q () is completely isomorphic to just a number. Taking the Kronecker product with it is basically the same thing as scalar multiplying (up to some shuffling).

instance Monoidal LinOp where
parC (LinOp f) (LinOp g) = LinOp $ \(a,b) -> kron (f a) (g b)
assoc = LinOp (pure . assoc)
assoc' = LinOp (pure . unassoc)
leftUnitor = LinOp (pure . leftUnitor)
leftUnitor' = LinOp (pure . leftUnitor')
rightUnitor = LinOp (pure . rightUnitor)
rightUnitor' = LinOp (pure . rightUnitor')

Now for a confession. I made a misstep in my first post. In order to make our Fibonacci anyons jive nicely with our current definitions, I should have defined our identity particle using type Id = () rather than data Id. We'll do that now. In addition, we need some new primitive operations for absorbing and emitting identity particles that did not feel relevant at that time.

rightUnit :: FibTree e (a,Id) -> Q (FibTree e a)
rightUnit (TTI t _) = pure t
rightUnit (III t _) = pure t

rightUnit' :: FibTree e a -> Q (FibTree e (a,Id))
rightUnit' t@(TTT _ _) = pure (TTI t ILeaf)
rightUnit' t@(TTI _ _) = pure (TTI t ILeaf)
rightUnit' t@(TIT _ _) = pure (TTI t ILeaf)
rightUnit' t@(III _ _) = pure (III t ILeaf)
rightUnit' t@(ITT _ _) = pure (III t ILeaf)
rightUnit' t@(ILeaf) = pure (III t ILeaf)
rightUnit' t@(TLeaf) = pure (TTI t ILeaf)

leftUnit :: FibTree e (Id,a) -> Q (FibTree e a)
leftUnit = rightUnit <=< braid -- braid vs braid' doesn't matter, but it has a nice symmetry.

leftUnit' :: FibTree e a -> Q (FibTree e (Id,a))
leftUnit' = braid' <=< rightUnit'

With these in place, we can define a monoidal instance for FibOp. The extremely important and intriguing F-move operations are the assoc operators for the category. While other categories have assoc operators that feel nearly trivial, these F-moves don't feel so trivial.

instance Monoidal (FibOp) where
parC (FibOp f) (FibOp g) = (FibOp (lmap f)) . (FibOp (rmap g))
assoc = FibOp fmove'
assoc' = FibOp fmove
leftUnitor = FibOp leftUnit
leftUnitor' = FibOp leftUnit'
rightUnitor = FibOp rightUnit
rightUnitor' = FibOp rightUnit'

This is actually useful

The parC operation is extremely useful to explicitly note in a program.
It is an opportunity for optimization. It is possible to inefficiently implement parC in terms of other primitives, but it is very worthwhile to implement it with new primitives (although I haven't here). In the case of (->), parC is an explicit location where actual computational parallelism is available. Once you perform parC, it is no longer apparent that the left and right sides of the tuple share no data during the computation. In the case of LinOp and FibOp, parC is a location where you can perform factored linear computations. The matrix-vector product $(A \otimes B)(v \otimes w)$ can be performed factor by factor as $(Av)\otimes (Bw)$. In the first case, where we densify $A \otimes B$ and then perform the multiplication, it costs $O((N_A N_B)^2)$ time, whereas performing them individually on the factors costs $O(N_A^2 + N_B^2)$ time, a significant savings. Applied category theory indeed.

Laws

Like many typeclasses, these monoidal morphisms are assumed to follow certain laws. Here is a sketch (for a more thorough discussion check out the wikipedia page):

• Functions with a tick at the end like assoc' should be the inverses of the functions without the tick like assoc, e.g. assoc . assoc' = id
• The parC operation is (bi)functorial, meaning it obeys the commutation law parC (f . f') (g . g') = (parC f g) . (parC f' g'), i.e. it doesn't matter if we perform composition before or after the parC.
• The pentagon law for assoc: applying leftbottom is the same as applying topright.

leftbottom :: (((a,b),c),d) -> (a,(b,(c,d)))
leftbottom = assoc . assoc

topright :: (((a,b),c),d) -> (a,(b,(c,d)))
topright = (id *** assoc) . assoc . (assoc *** id)

• The triangle law for the unitors:

topright' :: ((a,()),b) -> (a,b)
topright' = (id *** leftUnitor) . assoc

leftside :: ((a,()),b) -> (a,b)
leftside = rightUnitor *** id

String Diagrams

String diagrams are a diagrammatic notation for monoidal categories. Morphisms are represented by boxes with lines. Composition g . f is made by connecting lines. The identity id is a raw arrow. The monoidal product of morphisms $f \otimes g$ is represented by placing lines next to each other.

The diagrammatic notation is so powerful because the laws of monoidal categories are built so deeply into it that they can go unnoticed. Identities can be put in or taken away. Association doesn't even appear in the diagram. The boxes in the notation can naturally be pushed around and commuted past each other. This corresponds to the property $(id \otimes g) \circ (f \otimes id) = (f \otimes id) \circ (id \otimes g)$.

What expression does the following diagram represent? Is it $(f \circ f') \otimes (g \circ g')$ (in Haskell notation, parC (f . f') (g . g'))? Or is it $(f \otimes g) \circ (f' \otimes g')$ (in Haskell notation, (parC f g) . (parC f' g'))? Answer: it doesn't matter, because the functorial requirement on parC means the two expressions are identical.

There are a number of notations you might meet in the world that can be interpreted as string diagrams. Two that seem particularly pertinent are:

• Quantum circuits
• Anyon diagrams!

Braided and Symmetric Monoidal Categories: Categories That Braid and Swap

Some monoidal categories have a notion of being able to braid morphisms. If so, it is called a braided monoidal category (go figure).

class Monoidal k => Braided k where
over :: k (a,b) (b,a)
under :: k (a,b) (b,a)

The over and under morphisms are inverses of each other: over . under = id. The over morphism pulls the left wire over the right, whereas under pulls the left wire under the right. The diagram definitely helps to understand this definition.

These over and under morphisms need to play nice with the associator of the monoidal category. These are laws that a valid instance of the typeclass should follow. We actually already met them in the very first post.

If the over and under of the braiding are the same, the category is a symmetric monoidal category.
This typeclass needs no extra functions, but it is now intended that the law over . over = id is obeyed.

class Braided k => Symmetric k where

When we draw a braid in a symmetric monoidal category, we don't have to be careful with which strand is over and which is under, because they are the same thing. The examples that come soonest to mind have this symmetric property; for example, (->) is a symmetric monoidal category.

swap :: (a, b) -> (b, a)
swap (x,y) = (y,x)

instance Braided (->) where
over = swap
under = swap

instance Symmetric (->)

Similarly, LinOp has a notion of swapping that is just a lifting of swap.

instance Braided (LinOp) where
over = LinOp (pure . swap)
under = LinOp (pure . swap)

instance Symmetric LinOp

However, FibOp is not symmetric! This is perhaps at the core of what makes FibOp so interesting.

instance Braided FibOp where
over = FibOp braid
under = FibOp braid'

Automating Association

Last time, we spent a lot of time doing weird typelevel programming to automate away the pain of manual association moves. We can do something quite similar to make the categorical reassociation less painful, and more like the carefree ideal of the string diagram, if we replace composition (.) with a slightly different operator

(...) :: ReAssoc b b' => FibOp b' c -> FibOp a b -> FibOp a c
(FibOp f) ... (FibOp g) = FibOp $ f <=< reassoc <=< g

Before defining reassoc, let's define a helper LeftCollect typeclass. Given a typelevel integer n, it will reassociate the tree using a binary search procedure to make sure the left branch l at the root has Count l = n.

leftcollect :: forall n gte l r o e. (gte ~ CmpNat n (Count l), LeftCollect n gte (l,r) o) => FibTree e (l,r) -> Q (FibTree e o)
leftcollect x = leftcollect' @n @gte x

class LeftCollect n gte a b | n gte a -> b where
leftcollect' :: FibTree e a -> Q (FibTree e b)

-- The process is like a binary search.
-- LeftCollect pulls n leaves into the left branch of the tuple

-- If n is greater than the size of l, we recurse into the right branch with a new number of leaves to collect
-- then we do a final reshuffle to put those all into the left tree.
instance (
k ~ Count l,
r ~ (l',r'),
n' ~ (n - k),
gte ~ CmpNat n' (Count l'),
LeftCollect n' gte r (l'',r'')) => LeftCollect n 'GT (l,r) ((l,l''),r'') where
leftcollect' x = do
x' <- rmap (leftcollect @n') x -- (l,(l'',r'')) -- l'' is size n - k
fmove x'  -- ((l,l''),r'') -- size of (l,l'') = k + (n-k) = n
instance (
l ~ (l',r'),
gte ~ CmpNat n (Count l'),
LeftCollect n gte l (l'',r'')) => LeftCollect n 'LT (l,r) (l'',(r'',r)) where
leftcollect' x = do
x' <- lmap (leftcollect @n) x -- ((l'',r''),r) -- l'' is of size n
fmove' x'  -- (l'',(r'',r)

instance LeftCollect n 'EQ (l,r) (l,r) where
leftcollect' = pure

Once we have LeftCollect, the typeclass ReAssoc is relatively simple to define. Given a pattern tree, we can count the elements in its left branch and LeftCollect the source tree to match that number. Then we recursively apply reassoc in the left and right branches of the tree. This means every corresponding node in the two trees ends up with the same number of leaves underneath it, hence the trees will end up in an identical shape (modulo me mucking something up).

class ReAssoc a b where
reassoc :: FibTree e a -> Q (FibTree e b)
instance (n ~ Count l',
gte ~ CmpNat n (Count l),
LeftCollect n gte (l,r) (l'',r''),
ReAssoc l'' l',
ReAssoc r'' r') => ReAssoc (l,r) (l',r') where
reassoc x = do
x' <- leftcollect @n x
x'' <- rmap reassoc x'
lmap reassoc x''

--instance {-# OVERLAPS #-} ReAssoc a a where
--   reassoc = pure

instance ReAssoc Tau Tau where
reassoc = pure
instance ReAssoc Id Id where
reassoc = pure

It seems likely that one could write equivalent instances that would work for an arbitrary monoidal category with a bit more work. We are aided somewhat by the fact that FibOp has a finite universe of possible leaf types to work with.

Closing Thoughts

While our categorical typeclasses are helpful and nice, I should point out that they are not going to cover all the things that can be described as categories, even in Haskell, just as the Functor typeclass does not describe all the conceptual functors you might meet. One beautiful monoidal category is that of Haskell Functors under the monoidal product of Functor composition. More on this to come, I think. https://parametricity.com/posts/2015-07-18-braids.html

We never even touched the dot product in this post. This corresponds to another doodle in a string diagram, and another power to add to your category. It is somewhat trickier to work with cleanly in familiar Haskell terms, I think because (->) is at least not super obviously a dagger category?

You can find a hopefully compiling version of all my snippets and more in my chaotic mutating Github repo https://github.com/philzook58/fib-anyon

See you next time.

References

The Rosetta Stone paper by Baez and Stay is probably the conceptual daddy of this entire post (and more).

Bartosz Milewski's Category Theory for Programmers blog (really an online book) and YouTube series are where I learned most of what I know about category theory. I highly recommend them (huge Bartosz fanboy).