A Short Skinny on Relations & the Algebra of Programming

I’ve been reading about the Algebra of Programming lately and lovin’ it. See J.N. Oliveira’s draft text in particular and the links in the references. I’ve started exploring the stuff from this post and more over here: https://github.com/philzook58/rel

Why and What?

Relations can expand the power of functional programming for the purpose of specification.

The point of a specification is to write down your intent for a program more clearly and compactly than a full implementation could. It therefore makes sense to add to your specification language constructs that are not necessarily executable or efficient, for the sake of compactness and clarity. When one needs executability or efficiency, one writes an implementation whose behavior can be connected to the spec via a formal or informal process.

Functional programming, with its focus on the abstraction of the mathematical function, is a good trade-off between executability, efficiency, and expressibility. It lies in a reasonable location between the ideas amenable to reasoning by a human mind and the command-driven requirements of the machines.

Functions are a specialization of relations. Relations extend the mathematical notion of functions with constructs like nondeterministic choice, failure and converse. These constructs are not always obviously executable or efficient. However, they greatly extend the abilities of reasoning and the clarity of expression of a specification.

The point-free style of reasoning about functions extends to a point-free style of reasoning about relations, which is known as relation algebra. There are rich analogies with databases, category theory, linear algebra, and other topics.

Plus, I think it is very neato for some reason. If anyone ever thinks something is really neato, isn’t it worth giving it a listen?

A Simple Representation of Relations in Haskell

The simplest description of relations is as a set of tuples. So first let’s talk a bit about the options for sets in Haskell.

Sets in Haskell

There are a couple different reasonable ways to represent sets in Haskell.

  • [a] or Vector a
  • a -> Bool
  • Set a — a tree based Set from the containers package.

These have different performance characteristics and different power. The list [a] is very simple and has specialized pleasant syntax available. The indicator function a -> Bool gives you no ability to produce values of type a, but can easily denote very sophisticated spaces. Set a is a good general purpose data structure with fast lookup. You might also choose to mix and match combinations of these. Interconversion is often possible, but expensive. This is not a complete list of possibilities for sets; for example, you may want a representation better suited to search.

Relations in Haskell

We can directly use the definition of relations as a set of tuples with any of the above set representations.
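For concreteness, here is a minimal sketch of those options specialized to pairs (the names Rel1 through Rel3 are mine, just for illustration):

    import qualified Data.Set as Set

    type Rel1 a b = [(a, b)]        -- a list of pairs
    type Rel2 a b = Set.Set (a, b)  -- a tree-based set of pairs
    type Rel3 a b = (a, b) -> Bool  -- an indicator function on pairs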

But we also have the option to “curry” our relation representations, sort of mixing and matching properties of these representations.
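A sketch of some curried variants (again, the names are mine):

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    type Rel4 a b = a -> [b]               -- nondeterministic-function flavor
    type Rel5 a b = Map.Map a (Set.Set b)  -- fast left-to-right lookup
    type Rel6 a b = a -> b -> Bool         -- fully curried indicator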

You might also choose to package up multiples of these representations, choosing the most appropriate as the situation requires, see for example the relation package, whose type holds both Map a (Set b) and Map b (Set a).

Despite fiendishly poor performance, for simplicity and list comprehension syntax we are going to be using type Rel a b = [(a,b)] for the remainder of the post.

I’m also taking the restriction that we’re working in bounded enumerable spaces for ease. I assume such a requirement can be lifted for many purposes, but finite spaces like these are especially well tamed. The following typeclass and definition are very useful in this case.
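Here is my guess at the sort of definition intended: a constraint synonym for bounded enumerable types, together with an exhaustive enumeration (ConstraintKinds assumed):

    {-# LANGUAGE ConstraintKinds #-}

    type BEnum a = (Enum a, Bounded a)

    -- every inhabitant of a bounded enumerable type
    enumAll :: BEnum a => [a]
    enumAll = [minBound .. maxBound]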

Functions and Relations

Functions can be thought of as relations with the special property that for each left part of the tuple there is exactly one right side, and every possible left side appears. The relation corresponding to a function f looks like F = \{(x,y) \mid x \in X, y \in Y, y = f(x)\}.
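A sketch of tabulating a function into its graph relation, assuming the enumAll of the previous section:

    -- the relation {(x, f x) | x in X}
    tabulate :: BEnum a => (a -> b) -> Rel a b
    tabulate f = [(x, f x) | x <- enumAll]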

There is a natural and slightly clever lifting of function composition to relations. We now check whether there exists a value that is in the right side of one and the left side of the other.
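A sketch of this composition (in the same order as function composition (.)), along with the identity relation:

    -- (a,c) is in the composite iff some middle value b links them
    rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
    rcompose xs ys = [(a, c) | (a, b) <- ys, (b', c) <- xs, b == b']

    rid :: BEnum a => Rel a a
    rid = tabulate id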

Because of these two operations (and their properties of associativity and identity), FinRel is a category. We do, however, need the Eq b restriction to make Rel an instance of the Category typeclass, so it does not quite fit the definition of Category in base. It is a constrained category.

We can also lift the common arrow/categorical combinators up to relations, for example:
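A sketch of some of these liftings (rfan needs Eq to match up the shared input):

    rfst :: (BEnum a, BEnum b) => Rel (a, b) a
    rfst = [((x, y), x) | x <- enumAll, y <- enumAll]

    rsnd :: (BEnum a, BEnum b) => Rel (a, b) b
    rsnd = [((x, y), y) | x <- enumAll, y <- enumAll]

    rdup :: BEnum a => Rel a (a, a)
    rdup = [(x, (x, x)) | x <- enumAll]

    -- the fan-out (&&&) analog
    rfan :: Eq a => Rel a b -> Rel a c -> Rel a (b, c)
    rfan f g = [(a, (b, c)) | (a, b) <- f, (a', c) <- g, a == a']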

With these combinators, you have access to many functions on basic non-recursive algebraic data types. By combining them in a point free style, you can build some other useful combinators.

An Aside: Relations, Linear Algebra, Databases

The composition operation described above is not so unfamiliar as it may first appear.

Relation algebra has a great similarity to linear algebra. This connection can be made clearer by considering sparsity patterns of matrices and tensors. Sparsity patterns are a useful abstraction of linear algebraic operations. Instead of considering matrices of numbers, the entries are “zero” and “possibly nonzero” or, if you prefer, a matrix of boolean values corresponding to those same questions.

The ordinary row times column matrix multiplication corresponds to relation composition. Replace * with AND and + with OR. If any of the numbers is zero, then multiplying them will result in zero. In summing two numbers, if either is possibly nonzero, then the result is possibly nonzero.
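As an illustrative sketch (my addition, not from the original post), relation composition really is boolean matrix multiplication:

    import Data.List (transpose)

    -- entry (i,k) is OR over j of x(i,j) AND y(j,k)
    boolMatMul :: [[Bool]] -> [[Bool]] -> [[Bool]]
    boolMatMul x y = [[or (zipWith (&&) row col) | col <- transpose y] | row <- x]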

Another interesting way of looking at it is that we are replacing the summation binding form \sum_i with the logical quantifier \exists_i. Both introduce a scoped “dummy variable” i and have a lot of syntactic similarity. Other related forms include \lambda i, \forall i, \int di, \max_i .

There is also an analog of the point-free relation algebra in linear algebra. Linear algebra has the most widely used point-free notation in the world: matrix notation. Consider the expressions Ax = b and X = ABC as compared to \sum_j A_{ij} x_j = b_i and X_{il} = \sum_{jk} A_{ij} B_{jk} C_{kl}. Matrix notation is SO much better for certain calculations. Other pieces of the matrix notation include transpose, inverse, Kronecker product, the Khatri-Rao product, and the Hadamard product. Their properties are clearer in the index-free form, in my opinion. I believe even massive tensor expressions can be written index-free using these operators. There are also analogies to be drawn between the graphical notations in these different fields.

Databases can be thought of very similarly to sparse matrices. In principle, you could enumerate all the possible values for a column of a database. So you could think of a database as a giant matrix with a 1 if the item is in the database and 0 if not. Databases are very, very sparse from this perspective, and you would never store them this way. The join operation is a relative of relational composition; however, join usually matches on column names, whereas our composition is position based.

Query optimization in databases has interesting analogs in sparse linear algebra. For example, the Taco compiler http://tensor-compiler.org/ is doing something very akin to a query optimizer.

Inverting Relations

Unlike functions, relations are always “invertible”. We call this the converse of a relation. When a function is invertible, its inverse corresponds to the converse relation. In terms of the tuples underlying our representation, converse just swaps them. Relations also possess operations trans and untrans that may be thought of as a kind of currying, or as a partial inverse on a single parameter.
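A sketch of these operations on our tuple representation:

    converse :: Rel a b -> Rel b a
    converse r = [(b, a) | (a, b) <- r]

    -- reassociate one parameter in and out of the left slot
    trans :: Rel (a, b) c -> Rel a (b, c)
    trans r = [(a, (b, c)) | ((a, b), c) <- r]

    untrans :: Rel a (b, c) -> Rel (a, b) c
    untrans r = [((a, b), c) | (a, (b, c)) <- r]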

Orderings can also be lifted to relations (\leq) = \{(a,b) | a \leq b\}. The composition of relations also respects the usual composition of ordering.
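For example, a sketch of the lifted ordering:

    leq :: (BEnum a, Ord a) => Rel a a
    leq = [(a, b) | a <- enumAll, b <- enumAll, a <= b]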

Nondeterministic choice is sometimes represented in Haskell using set-valued (here, list-valued) functions a -> [b]. You may recall this from the context of the List monad. In fact, in this case we have an isomorphism, as evidenced by tabulateSearch and searchRel.
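A sketch of that isomorphism:

    tabulateSearch :: BEnum a => (a -> [b]) -> Rel a b
    tabulateSearch f = [(a, b) | a <- enumAll, b <- f a]

    searchRel :: Eq a => Rel a b -> (a -> [b])
    searchRel r a = [b | (a', b) <- r, a == a']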

Similarly, partial functions a -> Maybe b can be reflected into relations.
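A sketch:

    tabulatePartial :: BEnum a => (a -> Maybe b) -> Rel a b
    tabulatePartial f = [(a, b) | a <- enumAll, Just b <- [f a]]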

A useful trick is to lift sets/subsets to relations as a diagonal relation \{(a,a) \mid a \in S\}. Projection onto the set can be achieved by composing with this relation. If S is the entire set, the result is the identity relation.
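A sketch:

    -- the diagonal relation of a set, given as a list of its elements
    diagRel :: [a] -> Rel a a
    diagRel s = [(a, a) | a <- s]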

Comparing Relations

We can compare sets by asking if one is a subset of the other A\subseteq B . Relations can also be compared by this operation, which we call relational inclusion.
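A sketch of inclusion for our list representation:

    rSub :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
    rSub x y = all (`elem` y) x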

A notion derived from this is relational equality.
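Equality is just mutual inclusion; a sketch:

    rEq :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
    rEq x y = (x `rSub` y) && (y `rSub` x)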

Relational algebra is chock-full of inequality-style reasoning, which is richer and slightly more complicated than equality-style reasoning. This is one of the benefits of moving from functional descriptions to relational descriptions.

Relations also form a lattice with respect to these comparisons. What the hell are lattices? In the context of finite relations, lattices may be overpowered mathematical machinery, but it really is useful down the line. They give you binary operators that play nice with some kind of ordering, in our case relational inclusion. These two operations are the meet and the join, which find the greatest lower bound and least upper bound of the operands respectively. For our relations, these correspond to the more familiar notions of set intersection and union. The intersection of two sets is the biggest set contained in both of them. The union is the smallest set of which both sets are subsets.
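For our list-of-tuples relations, a sketch of meet and join really is just Data.List intersection and union:

    import Data.List (intersect, union)

    meet :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
    meet = intersect

    join :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
    join = union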

Using meet/join vs intersection/union becomes more interesting when the domain is fancier than relations over finite sets: when issues of infinity arise, or when using a representation that can’t explicitly represent arbitrary unions or intersections and instead has to approximate them. My favorite example is polyhedra. Polyhedra are not closed under unions, so in this case the join and the union do not coincide. You need to take the convex hull of the union instead, which is the best approximation. Concretely, polyhedra can be represented as a list of their vertices, which generate the polyhedron. There is no way to express a union in this representation. Concatenating the lists represents taking the convex hull of the union.

An additional property that a lattice may possess is a largest and a smallest element, called top (\top) and bottom (\bot). Our finite-domain relations do have these.
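A sketch of top and bottom for our finite relations:

    -- every possible pair
    top :: (BEnum a, BEnum b) => Rel a b
    top = [(a, b) | a <- enumAll, b <- enumAll]

    -- the empty relation
    bottom :: Rel a b
    bottom = []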

Relational Division

And now finally we get to one of the most interesting, powerful, and confusing operations: relational division. Relational division is a kind of pseudo-inverse to relational composition. In linear algebra, the pseudo-inverse is a matrix that does the best job it can to invert another matrix in a least-squares sense. If the matrix is actually invertible, it equals the inverse. Relational division does the best job it can to invert a relational composition. Instead of taking least squares as a criterion, it ensures that the result doesn’t over-approximate. If you have the inequality X \cdot Y \subseteq Z and you want to solve for X, relational division is the thing that does that. The right division Q = Z/Y is the largest relation such that Q \cdot Y \subseteq Z.

A helpful example is the similar operation of division in database tables.

And here is an implementation that I think is correct. I’ve goofed it up a couple of times; it is a rather confusing construct.
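A sketch of such an implementation on our list representation (my reconstruction, reading the composition Q \cdot Y in matrix order: (a,c) is in Q \cdot Y iff there is some b with (a,b) in Q and (b,c) in Y):

    -- the largest q with (q composed with y) included in z
    rdiv :: (BEnum a, BEnum b, BEnum c, Eq a, Eq b, Eq c) => Rel a c -> Rel b c -> Rel a b
    rdiv z y = [ (a, b) | a <- enumAll, b <- enumAll
               , and [(a, c) `elem` z | c <- enumAll, (b, c) `elem` y] ]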

There is also the very similar operation ldiv, the left division.
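A matching sketch, the largest q with (y composed with q) included in z:

    ldiv :: (BEnum a, BEnum b, BEnum c, Eq a, Eq b, Eq c) => Rel a b -> Rel a c -> Rel b c
    ldiv y z = [ (b, c) | b <- enumAll, c <- enumAll
               , and [(a, c) `elem` z | a <- enumAll, (a, b) `elem` y] ]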

Relational division encapsulates many notions of searching or optimizing. I invite you to read more about it in J.N. Oliveira’s text or the Bird & de Moor text.

Properties and QuickCheck

Oh. Mah. Glob. You guys. So many properties. (Artwork courtesy of David)

Relation algebra is so chock-full of properties. This is a perfect opportunity for some QuickCheck, a randomized property testing framework. There are so many more to test. I need to dig through to collect up all the identities.
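A sketch of a couple of these properties as QuickCheck tests, assuming the rcompose, converse, and rEq sketches from above (Bool keeps the domains tiny):

    import Test.QuickCheck

    -- composition is associative
    prop_assoc :: Rel Bool Bool -> Rel Bool Bool -> Rel Bool Bool -> Bool
    prop_assoc f g h =
      ((f `rcompose` g) `rcompose` h) `rEq` (f `rcompose` (g `rcompose` h))

    -- converse is contravariant over composition
    prop_converse :: Rel Bool Bool -> Rel Bool Bool -> Bool
    prop_converse f g =
      converse (f `rcompose` g) `rEq` (converse g `rcompose` converse f)

    main :: IO ()
    main = do
      quickCheck prop_assoc
      quickCheck prop_converse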

Bits and Bobbles

  • Relations over continuous spaces. Vector subspaces (Linear Relations), Polyhedra (Linear inequality relations).
  • Non-Bool-valued relations. Replace \exists_x with \max_x. The weighted edge list of a graph is a natural relation. By using composition we can ask about paths. We still have a comparison operator \subseteq, which now respects the ordering of weights.
  • Galois connections are cool.
  • Relations combined with recursion schemes. Recursion schemes are the point free way of describing recursion.
  • Moving into infinite spaces. How do we cope?
  • Faster search. Some relations are best specified by functions, others by Maps, others by mixing and matching representations.
  • If you “singletonize” relations a la the Agda project https://github.com/scmu/aopa, you get very interesting interconnections with profunctors, which people say are a categorical generalization of relations.
  • Point-free DSLs are interesting and pleasant. Many worries about alpha renaming are gone, at the expense of point-free pain. A DSL like this may be necessary to choose good relational query plans.

References

Edit: An interesting comment and related library from /u/stevana

A Touch of Topological Quantum Computation 3: Categorical Interlude

Welcome back, friend.

In the last two posts, I described the basics of how to build and manipulate the Fibonacci anyon vector space in Haskell.

As a personal anecdote, trying to understand the category theory behind the theory of anyons is one of the reasons I started learning Haskell. These spaces are typically described using the terminology of category theory. I found it very frustrating that anyons were described in an abstract and confusing terminology. I really wondered if people were just making things harder than they have to be. I think Haskell is a perfect playground to clarify these constructions. While the category theory stuff isn’t strictly necessary, it is interesting and useful once you get past the frustration.

Unfortunately, I can’t claim that this article is going to be enough to take you from zero to categorical godhood, but I hope everyone can get something out of it. Give it a shot if you’re interested, and don’t sweat the details.

The Aroma of Categories

I think Steve Awodey gives an excellent nutshell of category theory in the introductory section to his book:

“What is category theory? As a first approximation, one could say that category theory is the mathematical study of (abstract) algebras of functions. Just as group theory is the abstraction of the idea of a system of permutations of a set or symmetries of a geometric object, so category theory arises from the idea of a system of functions among some objects.”

For my intuition, a category is any “things” that plug together. The “in” of a thing has to match the “out” of another thing in order to hook them together. In other words, the requirement for something to be a category is having a notion of composition. The things you plug together are called the morphisms of the category and the matching ports are the objects of the category. The additional requirement of always having an identity morphism (a do-nothing connection wire) is usually there once you have composition, although it is good to take especial note of it.

Category theory is an elegant framework for how to think about these composing things in a mathematical way. In my experience, thinking in these terms leads to good abstractions, and useful analogies between disparate things.

It is helpful for any abstract concept to list some examples to expose the threads that connect them. Category theory in particular has a ton of examples connecting to many other fields because it is a science of analogy. These are the examples of categories I usually reach for. Which one feels the most comfortable to you will depend on your background.

  • Hask. Objects are types. Morphisms are functions between those types.
  • Vect. Objects are vector spaces. Morphisms are linear maps (roughly matrices).
  • Preorders. Objects are values. Morphisms are the inequalities between those values.
  • Sets. Objects are sets. Morphisms are functions between sets.
  • Cat. Objects are categories. Morphisms are functors. This is a pretty cool one, although complete categorical narcissism.
  • Systems and Processes.
  • The free category of a directed graph. Objects are vertices. Morphisms are paths between vertices.

Generic Programming and Typeclasses

The goal of generic programming is to run programs that you write once in many ways.

There are many ways to approach this generic programming goal, but one way this is achieved in Haskell is by using Typeclasses. Typeclasses allow you to overload names, so that they mean different things based upon the types involved. Adding a vector is different than adding a float or int, but there are programs that can be written that reasonably apply in both situations.

Writing your program in a way that it applies to disparate objects requires abstract ways of talking about things. Mathematics is an excellent place to mine for good abstractions. In particular, the category theory abstraction has demonstrated itself to be a very useful unified vocabulary for mathematical topics. I, and others, find it also to be a beautiful aesthetic by which to structure programs.

There is a Category typeclass defined in the Haskell base library. In order to use it, you need to import the Prelude in an unusual way.
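A sketch of the usual import dance:

    import Prelude hiding (id, (.))
    import Control.Category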

The Category typeclass is defined on the type that corresponds to the morphisms of the category. This type has a slot for the input type and a slot for the output type. In order for something to be a category, it has to have an identity morphism and a notion of composition.
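The class in base looks like this (modulo kind-polymorphism details):

    class Category cat where
      id  :: cat a a
      (.) :: cat b c -> cat a b -> cat a c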

The most obvious example of this Category typeclass is the instance for the ordinary Haskell function (->). The identity corresponds to the standard Haskell identity function, and composition to ordinary Haskell function composition.
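That instance is, roughly:

    instance Category (->) where
      id = \x -> x
      f . g = \x -> f (g x)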

Another example of a category that we’ve already encountered is that of linear operators, which we’ll call LinOp. LinOp is an example of a Kleisli arrow, a category built using monadic composition rather than regular function composition. In this case, the monad Q from my first post takes care of the linear pipework that happens between every application of a LinOp. The fish operator <=< is monadic composition from Control.Monad.
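A sketch of the shape of LinOp; the Q monad here is a hypothetical stand-in for the one from the first post (a formal linear combination, i.e. a weighted list):

    import Control.Monad ((<=<))

    newtype Q a = Q [(a, Double)]

    instance Functor Q where
      fmap f (Q xs) = Q [(f a, w) | (a, w) <- xs]

    instance Applicative Q where
      pure a = Q [(a, 1)]
      Q fs <*> Q xs = Q [(f a, u * v) | (f, u) <- fs, (a, v) <- xs]

    instance Monad Q where
      Q xs >>= k = Q [(b, u * v) | (a, u) <- xs, let Q ys = k a, (b, v) <- ys]

    -- LinOp: a Kleisli-style category over Q
    newtype LinOp a b = LinOp { runLinOp :: a -> Q b }

    instance Category LinOp where
      id = LinOp pure
      (LinOp f) . (LinOp g) = LinOp (f <=< g)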

A related category is the FibOp category. This is the category of operations on Fibonacci anyons, which are also linear operations. It is LinOp specialized to the Fibonacci anyon space. All the operations we've previously discussed (F-moves, braiding) are in this category.

The "feel" of category theory takes focus away from the objects and tries to place focus on the morphisms. There is a style of functional programming called "point-free" where you avoid ever giving variables explicit names and instead use pipe-work combinators like (.), fst, snd, or (***). This also has a feel of de-emphasizing objects. Many of the combinators that get used in this style have categorical analogs. In order to generically use categorical typeclasses, you have to write your program in this point free style.

It is possible for a program written in the categorical style to be reinterpreted as a program, a linear algebra operation, a circuit, or a diagram, all without changing the actual text of the program. For more on this, I highly recommend Conal Elliott’s Compiling to Categories, which also puts forth a methodology to avoid the somewhat unpleasant point-free style using a compiler plugin. This might be an interesting place to mine for a good quantum programming language. YMMV.

Monoidal Categories

Putting two processes in parallel can be considered a kind of product. A category is monoidal if it has a product of this flavor, and has isomorphisms for reassociating objects and producing or consuming a unit object. This will make more sense when you see the examples.

We can sketch out this monoidal category concept as a typeclass, where we use () as the unit object.
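Here is one such sketch, with parC as the monoidal product on morphisms; the primed names are the inverse directions of the isomorphisms:

    class Category k => Monoidal k where
      parC :: k a c -> k b d -> k (a, b) (c, d)
      assoc  :: k ((a, b), c) (a, (b, c))
      assoc' :: k (a, (b, c)) ((a, b), c)
      leftUnitor   :: k ((), a) a
      leftUnitor'  :: k a ((), a)
      rightUnitor  :: k (a, ()) a
      rightUnitor' :: k a (a, ())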

Instances

In Haskell, the standard monoidal product for regular Haskell functions is (***) from Control.Arrow. It takes two functions and turns them into a function that does the same stuff, but on a tuple of the original inputs. The associators and unitors are fairly straightforward. We can freely dump the unit () and get it back, because there is only one possible value for it.
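A sketch of the instance under the class above:

    instance Monoidal (->) where
      parC f g = \(a, b) -> (f a, g b)  -- i.e. (***)
      assoc ((a, b), c) = (a, (b, c))
      assoc' (a, (b, c)) = ((a, b), c)
      leftUnitor ((), a) = a
      leftUnitor' a = ((), a)
      rightUnitor (a, ()) = a
      rightUnitor' a = (a, ())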

The monoidal product we'll choose for LinOp is the tensor/outer/Kronecker product.

Otherwise, LinOp is basically a monadically lifted version of (->). The one dimensional vector space Q () is completely isomorphic to just a number. Taking the Kronecker product with it is basically the same thing as scalar multiplying (up to some shuffling).

Now for a confession. I made a misstep in my first post. In order to make our Fibonacci anyons jive nicely with our current definitions, I should have defined our identity particle using type Id = () rather than data Id. We'll do that now. In addition, we need some new primitive operations for absorbing and emitting identity particles that did not feel relevant at that time.

With these in place, we can define a monoidal instance for FibOp. The extremely important and intriguing F-move operations are the assoc operators for the category. While other categories have assoc that feel nearly trivial, these F-moves don't feel so trivial.

This is actually useful

The parC operation is extremely useful to explicitly note in a program. It is an opportunity for optimization. It is possible to inefficiently implement parC in terms of other primitives, but it is very worthwhile to give it its own primitive implementation (although I haven’t here). In the case of (->), parC is an explicit location where actual computational parallelism is available. Once you perform parC, it is no longer obviously apparent that the left and right sides of the tuple share no data during the computation. In the case of LinOp and FibOp, parC is a location where you can perform factored linear computations. The matrix-vector product (A \otimes B)(v \otimes w) can be performed on the factors individually as (Av) \otimes (Bw). In the first case, where we densify A \otimes B and then perform the multiplication, it costs O((N_A N_B)^2) time, whereas performing the multiplications individually on the factors costs O(N_A^2 + N_B^2) time, a significant savings. Applied category theory indeed.

Laws

Judge Dredd courtesy of David

Like many typeclasses, these monoidal morphisms are assumed to follow certain laws. Here is a sketch (for a more thorough discussion check out the wikipedia page):

  • Functions with a tick at the end like assoc' should be the inverses of the functions without the tick like assoc, e.g. assoc . assoc' = id
  • The parC operation is (bi)functorial, meaning it obeys the commutation law parC (f . f') (g . g') = (parC f g) . (parC f' g') i.e. it doesn't matter if we perform composition before or after the parC.
  • The pentagon law for assoc: Applying leftbottom is the same as applying topright
  • The triangle law for the unitors:

String Diagrams

String diagrams are a diagrammatic notation for monoidal categories. Morphisms are represented by boxes with lines.

Composition g . f is made by connecting lines.

The identity id is a raw arrow.

The monoidal product of morphisms f \otimes g is represented by placing lines next to each other.

The diagrammatic notation is so powerful because the laws of monoidal categories are built so deeply into it that they can go unnoticed. Identities can be put in or taken away. Association doesn’t even appear in the diagram. The boxes in the notation can naturally be pushed around and commuted past each other.

This corresponds to the property

(id \otimes g) \circ (f \otimes id) = (f \otimes id) \circ (id \otimes g)

What expression does the following diagram represent?

Is it (f \circ f') \otimes (g \circ g') (in Haskell notation parC (f . f') (g . g'))?

Or is it (f \otimes g) \circ (f' \otimes g') (in Haskell notation (parC f g) . (parC f' g'))?

Answer: It doesn't matter because the functorial requirement of parC means the two expressions are identical.

There are a number of notations you might meet in the world that can be interpreted as string diagrams. Three that seem particularly pertinent are:

  • Quantum circuits
  • Anyon Diagrams!

Braided and Symmetric Monoidal Categories: Categories That Braid and Swap

Some monoidal categories have a notion of being able to braid morphisms. If so, it is called a braided monoidal category (go figure).
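As a typeclass sketch, braiding adds two new morphisms to a monoidal category:

    class Monoidal k => Braided k where
      over  :: k (a, b) (b, a)
      under :: k (a, b) (b, a)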

The over and under morphisms are inverses of each other: over . under = id. The over morphism pulls the left morphism over the right, whereas the under pulls the left under the right. The diagram definitely helps in understanding this definition.

These over and under morphisms need to play nice with the associator of the monoidal category. These are laws that a valid instance of the typeclass should follow. We actually already met them in the very first post.

If the over and under of the braiding are the same, the category is a symmetric monoidal category. This typeclass needs no extra functions, but it is now intended that the law over . over = id is obeyed.
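So the typeclass sketch is just a marker carrying extra laws:

    -- law: over = under, hence over . over = id
    class Braided k => Symmetric k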

When we draw a braid in a symmetric monoidal category, we don't have to be careful with which one is over and under, because they are the same thing.

The examples that come soonest to mind have this symmetric property; for example, (->) is a symmetric monoidal category.
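A sketch of the instances for (->):

    instance Braided (->) where
      over (x, y) = (y, x)
      under (x, y) = (y, x)

    instance Symmetric (->)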

Similarly, LinOp has a notion of swapping that is just a lifting of swap.

However, FibOp is not symmetric! This is perhaps at the core of what makes FibOp so interesting.

Automating Association

Last time, we spent a lot of time doing weird typelevel programming to automate the pain of manual association moves. We can do something quite similar to make the categorical reassociation less painful, and more like the carefree ideal of the string diagram, if we replace composition (.) with a slightly different operator.

Before defining reassoc, let's define a helper LeftCollect typeclass. Given a typelevel integer n, it will reassociate the tree using a binary search procedure to make sure the left branch l at the root has Count l = n.
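As a hypothetical sketch, here is my guess at the leaf-counting machinery such a class would lean on:

    {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}

    import GHC.TypeLits

    -- number of leaves in a tuple tree
    type family Count a :: Nat where
      Count (a, b) = Count a + Count b
      Count a      = 1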

Once we have LeftCollect, the typeclass ReAssoc is relatively simple to define. Given a pattern tree, we can count the elements in its left branch and LeftCollect the source tree to match that number. Then we recursively apply reassoc in the left and right branches of the tree. This means that every node has the same number of elements beneath it in both trees, hence the trees will end up in an identical shape (modulo me mucking something up).

It seems likely that one could write equivalent instances that would work for an arbitrary monoidal category with a bit more work. We are aided somewhat by the fact that FibOp has a finite universe of possible leaf types to work with.

Closing Thoughts

While our categorical typeclasses are helpful and nice, I should point out that they are not going to cover all the things that can be described as categories, even in Haskell, just as the Functor typeclass does not describe all the conceptual functors you might meet. One beautiful monoidal category is that of Haskell Functors under the monoidal product of functor composition. More on this to come, I think. https://parametricity.com/posts/2015-07-18-braids.html

We never even touched the dot product in this post. This corresponds to another doodle in a string diagram, and another power to add to your category. It is somewhat trickier to work with cleanly in familiar Haskell terms, I think because (->) is at least not super obviously a dagger category?

You can find a hopefully compiling version of all my snippets and more in my chaotic mutating Github repo https://github.com/philzook58/fib-anyon

See you next time.

References

The Rosetta Stone paper by Baez and Stay is probably the conceptual daddy of this entire post (and more).

Bartosz Milewski's Category Theory for Programmers blog (online book, really) and YouTube series are where I learned most of what I know about category theory. I highly recommend them (huge Bartosz fanboy).

Catsters - https://byorgey.wordpress.com/catsters-guide-2/

https://en.wikibooks.org/wiki/Haskell/Category_theory

https://www.math3ma.com/blog/what-is-category-theory-anyway

There are fancier embeddings of category theory and monoidal categories than I've shown here. Often you want constrained categories and the ability to choose unit objects. I took a rather simplistic approach here.

http://hackage.haskell.org/package/constrained-categories

http://hackage.haskell.org/package/data-category


https://parametricity.com/posts/2015-07-18-braids.html