Linear Relation Algebra of Circuits with HMatrix

Oooh this is a fun one.

I’ve talked before about relation algebra and I think it is pretty neat. http://www.philipzucker.com/a-short-skinny-on-relations-towards-the-algebra-of-programming/. In that blog post, I used finite relations. In principle, they are simple to work with. We can perform relation algebra operations like composition, meet, and join by brute force enumeration.

Unfortunately, brute force may not always be an option. First off, the finite relations grow so enormous as to make this infeasible. Secondly, it is not insane to talk about relations or regions with an infinite number of elements, such as some continuous blob in 2D space. In that case, we can’t even in principle enumerate all the points in the region. What are we to do? We need to develop some kind of finite parametrization of regions to manipulate. This parametrization basically can’t possibly be complete in some sense, and we may choose more or less powerful systems of description for computational reasons.

In this post, we are going to be talking about linear or affine subspaces of a continuous space. These subspaces are flat things: points, lines, planes, and hyperplanes. Linear subspaces have to go through the origin, while affine subspaces can have an offset from the origin.

In the previous post, I mentioned that the finite relations formed a lattice, with operations meet and join. These operations were the same as set intersection and union, so the introduction of the extra terminology meet and join felt a bit unwarranted. Now the meet and join aren’t union and intersection anymore. We have chosen not to have the capability to represent the union of two subspaces; instead we can only represent the smallest subspace that contains them both, which is the closure of the union under vector addition. For example, the join of a line and a point not on it will be the plane that goes through both.

Linear/Affine stuff is great because it is so computational. Most questions you want to ask are answerable by readily available numerical linear algebra packages. In this case, we’ll use the Haskell package HMatrix, which is something like a numpy/scipy equivalent for Haskell. We’re going to use type-level indices to denote the sizes and partitioning of these spaces so we’ll need some helper functions.

Matrices are my patronum. They make everything good. Artwork courtesy of David

In case I miss any extensions, make typos, etc, you can find a complete compiling version here https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs

In analogy with sets of tuples for defining finite relations, we partition the components of the linear spaces to be “input” and “output” indices/variables \begin{bmatrix} x_1 & x_2 & x_3 & ... & y_1 & y_2 & y_3 & ... \end{bmatrix}. This partition is somewhat arbitrary and easily moved around, but the weakening of strict notions of input and output as compared to functions is the source of the greater descriptive power of relations.

Relations are extensions of functions, so linear relations are an extension of linear maps. A linear map has the form y = Ax. A linear relation has the form Ax + By = 0. An affine map has the form y = Ax + b and an affine relation has the form Ax + By = b.

There are at least two useful concrete representations for subspaces.

  1. We can write a matrix A and vector b down that corresponds to affine constraints. Ax = b. The subspace described is the nullspace of A plus a solution of the equation. The rows of A are orthogonal to the space.
  2. We can hold onto generators of subspace. x = A' l+b where l parametrizes the subspace. In other words, the subspace is generated by / is the span of the columns of A'. It is the range of A'.

We’ll call these two representations the H-Rep and V-Rep, borrowing terminology from similar representations in polytopes (describing a polytope by the inequalities that define its faces or as the convex combination of its vertices). https://inf.ethz.ch/personal/fukudak/lect/pclect/notes2015/PolyComp2015.pdf These two representations are dual in many respects.
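As a crude sketch (dropping the type-level index bookkeeping the real code carries around), the two representations might be encoded as:

import Numeric.LinearAlgebra

-- H-rep: the subspace { x | a x = b }
data HLinRel = HLinRel (Matrix Double) (Vector Double)

-- V-rep: the subspace { a' l + b' | l arbitrary }
data VLinRel = VLinRel (Matrix Double) (Vector Double)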

It is useful to have both reps and interconversion routines, because different operations are easy in the two representations. Any operations defined on one can be defined on the other by sandwiching between these conversion functions. Hence, we basically only need to define operations for one of the reps (if we don’t care too much about efficiency loss which, fair warning, is out the window for today). The bulk of computation will actually be performed by these interconversion routines. The HMatrix function nullspace performs an SVD under the hood and gathers up the space with 0 singular values.
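A sketch of the interconversions in terms of the untyped types above; (<\>) returns a particular least-squares solution:

h2v :: HLinRel -> VLinRel
h2v (HLinRel a b) = VLinRel (nullspace a) (a <\> b)
-- generators: a basis for the nullspace of a; offset: a solution of a x = b

v2h :: VLinRel -> HLinRel
v2h (VLinRel a' b') = HLinRel a (a #> b')
  where a = tr (nullspace (tr a'))  -- rows orthogonal to the span of a'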

These linear relations form a category. I’m not using the Category typeclass because I need BEnum constraints hanging around. The identity relation is x = y, aka Ix - Iy = 0.
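In the stripped-down encoding it might look like this (taking the dimension as a value since we dropped the type indices):

hid :: Int -> HLinRel
hid n = HLinRel (ident n ||| scale (-1) (ident n)) (konst 0 n)
-- [ I | -I ] [x ; y] = 0, i.e. x = y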

Composing relations is done by combining the constraints of the two relations and then projecting out the interior variables. Taking the conjunction of constraints is easiest in the H-Rep, where we just need to vertically stack the individual constraints. Projection is easily done in the V-Rep, where you just need to drop the appropriate section of the generator vectors. So we implement this operation by flipping between the two.
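Roughly, with explicit dimensions standing in for the type-level indices (a sketch reusing h2v/v2h from above):

-- compose f : a ~> b with g : b ~> c, given the dimensions of a, b, c
hcompose :: (Int, Int, Int) -> HLinRel -> HLinRel -> HLinRel
hcompose (na, nb, nc) (HLinRel m b) (HLinRel m' b') = v2h (VLinRel gen' base')
  where
    -- stack both constraint systems over the joint variables [x; y; z]
    big = fromBlocks
      [ [takeColumns na m,      dropColumns na m,  konst 0 (rows m, nc)]
      , [konst 0 (rows m', na), takeColumns nb m', dropColumns nb m'] ]
    bigb = vjoin [b, b']
    -- flip to generators, then project out the shared middle variables y
    VLinRel gen base = h2v (HLinRel big bigb)
    gen'  = takeRows na gen === dropRows (na + nb) gen
    base' = vjoin [subVector 0 na base, subVector (na + nb) nc base]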

We can implement the general cadre of relation operators, meet, join, converse. I feel the converse is the most relational thing of all. It makes inverting a function nearly a no-op.

Relational inclusion is the question of subspace inclusion. It is fairly easy to check if a VRep is in an HRep (just plug the generators into the constraints and see if they obey them) and by using the conversion functions we can define it for arbitrary combos of H and V.
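A sketch of the V-in-H check: the constraints must annihilate the generators, and the base point must satisfy them (up to numerical tolerance).

vhsub :: VLinRel -> HLinRel -> Bool
vhsub (VLinRel a' b') (HLinRel a b) =
  norm_2 (a <> a') < eps && norm_2 (a #> b' - b) < eps
  where eps = 1e-10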

It is useful to use the direct sum of the spaces as a monoidal product.

A side note: Void causes some consternation. Void is the type with no elements and is the index type of a 0 dimensional space. It is the unit object of the monoidal product. Unfortunately by an accident of the standard Haskell definitions, actual Void is not a BEnum. So, I did a disgusting hack. Let us not discuss it more.

Circuits

Baez and Fong have an interesting paper where they describe building circuits using a categorical graphical calculus. We have the pieces to go about something similar. What we have here is a precise way in which circuit diagrams can be thought of as string diagrams in a monoidal category of linear relations.

An idealized wire has two quantities associated with it, the current flowing through it and the voltage it is at.

When we connect wires, the currents must be conserved and the voltages must be equal. hid and hcompose from above still achieve that. Composing two independent circuits in parallel is achieved by hpar.

Independent resistors in parallel.

We will want some basic tinker toys to work with.

A resistor in series has the same current at both ends and a voltage drop proportional to the current.
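In the stripped-down H-rep, with the wire variables ordered [v1, i1, v2, i2], that might read:

resistor :: Double -> HLinRel
resistor r = HLinRel (fromLists [[0,  1,  0, -1],    -- i1 = i2
                                 [1, -r, -1,  0]])   -- v1 - v2 = r * i1
                     (konst 0 2)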

Composing two resistors in series adds the resistance: (resistor r1) <<< (resistor r2) == resistor (r1 + r2).

A bridging resistor allows current to flow between the two branches

A bridging resistor

Composing two bridge circuits puts the bridge resistors in parallel. The conductance G=\frac{1}{R} of resistors in parallel adds: hcompose (bridge r1) (bridge r2) == bridge (1 / (1/r1 + 1/r2)).

parallel resistors compose

An open circuit allows no current to flow and ends a wire. open ~ resistor infinity

At branching points, the voltage is maintained, but the current splits.

This cmerge combinator could also be built using short == bridge 0, composing a branch with open, and then absorbing the Void away.

We can bend wires up or down by using a composition of cmerge and open.

Voltage and current sources constrain the voltage drop or the current to be certain values.
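Sketches of both, in the same [v1, i1, v2, i2] variable ordering as the resistor above:

vsource :: Double -> HLinRel
vsource v = HLinRel (fromLists [[0, 1,  0, -1],   -- i1 = i2
                                [1, 0, -1,  0]])  -- v1 - v2 = v
                    (fromList [0, v])

isource :: Double -> HLinRel
isource i = HLinRel (fromLists [[0, 1, 0, 0],     -- i1 = i
                                [0, 0, 0, 1]])    -- i2 = i
                    (fromList [i, i])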

Measurements of circuits proceed by probes.

Inductors and capacitors could be included easily, but would require the entries of the HMatrix values to be polynomials in the frequency \omega, which HMatrix does not support (but it could!). We’ll leave those off for another day.

We can actually check by computation that the rules suggested above are being followed.

Bits and Bobbles

  • Homogeneous systems are usually a bit more elegant to deal with, although a bit more unfamiliar and abstract.
  • Could make a pandas like interface for linear relations that uses numpy/scipy.sparse for the computation. All the swapping and associating is kind of fun to design, not so much to use. Labelled n-way relations are nice for users.
  • Implicit/Lazy evaluation. We should let the good solvers do the work when possible. We implemented our operations eagerly. We don’t have to. By allowing hidden variables inside our relations, we can avoid the expensive linear operations until it is useful to actually compute on them.
  • Relational division = quotient spaces?
  • DSL. One of the beauties of the pointfree/categorical approach is that you avoid the need for binding forms. This makes for a very easily manipulated DSL. The transformations feel like those of ordinary algebra and you don’t have to worry about the subtleties of index renaming or substitution under binders.
  • Sparse is probably really good. We have lots of identity matrices and simple rearrangements. It is very wasteful to use dense operations on these.
  • Schur complements https://en.wikipedia.org/wiki/Schur_complement are the name of the game for projecting out pieces of linear problems. We have some overlap.
  • Linear relations -> Polyhedral relations -> Convex Relations. Linear is super computable, polyhedral can blow up. Rearrange a DSL to abuse Linear programming as much as possible for queries.
  • Network circuits. There is an interesting subclass of circuits that is designed to be pretty composable.

https://en.wikipedia.org/wiki/Two-port_network Two port networks are a very useful subclass of electrical circuits. They model transmission lines fairly well, and are easily composable for filter construction.

It is standard to describe these networks by giving a linear function from two of the variables to the other two. Depending on your choice of which variables depend on which, these are called the z-parameters, y-parameters, h-parameters, scattering parameters, or ABCD parameters. There are tables of formulas for converting from one form to the others. The different parameters suit different use cases for composition and combining in parallel or series. From the perspective of linear relations this all seems rather silly. The necessity for so many descriptions and the confusing relationships between them come from the unnecessary and overly rigid requirement of having a linear function-like relationship rather than just a general relation, which depending on the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A functional relationship is always a lie (although a sometimes useful one), as there is always back-reaction from new connections.

The relation model also makes clearer how to build lumped models out of continuous ones. https://en.wikipedia.org/wiki/Lumped-element_model

https://en.wikipedia.org/wiki/Transmission_line https://en.wikipedia.org/wiki/Distributed-element_model

  • Because the type indices have no connection to the actual data types (they are phantom) it is a wise idea to use smart constructors that check that the sizes of the matrices make sense.
  • Nonlinear circuits. Grobner Bases and polynomial relations?
  • Quadratic optimization under linear constraints. Can’t get it to come out right yet. Clutch for Kalman filters. Nice for many formulations like least power, least action, minimum energy principles.
  • Quadratic Operators -> Convex operators. See last chapter of Rockafellar.
  • Duality of controllers and filters. It is well known (I think) that for every controller algorithm there is a filter algorithm that is basically the same thing.
    • LQR – Kalman
    • Viterbi filter – Value function table
    • particle filter – Monte Carlo control
    • Extended Kalman – iLQR-ish? Use local approximation of dynamics
    • unscented kalman – ?


Neural Networks with Weighty Lenses (DiOptics?)

I wrote a while back how you can make a pretty nice DSL for reverse mode differentiation based on the same type as Lens. I’d heard some interesting rumblings on the internet around these ideas and so was revisiting them.

Composition is defined identically for reverse mode just as it is for lens.
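In the function-pair encoding from that post (hedging on the exact parameters the original used), the type and its composition look something like:

type Lens x dx y dy = x -> (y, dy -> dx)

compose :: Lens y dy z dz -> Lens x dx y dy -> Lens x dx z dz
compose g f x = let (y, f') = f x   -- forward through f, capturing saved values
                    (z, g') = g y   -- forward through g
                in (z, f' . g')     -- backward: transposed Jacobians in reverse order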

The forward computation shares info with the backwards differential propagation, which corresponds to a transposed Jacobian.

After chewing on it a while, I realized this really isn’t that exotic. How it works is that you store the reverse mode computation graph, and all necessary saved data from the forward pass in the closure of the (dy -> dx). I also have a suspicion that if you defunctionalized this construction, you’d get the Wengert tape formulation of reverse mode ad.

Second, Lens is just a nice structure for bidirectional computation, with one forward pass and one backward pass which may or may not be getting/setting. There are other examples for using it like this.

It is also pretty similar to the standard “dual number” form type FAD x dx y dy = (x,dx)->(y,dy) for forward mode AD. We can bring the two closer by a CPS/Yoneda transformation and then some rearrangement.

and meet it in the middle with something like the following (a rough reconstruction of my own):
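type FAD   x dx y dy = (x, dx) -> (y, dy)   -- dual number form, forward mode
type FAD'  x dx y dy = x -> dx -> (y, dy)   -- curried
-- for AD the primal y does not depend on dx, so it can be pulled forward
type FAD'' x dx y dy = x -> (y, dx -> dy)
-- which has the same shape as the reverse-mode lens, with the
-- differential map running the other way
type RAD   x dx y dy = x -> (y, dy -> dx)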

I ended the previous post somewhat unsatisfied by how ungainly writing that neural network example was, and I called for Conal Elliott’s compiling to categories plugin as a possible solution. The trouble is piping the weights all over the place. This piping is very frustrating in point-free form, especially when you know it’d be so trivial pointfully. While the inputs and outputs of layers of the network compose nicely (you no longer need to know about the internal computations), the weights do not. As we get more and more layers, we get more and more weights. The weights are in some sense not as compositional as the inputs and outputs of the layers, or compose in a different way that you need to maintain access to.

I thought of a very slight conceptual twist that may help.

The idea is we keep the weights out to the side in their own little type parameter slots. Then we define composition such that it composes input/outputs while tupling the weights. Basically we throw the repetitive complexity appearing in piping the weights around into the definition of composition itself.
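A sketch of the idea, building on the lens type above (WLens and compose2 are my own names):

type WLens w x dx y dy = (w, x) -> (y, dy -> (w, dx))
-- weights ride in a side slot; the backward pass returns the weight
-- gradient alongside dx

compose2 :: WLens w y dy z dz -> WLens v x dx y dy
         -> WLens (w, v) x dx z dz
compose2 g f ((w, v), x) =
  let (y, f') = f (v, x)
      (z, g') = g (w, y)
  in (z, \dz -> let (dw, dy) = g' dz
                    (dv, dx) = f' dy
                in ((dw, dv), dx))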

These operations are easily seen as 2 dimensional diagrams.

Three layers composed, exposing the weights from all layers
The 2-D arrow things can be built out of the 1-D arrows of the original basic AD lens by bending the weights up and down. Ultimately they are describing the same thing.

Here are the core reverse-mode lens AD combinators.
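Reconstructing their flavor (identity, parallel composition, and fan-out, using the Lens type from the composition sketch above; dup sums gradients because a value used twice receives two contributions):

idL :: Lens x dx x dx
idL x = (x, id)

parL :: Lens x dx y dy -> Lens x' dx' y' dy'
     -> Lens (x, x') (dx, dx') (y, y') (dy, dy')
parL f g (x, x') = let (y,  f') = f x
                       (y', g') = g x'
                   in ((y, y'), \(dy, dy') -> (f' dy, g' dy'))

dup :: Num dx => Lens x dx (x, x) (dx, dx)
dup x = ((x, x), \(dx1, dx2) -> dx1 + dx2)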

And here are the two dimensional combinators. I tried to write them point-free in terms of the combinators above to demonstrate that there is no monkey business going on.

I wonder if this is actually nice?

I asked around and it seems like this idea may be what davidad is talking about when he refers to dioptics.

http://events.cs.bham.ac.uk/syco/strings3-syco5/slides/dalrymple.pdf

Perhaps this will initiate a convo.

Edit: He confirms that what I’m doing appears to be a dioptic. Also he gave a better link http://events.cs.bham.ac.uk/syco/strings3-syco5/papers/dalrymple.pdf

He is up to some interesting diagrams

https://twitter.com/davidad/status/1179760373030801408?s=20

Bits and Bobbles

  • Does this actually work or help make things any better?
  • Recurrent neural nets flip my intended role of weights and inputs.
  • Do conv-nets naturally require higher dimensional diagrams?
  • This weighty style seems like a good fit for my Gauss-Seidel and iterative LQR solvers. A big problem I hit there was getting all the information to the outside, which is a similar issue to getting the weights around in a neural net.

Functors, Vectors, and Quantum Circuits

Vectors are dang useful things, and any energy you put into them seems to pay off massive dividends.

Vectors and Linear Algebra are useful for:

  • 2D, 3D, 4D geometry stuff. Computer graphics, physics etc.
  • Least Squares Fitting
  • Solving discretized PDEs
  • Quantum Mechanics
  • Analysis of Linear Dynamical Systems
  • Probabilistic Transition Matrices

There are certain analogies between Haskell Functors and Vectors that correspond to a style of computational vector mathematics that I think is pretty cool and don’t see talked about much.

Due to the expressivity of its type system, Haskell has a first class notion of container that many other languages don’t. In particular, I’m referring to the fact that Haskell has higher kinded types * -> * (types parametrized on other types) that you can refer to directly without filling them in first. Examples in the standard library include Maybe, [], Identity, Const b, and Either b. Much more vector-y feeling examples can be found in Kmett’s linear package: V0, V1, V2, V3, V4. For example, the essence of the 4-dimensional vector type V4 is just:
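{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
-- essentially as defined in Linear.V4 (the real one derives many more instances)
data V4 a = V4 a a a a
  deriving (Show, Functor, Foldable, Traversable)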

This really isn’t such a strange, esoteric thing as it may appear. You wouldn’t blink an eye at a struct holding four floats in some other language.

in some other language. What makes Haskell special is how compositional and generic it is. We can build thousand element structs with ease via composition. What we have here is an alternative to the paradigm of computational vectors ~ arrays. Instead we have computational vectors ~ structs. In principle, I see no reason why this couldn’t be as fast as arrays, although with current compiler expectations it probably isn’t.

Monoidal categories are a mathematical structure that models this analogy well. They have been designed by mathematicians for aesthetic elegance, and it seems plausible that following their example leads us to interesting, useful, and pleasant vector combinators. And personally, I find something about category theory that tickles me.

So to get started, let’s talk a bit about functors.

The Algebra of Functors

Functors in Haskell are a typeclass for containers. They allow you to map functions over all the items in the container. They are related to the categorical notion of functor, which is a mapping between categories.

You can lift the product and sum of types to the product and sum of Functors which you may find in Data.Functor.Product and Data.Functor.Sum. This is analogous to the lifting of ordinary addition and multiplication to the addition and multiplication of polynomials, which are kind of like numbers with a “hole”.

Functors also compose. A container of containers of a is still a container of a. We can form composite containers by using the Compose newtype wrapper.

When you use this Compose newtype, instead of having to address the individual elements by using fmap twice, a single application of fmap will teleport you through both layers of the container.
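Concretely, these are (as defined in Data.Functor.Product, Data.Functor.Sum, and Data.Functor.Compose):

data    Product f g a = Pair (f a) (g a)
data    Sum     f g a = InL (f a) | InR (g a)
newtype Compose f g a = Compose { getCompose :: f (g a) }

-- a single fmap teleports through both layers
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose fga) = Compose (fmap (fmap h) fga)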

Product, Sum, and Compose are all binary operators on functors. These type constructors have kind (* -> *) -> (* -> *) -> (* -> *).

Some important other functors from the algebra of types perspective are Const Void a, Const () a, and Identity a. These are identity elements for Sum, Product, and Compose respectively.

You can define mappings between containers that don’t depend on the specifics of their contents. These mappings can only rearrange, copy, and forget items of their contained type. This can be enforced at the type level by a polymorphic type signature like type f ~> g = forall a. f a -> g a.

These mappings correspond in categorical terminology to natural transformations between the functors f and g. There is a category where objects are Functors and morphisms are natural transformations. Sum, Product, and Compose all obey the laws necessary to be a monoidal product on this category.

How the lifting of functions works for Compose is kind of neat.

Because the natural transformations require polymorphic types, when you apply ntf to fg the polymorphic variable a in the type of ntf restricts to a ~ g a'.
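In code (RankNTypes required; liftOuter is my name for it):

{-# LANGUAGE RankNTypes #-}
liftOuter :: (forall a. f a -> f' a) -> Compose f g b -> Compose f' g b
liftOuter ntf (Compose fg) = Compose (ntf fg)  -- here a instantiates to (g b)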

Product and Sum have a straightforward notion of commutativity ((a,b) is isomorphic to (b,a)). Compose is more subtle. sequenceA from the Traversable typeclass can swap the ordering of composition. sequenceA . sequenceA may or may not be the identity depending on the functors in question, so it has some flavor of a braiding operation. This is an interesting post on that topic https://parametricity.com/posts/2015-07-18-braids.html

Combinators of these sorts arise in at least the following contexts

  • Data types a la carte – A systematic way of building extensible data types
  • GHC Generics – A system for building generic functions that operate on data types that can be described with sums, products, recursion, and holes.
  • In and around the Lens ecosystem

Also see the interesting post by Russell O’Connor on functor oriented programming http://r6.ca/blog/20171010T001746Z.html. I think the above is part of that to which he is referring.

Vector Spaces as Shape

Vector spaces are made of two parts, the shape (dimension) of the vector space and the scalar.

Just as a type of kind * -> * can be thought of as a container modulo its held type, it can also be a vector modulo its held scalar type. The higher kinded type for vectors gives an explicit slot to place the scalar type.

The standard Haskell typeclass hierarchy gives you some of the natural operations on vectors if you so choose to abuse it in that way.

  • Functor ~> Scalar Multiplication: smul s = fmap (* s)
  • Applicative ~> Vector Addition: vadd x y = (+) <$> x <*> y
  • Traversable ~> Transposition. sequenceA has the type of transposition and works correctly for the linear style containers like V4.

The linear library does use Functor for scalar multiplication, but defines a special typeclass for addition, Additive. I think this is largely for the purpose of bringing Map-like vectors into the fold, but I’m not sure.

Once we’ve got the basics down of addition and scalar multiplication, the next thing I want is operations for combining vector spaces. Two important ones are the Kronecker product and the direct sum. In terms of indices, the Kronecker product is a space that is indexed by the cartesian product (,) of its input space indices, and the direct sum is a space indexed by the Either of its input space indices. Both are very useful constructs. I use the Kronecker product all the time when I want to work on 2D or 3D grids, for example. In a toy 2-D finite difference Laplace equation, we can lift the 1D second derivative matrix K = \partial_x^2 via K_2 = K \otimes I + I \otimes K. The direct sum is useful as a notion of stacking matrices.

The following is perhaps the most important point of the entire post.

Compose of vector functors gives the Kronecker product, and Product gives the direct sum (this can be confusing, but it’s right. Remember, the sum in direct sum refers to the indices).

We can form the Kronecker product of vectors given a Functor constraint.
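Something like this sketch (Kron and DSum being synonyms introduced for the vector reading):

import Data.Functor.Compose
import Data.Functor.Product

type Kron = Compose   -- Kronecker product of vector shapes
type DSum = Product   -- direct sum of vector shapes

kron :: (Num a, Functor f, Functor g) => f a -> g a -> Kron f g a
kron x y = Compose (fmap (\s -> fmap (s *) y) x)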

Notice we have two distinct but related things called kron: Kron and kron. One operates on vector spaces and the other operates on vector values.

Building vector spaces out of small combinators like V2, V4, DSum, Kron is interesting for a number of reasons.

  • It is well typed. Similar to Nat indexed vectors, the types specify the size of the vector space. We can easily describe vector spaces as powers of 2, V16 = Kron V2 (Kron V2 (Kron V2 (Kron V2 V1))), similarly in terms of their prime factors, or we can do a binary expansion (least significant bit first) V5 = DSum V1 (Kron V2 (DSum V0 (Kron V2 V1))), or other things. We do it without going into quasi-dependently typed land or GADTs.
  • It often has better semantic meaning. It is nice to say Measurements, or XPosition or something rather than just denote the size of a vector space in terms of a nat. It is better to say a vector space is the Kron of two meaningful vector spaces than to just say it is a space of size m*n. I find it pleasant to think of the naturals as a free Semiring rather than as the Peano Naturals and I like the size of my vector space defined similarly.
  • Interesting opportunities for parallelism. See Conal Elliott’s paper on scans and FFT: http://conal.net/papers/generic-parallel-functional/

What do linear operators look like?

In the Vectors as shape methodology, Vectors look very much like Functors.

I have been tempted to lift the natural transformation type above to the following for linear operators.
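That is, something like:

{-# LANGUAGE RankNTypes #-}
type LinOp f g = forall a. Num a => f a -> g a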

In a sense this works: we could implement kron, because many of the container types (V1, V2, V3, etc.) in the linear package implement Num. However, choosing Num is a problem. Why not Fractional? Why not Floating? Sometimes we want those. Why not just specifically Double?

We don’t really want to lock away the scalar in a higher rank polymorphic type. We want to ensure that everyone is working in the same scalar type before allowing things to proceed.

Note also that this type does not constrain us to linearity. Can we form the Kronecker product of linear operators? Yes, but I’m not in love with it. This is not nearly so beautiful as the little natural transformation dance.

This was a nice little head scratcher for me. Follow the types, my friend! I find this particularly true for uses of sequenceA: when I find I want the containers swapped in ordering, sequenceA is usually the right call. It could have been called transpose.

Giving the vector direct access to the scalar feels a bit off to me. I feel like it doesn’t leave enough “room” for compositionality. However, there is another possible definition of morphisms that I think is rather elegant.
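Namely (this is the LinOp1 form mentioned in the bits and bobbles below; Additive is from the linear package and Kron is the Compose synonym from above):

{-# LANGUAGE RankNTypes #-}
import Linear (Additive)

type LinOp1 f g a = forall k. Additive k => Kron f k a -> Kron g k a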

Does this form actually enforce linearity? You may still rearrange objects. Great. You can also now add and scalar multiply them with the Additive k constraint. We also expose the scalar, so it can be enforced to be consistent.

One other interesting thing to note is that these forms allow nonlinear operations. fmap, liftU2 and liftI2 are powerful operations, but I think if we restricted Additive to just a correctly implemented scalar multiply and vector addition operation, and zero, we’d be good.

We can recover the previous form by instantiating k to V1. V1, the 1-d vector space, is almost a scalar and can play the scalar’s role in many situations. V1 is the unit object with respect to the monoidal product Kron.

There seems to be a missing Additive instance that would be useful. There is probably a good reason it isn’t there, but I need it.

Monoidal Categories

The above analogy can be put into mathematical terms by noting that both vectors and functors form monoidal categories. I talked quite a bit about monoidal categories in a previous post http://www.philipzucker.com/a-touch-of-topological-computation-3-categorical-interlude/ .

Categories are the combo of a collection of objects and arrows between the objects. The arrows can compose as long as the head of one is on the same object as the tail of the other. On every object, there is always an identity arrow, which when composed will do nothing.

We need a little extra spice to turn categories into monoidal categories. One way of thinking about it is that monoidal categories have ordinary category composition and some kind of horizontal composition, putting things side to side. Ordinary composition is often doing something kind of sequentially, applying a sequence of functions, or a sequence of matrices. The horizontal composition is often something parallel feeling, somehow applying the two arrows separately to separate pieces of the system.

Why are they called Monoidal?

There is a funny game category people play where they want to lift ideas from other fields and replace the bits and pieces in such a way that the entire thing is defined in terms of categorical terminology. This is one such example.

A monoid is a binary operation that is associative and has an identity.

Sometimes people are more familiar with the concept of a group. If not, ignore the next sentence. Monoids are like groups without requiring an inverse.

Numbers are separately monoids under addition, multiplication, and minimization (and more), all of which are associative operations with identity (0, 1, and infinity respectively).

Exponentiation is a binary operation that is not a monoid, as it isn’t associative.

The Monoid typeclass in Haskell demonstrates this http://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Monoid.html

A common example of a monoid is list, where mempty is the empty list and mappend appends the lists.

There are different set-like intuitions for categories. One is that the objects in the category are big opaque sets. This is the case for Hask, Rel and Vect.

A different intuition is that the category itself is like a set, and the objects are the elements of that set. There just so happens to be some extra structure knocking around in there: the morphisms. This is more often the feel for the examples of preorders or graphs. The word “monoidal” means that we have a binary operation on the objects. But in the category theory aesthetic, you also need that binary operation to “play nice” with the morphisms that are hanging around too.

Functors are the first thing that has something like this, and other properties come along for the ride. A Functor is a map that takes objects to objects and arrows to arrows in a nice way. A binary functor takes two objects to an object, and two arrows to one arrow, in a way that plays nice (commutes) with arrow composition.

String diagrams

String diagrams are a graphical notation for monoidal categories. Again, I discussed this more here.

Morphisms are denoted by boxes. Regular composition is shown by plugging arrows together vertically. Monoidal product is denoted by putting the arrows side to side.

When I was even trying to describe what a monoidal category was, I was already using language evocative of string diagrams.

You can see string diagrams in the documentation for the Arrow library. Many diagrams that people use in various fields can be formalized as the string diagrams for some monoidal category. This is a big chunk of Applied Category Theory.

This is the connection to quantum circuits, which are after all a graphical notation for very Kroneckery linear operations.

example circuit

There is an annoying amount of stupid repetitive bookkeeping with the associative structure of Kron. This can hopefully be largely avoided with coerce, but I’m not sure. I was having trouble with roles when doing it generically.

Bits and Bobbles

  • Woof. This post was more draining to write than I expected. I think there is still a lot left to say. Sorry about the editing everyone! Bits and pieces of this post are scattered in this repo
  • How would you go about this in other languages? C, Rust, OCaml, C++, Agda
  • The discussion of Vect = * -> * is useful for discussion of 2-Vect, coming up next. What if we make vectors of Vect? Wacky shit.
  • Metrics and Duals vectors. type Dual f a = f a -> a. type Dual1 f a = forall k. Additive k => Kron f k a -> k a
  • Adjunction diagrams have cups and caps. Since we have been using representable functors, they actually have a right adjunction that is tupling with the vector space index type. This gives us something that almost feels like a metric but a weirdly constrained metric.
  • LinOp1 form is yoneda? CPS? Universally quantified k is evocative of forall c. (a -> c) -> (b -> c)


Appendix

Representable/Naperian Functors

Containers that are basically big product types are also known as representable, Naperian, or logarithmic. Representable places emphasis on the isomorphism between such a container type f and the function type (->) i, which by the algebra of types is a^i (i copies of a). They are called Naperian/logarithmic because there is a logarithm-like relationship between the index type and the container type. If you take the Product f g, the result is indexed by the sum Either a b, while Compose f g is indexed by the product (a,b); in exponential notation, f r ~ r^a. The function type a -> b is written as an exponential b^a because, for finite enumerable types a and b, that is the number of possible tabulations. The Sum of two representable functors is no longer representable, just as logarithms of sums Log(f + g) have no good identities associated with them.

See Gibbons’ article. There is a good argument to be made that representable functors are a good match for vectors/well typed tensor programming.

But note that there is a reasonable interpretation for container types with sum types in them. These can be thought of as subspaces, different bases, or as choices of sparsity patterns (for example, two bases at 45 degrees to each other). When you define addition, you’ll need to say how these subspaces reconcile with each other.

Monoidal Products on Hask

Hask is a name for the category that has objects as Haskell types and morphisms as Haskell functions.

Note that it’s a curious mixing of type/value layers of Haskell. The objects are types whereas the function morphisms are Haskell values. Composition is given by (.) and the identity morphisms are given by id.

For Haskell, you can compose functions, but you can also smash functions together side by side. These combinators are held in Control.Arrow.

You can smash together types with tuple (,) or with Either. Both of these are binary operators on types. The corresponding mappings on morphisms are given by (***) and (+++) from Control.Arrow.

These are binary operators on morphisms that play nice with the composition structure of Haskell.

Monoidal Combinators of Functors

A monoidal category also has unit objects. For Compose, the unit is given by the Identity functor.

There is also a sense of associativity. It is just newtype rearrangement, so it can also be achieved with a coerce (although not polymorphically?).
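A sketch of the unitors and the associator for Compose, which are indeed just newtype shuffling:

import Data.Functor.Compose
import Data.Functor.Identity

assoc :: Functor f => Compose (Compose f g) h a -> Compose f (Compose g h) a
assoc (Compose (Compose fgh)) = Compose (fmap Compose fgh)

lunit :: Compose Identity f a -> f a
lunit (Compose (Identity fa)) = fa

runit :: Functor f => Compose f Identity a -> f a
runit (Compose fia) = fmap runIdentity fia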

Similarly, we can define a monoidal category structure using Product or Sum instead of Compose.

These are all actually just newtype rearrangement, so they should all just be instances of coerce, but I couldn’t get the roles to go through generically?

Concolic Weakest Precondition is Kind of Like a Lens

That’s a mouthful.

Lenses are described as functional getters and setters. The simple lens type (in the explicit function-pair form I’ll use here) is type Lens' s a = s -> (a, a -> s), which packages together the getter s -> a and the setter a -> s.

This type does not constrain lenses to obey the usual laws of getters and setters. So we can use/abuse lens structures for nontrivial computations that have forward and backwards passes that share information. Jules Hedges in particular seems to be a proponent of this idea.

I’ve described before how to encode reverse mode automatic differentiation in this style. I have suspicions that you can make iterative LQR and Gauss-Seidel iteration have this flavor too, but I’m not super sure. My attempts ended somewhat unsatisfactorily a while back but I think it’s not hopeless. The trouble was that you usually want the whole vector back, not just its ends.

I’ve got another example in imperative program analysis that kind of makes sense and might be useful though. Toy repo here: https://github.com/philzook58/wp-lens

In program analysis it sometimes helps to run a program both concretely and symbolically. Concolic = CONCrete / symbOLIC. Symbolic stuff can slowly find hard things and concrete execution just sprays super fast and can find the dumb things really quick.  

We can use a lens structure to organize a DSL for describing a simple imperative language.

The forward pass is for the concrete execution. The backward pass is for transforming the post condition to a pre condition in a weakest precondition analysis. Weakest precondition semantics is a way of specifying what is occurring in an imperative language. It tells how each statement transforms post conditions (predicates about the state after the execution) into pre conditions (predicates about before the execution). The concrete execution helps unroll loops and avoid branching if-then-else behavior that would make the symbolic stuff harder to process. I’ve been flipping through Dijkstra’s book on this. Interesting stuff, interesting man.
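A minimal sketch of the shape I mean, with all names my own (the real repo linked above differs):

type Pred s = s -> Bool

-- forward: run the statement concretely; backward: weakest precondition
type Stmt s t = s -> (t, Pred t -> Pred s)

seq' :: Stmt s t -> Stmt t u -> Stmt s u
seq' f g s = let (t, wpf) = f s
                 (u, wpg) = g t
             in (u, wpf . wpg)  -- wp(f; g, q) = wp(f, wp(g, q))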

I often think of a state machine as a function taking s -> s. However, this is kind of restrictive. It is possible to have heterogeneous transformations s -> s’. Why not? I think I am often thinking about finite state machines, which we really don’t intend to have a changing state size. Perhaps we allocated new memory or something or brought something into or out of scope. We could model this by assuming the memory was always there, but it seems wasteful and perhaps confusing. We would need to know a priori everything we will need, which seems like it might break compositionality.

We could model our language making some data type like
data Imp = Skip | Print String | Assign String Expr | Seq Imp Imp | ...
and then build an interpreter for Imp.

But we can also cut out the middle man and directly define our language using combinators.

To me this has some flavor of a finally tagless style.


Likewise for expressions. Expressions evaluate to something in the context of the state (they can look up variables), so let’s just use something like type Expr s a = s -> a.

And, confusingly (sorry), I think it makes sense to use Lens in their original getter/setter intent for variables. So Lens structure is playing double duty.

type Var s a = Lens' s a

With that said, here we go.
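For instance, assignment might look like this (a sketch reusing the Var, Expr, Pred, and Stmt types above; set comes from the lens package):

import Control.Lens (set)

assign :: Var s a -> Expr s a -> Stmt s s
assign x e s = (set x (e s) s, wp)
  where
    -- wp(x := e, q) is q with e substituted for x
    wp q = \st -> q (set x (e st) st)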

Weakest precondition can be done similarly; instead we start from the end and work backwards.

Predicates are roughly sets. A simple type for sets is the indicator function type Pred s = s -> Bool.

Now, this doesn’t have much deductive power, but I think it demonstrates the principles simply. We could replace Pred with perhaps an SMT solver expression, or some data type for predicates, for which we’ll need to implement things like substitution. Let’s not today.

A function a -> b is equivalent to forall c. (b -> c) -> (a -> c). This is some kind of CPS / Yoneda transformation thing. Mapping a state transformer s -> s' to a predicate transformer (s' -> Bool) -> (s -> Bool) is somewhat evocative of that. I’m not being very precise here at all.

Without further ado, here’s how I think a weakest precondition looks roughly.

Finally here is a combination of the two above that uses the branching structure of the concrete execution to aid construction of the precondition. Although I haven’t expanded it out, we are using the full s t a b parametrization of lens in the sense that states go forward and predicates come back.
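One way the branching piece might look (my own sketch, reusing the types above): the concrete run picks a branch, and the backward pass conjoins the path condition actually taken, giving a precondition valid along that path.

ite :: Expr s Bool -> Stmt s s -> Stmt s s -> Stmt s s
ite c thn els s
  | c s       = let (s', wp) = thn s in (s', \q st -> c st && wp q st)
  | otherwise = let (s', wp) = els s in (s', \q st -> not (c st) && wp q st)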

Neat. Useful? Me dunno.

Linear Algebra of Types

It gives my brain a pleasant thrum to learn new mathematics which mimics the algebra I learned in middle school. Basically this means that the new system has operations with properties that match those of regular numbers as much as possible. Two pretty important operations are addition and multiplication with the properties of distributivity and associativity. Roughly this corresponds to the mathematical notion of a Semiring.

Some examples of semirings include 

  • Regular multiplication and addition
  • And-Or
  • Min-plus 
  • Matrices.
  • Types

I have written before about how types also form a semiring, using Either for plus and (,) for times. These constructions don’t obey distributivity or associativity “on the nose”, but instead are isomorphic to the rearranged type, which when you squint is pretty similar to equality.

Matrices are grids of numbers which multiply by “row times column”. You can form matrices out of other semirings besides just numbers. One somewhat trivial but interesting example is block matrices, where the elements of the matrix itself are also matrices. Another interesting example is that of relations, which can be thought of as matrices of boolean values. Matrix multiplication using the And-Or semiring on the elements corresponds to relational composition.

What if we put our type peanut butter in our matrix chocolate and consider matrices of types, using the Either/(,) semiring?

The simplest implementation to show how this could go can be made using the naive list based implementation of vectors and matrices. We can directly lift this representation to the typelevel and the appropriate value-level functions to type families.
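A fragment to give the flavor (a sketch, not the full matrix algebra): the dot product of two type-level vectors, with (,) as times and Either as plus.

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}
import Data.Kind (Type)

type family Dot (v :: [Type]) (w :: [Type]) :: Type where
  Dot '[x]      '[y]      = (x, y)
  Dot (x ': xs) (y ': ys) = Either (x, y) (Dot xs ys)
-- matrix multiplication then lifts this row-times-column over lists of lists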

This was just for demonstration purposes. It is not my favorite representation of vectors. You can lift a large fraction of possible ways to encode vector spaces at the value level up to the type level, such as the linear package, or using dual vectors type V2 a = a -> a -> a. Perhaps more on that another day.

What is the point?

Ok. That’s kind of neat, but why do it? Well, one way to seek an answer to that question is to ask “what are matrices useful for anyway?”

One thing they can do is describe transition systems. You can write down a matrix whose entry a_{ij} describes something about the transition from state i to state j. For example the entry could be:

  • The cost of getting from i to j (min-plus gives shortest path),
  • The count of ways to get from i to j (combinatorics of paths)
  • The connectivity of the system from i to j using boolean values and the and-or semiring
  • The probability of transition from i to j
  • The quantum amplitude of going from i to j if we’re feeling saucy.

If we form a matrix describing a single time step, then multiplying this matrix by itself gives 2 time steps and so on.

Lifting this notion to types, we can build a type exactly representing all the possible paths from state i to j.

Concretely, consider the following humorously bleak transition system: You are going between home and work. Every 1 hour period you can make a choice to do a home activity, commute, or work. There are different options of activities at each.


This is described by the following transition diagram

Stapler.

The transitions are described by a matrix of types.

What is the data type that describes all possible 4-hour days? You’ll find the appropriate data types in the entries of the fourth power of the transition matrix.

Now, time to come clean. I don’t think this is necessarily the best way to go about this problem. There are alternative ways of representing it.

Here are two data types that describe an indefinite number of transition steps.
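A guess at their shape: a pair of mutually recursive types, one per state, with hypothetical activity types standing in for the real ones.

data Home = Sleep | Eat
data Work = Code | Meeting
data Commute = Drive | Bus

data AtHome = HomeAct Home AtHome | LeaveHome Commute AtWork | DayEndsAtHome
data AtWork = WorkAct Work AtWork | LeaveWork Commute AtHome | DayEndsAtWork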

Another style would hold the current state as a type parameter in the type using a GADT.
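Hedging on details, perhaps something like (reusing the activity types above):

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
data Loc = HomeLoc | WorkLoc

data Path (l :: Loc) where
  Start    :: Path 'HomeLoc
  DoHome   :: Home    -> Path 'HomeLoc -> Path 'HomeLoc
  GoToWork :: Commute -> Path 'HomeLoc -> Path 'WorkLoc
  DoWork   :: Work    -> Path 'WorkLoc -> Path 'WorkLoc
  GoHome   :: Commute -> Path 'WorkLoc -> Path 'HomeLoc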

We could construct types that are to the above types as Vec n is to [] by including an explicit step size parameter.

Still, food for thought.

Further Thoughts

The reason I was even thinking about this is because we can lift the above construction to perform a linear algebra of vector spaces. And I mean the spaces, not the vectors themselves. This is a confusing point.

Vector spaces also have two natural operations on them that act like addition and multiplication, the direct sum and the Kronecker product. These operations do form a semiring, although again not on the nose.

This is connected to the above algebra of types picture by considering the index types of these vector spaces. The simplest way to denote this in Haskell is using the free vector space construction as shown in this Dan Piponi post. The Kronecker product makes tuples of the indices and the direct sum has an index that is the Either of the original index types.

This is by far not the only way to go about it. We can also consider using the Compose-Product semiring on functors (Compose is Kron, Product is DSum) to get a more index-free kind of feel and work with dense vectors.

Going down this road (plus a couple layers of mathematical sophistication) leads to a set of concepts known as 2Vect. Dan Roberts and James Vicary produced a Mathematica package for 2Vect which is basically incomprehensible to me. It seems to me that typed functional programming is a more appropriate venue for this kind of pursuit, given how evocative / well modeled by category theory it can be. These mathematical ideas are applicable to describing anyonic vector spaces. See my previous post below. It is not a coincidence that the Path data type above is so similar to the FibTree data type. The root type variable takes the place of the work/home state, and the tuple structure takes the place of a Vec-like size parameter n.

More on this to come, probably, as I figure out how to explain it cleanly.

Edit: WordPress, your weird formatting is killing me.

Edit: Hoo Boy. This is why we write blog posts. Some relevant material was pointed out to me that I was not aware of. Thanks @DrEigenbastard.

https://link.springer.com/chapter/10.1007/978-3-319-19797-5_6

http://blog.sigfpe.com/2010/08/constraining-types-with-regular.html

http://blog.sigfpe.com/2010/08/divided-differences-and-tomography-of.html

Relational Algebra with Fancy Types

Last time, I tried to give a primer of relations and relational algebra using the Haskell type type Rel a b = [(a,b)]. In this post we’re going to look at these ideas from a slightly different angle. Instead of encoding relations using value level sets, we’ll encode relations in the type system. The Algebra of Programming Agda repo and the papers quoted therein are very relevant, so if you’re comfortable wading into those waters, give them a look. You can find my repo for fiddling here

At this point, depending on what you’ve seen before, you’re either thinking “Yeah, sure. That’s a thing.” or you’re thinking “How and why the hell would you do such a ridiculous thing.”

Most of this post will be about how, so let’s address why first:

  1. Examining relations in this style illuminates some constructions that appear around the Haskell ecosystem, particularly some peculiar fellows in the profunctor package.
  2. More syntactic approach to relations allows discussion of larger/infinite domains. The finite enumerations of the previous post is nice for simplicity, but it seems you can’t get far that way.
  3. Mostly because we can – It’s a fun game. Maybe a useful one? TBD.

With that out of the way, let’s go on to how.

Translating Functions to Relation GADTs

We will be using some Haskell extensions in this post, at the very least GADTs and DataKinds. For an introduction to GADTs and DataKinds, check out this blog post. DataKinds is an extension that reflects every data constructor of data types to a type constructor. Because there are values True and False, there are corresponding types created: 'True and 'False. GADTs is an extension of the type definition mechanism of standard Haskell. They allow you to declare refined types for the constructors of your data, and they infer those refined types when you pattern match out of the data as well, such that the whole process is kind of information preserving.

We will use the GADT extension to define relational datatypes with the kind a -> b -> *. That way there is a slot a for the “input” and b for the “output” of the relation. What goes in these type slots will be DataKind lifted types like 'True, not ordinary Haskell types like Int. This is a divergence from the uses of similar kinds you see in Category, Profunctor, or Arrow. We’re doing a more typelevel flavored thing than you’ll see in those libraries. What we’re doing is clearly a close brother of the singleton approach to dependently typed programming in Haskell.

Some examples are in order for what I mean. Here are two simple boolean functions, not and and, defined as ordinary Haskell functions, and their equivalent GADT relation data types.
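Hedging on the exact names, the pattern looks like:

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

not' :: Bool -> Bool
not' True  = False
not' False = True

-- one constructor per defining case, with the patterns lifted to types
data Not :: Bool -> Bool -> * where
  NotTF :: Not 'True 'False
  NotFT :: Not 'False 'True

-- multiple arguments get uncurried into a tuple
data And :: (Bool, Bool) -> Bool -> * where
  AndTT :: And '( 'True,  'True)  'True
  AndTF :: And '( 'True,  'False) 'False
  AndFT :: And '( 'False, 'True)  'False
  AndFF :: And '( 'False, 'False) 'False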

You can already start to see how mechanical the correspondence is between the ordinary function definition and our new fancy relation type. A function is often defined via cases. Each case corresponds to a new constructor of the relation and each pattern that occurs in that case is the pattern that appears in the GADT. Multiple arguments to the relations are encoded by uncurrying everything by default.

Any function calls that occur on the right hand side of a function definition become fields in the constructor of our relation. This includes recursive calls and external function calls. Here are some examples with a Peano style natural number data type.
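For example (a reconstruction; the PZ/PS constructors are the ones used below):

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data Nat = Z | S Nat

-- x + y = z as the relation Plus '(x, y) z; the recursive call
-- becomes a field of the PS constructor
data Plus :: (Nat, Nat) -> Nat -> * where
  PZ :: Plus '( 'Z, y) y
  PS :: Plus '(x, y) z -> Plus '( 'S x, y) ('S z)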

We can also define things that aren’t functions. Relations are a larger class of things than functions are, which is part of their utility. Here is a “less than equal” relation LTE.
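It might look like this (reusing Nat from above):

data LTE :: Nat -> Nat -> * where
  LTEZero :: LTE 'Z m
  LTESucc :: LTE n m -> LTE ('S n) ('S m)

Note that this is not a function: LTEZero relates 'Z to every m.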

You can show that elements are in a particular relation by finding a value of that relational type. Is ([4,7], 11) in the relation Plus? Yes, and I can show it with the value PS (PS (PS (PS PZ))) :: Plus (4,7) 11. This is very much the Curry-Howard correspondence. The type R a b corresponds to the proposition/question is (a,b) \in R.

The Fun Stuff : Relational Combinators

While you need to build some primitive relations using new data type definitions, others can be built using relational combinators. If you avoid defining too many primitive relations like the above and build them out of combinators, you expose a rich high level manipulation algebra. Otherwise you are stuck in the pattern matching dreck. We are traveling down the same road we did in the previous post, so look there for less confusing explanations of the relational underpinnings of these constructions, or better yet some of the references below.

Higher order relational operators take in type parameters of kind a -> b -> * and produce new types of a similar kind. The types appearing in these combinators are the AST of our relational algebra language.

The first two combinators of interest are the composition operator and the identity relation. An element (a,c) is in R \cdot Q if there exists a b such that (a,b) \in R and (b,c) \in Q. A fairly direct translation of this into a type (modulo naming) is:
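{-# LANGUAGE GADTs #-}
-- the middle type b is existentially hidden by the constructor,
-- capturing the "there exists a b"
data RCompose r q a c where
  RCompose :: r b c -> q a b -> RCompose r q a c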

The type of the composition is the same as that of Profunctor composition found in the profunctors package.

Alongside a composition operator, it is a knee-jerk reaction to look for an identity relation, and we do have one.

This is also a familiar friend. The identity relation in this language is the equality type, data a :~: b where Refl :: a :~: a, found in Data.Type.Equality.

We can build an algebra for handling product and sum types by defining the appropriate relational combinators. These are very similar to the combinators in the Control.Arrow package.

The converse of relations is a very interesting operation and is the point where relations really differ from functions. Inverting a function is tough. Taking the converse of a relation always works. This data type has no analog in profunctor to my knowledge and probably shouldn’t.
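Again modulo naming, it is just a flip at the type level:

data Converse r a b where
  Converse :: r b a -> Converse r a b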

Relations do not have a notion of currying. The closest thing they have is

Lattice Operators

For my purposes, lattices are descriptions of sets that trade away descriptive power for efficiency. So most operations you’d perform on sets have an analog in the lattice structure, but it isn’t a perfect matching and you’re forced into approximation. It is nice to have the way you perform these approximation be principled, so that you can know at the end of your analysis whether you’ve actually really shown anything or not about the actual sets in question.

? No. No… Yes? Oh. OH! IT IS!

The top relation holds all values. This is represented by making no conditions on the type parameters. They are completely phantom.

Bottom is a relation with no inhabitants.

The meet is basically the intersection of the relations, the join is basically the union.
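Sketches of all four lattice pieces (names my own):

data Top a b = Top            -- every pair is related; a and b are phantom
data Bottom a b               -- no constructors, no inhabitants

data Meet r q a b = Meet (r a b) (q a b)    -- in r and in q
data Join r q a b = L (r a b) | R (q a b)   -- in r or in q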

A Lattice has an order on it. This order is given by relational inclusion. This is the same as the :-> combinator found in the profunctors package, type p :-> q = forall a b. p a b -> q a b.

Relational equality can be written as back and forth inclusion, a natural isomorphism between the relations. There is also an interesting indirect form.

Relational Division

If we consider the equation (r <<< p) :-> q with p and q given, in what sense is there a solution for r? By analogy, this looks rather like r*p = q, so we’re asking a kind of division question. Well, unfortunately, this equation may not necessarily have a solution (neither do linear algebraic equations for that matter), but we can ask for the best under-approximation instead. This is the operation of relational division. It also appears in the profunctors package as the right Kan extension. You’ll also find the universal property of right division under the names curryRan and uncurryRan in that module.

One formulation of Galois connections can be found in the adjunctions file. Galois Connections are very slick, but I’m running out of steam, so let’s leave that one for another day.

Properties and Proofs

We can prove many properties about these relational operations. Here is a random smattering that we showed using QuickCheck last time.

Odds and Ends

  • Recursion Schemes – Recursion schemes are a methodology to talk about recursion in a point free style and where the rubber meets the road in the algebra of programming. Here is an excellent series of articles about them. Here is a sample of how I think they go:
  • Higher Order Relations?
  • Examples of use. Check out the examples folder in the AoP Agda repo. These are probably translatable into Haskell.
  • Interfacing with Singletons. Singletonized functions are a specialized case of relations. Something like?
  • A comment to help avoid confusion. What we’ve done here feels confusingly similar to profunctors, but it is in fact distinct, I think. Profunctors are described as a categorical generalization of relations, but to be honest, I kind of don’t get it. Despite many of our constructions appearing in the profunctors package, the profunctor typeclass itself appears to not play a role in our formulation. There just isn’t a good way to dimap under our relations as written, unless you construct free profunctors. Converse at the least is a wrench in the works.
  • Star and graphs. Transition relations are a powerful methodology. A transition relation is in some respects the analog of a square matrix. We can iteratively compose it with itself.


Notes on Getting Started in OCaml

I know a bit of Haskell. It’s the functional programming language I have the strongest background in. OCaml is very similar to Haskell, which is why I haven’t felt a strong need to learn it. Having had to delve into it out of necessity because of work, I think that was an oversight. The biggest thing for me is being able to more easily read a new set of literature and see familiar things in a new light, which is very refreshing.

Getting OCaml

https://ocaml.org/docs/install.html

opam is the package manager. Follow the instructions to install it and get your environment variables set up. It’ll tell you some extra commands you have to run to do so. You use it to install packages via opam install packagename. You can also use it to switch between different OCaml compiler versions via a command like opam switch 4.08.1.

Dune is a build tool. You can place a small config file called dune in your folder and it can figure out how to appropriately call the compiler. Dune is in flux, so check out the documentation. What I write here may be wrong.

https://dune.readthedocs.io/en/stable/

Here’s an example execution. Note that even though the file is called main.ml in this example, you call build with main.exe. And exec requires the ./ for some reason. Weird.

Here’s a dune file with some junk in it. You make executables with executable blocks. You include a list of the files (without the .ml suffix) required by the executable in the modules line. You list libraries needed in the libraries line.

You want to also install merlin: opam install merlin. Merlin is a very slick IDE tool with autocomplete and type information. dune will set up a .merlin file for you.

The ReasonML plugin is good for vscode; it is the one to use for OCaml also. Search for it on the marketplace. ReasonML is a syntactic facelift of OCaml intended for the web, btw. I don’t particularly recommend it to start. There are also emacs and vim modes if that is what you’re into.

The enhanced repl is called utop. Install it with opam install utop. Basic repl usage: Every line has to end with ;;. That’s how you get stuff to be run. Commands start with #. For example #quit;; is how you end the session. #use "myfile.ml";; will load a file you’ve made. Sometimes when you start you need to run #use "topfind";; which loads a package finder. You can load libraries via the require command like #require "Core";;

#help;; for more.

A Quick Look at the Language

With any new language I like to check out Learn X from Y if one is available.

https://learnxinyminutes.com/docs/ocaml/

Here are some shortish cheat sheets with a quick overview of syntax

https://github.com/alhassy/OCamlCheatSheet

https://ocaml.org/docs/cheat_sheets.html

More In Depth Looks

This is a phenomenal online book built for a Cornell course: https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/

Real World OCaml is also quite good but denser. Very useful as a reference for usage of Core and other important libraries.

The reference manual is also surprisingly readable https://caml.inria.fr/pub/docs/manual-ocaml/ . The first 100 or so pages are a straightforward introduction to the language.

https://github.com/janestreet/learn-ocaml-workshop Pretty basic workshop. Could be useful getting you up and running though.

Useful libraries

Core – a standard library replacement, becoming increasingly common. https://github.com/janestreet/core It is quite a bit more confusing for a newcomer than the standard library, IMO. And the way they have formatted their docs is awful.

Owl – a numerical library. Similar to Numpy in many ways. https://ocaml.xyz/ These tutorials are clutch https://github.com/owlbarn/owl/wiki

Bap – Binary Analysis Platform. Neato stuff

Lwt – https://github.com/ocsigen/lwt asynchronous programming

Graphics – gives you some toy and not toy stuff. Lets you draw lines and circles and get keyboard events in a simple way.

OCamlGraph – a graph library. https://github.com/backtracking/ocamlgraph

Jupyter Notebook – Kind of neat. I’ve got one working on binder. Check it out here. https://github.com/philzook58/ocaml_binder

Menhir and ocamllex – lexer and parser generators. Check out the Cornell OCaml book for more.

fmt – for pretty printing

Interesting Other Stuff (A Descent into Lazy Writing)

https://discuss.ocaml.org/ – The discourse. Friendly people. They don’t bite. Ask questions.

https://github.com/ocaml-community/awesome-ocaml Awesome-Ocaml list. A huge dump of interesting libraries and resources.

An excerpt of cool stuff:

  • Coq – Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.
  • Why3 – Why3 is a platform for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions.
  • Alt-Ergo – Alt-Ergo is an open-source SMT solver dedicated to the proof of mathematical formulas generated in the context of program verification.

http://ocamlverse.github.io/ – A pretty good set of beginner advice and articles. Seems like I have a lot of accidental overlap. Would’ve been nice to find it earlier.

https://www.cl.cam.ac.uk/teaching/1617/L28/materials.html – advanced functional programming course. Interesting material.

TAPL – https://www.cis.upenn.edu/~bcpierce/tapl/ Has implementations in OCaml of different lambda calculi. Good book too.

It is not uncommon to use a preprocessor in OCaml for some useful features. There is monad syntax, list comprehensions, deriving and more available as ppx extensions.

https://whitequark.org/blog/2014/04/16/a-guide-to-extension-points-in-ocaml/ – a guide to ppx extension points. camlp4 and camlp5 are both preprocessors too.

https://tarides.com/blog/2019-05-09-an-introduction-to-ocaml-ppx-ecosystem.html

https://blog.janestreet.com/archive/ The Jane Street blog. They are very prominent users of OCaml.

https://opensource.janestreet.com/standards/ Jane Street style guide

Oleg Kiselyov half works in Haskell, half in OCaml, so that’s cool.

https://arxiv.org/pdf/1905.06544.pdf Oleg on effects without monads.

Oleg’s MetaOCaml book. MetaOCaml is super cool. http://okmij.org/ftp/ML/MetaOCaml.html And the switch functionality of opam makes it easy to install!

Oleg tagless final http://okmij.org/ftp/tagless-final/index.html

https://github.com/ocamllabs/higher

Cohttp, Lwt, and Async


https://mirage.io/ MirageOS. What the hell is this?

https://github.com/ocamllabs/fomega

https://github.com/janestreet/hardcaml

ppx_let – monadic let bindings

Some of the awesome deriving capabilities are given by ppx_jane. Sexp seems to be a really good one; it’s where generic printing is.

dune build lambda.bc.js will make a JavaScript file. That’s pretty cool. It uses js_of_ocaml. The js_of_ocaml docs are intimidating. https://ocsigen.org/js_of_ocaml/dev/manual/overview
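For reference, a sketch of the dune stanza that I believe enables this (the name lambda is a placeholder; check the current dune docs since this churns):

```
(executable
 (name lambda)
 (modes byte js)
 (preprocess (pps js_of_ocaml-ppx))
 (libraries js_of_ocaml))
```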

http://ocsigen.org/js_of_ocaml/dev/api/

Note you need to install the compiler (js_of_ocaml-compiler), the library (js_of_ocaml), and also the ppx (js_of_ocaml-ppx).

Go digging through your _build folder and you can find a completely mangled, incomprehensible file jsboy.bc.js. But you can indeed import it into a web page with a script tag and use it.

dune build --profile release lambda.bc.js – putting it in the release profile makes an insane difference in output size: 10 MB -> 100 KB.

There is also BuckleScript for compiling to JavaScript. It outputs readable JavaScript. A fork of an older compiler?

Check out J.T. Paach’s snippets. Helpful

Dune:

https://gist.github.com/jtpaasch/ce364f62e283d654f8316922ceeb96db

Z3 OCaml bindings:

https://gist.github.com/jtpaasch/3a93a9e1bcf9cae86e9e7f7d3484734b

OCaml’s new monadic let syntax:

https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html
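A tiny sketch of the 4.08 binding-operator syntax, defining let* for option (my own toy example, not taken from the post above):

```ocaml
(* OCaml 4.08+: binding operators desugar lets into binds *)
let ( let* ) = Option.bind

let add_options x y =
  let* a = x in
  let* b = y in
  Some (a + b)

(* add_options (Some 1) (Some 2) evaluates to Some 3 *)
```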

Run #require "ppx_jane";; in utop in order to import a thing using ppx.
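For example, something along these lines should then work (a hypothetical session; the exact printed type may differ):

```ocaml
#require "ppx_jane";;
type point = { x : int; y : int } [@@deriving sexp];;
sexp_of_point { x = 1; y = 2 };;
(* roughly: - : Sexp.t = ((x 1) (y 2)) *)
```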

An argument could be made for working from a Docker container.

Weird DSLs that generate parsers and lexers. Also oddly stateful.

Took a bit of fiddling to figure out how to get dune to do it.

Otherwise a pretty straightforward encoding.

Experience report: using F-omega as a teaching language.

Because they aren’t hidden behind a monadic interface (for better or for worse), OCaml has a much more imperative feel. You could program in a subset of the language and have it not feel all that different from Java or Python or something. There are for loops and while loops, objects and classes, and mutable variables if you so choose. I feel like the community is trying to shy away from these features for most purposes, however, sticking to the functional core.
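A little sketch of that imperative subset (nothing assumed beyond the stdlib):

```ocaml
(* a ref cell plus a for loop: perfectly legal, slightly frowned-upon OCaml *)
let sum_to n =
  let total = ref 0 in
  for i = 1 to n do
    total := !total + i
  done;
  !total

(* sum_to 10 evaluates to 55 *)
```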

However, it does let you write loops, and it has an interesting community and different areas of strength.

Maybe more importantly, it lets you access a new set of literature and books. Slightly different but familiar ideas.

I think Core is bewildering for a newcomer.

Rando Trash Poor Style OCaml Snippets

lex_lisp.mll – simplistic usage of ocamllex and menhir

parse_lisp.mly
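As a stand-in for the actual files, a hypothetical minimal ocamllex rule in the same spirit (self-contained: the token type lives in the header instead of coming from menhir):

```
{ type token = LPAREN | RPAREN | ATOM of string | EOF }

rule token = parse
  | [' ' '\t' '\n']  { token lexbuf }           (* skip whitespace *)
  | '('              { LPAREN }
  | ')'              { RPAREN }
  | [^ ' ' '\t' '\n' '(' ')']+ as s { ATOM s }  (* bare atoms *)
  | eof              { EOF }
```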

Doinking around with some graphics
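In the spirit of that doinking, a minimal hypothetical Graphics sketch (not the original code):

```ocaml
(* open a window, draw a circle, wait for a keypress *)
let () =
  Graphics.open_graph " 400x400";
  Graphics.draw_circle 200 200 50;
  Graphics.moveto 10 10;
  Graphics.draw_string "press any key";
  ignore (Graphics.read_key ())
```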

A couple Advent of code 2018

A little Owl usage
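And a hypothetical scrap of Owl usage (not the original snippet): build a random matrix and multiply it by itself.

```ocaml
let () =
  let open Owl in
  let x = Mat.uniform 3 3 in   (* 3x3 matrix of uniform randoms *)
  Mat.print Mat.(x *@ x)       (* *@ is matrix multiplication *)
```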

Doing Basic Ass Shit in Haskell

Haskell has lots of fancy weird corners, but you want to get rippin’ and runnin’

The Haskell phrase book is a new useful thingy. Nice and terse.

https://typeclasses.com/phrasebook

This one is also quite good https://lotz84.github.io/haskellbyexample/

I also like what FP Complete is up to. Solid set of useful stuff, although with a bit more emphasis on their own solutions than is common. https://haskell.fpcomplete.com/learn

I was fiddling with making some examples for my friends a while ago, but I think the above do a similar or better job.

https://github.com/philzook58/basic-ass-shit

Highlights include:

Makin a json request

Showing a plot of a sine function

Doing a least squares fit of some randomly created data

Not too complicated stuff to get you excited about Haskell:

I love Power Serious. https://www.cs.dartmouth.edu/~doug/powser.html Infinite power series using the power of laziness in something like 20 lines

https://blog.plover.com/prog/haskell/monad-search.html Using the list monad to solve SEND+MORE=MONEY puzzle.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.8903&rep=rep1&type=pdf Jerzy Karczmarczuk doing automatic differentiation in Haskell before it was cool. Check out Conal Elliott’s stuff after.

Very simple symbolic differentiation example. When I saw this in SICP for the first time, I crapped my pants.

https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf Why functional Programming Matters by John Hughes

https://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf John Backus emphasizing escaping the imperative mindset in his 1978 Turing Award speech. A call to arms of functional programming

https://www.cs.tufts.edu/~nr/cs257/archive/richard-bird/sudoku.pdf Richard Bird defining sudoku solutions and then using equational reasoning to build a more efficient solver

https://wiki.haskell.org/Research_papers/Functional_pearls – Functional Pearls

Here’s how I find useful Haskell packages:

I google. I go to Hackage (if I’m in a subpage, click on “contents” in the upper right hand corner). Click on a category that seems reasonable (like “web” or something) and then sort by downloads (DL). This at least tells me what is popular-ish. I look for tutorials if I can find them. Sometimes there is a very useful getting-started snippet in the main module file itself. Some packages are overwhelming, others aren’t.

The Real World Haskell book is kind of intimidating although a lovely resource.

The wiki has a pretty rockin set of tutorials. Has some kind soul been improving it?

https://wiki.haskell.org/Category:Tutorials

I forgot that Learn You a Haskell has a chapter on basic IO:

http://learnyouahaskell.com/input-and-output

Learn more

When you’re ready to sit down with Haskell more, the best intro is currently the Haskell Book

You may also be interested in https://www.edx.org/course/introduction-functional-programming-delftx-fp101x-0 this MOOC

https://github.com/data61/fp-course or this Data61 course

Then there is a fun infinitude of things to learn after that.

______

More ideas for simple examples?

This post is intentionally terse.

IO is total infective poison.

standard output io

mutation & loops. You probably don’t want these. They are not idiomatic Haskell, and you may be losing out on some of the best lessons Haskell has to offer.

file IO

web requests

http://www.serpentine.com/wreq/tutorial.html

web serving – scotty

image processing

basic data structures

command line arguments

plotting

Parallelism and Concurrency

https://nokomprendo.frama.io/tuto_fonctionnel/posts/tuto_fonctionnel_25/2018-08-25-en-README.html

CAV 2019 Notes: Probably Nothin Interestin’ for You. A bit of noodling with Liquid Haskell

I went to the opening workshops of CAV 2019 in New York this year (on my own dime mind you!) after getting back from joining Ben on the long trail for a bit. The tutorials on Rosette and Liquid Haskell were really fun. Very interesting technology. Also got some good ramen and mochi pudding, so it’s all good. Big Gay Ice Cream was dece.

Day 1 workshops

Calin Belta http://sites.bu.edu/hyness/calin/ . Has a new book: control of temporal logic systems. Automata. Optimization. Partition space into an abstraction. Bisimulation. https://www.springer.com/gp/book/9783319507620

Control Lyapunov Function (CLF) – guarantees you are going where you want to go

Control Barrier Function – somehow keeps you out of regions you don’t want to go to.

Lyapunov function based trajectory optimization. You somehow have (Ames 2014) http://ames.gatech.edu/CLF_QP_ACC_final.pdf Is this it?

Differential flatness, input-output linearization.

Sadraddini worked there.

Temporal logic with

Rise of Temporal Logic

Linear Temporal Logic vs CTL

Fixpoint logic,

Büchi automata – visit an accepting state infinitely many times.

Equivalence to first-order logic.

Monadic logic: propositions only take 1 argument. Decidable. Löwenheim. Quantifier elimination. Bounded model property.

Languages: ForSpec, SVA, LDL, PSL, Sugar

Monadic second order logic (MSO).

method of tableau

Sadraddini

Polytopic regions. Can push forward the dynamics around a trajectory and the polytope that you lie in. RRT/LQR polytopic tree. Pick a random point. Run.

Evaluating branching heuristics.

Branch and prune ICP. dReal.

Branch and prune: take a set, propagate constraints until none fire.

branching heuristics on variables

Largest first, smearing, lookahead. Try different options and see which prunes the most. Not clear that helped that much.

QF_NRA. dReal benchmarks: Flyspeck, control, robotics, SMT-LIB.

http://capd.sourceforge.net/capdDynSys/docs/html/index.html

Rosette

Commands for solver aided programming:

verify – find an input on which the assertions fail. exists x. not safe(x).

debug – minimal unsat core if you give an unsat query. x = 42 ∧ safe(x, P(x)); we know this is unsat because of the previous step.

solve – exists v s.t. safe(v).

synthesis – exists e. forall x. safe(x, P(x)).

define-symbolic, assert, verify, debug, solve, synthesize.

Rosette. Alloy is also connected to her work. Z method. Is it related to relational logic?

https://homes.cs.washington.edu/~emina/media/cav19-tutorial/index.html

http://emina.github.io/rosette/

Building solver-aided programming tools.

Symbolic compiler: reduce all possible paths of the program to a constraint.

KLEE – symbolic execution engine for LLVM.

Implement an interpreter in Rosette.

Symbolic virtual machine

Layering of languages. DSL. Library (shallow) embedding. Interpreter (deep) embedding.

Deep embedding for synthesis.

Can I extract Coq to Rosette?

how does it work?

reverse and filter keeping only positive queries.

symbolic execution vs bounded model checking

Symbolic execution checks every possible branch of the program. Cost is exponential.

CBMC.

Type-driven state merging. Merge instances of primitive types (like BMC); merge value types structurally.