# Applicative Bidirectional Programming and Automatic Differentiation

I was referred to an interesting paper by a comment from /u/syrak.

http://www2.sf.ecei.tohoku.ac.jp/~kztk/papers/kztk_jfp_am_2018.pdf

Applicative bidirectional programming (PDF), by Kazutaka Matsuda and Meng Wang

In it, they use a couple of interesting tricks to make lens programming more palatable. Lenses often need to be programmed in a point-free style, which is rough, but by using some combinators, they are able to program lenses in a pointful style (with some pain points still left over). It is a really interesting, well-written paper. Lots ‘o’ Yoneda and laws. I’m not doing it justice. Check it out!

A while back I noted that reverse mode auto-differentiation has a type very similar to a lens and in fact you can build a working reverse mode automatic differentiation DSL out of lenses and lens-combinators. Many things about lenses, but not all, transfer over to automatic differentiation. The techniques of Matsuda and Wang do appear to transfer fairly well.
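
The connection can be made concrete with a tiny sketch (my own toy encoding, not the paper's code): a "lens" whose backward pass propagates differentials instead of updates is exactly a reverse-mode derivative, and lens composition is the chain rule.

```haskell
-- A lens from s to a: a forward value plus a backward pass.
-- For AD, the backward pass maps an output differential to an input differential.
data Lens s a = Lens { runLens :: s -> (a, a -> s) }

-- Squaring as an AD lens: forward value x*x, backward pass dy -> 2*x*dy.
square :: Num a => Lens a a
square = Lens (\x -> (x * x, \dy -> 2 * x * dy))

-- Lens composition chains the forward passes and reverses the backward
-- passes, which is precisely the chain rule.
comp :: Lens b c -> Lens a b -> Lens a c
comp (Lens g) (Lens f) = Lens $ \x ->
  let (y, f') = f x
      (z, g') = g y
  in (z, f' . g')

-- Gradient of a scalar-to-scalar lens: feed 1 back through the tape.
grad :: Num a => Lens a a -> a -> a
grad (Lens f) x = snd (f x) 1
```

For example, `grad (comp square square) 2` differentiates x⁴ at 2, giving 32.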

This is interesting to me for another reason. Their `lift2` and `unlift2` functions remind me very much of my recent approach to compiling to categories. The `lift2` function is fanning a lens pair. This is basically what my FanOutput typeclass automated. `unlift2` is building the input for a function by supplying a tuple of projection lenses. This is what my BuildInput typeclass did. I think their style may extend to many cartesian monoidal categories, not just lenses.

One can use a function `b -> a` in many of the situations where one can use an `a`. You can do elegant things by writing a Num instance for `b -> a`, for example. This little fact seems to extend to other categories as well. By writing a Num instance for `Lens s a` when `a` is a Num, we can use reasonable-looking notation for arithmetic.
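
Here is a sketch of both instances (the names `idL` and `evalGrad` are mine, not from the paper or the post). The `Num s` constraint on the lens instance is what lets differentials flowing back to the shared source recombine by addition:

```haskell
data Lens s a = Lens { runLens :: s -> (a, a -> s) }

-- The classic pointwise lifting: `b -> a` is a Num whenever `a` is.
instance Num a => Num (b -> a) where
  f + g       = \x -> f x + g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger

-- The same idea for AD lenses over a common source s.
instance (Num s, Num a) => Num (Lens s a) where
  Lens f + Lens g = Lens $ \s ->
    let (a, f') = f s
        (b, g') = g s
    in (a + b, \d -> f' d + g' d)              -- adjustments recombine by (+)
  Lens f * Lens g = Lens $ \s ->
    let (a, f') = f s
        (b, g') = g s
    in (a * b, \d -> f' (d * b) + g' (d * a))  -- product rule
  negate (Lens f) = Lens $ \s -> let (a, f') = f s in (negate a, f' . negate)
  abs    = error "abs: not differentiable at 0"
  signum = error "signum: not differentiable at 0"
  fromInteger n = Lens $ \_ -> (fromInteger n, const 0)  -- constants: zero gradient

-- The identity lens plays the role of the variable x.
idL :: Lens s s
idL = Lens (\s -> (s, id))

-- Value and gradient of a scalar expression built from idL and arithmetic.
evalGrad :: Num a => Lens a a -> a -> (a, a)
evalGrad (Lens f) x = let (y, back) = f x in (y, back 1)
```

With this in scope, `evalGrad (idL * idL + 3) 5` evaluates x² + 3 at 5 and gives (28, 10).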

They spend some time discussing the necessity of a Poset typeclass. For actual lawful lenses, the `dup` implementation needs a way to recombine multiple adjustments to the same object. In the AD-lens case, `dup` takes care of this by adding the differentials together. This means that wherever they needed an Eq typeclass, we can use a Num typeclass instead. There may be use in building a wrapped type `data NZ a = Zero | NonZero a`, like their Tag type, to accelerate the many zero values that may be propagating through the system.
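
A minimal sketch of what that wrapper might look like (the Num instance is my guess at "accelerating the zeros"; note that `NonZero x + NonZero y` may still silently produce a wrapped zero, so the representation is not canonical):

```haskell
-- Absorb the ubiquitous zero differentials symbolically, so that adding
-- or multiplying by a known zero never touches the underlying numbers.
data NZ a = Zero | NonZero a deriving (Eq, Show)

instance Num a => Num (NZ a) where
  Zero      + y         = y
  x         + Zero      = x
  NonZero x + NonZero y = NonZero (x + y)  -- may wrap a literal 0
  Zero      * _         = Zero
  _         * Zero      = Zero
  NonZero x * NonZero y = NonZero (x * y)
  negate Zero           = Zero
  negate (NonZero x)    = NonZero (negate x)
  abs                   = onNZ abs
  signum                = onNZ signum
  fromInteger 0         = Zero
  fromInteger n         = NonZero (fromInteger n)

-- Local helper; a Functor instance would do the same job.
onNZ :: (a -> a) -> NZ a -> NZ a
onNZ _ Zero        = Zero
onNZ f (NonZero x) = NonZero (f x)
```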

Unfortunately, as is, the performance of this is abysmal. Maybe there is a way to fix it? Unlifting and lifting destroy a lot of sharing and often necessitate adding many redundant zeros. And why are you doing reverse-mode differentiation unless you care about performance? Forward mode is simpler to implement. In the intended use case of Matsuda and Wang, they are working with actual lawful lenses, which have far less computational content than AD-lenses; good lawful lenses should just be shuffling stuff around a little.

Maybe one can hope GHC is insanely intelligent and can optimize these zeros away. One point in favor of that is that our differentiation is completely pure (no mutation). Nevertheless, I suspect it will not, without help. Being careful and unlifting and lifting manually may also help. In principle, I think the lensy approach could be pretty fast (since all it does is pack together exactly what you need to differentiate into a data type), but how to make it fast while still being easily programmable?

It is also nice that it is pretty simple to implement. It is the simplest method I know of if you needed to port operable reverse-mode differentiation to a new library (Massiv?) or another functional language (Futhark?). And a smart compiler really does have a shot at finding optimizations and fusions.

While I was at it, unrelated to the paper above, I think I made a working generic auto-differentiable fold lens combinator. Pretty cool. `mapAccumL` is a hot tip.
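
Here is a hedged reconstruction of what such a combinator could look like (not the repo's exact code, and `foldLens`/`prodStep` are my names): `mapAccumL` threads the accumulator forward while collecting the pullback of each step, and the backward pass replays those pullbacks in reverse, chain-rule style.

```haskell
import Data.Traversable (mapAccumL)

data Lens s a = Lens { runLens :: s -> (a, a -> s) }

-- Differentiate a left fold. Each step is an AD lens (acc, b) -> acc;
-- the fold produces an AD lens (acc0, inputs) -> final acc.
foldLens :: Lens (a, b) a -> Lens (a, [b]) a
foldLens (Lens step) = Lens $ \(a0, bs) ->
  let -- Forward pass: thread the accumulator, record one pullback per step.
      (aN, pulls) = mapAccumL (\acc b -> step (acc, b)) a0 bs
      -- Backward pass: run the recorded pullbacks last-to-first (foldr
      -- applies the innermost, i.e. last, pullback before the earlier ones).
      back daN = foldr (\pull (da, dbs) ->
                          let (da', db) = pull da in (da', db : dbs))
                       (daN, []) pulls
  in (aN, back)

-- Example step: a differentiable running product.
prodStep :: Num a => Lens (a, a) a
prodStep = Lens (\(acc, x) -> (acc * x, \d -> (d * x, d * acc)))
```

For instance, `runLens (foldLens prodStep) (1, [2,3,4])` gives a forward value of 24, and feeding 1 back yields the gradient (24, [12, 8, 6]) — the derivative of the product with respect to the seed and each element.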

For practical Haskell purposes, all of this is a little silly with the good Haskell AD packages around, the most prominent being backprop:

http://hackage.haskell.org/package/backprop

It is somewhat interesting to note the similarity of the type `forall s. Lens s` appearing in the Matsuda and Wang approach to that of the `forall s. BVar s` type appearing in the backprop package. I believe the `s` type variable plays the same role it does in the ST monad, protecting a mutating Wengert tape held behind the abstraction, but I haven’t dug much into it. I don’t know enough about backprop to know what to make of this similarity.

The GitHub repo with my playing around and stream-of-consciousness commentary is here.

## 3 thoughts on “Applicative Bidirectional Programming and Automatic Differentiation”

1. Nuban says:

> — types work, but does this actually make sense?

No, that really does not make any sense. Maybe it’s possible to formulate some condition under which it does?

1. philzook58 says:

Yea, I guess I have a heavy suspicion that is not a good operator. sequenceA is occasionally a transposition operator (like for V2 from the linear library), and in that case it does make sense. I probably want a different typeclass. It may be that the condition for making sense is that sequenceA is idempotent?

1. philzook58 says:

Aw crap, shouldn’t try to talk fancy. I didn’t mean idempotent. I meant its own inverse.
Another example where it is useful is ZipList. In general, ZipList is much more appropriate than List for these AD lenses.