The post Functors, Vectors, and Quantum Circuits appeared first on Hey There Buddo!.

Vectors and Linear Algebra are useful for:

- 2D, 3D, 4D geometry stuff. Computer graphics, physics etc.
- Least Squares Fitting
- Solving discretized PDEs
- Quantum Mechanics
- Analysis of Linear Dynamical Systems
- Probabilistic Transition Matrices

There are certain analogies between Haskell Functors and Vectors that correspond to a style of computational vector mathematics that I think is pretty cool and don’t see talked about much.

Due to the expressivity of its type system, Haskell has a first class notion of container that many other languages don’t. In particular, I’m referring to the fact that Haskell has higher kinded types `* -> *` (types parametrized on other types) that you can refer to directly without filling them in first. Examples in the standard library include `Maybe`, `[]`, `Identity`, `Const b`, and `Either b`. Much more vector-y feeling examples can be found in Kmett’s linear package: `V0`, `V1`, `V2`, `V3`, `V4`. For example, the 4-dimensional vector type `V4`:

```haskell
data V4 a = V4 a a a a
```

This really isn’t such a strange, esoteric thing as it may appear. You wouldn’t blink an eye at the type

```c
struct V4 { double x, y, z, w; };
```

in some other language. What makes Haskell special is how compositional and generic it is. We can build thousand element structs with ease via composition. What we have here is an alternative to the paradigm of computational vectors ~ arrays. Instead we have computational vectors ~ structs. In principle, I see no reason why this couldn’t be as fast as arrays, although with current compiler expectations it probably isn’t.

Monoidal categories are a mathematical structure that models this analogy well. It has been designed by mathematicians for aesthetic elegance, and it seems plausible that following its example leads us to interesting, useful, and pleasant vector combinators. And personally, something about category theory tickles me.

So to get started, let’s talk a bit about functors.

Functors in Haskell are a typeclass for containers. They allow you to map functions over all the items in the container. They are related to the categorical notion of functor, which is a mapping between categories.

```haskell
type Container = * -> *
-- Note: This is actually a kind signature.
-- Kinds and types are the same thing in Haskell.
```

You can lift the product and sum of types to the product and sum of Functors which you may find in Data.Functor.Product and Data.Functor.Sum. This is analogous to the lifting of ordinary addition and multiplication to the addition and multiplication of polynomials, which are kind of like numbers with a “hole”.

```haskell
data Product f g a = Pair (f a) (g a)
data Sum f g a = InL (f a) | InR (g a)
```

Functors also compose. A container of containers of `a` is still a container of `a`. We can form composite containers by using the Compose newtype wrapper.

```haskell
newtype Compose f g a = Compose (f (g a))
```

When you use this Compose newtype, instead of having to address the individual elements by using `fmap` twice, a single application of `fmap` will teleport you through both layers of the container.

`Product`, `Sum`, and `Compose` are all binary operators on functors. Their type constructors have the kind

```haskell
{- Enter into ghci -}
:kind Compose ==> (* -> *) -> (* -> *) -> (* -> *)
:kind Product ==> (* -> *) -> (* -> *) -> (* -> *)
:kind Sum     ==> (* -> *) -> (* -> *) -> (* -> *)
```

Some other important functors from the algebra of types perspective are `Const Void a`, `Const () a`, and `Identity a`. These are the identity elements for `Sum`, `Product`, and `Compose` respectively.

You can define mappings between containers that don’t depend on the specifics of their contents. These mappings can only rearrange, copy and forget items of their contained type. This can be enforced at the type level by the polymorphic type signature

```haskell
type f ~> g = forall a. f a -> g a
```

These mappings correspond in categorical terminology to natural transformations between the functors `f` and `g`. There is a category where the objects are Functors and the morphisms are natural transformations. `Sum`, `Product`, and `Compose` all obey the laws necessary to be a monoidal product on this category.
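Haskell’s parametricity enforces this shape for free, but the naturality condition itself is easy to check concretely. Here is a small Python sketch (the helper names `safe_head`, `fmap_list`, and `fmap_maybe` are made up for illustration): a mapping that only rearranges or forgets elements commutes with mapping a function over the container.

```python
def safe_head(xs):
    # a natural transformation [] ~> Maybe: keep the first element, forget the rest
    return xs[0] if xs else None

def fmap_list(f, xs):
    return [f(x) for x in xs]

def fmap_maybe(f, m):
    return None if m is None else f(m)

# Naturality square: mapping then extracting equals extracting then mapping.
f = lambda x: x * 10
xs = [1, 2, 3]
assert fmap_maybe(f, safe_head(xs)) == safe_head(fmap_list(f, xs))  # both are 10
assert fmap_maybe(f, safe_head([])) == safe_head(fmap_list(f, []))  # both are None
```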

How the lifting of functions works for `Compose` is kind of neat.

```haskell
mon_prod :: Functor f' => (f ~> f') -> (g ~> g') -> (Compose f g ~> Compose f' g')
mon_prod ntf ntg (Compose fg) = Compose (fmap ntg (ntf fg))
-- or equivalently Compose (ntf (fmap ntg fg)) with a (Functor f) typeclass requirement.
```

Because the natural transformations require polymorphic types, when you apply `ntf` to `fg` the polymorphic variable `a` in the type of `ntf` restricts to `a ~ g a'`.

`Product` and `Sum` have a straightforward notion of commutativity (`(a,b)` is isomorphic to `(b,a)`). `Compose` is more subtle. `sequenceA` from the Traversable typeclass can swap the ordering of composition. `sequenceA . sequenceA` may or may not be the identity depending on the functors in question, so it has some flavor of a braiding operation. This is an interesting post on that topic: https://parametricity.com/posts/2015-07-18-braids.html
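For ordinary rectangular list-of-lists data, this swap is just matrix transposition, and doing it twice gets you back where you started. A quick Python analogue (using `zip` as a stand-in for `sequenceA` on lists):

```python
def sequence_a(m):
    # swap the two layers of nesting: a list of rows becomes a list of columns
    return [list(col) for col in zip(*m)]

m = [[1, 2, 3], [4, 5, 6]]
assert sequence_a(m) == [[1, 4], [2, 5], [3, 6]]
# For rectangular data the swap squares to the identity:
assert sequence_a(sequence_a(m)) == m
```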

Combinators of these sorts arise in at least the following contexts:

- Data types a la carte – A systematic way of building extensible data types
- GHC Generics – A system for building generic functions that operate on data types that can be described with sums, products, recursion, and holes.
- In and around the Lens ecosystem

Also see the interesting post by Russell O’Connor on functor oriented programming http://r6.ca/blog/20171010T001746Z.html. I think the above is part of what he is referring to.

Vector spaces are made of two parts, the shape (dimension) of the vector space and the scalar.

Just as a type of kind `* -> *` can be thought of as a container modulo its held type, it can also be a vector modulo its held scalar type. The higher kinded type for vectors gives an explicit slot to place the scalar type.

```haskell
type Vect = * -> *
```

The standard Haskell typeclass hierarchy gives you some of the natural operations on vectors if you so choose to abuse it in that way.

- Functor ~> Scalar Multiplication: `smul s = fmap (* s)`
- Applicative ~> Vector Addition: `vadd x y = (+) <$> x <*> y`
- Traversable ~> Transposition: `sequenceA` has the type of transposition and works correctly for the linear style containers like V4.

The linear library does use `Functor` for scalar multiplication, but defines a special typeclass for addition, `Additive`. I think this is largely for the purpose of bringing `Map`-like vectors into the fold, but I’m not sure.

Once we’ve got the basics of addition and scalar multiplication down, the next thing I want is operations for combining vector spaces. Two important ones are the Kronecker product and the direct sum. In terms of indices, the Kronecker product is a space that is indexed by the cartesian product `(,)` of its input space indices, and the direct sum is a space indexed by the `Either` of its input space indices. Both are very useful constructs. I use the Kronecker product all the time when I want to work on 2D or 3D grids, for example. If you’ll excuse my python, here is a toy 2-D finite difference Laplace equation example. We can lift the 1D second derivative matrix using the Kronecker product. The direct sum is useful as a notion of stacking matrices.

```python
import numpy as np
from scipy.linalg import toeplitz  # toeplitz lives in scipy, not numpy

N = 10  # We're making a 10x10 grid
row = np.zeros(N)
row[0] = -2
row[1] = 1
K = toeplitz(row, row)  # toeplitz makes constant diagonal matrices
I = np.eye(N)  # identity matrix
K2 = np.kron(K, I) + np.kron(I, K)
```

The following is perhaps the most important point of the entire post.

```haskell
type Kron = Compose
type DSum = Product
```

`Compose` of vector functors gives the Kronecker product, and `Product` gives the direct sum (this can be confusing, but it’s right. Remember, the sum in direct sum refers to the *indices*).

We can form the Kronecker product of vectors given a Functor constraint.

```haskell
kron :: (Num a, Functor f, Functor g) => f a -> g a -> Kron f g a
kron f g = Compose $ fmap (\s1 -> fmap (\s2 -> s1 * s2) g) f

dsum :: f a -> g a -> DSum f g a
dsum f g = Pair f g
```

Notice we have two distinct but related things called kron: `Kron` and `kron`. One operates on vector *spaces* and the other operates on vector *values*.
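For concrete numeric vectors, `kron` and `dsum` match familiar numpy operations: the Kronecker product multiplies dimensions and is indexed by pairs, while the direct sum concatenates and is indexed by a tagged choice. A small sketch:

```python
import numpy as np

f = np.array([1.0, 2.0])         # a V2-style vector
g = np.array([3.0, 4.0, 5.0])    # a V3-style vector

# Kronecker product: indexed by pairs (i, j); dimension 2 * 3 = 6
k = np.kron(f, g)
assert k.shape == (6,)
assert k[1 * 3 + 2] == f[1] * g[2]   # the element at flattened index (i=1, j=2)

# Direct sum: indexed by Either; dimension 2 + 3 = 5
d = np.concatenate([f, g])
assert d.shape == (5,)
```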

Building vector spaces out of small combinators like V2, V4, DSum, Kron is interesting for a number of reasons.

- It is well typed. Similar to Nat-indexed vectors, the types specify the size of the vector space. We can easily describe vector spaces as powers of 2 as `V16 = Kron V2 (Kron V2 (Kron V2 (Kron V2 V1)))`, similarly in terms of its prime factors, or we can do a binary expansion (least significant bit first) `V5 = DSum V1 (Kron V2 (DSum V0 (Kron V2 V1)))` or other things. We do it without going into quasi-dependently typed land or GADTs.
- It often has better semantic meaning. It is nice to say Measurements, or XPosition, or something rather than just denote the size of a vector space in terms of a nat. It is better to say a vector space is the Kron of two meaningful vector spaces than to just say it is a space of size m*n. I find it pleasant to think of the naturals as a free Semiring rather than as the Peano Naturals, and I like the size of my vector space defined similarly.
- Interesting opportunities for parallelism. See Conal Elliott’s paper on scans and FFT: http://conal.net/papers/generic-parallel-functional/

In the Vectors as shape methodology, Vectors look very much like Functors.

I have been tempted to lift the natural transformation type above to the following for linear operators.

```haskell
type LinOp f g = forall a. (Num a) => f a -> g a
```

In a sense this works; we could implement `kron` because many of the container types (`V1`, `V2`, `V3`, etc.) in the linear package implement Num. However, choosing Num is a problem. Why not Fractional? Why not Floating? Sometimes we want those. Why not just specifically Double?

We don’t really want to lock away the scalar in a higher rank polymorphic type. We want to ensure that everyone is working in the same scalar type before allowing things to proceed.

```haskell
type LinOp f g a = f a -> g a
```

Note also that this type does not constrain us to linearity. Can we form the Kronecker product of linear operators? Yes, but I’m not in love with it. This is not nearly so beautiful as the little natural transformation dance.

```haskell
kron'' :: (Applicative f', Applicative g', Traversable f, Traversable g') =>
          (f a -> f' a) -> (g a -> g' a) -> (Kron f g a -> Kron f' g' a)
kron'' lf lg (Compose fga) = Compose $ sequenceA $ fmap lf $ sequenceA $ fmap lg fga
```

This was a nice little head scratcher for me. Follow the types, my friend! I find this particularly true for uses of `sequenceA`. If I want the containers swapped in ordering, `sequenceA` is usually the right call. It could be called `transpose`.

Giving the vector direct access to the scalar feels a bit off to me. I feel like it doesn’t leave enough “room” for compositionality. However, there is another possible definition of morphisms that I think is rather elegant.

```haskell
type LinOp1 f g a = forall k. Additive k => Kron f k a -> Kron g k a
```

Does this form actually enforce linearity? You may still rearrange objects. Great. You can also now add and scalar multiply them with the `Additive k` constraint. We also expose the scalar, so it can be enforced to be consistent.

One other interesting thing to note is that these forms allow nonlinear operations. `fmap`, `liftU2` and `liftI2` are powerful operations, but I think if we restricted `Additive` to just a correctly implemented scalar multiply, vector addition, and zero, we’d be good.

```haskell
class Additive' f where
  smul :: Num a => a -> f a -> f a
  vadd :: Num a => f a -> f a -> f a
  zero :: Num a => f a
```

We can recover the previous form by instantiating `k` to `V1`. `V1`, the 1-d vector space, is almost a scalar and can play the scalar’s role in many situations. `V1` is the unit object with respect to the monoidal product `Kron`.

There seems to be a missing Additive instance that would be useful. There is probably a good reason it isn’t there, but I need it.

```haskell
instance (Applicative f, Additive f, Additive g) => Additive (Compose f g) where
  (Compose v) ^+^ (Compose w) = Compose (liftU2 (^+^) v w)
  zero = Compose (pure zero) -- a bare 'zero = zero' would loop; Applicative builds the outer layer
  liftU2 f (Compose x) (Compose y) = Compose $ liftU2 (liftU2 f) x y
  liftI2 f (Compose x) (Compose y) = Compose $ liftI2 (liftI2 f) x y
```

The above analogy can be put into mathematical terms by noting that both vectors and functors are monoidal categories. I talked quite a bit about monoidal categories in a previous post http://www.philipzucker.com/a-touch-of-topological-computation-3-categorical-interlude/ .

Categories are the combo of a collection of objects and arrows between the objects. The arrows can compose as long as the head of one is on the same object as the tail of the other. On every object, there is always an identity arrow, which when composed will do nothing.

We need a little extra spice to turn categories into monoidal categories. One way of thinking about it is that monoidal categories have ordinary category composition and some kind of horizontal composition, putting things side to side. Ordinary composition is often doing something kind of sequentially, applying a sequence of functions, or a sequence of matrices. The horizontal composition is often something parallel feeling, somehow applying the two arrows separately to separate pieces of the system.
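In plain Python terms (toy functions, not a real category library), vertical composition chains functions one after another, while horizontal composition runs two functions side by side on the two halves of a pair; the two notions commute with each other (the interchange law):

```python
def compose(f, g):
    # ordinary (vertical) composition: run g, then f
    return lambda x: f(g(x))

def par(f, g):
    # monoidal (horizontal) composition: act on separate pieces side by side
    return lambda pair: (f(pair[0]), g(pair[1]))

inc = lambda x: x + 1
dbl = lambda x: x * 2
assert compose(inc, dbl)(10) == 21
assert par(inc, dbl)((10, 10)) == (11, 20)

# Interchange law: par of composites equals composite of pars.
lhs = par(compose(inc, inc), compose(dbl, dbl))
rhs = compose(par(inc, dbl), par(inc, dbl))
assert lhs((1, 1)) == rhs((1, 1))
```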

There is a funny game category people play where they want to lift ideas from other fields and replace the bits and pieces in such a way that the entire thing is defined in terms of categorical terminology. This is one such example.

A monoid is a binary operation that is associative and has an identity.

Sometimes people are more familiar with the concept of a group. If not, ignore the next sentence. Monoids are like groups without requiring an inverse.

Numbers are separately monoids under addition, multiplication, and minimization (and more), all of which are associative operations with an identity (0, 1, and infinity respectively).

Exponentiation is a binary operation that is not a monoid, as it isn’t associative.
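A quick numeric check of these claims, folding each operation over a list with `reduce` and its identity element:

```python
from functools import reduce

# Addition, multiplication, and min are associative with identities 0, 1, and infinity.
xs = [3, 1, 4, 1, 5]
assert reduce(lambda a, b: a + b, xs, 0) == 14
assert reduce(lambda a, b: a * b, xs, 1) == 60
assert reduce(min, xs, float('inf')) == 1

# Exponentiation is not associative, so it does not give a monoid:
assert (2 ** 3) ** 2 != 2 ** (3 ** 2)   # 64 vs 512
```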

The Monoid typeclass in Haskell demonstrates this http://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Monoid.html

```haskell
class Semigroup a => Monoid a where
  mempty :: a
  mappend :: a -> a -> a
```

A common example of a monoid is lists, where `mempty` is the empty list and `mappend` appends the lists.

There are different set-like intuitions for categories. One is that the objects in the category are big opaque sets. This is the case for Hask, Rel and Vect.

A different intuition is that the category itself is like a set, and the objects are the elements of that set. There just so happens to be some extra structure knocking around in there: the morphisms. This is more often the feel for the examples of preorders or graphs. The word “monoidal” means that we have a binary operation on the objects. But in the category theory aesthetic, you also need that binary operation to “play nice” with the morphisms that are hanging around too.

Functors are the first thing that has something like this. It has other properties that come along for the ride. A Functor is a map that takes objects to objects and arrows to arrows in a nice way. A binary functor takes two objects to an object, and two arrows to one arrow, in a way that plays nice (commutes) with arrow composition.

String diagrams are a graphical notation for monoidal categories. Again, I discussed this more here.

Morphisms are denoted by boxes. Regular composition is shown by plugging arrows together vertically. Monoidal product is denoted by putting the arrows side to side.

When I was even trying to describe what a monoidal category was, I was already using language evocative of string diagrams.

You can see string diagrams in the documentation for the Arrow library. Many diagrams that people use in various fields can be formalized as the string diagrams for some monoidal category. This is a big chunk of Applied Category Theory.

This is the connection to quantum circuits, which are after all a graphical notation for very Kroneckery linear operations.

```haskell
type Qubit = V2
type C = Complex Double

assoc :: Functor f => (Kron (Kron f g) h) ~> (Kron f (Kron g h))
assoc = Compose . fmap Compose . getCompose . getCompose

assoc' :: Functor f => (Kron f (Kron g h)) ~> (Kron (Kron f g) h)
assoc' (Compose x) = Compose $ Compose $ fmap getCompose x

kron'' :: (Additive f, Additive g, Additive k, Additive f', Additive g') =>
          LinOp1 f f' a -> LinOp1 g g' a -> Kron (Kron f g) k a -> Kron (Kron f' g') k a
kron'' lf lg fgk = let v = assoc fgk in assoc' (Compose $ fmap lg $ getCompose (lf v))

sigx' :: LinOp1 Qubit Qubit C
sigx' (Compose (V2 up down)) = Compose $ V2 down up

sigz' :: LinOp1 Qubit Qubit C
sigz' (Compose (V2 up down)) = Compose $ V2 up ((-1) *^ down)

sigy' :: LinOp1 Qubit Qubit C
sigy' (Compose (V2 up down)) = Compose $ V2 ((-i) *^ down) (i *^ up) where i = 0 :+ 1

swap' :: (Traversable f, Applicative g) => LinOp1 (Kron f g) (Kron g f) a
swap' (Compose (Compose fgk)) = Compose $ Compose $ sequenceA fgk

cnot :: LinOp1 (Kron Qubit Qubit) (Kron Qubit Qubit) a
cnot (Compose (Compose (V2 (V2 up1 down1) v))) = Compose $ Compose $ V2 (V2 down1 up1) v

phase :: Double -> LinOp1 Qubit Qubit C
phase phi (Compose (V2 up down)) = Compose $ V2 up (cis phi *^ down)

lefting :: (Additive f, Additive k, Additive g) => LinOp1 f g a -> LinOp1 (Kron f k) (Kron g k) a
lefting l = kron'' l id -- Qubit.assoc' . l . Qubit.assoc

righting :: (Additive k, Additive f, Additive g) => LinOp1 f g a -> LinOp1 (Kron k f) (Kron k g) a
righting l = kron'' id l -- (Compose (Compose fkk)) = Compose $ Compose $ fmap (getCompose . l . Compose) fkk

example :: LinOp1 (Kron Qubit Qubit) (Kron Qubit Qubit) C
example = lefting sigx' . lefting sigy' . righting sigz' . swap'
```

There is an annoying amount of stupid repetitive bookkeeping with the associative structure of Kron. This can hopefully be largely avoided with `coerce`, but I’m not sure. I was having trouble with roles when doing it generically.

- Woof. This post was more draining to write than I expected. I think there is still a lot left to say. Sorry about the editing, everyone! Bits and pieces of this post are scattered in this repo.
- How would you go about this in other languages? C, Rust, OCaml, C++, Agda
- The discussion of Vect = * -> * is useful for the discussion of 2-Vect, coming up next. What if we make vectors of Vect? Wacky shit.
- Metrics and dual vectors: `type Dual f a = f a -> a`, `type Dual1 f a = forall k. Additive k => Kron f k a -> k a`
- Adjunction diagrams have cups and caps. Since we have been using representable functors, they actually have a right adjoint that is tupling with the vector space index type. This gives us something that almost feels like a metric, but a weirdly constrained metric.
- Is the LinOp1 form Yoneda? CPS? The universally quantified k is evocative of `forall c. (a -> c) -> (b -> c)`
- I got the LinOp1 construction from Kitaev https://arxiv.org/abs/cond-mat/0506438. Presumably there are better references, as this is not the thrust of this massive article.
- Jeremy Gibbons’ Naperian Functors – very interesting and related to this post. https://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/aplicative.pdf
- Bartosz Milewski – String Diagrams https://www.youtube.com/watch?v=eOdBTqY3-Og
- Heunen and Vicary – Introduction to Categorical Quantum Mechanics. http://www.cs.ox.ac.uk/people/jamie.vicary/IntroductionToCategoricalQuantumMechanics.pdf
- http://hackage.haskell.org/package/linear Kmett’s linear package
- https://graphicallinearalgebra.net/ – Graphical Linear Algebra
- http://www.philipzucker.com/resources-string-diagrams-adjunctions-kan-extensions/
- http://www.philipzucker.com/functor-vector-part-2-function-vectors/
- http://www.philipzucker.com/functor-vector-part-1-functors-basis/
- Baez and Stay, Rosetta Stone. A classic. http://math.ucr.edu/home/baez/rosetta.pdf

Containers that are basically big product types are also known as representable, Naperian, or logarithmic. Representable places emphasis on the isomorphism between such a container type and the type `(->) i`, which by the algebra of types is isomorphic to (i copies of a). They are called Naperian/logarithmic because there is a relationship similar to exponentiation between the index type `a` and the container type `f`. If you take the `Product f g`, this container is indexed by (a + b) = `Either a b`. `Compose f g` is indexed by the product `(a,b)`. `(f r) ~ r^a`. The arrow type is written as an exponential `b^a` because if you have finite enumerable types `a` and `b`, that is the number of possible tabulations available for `f`. The Sum of two representable functors is no longer representable. Regular logarithms of sums Log(f + g) do not have good identities associated with them.

See Gibbons’ article. There is a good argument to be made that representable functors are a good match for vectors/well-typed tensor programming.
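The counting argument behind the exponential notation can be checked directly in Python: `itertools.product` enumerates all tabulations of a function from a finite index type to a finite value type.

```python
from itertools import product

# A container indexed by a finite type i, holding values of type r, is
# isomorphic to functions i -> r. The number of such tabulations is r^i.
i, r = 3, 2   # index type with 3 inhabitants, value type with 2
tabulations = list(product(range(r), repeat=i))
assert len(tabulations) == r ** i   # 2^3 = 8

# Product f g is indexed by Either a b, matching the identity r^(a+b) = r^a * r^b:
a, b = 2, 3
assert r ** (a + b) == (r ** a) * (r ** b)
```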

But note that there is a reasonable interpretation for container types with sum types in them. These can be thought of as subspaces, different bases, or as choices of sparsity patterns. When you define addition, you’ll need to say how these subspaces reconcile with each other.

```haskell
-- two bases at 45 degrees to each other.
data V2_45 a = XY a a | XY' a a

-- a 1-d vector space with a special marker for the zero vector.
data Maybe a = Just a | Nothing

-- a 2d vector space with a special zero marker
data Maybe2 a = Just2 a a | Nothing2
```

Hask is a name for the category that has objects as Haskell types and morphisms as Haskell functions.

Note that it’s a curious mixing of the type/value layers of Haskell. The objects are types, whereas the function morphisms are Haskell values. Composition is given by `(.)` and the identity morphisms are given by `id`.

In Haskell, you can compose functions, but you can also smash functions together side by side. These combinators are held in Control.Arrow.

You can smash together types with the tuple `(,)` or with `Either`. Both of these are binary operators on types. The corresponding mappings on morphisms are given by

```haskell
(***) :: a b c -> a b' c' -> a (b, b') (c, c')
(+++) :: a b c -> a b' c' -> a (Either b b') (Either c c')
```

These are binary operators on morphisms that play nice with the composition structure of Haskell.

A monoidal category also has unit objects. For `Compose`, this is given by the Identity functor.

```haskell
rightUnitor :: Functor f => Compose f Identity a -> f a
rightUnitor (Compose f) = fmap runIdentity f

rightUnitor' :: f ~> Compose f Identity
rightUnitor' = Compose . fmap Identity

leftUnitor' :: f ~> Compose Identity f
leftUnitor' = Compose . Identity

leftUnitor :: Compose Identity f ~> f
leftUnitor = runIdentity . getCompose
```

There is also a sense of associativity. It is just newtype rearrangement, so it can also be achieved with a `coerce` (although not polymorphically?).

```haskell
assoc :: Functor f => ((f *** g) *** h) ~> (f *** (g *** h))
assoc = Compose . fmap Compose . getCompose . getCompose

assoc' :: Functor f => (f *** (g *** h)) ~> ((f *** g) *** h)
assoc' (Compose x) = Compose $ Compose $ fmap getCompose x
```

Similarly, we can define a monoidal category structure using Product or Sum instead of Compose.

These are all actually just newtype rearrangements, so they should all just be instances of `coerce`, but I couldn’t get the roles to go through generically.


The post Concolic Weakest Precondition is Kind of Like a Lens appeared first on Hey There Buddo!.

Lenses are described as functional getters and setters. The simple lens type is `type Lens a b = a -> (b, b -> a)`. The getter is `a -> b` and the setter is `a -> b -> a`.

This type does not constrain lenses to obey the usual laws of getters and setters. So we can use/abuse lens structures for nontrivial computations that have forward and backwards passes that share information. Jules Hedges in particular seems to be a proponent of this idea.

I’ve described before how to encode reverse mode automatic differentiation in this style. I have suspicions that you can make iterative LQR and Gauss-Seidel iteration have this flavor too, but I’m not super sure. My attempts ended somewhat unsatisfactorily a while back, but I think it’s not hopeless. The trouble was that you usually want the whole vector back, not just its ends.

I’ve got another example in imperative program analysis that kind of makes sense and might be useful though. Toy repo here: https://github.com/philzook58/wp-lens

In program analysis it sometimes helps to run a program both concretely and symbolically. Concolic = CONCrete / symbOLIC. Symbolic stuff can slowly find hard things and concrete execution just sprays super fast and can find the dumb things really quick.

We can use a lens structure to organize a DSL for describing a simple imperative language.

The forward pass is for the concrete execution. The backward pass is for transforming the post condition to a pre condition in a weakest precondition analysis. Weakest precondition semantics is a way of specifying what is occurring in an imperative language. It tells how each statement transforms post conditions (predicates about the state after the execution) into pre conditions (predicates about the state before the execution). The concrete execution helps unroll loops and avoid branching if-then-else behavior that would make the symbolic stuff harder to process. I’ve been flipping through Dijkstra’s book on this. Interesting stuff, interesting man.
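As a toy illustration of the backward pass (made-up Python helpers, with states as dicts and predicates as plain functions): the wp of an assignment substitutes the expression into the postcondition, and the wp of sequencing composes the transformers in reverse.

```python
# Toy weakest-precondition transformers: states are dicts, predicates are
# functions from states to booleans.
def wp_assign(var, expr, post):
    # wp(x := e, Q) = Q[e/x]: evaluate e in the current state, update, then ask Q
    return lambda s: post({**s, var: expr(s)})

def wp_seq(wp1, wp2):
    # wp(S1; S2, Q) = wp(S1, wp(S2, Q)): transformers compose backwards
    return lambda post: wp1(wp2(post))

post = lambda s: s['x'] > 0                                # postcondition: x > 0
inc = lambda p: wp_assign('x', lambda s: s['x'] + 1, p)    # statement x := x + 1
pre = wp_seq(inc, inc)(post)                               # wp of (x := x+1; x := x+1)
assert pre({'x': -1})        # -1 + 2 > 0
assert not pre({'x': -2})    # -2 + 2 = 0, not > 0
```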

I often think of a state machine as a function taking s -> s. However, this is kind of restrictive. It is possible to have heterogeneous transformations s -> s’. Why not? I think I am often thinking about finite state machines, which we really don’t intend to have a changing state size. Perhaps we allocated new memory or brought something into or out of scope. We could model this by assuming the memory was always there, but it seems wasteful and perhaps confusing. We would need to know a priori everything we will need, which seems like it might break compositionality.

We could model our language by making some data type like `data Imp = Skip | Print String | Assign String Expr | Seq Imp Imp | ...` and then build an interpreter

```haskell
interp :: Imp -> s -> s'
```

But we can also cut out the middle man and directly define our language using combinators.

```haskell
type Stmt s s' = s -> s'
```

To me this has some flavor of a finally tagless style.

Likewise for expressions. Expressions evaluate to something in the context of the state (they can look up variables), so let’s just use

```haskell
type Expr s a = s -> a
```

And, confusingly (sorry), I think it makes sense to use Lens in their original getter/setter intent for variables. So the Lens structure is playing double duty.

```haskell
type Var s a = Lens' s a
```

With that said, here we go.

```haskell
type Stmt s s' = s -> s'
type Lens' a b = a -> (b, b -> a)

set l s a = let (_, f) = l s in f a

type Expr s a = s -> a
type Var s a = Lens' s a

skip :: Stmt s s
skip = id

sequence :: Stmt s s' -> Stmt s' s'' -> Stmt s s''
sequence = flip (.)

assign :: Var s a -> Expr s a -> Stmt s s
assign v e = \s -> set v s (e s)

(===) :: Var s a -> Expr s a -> Stmt s s
v === e = assign v e

ite :: Expr s Bool -> Stmt s s' -> Stmt s s' -> Stmt s s'
ite e stmt1 stmt2 = \s -> if e s then stmt1 s else stmt2 s

while :: Expr s Bool -> Stmt s s -> Stmt s s
while e stmt = \s -> if e s then while e stmt (stmt s) else s

assert :: Expr s Bool -> Stmt s s
assert e = \s -> if e s then s else undefined

abort :: Stmt s s'
abort = const undefined
```

Weakest precondition can be done similarly; instead, we start from the end and work backwards.

Predicates are roughly sets. A simple type for sets is

```haskell
type Pred s = s -> Bool
```

Now, this doesn’t have much deductive power, but I think it demonstrates the principles simply. We could replace `Pred` with perhaps an SMT solver expression, or some data type for predicates, for which we’ll need to implement things like substitution. Let’s not today.

A function `a -> b` is equivalent to `forall c. (b -> c) -> (a -> c)`. This is some kind of CPS / Yoneda transformation thing. A state transformer `s -> s'` to predicate transformer `(s' -> Bool) -> (s -> Bool)` is somewhat evocative of that. I’m not being very precise here at all.

Without further ado, here’s how I think a weakest precondition looks roughly.

```haskell
type Lens' a b = a -> (b, b -> a)

set l s a = let (_, f) = l s in f a

type Expr s a = s -> a
type Var s a = Lens' s a
type Pred s = s -> Bool
type Stmt s s' = Pred s' -> Pred s

skip :: Stmt s s
skip = \post -> let pre = post in pre

sequence :: Stmt s s' -> Stmt s' s'' -> Stmt s s''
sequence = (.)

assign :: Var s a -> Expr s a -> Stmt s s
assign v e = \post -> let pre s = post (set v s (e s)) in pre

(===) :: Var s a -> Expr s a -> Stmt s s
v === e = assign v e

ite :: Expr s Bool -> Stmt s s' -> Stmt s s' -> Stmt s s'
ite e stmt1 stmt2 = \post -> let pre s = if e s then stmt1 post s else stmt2 post s in pre

abort :: Stmt s s'
abort = \post -> const False

assert :: Expr s Bool -> Stmt s s
assert e = \post -> let pre s = e s && post s in pre

{-
-- tougher. Needs a loop invariant
while :: Expr s Bool -> Stmt s s -> Stmt s s
while e stmt = \post -> let pre s = if (e s) then ((while e stmt) (stmt post)) s else ... in pre
-}
```

Finally here is a combination of the two above that uses the branching structure of the concrete execution to aid construction of the precondition. Although I haven’t expanded it out, we are using the full `s t a b` parametrization of lens in the sense that states go forward and predicates come back.

```haskell
type Lens' a b = a -> (b, b -> a)

set l s a = let (_, f) = l s in f a

type Expr s a = s -> a
type Var s a = Lens' s a
type Pred a = a -> Bool
type Stmt s s' = s -> (s', Pred s' -> Pred s) -- eh. Screw the newtype

skip :: Stmt s s
skip = \x -> (x, id)

sequence :: Stmt s s' -> Stmt s' s'' -> Stmt s s''
sequence f g = \s -> let (s', j) = f s in
                     let (s'', j') = g s' in
                     (s'', j . j')

assign :: Var s a -> Expr s a -> Stmt s s
assign v e = \s -> (set v s (e s), \p -> \s -> p (set v s (e s)))

-- if then else
ite :: Expr s Bool -> Stmt s s' -> Stmt s s' -> Stmt s s'
ite e stmt1 stmt2 = \s -> if e s
                          then let (s', wp) = stmt1 s in (s', \post -> \s -> e s && wp post s)
                          else let (s', wp) = stmt2 s in (s', \post -> \s -> not (e s) && wp post s)

assert :: Pred s -> Stmt s s
assert p = \s -> (s, \post -> let pre s = post s && p s in pre)

while :: Expr s Bool -> Stmt s s -> Stmt s s
while e stmt = \s -> if e s
                     -- run one iteration concretely, then recurse; compose the backward passes
                     then let (s1, wp1) = stmt s in
                          let (s', wp) = while e stmt s1 in
                          (s', wp1 . wp)
                     else (s, \p -> p)

{-
-- declare and forget can change the size and shape of the state space.
-- These are heterogeneous state commands
declare :: Iso (s,Int) s' -> Int -> Stmt s s'
declare iso defalt = (\s -> to iso (s, defalt), \p -> \s -> p $ to iso (s, defalt))

forget :: Lens' s s' -> Stmt s s' -- forgets a chunk of state

declare_bracket :: Iso (s,Int) s' -> Int -> Stmt s' s' -> Stmt s s
declare_bracket iso defalt stmt = declare iso defalt . stmt . forget (_1 . iso)
-}
```

Neat. Useful? Me dunno.


]]>The post Flappy Bird as a Mixed Integer Program appeared first on Hey There Buddo!.

Mixed Integer Programming is a methodology where you can specify convex (usually linear) optimization problems that include integer/boolean variables.

Flappy Bird is a game about a bird avoiding pipes.

We can use mixed integer programming to make a controller for Flappy Bird. Feel free to put this as a real-world application in your grant proposals, people.

While thinking about writing a MIP for controlling a lunar lander game, I realized how amenable to mixed integer modeling flappy bird is. Ben and I put together the demo on Saturday. You can find his sister blog post here.

The bird is mostly in free fall, on parabolic trajectories. This is a linear dynamic, so it can directly be expressed as a linear constraint. It can discretely flap to give itself an upward impulse. This is a boolean force variable at every time step. Avoiding the ground and sky is a simple linear constraint. The bird has no control over its x motion, so that can be rolled out as concrete values. Because of this, we can check which pipes are relevant at time points in the future, and putting the bird in the gap is also a simple linear constraint.

There are several different objectives one might want to consider and weight. Perhaps you want to save the poor bird's energy and minimize the sum of all flaps `cvx.sum(flap)`. Or perhaps you want to really be sure it doesn't hit any pipes by maximizing the minimum distance from any pipe. Or perhaps minimize the absolute value of the y velocity, which is a reasonable heuristic for staying in control. All are expressible as linear constraints. Perhaps you might want a weighted combo of these. All things to fiddle with.

There is a pygame flappy bird clone which made this all that much more slick. It is well written and easy to understand and modify. Actually figuring out the appropriate bounding boxes for pipe avoidance was finicky. Figuring out the right combo of bird size and pipe size is hard, combined with computer graphics and their goddamn upside down coordinate system.

We run our solver in a model predictive control configuration. Model predictive control is where you roll out a trajectory as an optimization problem and resolve it at every action step. This turns an open loop trajectory solve into a closed loop control, at the expense of needing to solve a perhaps very complicated problem in real time. This is not always feasible.

My favorite MIP modeling tool is cvxpy. It gives you vectorized constraints and slicing, which I love. More tools should aspire to achieve numpy-like interfaces. I've got lots of other blog posts using this package, which you can find in the big post list in the sidebar.

The github repo for the entire code is here: https://github.com/philzook58/FlapPyBird-MPC

And here’s the guts of the controller:

import cvxpy as cvx
import numpy as np

N = 20  # time steps to look ahead
path = cvx.Variable((N, 2))  # y pos and vel
flap = cvx.Variable(N-1, boolean=True)  # whether or not the bird should flap in each step

last_solution = [False, False, False]
last_path = [(0, 0), (0, 0)]

PIPEGAPSIZE = 100
PIPEWIDTH = 52
BIRDWIDTH = 34
BIRDHEIGHT = 24
BIRDDIAMETER = np.sqrt(BIRDHEIGHT**2 + BIRDWIDTH**2)
SKY = 0
GROUND = (512*0.79)-1
PLAYERX = 57

def getPipeConstraints(x, y, lowerPipes):
    constraints = []
    for pipe in lowerPipes:
        dist_from_front = pipe['x'] - x - BIRDDIAMETER
        dist_from_back = pipe['x'] - x + PIPEWIDTH
        if (dist_from_front < 0) and (dist_from_back > 0):
            constraints += [y <= (pipe['y'] - BIRDDIAMETER)]  # y above lower pipe
            constraints += [y >= (pipe['y'] - PIPEGAPSIZE)]   # y below upper pipe
    return constraints

def solve(playery, playerVelY, lowerPipes):
    global last_path, last_solution

    pipeVelX = -4
    playerAccY = 1       # player's downward acceleration
    playerFlapAcc = -14  # player's speed on flapping

    # unpack variables
    y = path[:, 0]
    vy = path[:, 1]

    c = []  # constraints
    c += [y <= GROUND, y >= SKY]
    c += [y[0] == playery, vy[0] == playerVelY]

    x = PLAYERX
    xs = [x]
    for t in range(N-1):
        dt = t//10 + 1
        x -= dt * pipeVelX
        xs += [x]
        c += [vy[t + 1] == vy[t] + playerAccY * dt + playerFlapAcc * flap[t]]
        c += [y[t + 1] == y[t] + vy[t + 1]*dt]
        c += getPipeConstraints(x, y[t+1], lowerPipes)

    # objective = cvx.Minimize(cvx.sum(flap))  # minimize total fuel use
    objective = cvx.Minimize(cvx.sum(flap) + 10*cvx.sum(cvx.abs(vy)))

    prob = cvx.Problem(objective, c)
    try:
        prob.solve(verbose=False, solver="GUROBI")
        last_path = list(zip(xs, y.value))
        last_solution = np.round(flap.value).astype(bool)
        return last_solution[0], last_path
    except:
        # solver failed: follow the previous solution open loop
        last_solution = last_solution[1:]
        last_path = [((x-4), y) for (x, y) in last_path[1:]]
        return last_solution[0], last_path

I think it is largely self-explanatory, but here are some notes. The `dt = t//10 + 1` thing is about decreasing our time resolution the further out we get from the current time step. This increases the time horizon without the extra computation cost. Intuitively, modeling accuracy further out in time should matter less. The `last_solution` stuff is there in case the optimization solver fails for whatever reason; in that case the bird follows the last trajectory it got, open loop.

- We changed the dynamics slightly from the python original to make it easier to model. We found this did not change the feel of the game. The old dynamics were piecewise affine though, so they are also modelable using mixed integer programming. http://groups.csail.mit.edu/robotics-center/public_papers/Marcucci18.pdf . Generally check out the papers coming out of the Tedrake group. They are sweet. Total fanboy over here.
- The controller as is is not perfect. It fails eventually, and it probably shouldn’t. A bug? Numerical problems? Bad modeling of the pipe collision? A run tends to get through about a hundred pipes before something gets goofy.
- Since we had access to the source code, we could mimic the dynamics very well. How robust is flappy bird to noise and bad modeling? We could add wind, or inaccurate pipe data.
- Unions of Convex Regions. Giving the flappy bird some x position control would change the nature of the problem. We could easily cut up the allowable regions of the bird into rectangles, and represent the total space as a union of convex regions, which is also MIP representable.
- Verification – The finite difference scheme used is crude. It is conceivable for the bird to clip a pipe. Since basically we know the closed form of the trajectories, we could verify that the parabolas do not intersect the regions. For funzies, maybe use sum of squares optimization?
- Realtime MIP. Our solver isn’t quite realtime. Maybe half real time. One might pursue methods to make the mixed integer program faster. This might involve custom branching heuristics, or early stopping. If one can get the solver fast enough, you might run the solver in parallel and only query a new path plan every so often.
- 3d flappy bird? Let the bird rotate? What about a platformer (Mario) or lunar lander? All are pretty interesting piecewise affine systems.
- Is this the best way to do this? Yes and no. Other ways to do this might involve doing some machine learning, or hardcoding a controller that monitors the pipe locations and has some simple feedback. You can find some among the forks of FlapPyBird. I have no doubt that you could write these quickly, fiddle with them and get them to work better and faster than this MIP controller. However, for me there is a difference between *could* work and *should* work. You can come up with a thousand bizarre schemes that could work. RL algorithms fall in this camp. But I have every reason to believe the MIP controller *should* work, which makes it easier to troubleshoot.


]]>The post Linear Algebra of Types appeared first on Hey There Buddo!.

]]>Some examples of semirings include

- Regular multiplication and addition
- And-Or
- Min-plus
- Matrices.
- Types

I have written before about how types also form a semiring, using `Either` for plus and `(,)` for times. These constructions don't obey distributivity or associativity "on the nose", but instead are isomorphic to the rearranged type, which when you squint is pretty similar to equality.
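To make the "isomorphic rather than equal" point concrete, here is a hand-rolled witness that `(,)` distributes over `Either` (the names `distrib` and `factor` are mine, not from any library):

```haskell
-- A witness that a * (b + c) ~= a*b + a*c at the type level.
-- These two functions are mutually inverse, exhibiting the isomorphism.
distrib :: (a, Either b c) -> Either (a, b) (a, c)
distrib (a, Left b)  = Left (a, b)
distrib (a, Right c) = Right (a, c)

factor :: Either (a, b) (a, c) -> (a, Either b c)
factor (Left (a, b))  = (a, Left b)
factor (Right (a, c)) = (a, Right c)
```

Composing them in either order gets you back where you started, which is exactly what "isomorphic but not equal" buys you.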

Matrices are grids of numbers which multiply by “row times column”. You can form matrices out of other semirings besides just numbers. One somewhat trivial but interesting example is block matrices, where the elements of the matrix itself are also matrices. Another interesting example is that of relations, which can be thought of as matrices of boolean values. Matrix multiplication using the And-Or semiring on the elements corresponds to relational composition.
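Here is a small value-level sketch of that last claim (names are my own): a relation on a finite set as a boolean matrix, with "row times column" over the and-or semiring implementing relational composition.

```haskell
import Data.List (transpose)

-- A relation on {0..n-1} as a boolean matrix: m !! i !! j says whether
-- i is related to j.
type BMat = [[Bool]]

-- Matrix product with (&&) as times and or as plus:
-- exactly relational composition.
bcompose :: BMat -> BMat -> BMat
bcompose m n = [[or (zipWith (&&) row col) | col <- transpose n] | row <- m]
```

Composing the successor relation on a three-element set with itself yields the "plus two" relation, as you'd hope.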

What if we put our type peanut butter in our matrix chocolate and consider matrices of types, using the `Either`–`(,)` semiring?

The simplest implementation to show how this could go can be made using the naive list based implementation of vectors and matrices. We can directly lift this representation to the typelevel and the appropriate value-level functions to type families.

type a :*: b = (a, b)
type a :+: b = Either a b

type family Dot v v' where
    Dot '[x] '[y] = x :*: y
    Dot (x : xs) (y : ys) = (x :*: y) :+: (Dot xs ys)

type family MVMult m v where
    MVMult '[r] v = '[Dot r v]
    MVMult (r : rs) v = (Dot r v) : (MVMult rs v)

type family VMMult m v where
    VMMult v '[c] = '[Dot v c]
    VMMult v (c : cs) = (Dot v c) : (VMMult v cs)

type family MMMult' m m' where
    MMMult' '[r] cs = '[VMMult r cs]
    MMMult' (r : rs) cs = (VMMult r cs) : (MMMult' rs cs)

type family MMMult m m' where
    MMMult m m' = MMMult' m (Transpose m')

type family Transpose m where
    Transpose ((r1 : rs') : rs) = (r1 : (Heads rs)) : (Conss rs' (Transpose (Tails rs)))
    Transpose '[] = '[]

-- some mapped helper functions -- verrrrrry ugly. Eh. Get 'er dun
type family Heads v where
    Heads ((v : vs) : xs) = v : (Heads xs)
    Heads '[] = '[]

type family Tails v where
    Tails ((v : vs) : xs) = vs : (Tails xs)
    Tails '[] = '[]

type family Conss v vs where
    Conss (x : xs) (y : ys) = (x : y) : (Conss xs ys)
    Conss '[] '[] = '[]

type family Index v i where
    Index (x : xs) 0 = x
    Index (x : xs) n = Index xs (n-1)

This was just for demonstration purposes. It is not my favorite representation of vectors. You can lift a large fraction of possible ways to encode vector spaces at the value level up to the type level, such as the linear package, or using dual vectors `type V2 a = a -> a -> a`. Perhaps more on that another day.

Ok. That’s kind of neat, but why do it? Well, one way to seek an answer to that question is to ask “what are matrices useful for anyway?”

One thing they can do is describe transition systems. You can write down a matrix whose ij-th entry describes something about the transition from state j to state i. For example the entry could be:

- The cost of getting from j to i (min-plus gives shortest path)
- The count of ways to get from j to i (combinatorics of paths)
- The connectivity of the system from j to i, using boolean values and the and-or semiring
- The probability of transition from j to i
- The quantum amplitude of going from j to i, if we're feeling saucy.

If we form a matrix describing a single time step, then multiplying this matrix by itself gives 2 time steps and so on.
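For instance, with probabilities as entries and the ordinary (+, *) semiring, squaring the one-step transition matrix of a two-state Markov chain gives its two-step transition probabilities (a throwaway sketch; the numbers are made up):

```haskell
import Data.List (transpose)

-- Ordinary "row times column" matrix product; any Num instance
-- supplies the semiring.
matmul :: Num a => [[a]] -> [[a]] -> [[a]]
matmul m n = [[sum (zipWith (*) row col) | col <- transpose n] | row <- m]

-- One-step transition probabilities: rows index the current state.
step :: [[Double]]
step = [[0.9, 0.1],
        [0.5, 0.5]]

-- Two time steps is just the matrix squared.
twoStep :: [[Double]]
twoStep = matmul step step
```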

Lifting this notion to types, we can build a type exactly representing all the possible paths from state j to state i.

Concretely, consider the following humorously bleak transition system: You are going between home and work. Every 1 hour period you can make a choice to do a home activity, commute, or work. There are different options of activities at each.

data Commute = Drive
data Home = Sleep | Eat
data Work = TPSReport | Bitch | Moan

This is described by the following transition diagram

The transitions are described by the following matrix type:

type T = '[ '[Home , Commute ], '[Commute , Work ]]

What is the data type that describes all possible 4-hour days? You'll find the appropriate data types in the following matrix.

type FourHour = MMMult T (MMMult T (MMMult T T))

Now, time to come clean. I don’t think this is necessarily the best way to go about this problem. There are alternative ways of representing it.

Here are two data types that describe an indefinite number of transition steps.

data HomeChoice = StayHome Home HomeChoice | GoWork Commute WorkChoice
data WorkChoice = StayWork Work WorkChoice | GoHome Commute HomeChoice

Another style would hold the current state as a type parameter in the type using a GADT.

data Path state where
    StayWork :: Work -> Path Work -> Path Work
    CommuteHome :: Commute -> Path Home -> Path Work
    StayHome :: Home -> Path Home -> Path Home
    CommuteWork :: Commute -> Path Work -> Path Home

We could construct types that are to the above types as `Vec n` is to `[]` by including an explicit step size parameter.

Still, food for thought.

The reason I was even thinking about this is because we can lift the above construction to perform a linear algebra of vector spaces. And I mean the spaces, not the vectors themselves. This is a confusing point.

Vector spaces also have two natural operations on them that act like addition and multiplication: the direct sum and the Kronecker product. These operations do form a semiring, although again not on the nose.

This is connected to the above algebra of types picture by considering the index types of these vector spaces. The simplest way to denote this in Haskell is using the free vector space construction as shown in this Dan Piponi post. The Kronecker product makes tuples of the indices and the direct sum has an index that is the Either of the original index types.

type Vec b r = [(b, r)]

-- Example 2D vector space type
type V2D = Vec Bool Double
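Under this free vector representation, the Kronecker product and direct sum are one-liners (my own sketch, with the index-type behavior spelled out directly):

```haskell
-- Free vector space: a list of (basis index, coefficient) pairs.
type Vec b r = [(b, r)]

-- Kronecker product: basis indices pair up, coefficients multiply.
kron :: Num r => Vec a r -> Vec b r -> Vec (a, b) r
kron v w = [((a, b), x * y) | (a, x) <- v, (b, y) <- w]

-- Direct sum: basis indices get tagged with Either, vectors concatenate.
dsum :: Vec a r -> Vec b r -> Vec (Either a b) r
dsum v w = [(Left a, x) | (a, x) <- v] ++ [(Right b, y) | (b, y) <- w]
```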

This is by far not the only way to go about it. We can also consider using the Compose-Product semiring on functors (Compose is Kron, Product is DSum) to get a more index-free kind of feel and work with dense vectors.
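A sketch of that functor-level version (the `Kron`/`DSum` synonyms and example vectors are mine): `Compose` multiplies the number of slots and `Product` adds them, which you can check by counting with `Foldable`.

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}
import Data.Functor.Compose (Compose(..))
import Data.Functor.Product (Product(..))

data V2 a = V2 a a deriving (Functor, Foldable)
data V3 a = V3 a a a deriving (Functor, Foldable)

type Kron = Compose  -- Kron V2 V3 has 2 * 3 = 6 slots
type DSum = Product  -- DSum V2 V3 has 2 + 3 = 5 slots

kronExample :: Kron V2 V3 Int
kronExample = Compose (V2 (V3 1 2 3) (V3 4 5 6))

dsumExample :: DSum V2 V3 Int
dsumExample = Pair (V2 1 2) (V3 3 4 5)
```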

Going down this road (plus a couple layers of mathematical sophistication) leads to a set of concepts known as 2Vect. Dan Roberts and James Vicary produced a Mathematica package for 2Vect which is basically incomprehensible to me. It seems to me that typed functional programming is a more appropriate venue for this kind of pursuit, given how well modeled by category theory it can be. These mathematical ideas are applicable to describing anyonic vector spaces. See my previous post below. It is not a coincidence that the `Path` data type above is so similar to the `FibTree` data type. The `root` type variable takes the place of the work/home state, and the tuple structure takes the place of a Vec-like size parameter `n`.

More on this to come, probably, as I figure out how to explain it cleanly.

Edit: WordPress, your weird formatting is *killing* me.

Edit: Hoo Boy. This is why we write blog posts. Some relevant material was pointed out to me that I was not aware of. Thanks @DrEigenbastard.

https://link.springer.com/chapter/10.1007/978-3-319-19797-5_6

http://blog.sigfpe.com/2010/08/constraining-types-with-regular.html

http://blog.sigfpe.com/2010/08/divided-differences-and-tomography-of.html


]]>The post Relational Algebra with Fancy Types appeared first on Hey There Buddo!.

]]>`type Rel a b = [(a,b)]`. In this post we're going to look at these ideas from a slightly different angle. Instead of encoding relations using value level sets, we'll encode relations in the type system. The Algebra of Programming Agda repo and the papers quoted therein are very relevant, so if you're comfortable wading into those waters, give them a look. You can find my repo for fiddling here.
At this point, depending on what you’ve seen before, you’re either thinking “Yeah, sure. That’s a thing.” or you’re thinking “How and why the hell would you do such a ridiculous thing.”

Most of this post will be about how, so let’s address why first:

- Examining relations in this style illuminates some constructions that appear around the Haskell ecosystem, particularly some peculiar fellows in the profunctor package.
- A more syntactic approach to relations allows discussion of larger/infinite domains. The finite enumerations of the previous post are nice for simplicity, but it seems you can’t get far that way.
- Mostly because we can – It’s a fun game. Maybe a useful one? TBD.

With that out of the way, let’s go on to how.

We will be using some Haskell extensions in this post, at the very least GADTs and DataKinds. For an introduction to GADTs and DataKinds, check out this blog post. DataKinds is an extension that reflects every data constructor of data types to a type constructor. Because there are values `True` and `False`, there are corresponding types created, `'True` and `'False`. GADTs is an extension of the type definition mechanism of standard Haskell. They allow you to declare refined types for the constructors of your data, and they infer those refined types when you pattern match out of the data as well, such that the whole process is kind of information preserving.

We will use the GADT extension to define relational datatypes with the kind

a -> b -> *

That way it has a slot `a` for the “input” and `b` for the “output” of the relation. What goes in these type slots will be DataKind-lifted types like `'True`, not ordinary Haskell types like `Int`. This is a divergence from the uses of similar kinds you see in Category, Profunctor, or Arrow. We’re doing a more typelevel flavored thing than you’ll see in those libraries. What we’re doing is clearly a close brother of the singleton approach to dependently typed programming in Haskell.

Some examples are in order for what I mean. Here are two simple boolean functions, `not` and `and`, defined as ordinary Haskell functions, and their equivalent GADT relation data types.

not True = False
not False = True

data Not a b where
    NotTF :: Not 'True 'False
    NotFT :: Not 'False 'True

and True True = True
and False _ = False
and _ False = False

data And a b where
    AndTT :: And '( 'True, 'True) 'True
    AndFU :: And '( 'False, a) 'False
    AndUF :: And '( a, 'False) 'False

You can already start to see how mechanical the correspondence is between the ordinary function definition and our new fancy relation type. A function is often defined via cases. Each case corresponds to a new constructor of the relation, and each pattern that occurs in that case is the pattern that appears in the GADT. Multiple arguments to the relations are encoded by uncurrying everything by default.

Any function calls that occur on the right hand side of a function definition become fields in the constructor of our relation. This includes recursive calls and external function calls. Here are some examples with a Peano style natural number data type.

data Nat = S Nat | Z

plus Z x = x
plus (S x) y = S (plus x y)

data Plus a b where
    PZ :: Plus '( 'Z, a) a
    PS :: Plus '( a, b) c -> Plus '( 'S a, b) ('S c)

We can also define things that aren’t functions. Relations are a larger class of things than functions, which is part of their utility. Here is a “less than or equal” relation `LTE`.

data LTE a b where
    LTERefl :: LTE n n
    LTESucc :: LTE n m -> LTE n ('S m)

You can show that elements are in a particular relation by finding a value of that relational type. Is `([4,7], 11)` in the relation `Plus`? Yes, and I can show it with the value `PS (PS (PS (PS PZ))) :: Plus (4,7) 11`. This is very much the Curry-Howard correspondence. The type `R a b` corresponds to the proposition/question: is `(a, b)` in the relation `R`?
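The `4`, `7`, and `11` there are informal shorthand for Peano numerals. Spelled out, the type checker verifies the arithmetic for us. Here is a smaller self-contained sketch proving 2 + 1 = 3, assuming the `'S c` result index on `PS` that mirrors `plus (S x) y = S (plus x y)`:

```haskell
{-# LANGUAGE DataKinds, GADTs #-}
data Nat = S Nat | Z

data Plus a b where
  PZ :: Plus '( 'Z, a) a
  PS :: Plus '( a, b) c -> Plus '( 'S a, b) ('S c)

-- One PS per successor peeled off the left argument.
twoPlusOne :: Plus '( 'S ('S 'Z), 'S 'Z) ('S ('S ('S 'Z)))
twoPlusOne = PS (PS PZ)
```

If you get the sum wrong in the type signature, the definition simply won't compile.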

While you need to build some primitive relations using new data type definitions, others can be built using relational combinators. If you avoid defining too many primitive relations like the above and build them out of combinators, you expose a rich high level manipulation algebra. Otherwise you are stuck in the pattern matching dreck. We are traveling down the same road we did in the previous post, so look there for less confusing explanations of the relational underpinnings of these constructions, or better yet some of the references below.

Higher order relational operators take in type parameters of kind

a -> b -> *

and produce new types of a similar kind. The types appearing in these combinators form the AST of our relational algebra language.

The first two combinators of interest are the composition operator and the identity relation. A pair `(a, c)` is in a composition of relations if there exists an intermediate `b` such that `(a, b)` is in one relation and `(b, c)` is in the other. The fairly direct translation of this into a type is

{- rcompose :: Rel b c -> Rel a b -> Rel a c -}
data RCompose k1 k2 a c where
    RCompose :: k1 b c -> k2 a b -> RCompose k1 k2 a c

type k <<< k' = RCompose k k'
type k >>> k' = RCompose k' k

The type of the composition is the same as that of Profunctor composition found in the profunctors package.

type RCompose = Procompose

Alongside a composition operator, it is a knee-jerk reflex to look for an identity relation, and we do have one.

data Id a b where
    Refl :: Id a a

-- monomorphic identity. Leave this out?
data IdBool a b where
    ReflTrue :: IdBool 'True 'True
    ReflFalse :: IdBool 'False 'False

This is also a familiar friend. The identity relation in this language is the Equality type.

-- the identity relation is the same as the Equality type
newtype Id a b = Id (a :~: b)

We can build an algebra for handling product and sum types by defining the appropriate relational combinators. These are very similar to the combinators in the Control.Arrow package.

-- Product types
data Fan k k' a b where
    Fan :: k a b -> k' a c -> Fan k k' a '(b, c)

type k &&& k' = Fan k k'

data Fst a b where
    Prj1 :: Fst '(a, b) a

data Snd a b where
    Prj2 :: Snd '(a, b) b

-- Sum types
data Split k k' a b where
    CaseLeft :: k a c -> Split k k' ('Left a) c
    CaseRight :: k' b c -> Split k k' ('Right b) c

type k ||| k' = Split k k'

data Inj1 a b where
    Inj1 :: Inj1 a ('Left a)

data Inj2 a b where
    Inj2 :: Inj2 a ('Right a)

-- some derived combinators
type Par f g = Fan (f <<< Fst) (g <<< Snd)
type Dup = Fan Id Id
type Swap = Fan Snd Fst

The converse of relations is a very interesting operation and is the point where relations really differ from functions. Inverting a function is tough. Taking the converse of a relation always works. This data type has no analog in profunctor to my knowledge and probably shouldn’t.

data RConverse k a b where
    RConverse :: k a b -> RConverse k b a

-- Shorter synonym
type RCon = RConverse

Relations do not have a notion of currying. The closest thing they have is

data Trans k a b where
    Trans :: k '(a, b) c -> Trans k a '(b, c)

For my purposes, lattices are descriptions of sets that trade away descriptive power for efficiency. So most operations you’d perform on sets have an analog in the lattice structure, but it isn’t a perfect matching and you’re forced into approximation. It is nice to have the way you perform these approximations be principled, so that you can know at the end of your analysis whether you’ve actually shown anything or not about the actual sets in question.

The top relation holds all values. This is represented by making no conditions on the type parameters. They are completely phantom.

newtype Top a b = Top ()

Bottom is a relation with no inhabitants.

newtype Bottom a b = Bottom Void

The meet is basically the intersection of the relations, the join is basically the union.

newtype RMeet k k' a b = RMeet (k a b, k' a b)
type k /\ k' = RMeet k k'

newtype RJoin k k' a b = RJoin (Either (k a b) (k' a b))
type k \/ k' = RJoin k k'

A lattice has an order on it. This order is given by relational inclusion. This is the same as the `:->` combinator found in the profunctors package.

type (:->) p q = forall a b. p a b -> q a b
type RSub p q = p :-> q

Relational equality can be written as back and forth inclusion, a natural isomorphism between the relations. There is also an interesting indirect form.

data REq k k' = REq {to' :: k :-> k', from' :: k' :-> k }

If we consider the equation `(r <<< p) :-> q` with `p` and `q` given, in what sense is there a solution for `r`? By analogy, this looks rather like `r*p = q`, so we’re asking a kind of division question. Well, unfortunately, this equation may not necessarily have a solution (neither do linear algebraic equations for that matter), but we can ask for the best under-approximation instead. This is the operation of relational division. It also appears in the profunctor package as the right Kan extension. You’ll also find the universal property of the right division under the names `curryRan` and `uncurryRan` in that module.

newtype Ran p q a b = Ran { runRan :: forall x. p x a -> q x b }
type RDiv = Ran

One formulation of Galois connections can be found in the adjunctions file. Galois Connections are very slick, but I’m running out of steam, so let’s leave that one for another day.

We can prove many properties about these relational operations. Here is a random smattering that we showed using QuickCheck last time.

prop_ridleft :: (k <<< Id) :-> k
prop_ridleft (RCompose k Refl) = k

prop_ridright :: (Id <<< k) :-> k
prop_ridright (RCompose Refl k) = k

prop_meet :: p /\ q :-> p
prop_meet (RMeet (p, q)) = p

prop_join :: p :-> p \/ q
prop_join p = RJoin (Left p)

meet_assoc :: RMeet k (RMeet k' k'') a b -> RMeet (RMeet k k') k'' a b
meet_assoc (RMeet (k, (RMeet (k', k'')))) = RMeet (RMeet (k, k'), k'')

prop_top :: k :-> Top
prop_top _ = Top ()

prop_bottom :: Bottom :-> k
prop_bottom (Bottom x) = absurd x

bottom_compose :: REq (k <<< Bottom) Bottom
bottom_compose = REq (\(RCompose k (Bottom b)) -> absurd b) prop_bottom

data Iso a b = Iso {to :: a -> b, from :: b -> a}
type a <-> b = Iso a b

meet_universal :: (p ::-> RMeet k k') <-> (p ::-> k, p ::-> k')
meet_universal = Iso to from where
    to (RSub f) = (RSub $ \p -> case f p of RMeet (k, k') -> k,
                   RSub $ \p -> case f p of RMeet (k, k') -> k')
    from (RSub f, RSub g) = RSub $ \p -> RMeet (f p, g p)

prop_con :: RCon (RCon k) :-> k
prop_con (RConverse (RConverse k)) = k

- Recursion Schemes – Recursion schemes are a methodology to talk about recursion in a point free style and where the rubber meets the road in the algebra of programming. Here is an excellent series of articles about them. Here is a sample of how I think they go:

data MapMaybe k a b where
    MapJust :: k a b -> MapMaybe k ('Just a) ('Just b)
    MapNothing :: MapMaybe k 'Nothing 'Nothing

data Cata map k a b where
    Cata :: k fa a -> map (Cata map k) x fa -> Cata map k ('Fix x) a

- Higher Order Relations?
- Examples of use. Check out the examples folder in the AoP Agda repo. These are probably translatable into Haskell.
- Interfacing with Singletons. Singletonized functions are a specialized case of relations. Something like?

newtype SFun a b = SFun (Sing a -> Sing b)

-- Check out "term rewriting and all that"
-- This is also the reflection without remorse data type
-- TSequence http://okmij.org/ftp/Haskell/zseq.pdf
-- this is also a free instance of Category
data Star k a b where
    Done :: Star k a a
    Roll :: k b c -> Star k a b -> Star k a c

data KPlus k a b where
    PDone :: k a b -> KPlus k a b
    PRoll :: k b c -> KPlus k a b -> KPlus k a c

type SymClos k a b = RJoin k (RCon k) a b
type RefClos k a b = RJoin k Id a b

{- n-fold composition -}
-- similar to Fin. This is to Star as Vec n is to list, and also relates
-- to reflection without remorse. Kind of interesting
data NFold n k a b where
    One :: k a b -> NFold ('S n) k a b
    More :: k b c -> NFold n k a b -> NFold ('S n) k a b

- Program Design by Calculation – JN Oliveira
- Bird and de Moor
- Term Rewriting and all that
- Software Abstractions
- https://softwarefoundations.cis.upenn.edu/lf-current/Rel.html
- https://softwarefoundations.cis.upenn.edu/lf-current/Imp.html#lab335
- https://github.com/scmu/aopa


]]>The post Notes on Getting Started in OCaml appeared first on Hey There Buddo!.

]]>https://ocaml.org/docs/install.html

opam is the package manager. Follow the instructions to install it and get your environment variables set up. It’ll tell you some extra commands you have to run to do so. You use it to install packages via `opam install packagename`. You can also use it to switch between different ocaml compiler versions via a command like `opam switch 4.08.1`.

Dune is a build tool. You can place a small config file called `dune` in your folder and it can figure out how to appropriately call the compiler. Dune is in flux, so check out the documentation. What I write here may be wrong.

https://dune.readthedocs.io/en/stable/

Here’s an example execution. Note that even though the file is called `main.ml` in this example, you call build with `main.exe`. And exec requires the `./` for some reason. Weird.

dune init exe hello
dune exec ./main.exe
dune build main.exe

Here’s a dune file with some junk in it. You make executables with `executable` blocks. You include a list of the files (without the .ml suffix) required by the executable in the modules line. You list libraries needed in the libraries line.

(executable
 (name main)
 (modules ("main"))
 (libraries core z3 owl owl-plplot))

(executable
 (name lambda)
 (modules ("lambda"))
 (libraries core))

You want to also install merlin: `opam install merlin`. Merlin is a very slick IDE tool with autocomplete and type information. dune will set up a .merlin file for you.

The ReasonML plugin is good for vscode. Search for it on the marketplace. It is the one to use for OCaml as well. ReasonML is a syntactic facelift intended for the web, btw. I don’t particularly recommend it to start. There are also emacs and vim modes if that is what you’re into.

The enhanced repl is called utop. Install it with `opam install utop`. Basic repl usage: every line has to end with `;;`. That’s how you get stuff to be run. Commands start with `#`. For example `#quit;;` is how you end the session. `#use "myfile.ml";;` will load a file you’ve made. Sometimes when you start you need to run `#use "topfind";;`, which loads a package finder. You can load libraries via the require command, like `#require "Core";;`. `#help;;` for more.

With any new language I like to check out Learn X from Y if one is available.

https://learnxinyminutes.com/docs/ocaml/

Here are some shortish cheat sheets with a quick overview of syntax

https://github.com/alhassy/OCamlCheatSheet

https://ocaml.org/docs/cheat_sheets.html

This is a phenomenal online book built for a Cornell course: https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/

Real World OCaml is also quite good but denser. Very useful as a reference for usage of Core and other important libraries.

The reference manual is also surprisingly readable https://caml.inria.fr/pub/docs/manual-ocaml/ . The first 100 or so pages are a straightforward introduction to the language.

https://github.com/janestreet/learn-ocaml-workshop Pretty basic workshop. Could be useful getting you up and running though.

Core – a standard library replacement. Becoming increasingly common https://github.com/janestreet/core It is quite a bit more confusing for a newcomer than the standard library IMO. And the way they have formatted their docs is awful.

Owl – a numerical library. Similar to Numpy in many ways. https://ocaml.xyz/ These tutorials are clutch https://github.com/owlbarn/owl/wiki

Bap – Binary Analysis Platform. Neato stuff

Lwt – https://github.com/ocsigen/lwt asynchronous programming

Graphics – gives you some toy and not toy stuff. Lets you draw lines and circles and get keyboard events in a simple way.

OCamlGraph – a graph library

Jupyter Notebook – Kind of neat. I’ve got one working on binder. Check it out here. https://github.com/philzook58/ocaml_binder

Menhir and OCamlLex. Useful for lexer and parser generators. Check out the ocaml book for more

fmt – for pretty printing

https://discuss.ocaml.org/ – The discourse. Friendly people. They don’t bite. Ask questions.

https://github.com/ocaml-community/awesome-ocaml Awesome-Ocaml list. A huge dump of interesting libraries and resources.

An excerpt of cool stuff:

- Coq – Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.
- Why3 – Why3 is a platform for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions.
- Alt-Ergo – Alt-Ergo is an open-source SMT solver dedicated to the proof of mathematical formulas generated in the context of program verification.

http://ocamlverse.github.io/ – A pretty good set of beginner advice and articles. Seems like I have a lot of accidental overlap. Would’ve been nice to find earlier

https://www.cl.cam.ac.uk/teaching/1617/L28/materials.html – advanced functional programming course. Interesting material.

TAPL – https://www.cis.upenn.edu/~bcpierce/tapl/ Has implementations in OCaml of different lambda calculi. Good book too.

It is not uncommon to use a preprocessor in OCaml for some useful features. There is monad syntax, list comprehensions, deriving and more available as ppx extensions.

https://whitequark.org/blog/2014/04/16/a-guide-to-extension-points-in-ocaml/ a guide to ppx preprocessors. camlp4 and camlp5 are both preprocessors too

https://tarides.com/blog/2019-05-09-an-introduction-to-ocaml-ppx-ecosystem.html

https://blog.janestreet.com/archive/ The jane street blog. They are very prominent users of ocaml.

https://opensource.janestreet.com/standards/ Jane Street style guide

Oleg Kiselyov half works in Haskell, half in OCaml, so that’s cool.

https://arxiv.org/pdf/1905.06544.pdf oleg effects without monads

Oleg metaocaml book. MetaOCaml is super cool. http://okmij.org/ftp/ML/MetaOCaml.html And the switch functionality of opam makes it easy to install!

Oleg tagless final http://okmij.org/ftp/tagless-final/index.html

https://github.com/ocamllabs/higher

Cohttp, LWT and Async

https://github.com/backtracking/ocamlgraph ocaml graphs

https://mirage.io/ Mirage os. What the hell is this?

https://github.com/ocamllabs/fomega

https://github.com/janestreet/hardcaml

ppx_let – monadic let bindings

Some of the awesome deriving capabilities are given by ppx_jane. Sexp seems to be a really good one. It's where generic printing is?

`dune build lambda.bc.js`

will make a javascript file. That’s pretty cool. Uses js_of_ocaml. The js_of_ocaml docs are intimidating https://ocsigen.org/js_of_ocaml/dev/manual/overview

http://ocsigen.org/js_of_ocaml/dev/api/

Note you need to install both the js_of_ocaml-compiler AND the library js_of_ocaml and also the js_of_ocaml-ppx.

(executable
 (name jsboy)
 (libraries js_of_ocaml)
 (preprocess (pps js_of_ocaml-ppx)))

open Js_of_ocaml

let _ =
  Js.export "myMathLib"
    (object%js
       method add x y = x +. y
       method abs x = abs_float x
       val zero = 0.
     end)

Go digging through your _build folder and you can find a completely mangled, incomprehensible file `jsboy.bc.js`. But you can indeed import and use it like so.

var mystuff = require("./jsboy.bc.js").myMathLib;
console.log(mystuff);
console.log(mystuff.add(1, 2));

`node test.js`

`dune build --profile release lambda.bc.js`

Putting it in the release profile makes an insane difference in build size: 10MB -> 100KB.

There is also BuckleScript for compiling to JavaScript. It outputs readable JavaScript. A fork of an older compiler version?

Check out J.T. Paach’s snippets. Helpful

Dune:

https://gist.github.com/jtpaasch/ce364f62e283d654f8316922ceeb96db

Z3 OCaml bindings:

https://gist.github.com/jtpaasch/3a93a9e1bcf9cae86e9e7f7d3484734b

OCaml's new monadic let syntax

https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html

`#require "ppx_jane";;` in utop in order to import a thing using ppx

An argument could be made for working from a Docker container.

ocamllex and menhir are weird DSLs that generate lexers and parsers. They are also oddly stateful.

Took a bit of fiddling to figure out how to get dune to build them:

(executable
 (name lisp)
 (modules ("lisp" "parse_lisp" "lex_lisp" "ast"))
 (preprocess (pps ppx_jane))
 (libraries core))
(ocamllex (modules lex_lisp))
(menhir (modules parse_lisp))

Otherwise a pretty straightforward encoding.

experience report: using F-omega as a teaching language

Because they aren’t hidden behind a monadic interface (for better or for worse), OCaml has a lot more of an imperative feel. You could program in a subset of the language and have it not feel all that different from Java or Python or something. There are for loops and while loops, objects and classes, and mutable variables if you so choose. I feel like the community is trying to shy away from these features for most purposes however, sticking to the functional core.

However, it does let you do for loops and has an interesting community and different areas of strength.

Maybe more importantly it lets you access a new set of literature and books. Slightly different but familiar ideas.

I think Core is bewildering for a newcomer.

lex_lisp.mll : simplistic usage of ocamllex and menhir

{
(* type token = RightParen | LeftParen | Id of string *)
open Lexing
open Parse_lisp

exception SyntaxError of string

let next_line lexbuf =
  let pos = lexbuf.lex_curr_p in
  lexbuf.lex_curr_p <-
    { pos with pos_bol = lexbuf.lex_curr_pos;
               pos_lnum = pos.pos_lnum + 1 }
}

let white = [' ' '\t']+
let newline = '\r' | '\n' | "\r\n"
let id = ['a'-'z' 'A'-'Z' '_'] ['a'-'z' 'A'-'Z' '0'-'9' '_']*

rule read =
  parse
  | white   { read lexbuf }
  | newline { next_line lexbuf; read lexbuf }
  | '('     { LEFTPAREN }
  | ')'     { RIGHTPAREN }
  | id      { ID (Lexing.lexeme lexbuf) }
  | eof     { EOF }

parse_lisp.mly

%token <string> ID
%token RIGHTPAREN
%token LEFTPAREN
%token EOF
%start <Ast.tree list> prog
%%

prog:
  | s = sexpr; p = prog { s :: p }
  | EOF { [] }

sexpr:
  | LEFTPAREN; l = idlist; RIGHTPAREN { Ast.Node l }
  | s = ID { Ast.Atom s }

(* inefficient because right recursive. There are things in menhir to make this better? *)
idlist:
  | (* empty *) { [] }
  | x = sexpr; l = idlist { x :: l }

Doinking around with some graphics

open Core

(* Printf.printf "%b\n" status.keypressed *)
let loop : Graphics.status -> unit =
  fun _ ->
    Graphics.draw_circle 200 200 50;
    Graphics.fill_rect 400 400 50 50
    (* Graphics.synchronize () *)

let main () =
  Graphics.open_graph "";
  Graphics.set_window_title "My Fun Boy";
  (* Graphics.auto_synchronize true; *)
  Graphics.set_color Graphics.black;
  Graphics.draw_circle 200 200 50;
  List.iter ~f:(fun i -> Graphics.fill_circle (200 + 20 * i) 200 50) [1;2;3;4];
  (* Graphics.sound 500 5000; *)
  let img = Images.load "fish.jpg" [] in
  Images.save "notfish.jpg" (Some Images.Jpeg) [] img;
  Graphic_image.draw_image img 0 0;
  Graphics.loop_at_exit [Graphics.Poll; Graphics.Key_pressed] loop
  (* let evt = Graphics.wait_next_event [Graphics.Key_pressed] in () *)
  (* let i = create_image 640 640 *)
  (* resize_window 640 640 *)

let () = main ()

A couple of Advent of Code 2018 solutions:

open Core_kernel

(** if I want to try pulling input from the web:
    https://adventofcode.com/2018/day/1/input *)
let r = In_channel.read_lines "puzz.txt"

let main () =
  Printf.printf "Hey\n";
  let puzz = In_channel.read_lines "puzz.txt" |> List.map ~f:int_of_string in
  (* List.iter puzz ~f:(fun x -> Printf.printf "%d " x); *)
  let res = List.fold puzz ~init:0 ~f:(+) in
  Printf.printf "Sum: %d\n" res

let () = main ()

open Core_kernel

(** Obviously the way I'm doing it is not that efficient, nor all that clean really. *)
(* let exists23 str =
     let charset = String.to_list str |> Set.of_list (module Char) |> Set.to_list in
     let counts = List.map ~f:(fun c -> String.count str ~f:(fun y -> y = c)) charset in
     (List.exists counts ~f:(fun i -> i = 3), List.exists counts ~f:(fun i -> i = 2)) *)

let exists23 str =
  let charset = String.to_list str |> Set.of_list (module Char) in
  let counts = Set.map (module Int) ~f:(fun c -> String.count str ~f:((=) c)) charset in
  (Set.mem counts 2, Set.mem counts 3)

let main () =
  Printf.printf "Hey\n";
  let puzz = In_channel.read_lines "puzz2.txt" in
  let res = List.map ~f:exists23 puzz in
  let (c2, c3) =
    List.fold res ~init:(0, 0)
      ~f:(fun (x, y) (a, b) ->
        ((if a then x + 1 else x), (if b then y + 1 else y)))
  in
  Printf.printf "Prod: %d\n" (c2 * c3)

let () = main ()

A little Owl usage

open Core_kernel
open Owl
module Plot = Owl_plplot.Plot

let () = print_endline "Hello, World!"

let greeting name = Printf.printf "Hello, %s%i \n%!" name 7
(* let x : int = 7 *)
let () = greeting "fred"

(* let () = match In_channel.input_line In_channel.stdin with
     | None -> ()
     | Some x -> print_endline x

   let () = In_channel.with_file "dune" ~f:(fun t ->
     match In_channel.input_line t with
     | Some x -> print_endline x
     | None -> ()) *)

(** type 'a mygadt =
    | Myint : int mygadt
    | Mybool : bool mygadt *)

let kmat i j =
  if i = j then -2.0
  else if abs (i - j) = 1 then 1.0
  else 0.0

let main () =
  Mat.print (Mat.vector 10);
  Mat.print (Mat.uniform 5 5);
  Mat.print (Mat.zeros 5 5);
  let h = Owl_plplot.Plot.create "plot_003.png" in
  Plot.set_foreground_color h 0 0 0;
  Plot.set_background_color h 255 255 255;
  Plot.set_title h "Function: f(x) = sine x / x";
  Plot.set_xlabel h "x-axis";
  Plot.set_ylabel h "y-axis";
  Plot.set_font_size h 8.;
  Plot.set_pen_size h 3.;
  (* Plot.plot_fun ~h f 1. 15.; *)
  let x = Mat.linspace 0.0 1.0 20 in
  (* let f x = Maths.sin x /. x in Plot.plot_fun ~h f 1. 15.; *)
  let y = Mat.ones 1 20 in
  Mat.print (Mat.ones 1 20);
  Mat.print (Mat.ones 10 1);
  Mat.print y;
  Mat.print x;
  (* y.{0,10} <- 0.0; *)
  (* Mat.set y 10 1 0.0; *)
  Plot.plot ~h x (Mat.vector_ones 20);
  (* Owl_plplot.Plot.plot ~h x (Mat.vector_ones 20); *)
  Owl_plplot.Plot.output h;
  (* let q = Arr.create [|2;2;2|] 1.8 in *)
  let k = Mat.init_2d 10 10 kmat in
  Mat.print k;
  Linalg.D.inv k |> Mat.print;
  Plot.plot ~h x (Mat.row (Linalg.D.inv k) 5);
  Plot.plot ~h x (Mat.row (Linalg.D.inv k) 7);
  let r = Mat.zeros 1 10 in
  Mat.set r 0 0 (-2.0);
  Mat.set r 0 1 1.0;
  let k2 = Mat.kron k k in
  let g2 = Linalg.D.inv k2 in
  let s = Mat.row g2 10 in
  let phi = Mat.reshape s [|10;10|] in
  Plot.plot ~h x (Mat.row phi 7);
  (** not convinced this is actually doing what I want *)
  let k' = Mat.toeplitz r in (* also works. more cryptic though *)
  Mat.print k'

let () = main ()

The post Notes on Getting Started in OCaml appeared first on Hey There Buddo!.

]]>The post The Classical Coulomb Gas as a Mixed Integer Quadratic Program appeared first on Hey There Buddo!.

]]>We ordinarily consider electric charge to be a continuum, but it isn’t. It comes in chunks of the electron charge. Historically, people didn’t even know that for quite a while. It is usually a reasonable approximation for most purposes to consider electric charge to be continuous.

If you consider a network of capacitors cooled to the level that there is not enough thermal energy to borrow to get an electron to jump, the charges on the capacitors will be observably discretized. With a sufficiently slow cooling to this state, the charges should arrange themselves into the lowest energy state.

The Coulomb gas model is also of interest due to its connections to the XY model, which I’ve taken a stab at with mixed integer programming before. The Coulomb gas models the energy of vortices in that model. I think the connection between the models actually requires a statistical or quantum mechanical context though, whereas we’ve been looking at the classical energy minimization.

We can formulate the classical Coulomb gas problem very straightforwardly as a mixed integer quadratic program. We can easily include an externally applied field and a charge conservation constraint if we so desire within the framework.

We write this down in python using the cvxpy library, which has a built in free MIQP solver, albeit not a very good one. Commercial solvers are probably quite a bit better.

import cvxpy as cvx
import numpy as np

# grid size
N = 5
# charge variables
q = cvx.Variable(N*N, integer=True)

# build our grid
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, y, indexing='ij')
x1 = X.reshape(N, N, 1, 1)
y1 = Y.reshape(N, N, 1, 1)
x2 = X.reshape(1, 1, N, N)
y2 = Y.reshape(1, 1, N, N)

eps = 0.1 / N  # regularization factor for self energy. convenience mostly
V = 1. / ((x1-x2)**2 + (y1-y2)**2 + eps**2)**(1/2)
V = V.reshape((N*N, N*N))

U_external = 100 * Y.flatten()  # a constant electric field in the Y direction
energy = cvx.quad_form(q, V) + U_external*q

# charge conservation constraint
prob = cvx.Problem(cvx.Minimize(energy), [cvx.sum(q) == 0])
res = prob.solve(verbose=True)
print(q.value.reshape((N, N)))

# plotting junk
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, q.value.reshape((N, N)))
plt.show()

The results seem reasonable. It makes sense for charge to go in the direction of the electric field. Going to the corners makes sense because then like charges are far apart. So this might be working. Who knows.

Prof Vanderbei shows how you can embed an FFT to enable making statements about both the time and frequency domain while keeping the problem sparse. The low time/memory complexity of the FFT is reflected in the sparsity of the resulting linear program.

https://vanderbei.princeton.edu/tex/ffOpt/ffOptMPCrev4.pdf

Here’s a sketch of what this might look like. Curiously, looking at the actual number of nonzeros in the problem matrices, there are way too many. I am not sure what is going on. Something is not performing as I expect in the following code.

import cvxpy as cvx
import numpy as np
import scipy.fftpack  # fft, ifft

def swizzle(x, y):
    assert(x.size == y.size)
    N = x.size
    s = np.exp(-2.j * np.pi * np.arange(N) / N)
    # print(s)
    # ret = cvx.hstack([x + s*y, x - s*y])
    # print(ret.shape)
    return cvx.hstack([x - s*y, x + s*y])

def fft(x):
    N = x.size
    # assert(2**int(log2(N)) == N)  # power of 2
    if N == 1:
        return x, []
    else:
        y = cvx.reshape(x, (N//2, 2))
        c = []
        even, ce = fft(y[:, 0])
        c += ce
        odd, co = fft(y[:, 1])
        c += co
        z = cvx.Variable(N, complex=True)
        c += [z == swizzle(even, odd)]
        return z, c

N = 256
x = cvx.Variable(N, complex=True)
z, c = fft(x)
v = np.zeros(N)  # np.ones(N) # np.random.rand(N)
v[0] = 1
c += [x == v]
prob = cvx.Problem(cvx.Minimize(1), c)
# print(prob.get_problem_data(cvx.OSQP))
res = prob.solve(verbose=True)
# print(x.value)
print(z.value)
print(scipy.fftpack.fft(v))
print(scipy.fftpack.fft(v) - z.value)

The equivalent dense DFT:

x = cvx.Variable(N, complex=True)
fred = cvx.Variable(N, complex=True)
c = [fred == np.exp(-2.j * np.pi * np.arange(N).reshape((N,1)) * np.arange(N).reshape((1,N)) / N) * x]
prob = cvx.Problem(cvx.Minimize(1), c)
print(prob.get_problem_data(cvx.OSQP))

It would be possible to use a frequency domain solution of the interparticle energy rather than the explicit coulomb law form. Hypothetically this might increase the sparsity of the problem.
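To see why a frequency domain form could help, here is a small numpy sketch of mine (not from the post, and not an optimization problem): on a periodic grid the translation-invariant interaction matrix is circulant, so the quadratic form q^T V q can be evaluated through FFTs of the kernel and the charge vector instead of through a dense matrix.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
q = rng.integers(-2, 3, size=N).astype(float)  # integer charges on a 1D periodic grid

# periodic interaction kernel: distance around the ring, regularized at zero separation
d = np.minimum(np.arange(N), N - np.arange(N)).astype(float)
eps = 0.1
kernel = 1.0 / np.sqrt(d**2 + eps**2)

# dense circulant interaction matrix built from the kernel
V = np.array([[kernel[(i - j) % N] for j in range(N)] for i in range(N)])
direct = q @ V @ q

# same quadratic form via the FFT: V q is a circular convolution with the kernel
Vq = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(q)))
fast = q @ Vq

print(direct, fast)  # the two agree
```

The circulant structure is exactly what an embedded FFT would expose to the solver as sparsity.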

It seems very possible to me to embed a fast multipole method or Barnes-Hut approximation within a MIQP in a similar manner. Introducing explicit charge summary variables for blocks would create a sparse version of the interaction matrix. So that’s fun.
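As a numerical sanity check on that idea (my sketch, not an MIQP formulation): two well-separated blocks of charges interact, to good approximation, only through their total charges at the block centers, which is exactly the quantity a block summary variable would carry.

```python
import numpy as np

rng = np.random.default_rng(1)

# two well-separated clusters of point charges on a line
xa = 0.0 + 0.001 * rng.random(5)
xb = 10.0 + 0.001 * rng.random(5)
qa = rng.integers(-1, 2, size=5).astype(float)
qb = rng.integers(-1, 2, size=5).astype(float)

# exact pairwise cross-energy between the clusters
exact = sum(qi * qj / abs(xi - xj) for xi, qi in zip(xa, qa)
                                   for xj, qj in zip(xb, qb))

# monopole approximation: total cluster charges at the cluster centers
approx = qa.sum() * qb.sum() / abs(xa.mean() - xb.mean())

print(exact, approx)  # close, because the clusters are far apart
```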

The post The Classical Coulomb Gas as a Mixed Integer Quadratic Program appeared first on Hey There Buddo!.

]]>The post Doing Basic Ass Shit in Haskell appeared first on Hey There Buddo!.

]]>The Haskell phrase book is a new useful thingy. Nice and terse.

https://typeclasses.com/phrasebook

This one is also quite good https://lotz84.github.io/haskellbyexample/

I also like what FP complete is up to. Solid set of useful stuff, although a bit more emphasis towards their solutions than is common https://haskell.fpcomplete.com/learn

I was fiddling with making some examples for my friends a while ago, but I think the above do a similar better job.

https://github.com/philzook58/basic-ass-shit

Highlights include:

Makin a json request

{-# LANGUAGE OverloadedStrings, DeriveGeneric #-}
module JsonRequest where

import Data.Aeson
import Network.Wreq
import GHC.Generics
import Control.Lens

data ToDo = ToDo
  { userId :: Int
  , id :: Int
  , title :: String
  , completed :: Bool
  } deriving (Generic, Show)

instance ToJSON ToDo
instance FromJSON ToDo

my_url = "https://jsonplaceholder.typicode.com/todos/1"

main = do
  r <- get my_url
  print $ ((decode $ r ^. responseBody) :: Maybe ToDo)

Showing a plot of a sine function

module Plot where

import Graphics.Rendering.Chart.Easy
import Graphics.Rendering.Chart.Backend.Cairo -- Chart-cairo
import Graphics.Image as I -- hip

-- https://github.com/timbod7/haskell-chart/wiki/example-1
filename = "example1_big.png"

main = do
  toFile def filename $
    plot (line "a sine" [[ (x :: Double, sin x) | x <- [0, 0.1 .. 2 * pi]]])
  plotimg <- readImageRGB VU filename
  displayImage plotimg -- yeah, I want the plot to pop up
  print "Press Enter to Quit"
  getLine

Doing a least squares fit of some randomly created data

module LeastSquares where

import Numeric.LinearAlgebra

n = 20
x = linspace n (-3, 7 :: Double)
y0 = 3 * x

main = do
  noise <- randn 1 n
  let y = flatten noise + y0
  let sampleMatrix = asColumn x ||| konst 1 (n, 1)
  let sol = sampleMatrix <\> y
  print $ "Best fit is y = " ++ show (sol ! 0) ++ " * x + " ++ show (sol ! 1)

I love Power Serious. https://www.cs.dartmouth.edu/~doug/powser.html Infinite power series using the power of laziness in something like 20 lines
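The trick translates outside Haskell too. Here is a rough Python-generator imitation of mine (not from Power Serious) of the Cauchy product on infinite coefficient streams; the geometric series 1 + x + x^2 + ... squared gives coefficients 1, 2, 3, ...:

```python
from itertools import islice

def ones():
    # coefficients of the geometric series 1/(1-x) = 1 + x + x^2 + ...
    while True:
        yield 1

def mul(f, g):
    # Cauchy product of two power series given as coefficient generators
    fs, gs = [], []
    fi, gi = iter(f), iter(g)
    n = 0
    while True:
        fs.append(next(fi))
        gs.append(next(gi))
        yield sum(fs[k] * gs[n - k] for k in range(n + 1))
        n += 1

sq = mul(ones(), ones())  # 1/(1-x)^2 = 1 + 2x + 3x^2 + ...
print(list(islice(sq, 6)))  # → [1, 2, 3, 4, 5, 6]
```

Python's generators are a weak substitute for laziness plus sharing, but the structure of the idea survives.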

https://blog.plover.com/prog/haskell/monad-search.html Using the list monad to solve SEND+MORE=MONEY puzzle.
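For comparison with the list-monad version in that post, the puzzle can also be brute forced in a few lines of Python (my sketch; the monadic version prunes digit choices column by column, which this naive search does not):

```python
from itertools import permutations

def solve():
    # assign distinct digits to S, E, N, D, M, O, R, Y
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:  # no leading zeros
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return send, more, money

answer = solve()
print(answer)  # → (9567, 1085, 10652)
```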

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.8903&rep=rep1&type=pdf Jerzy Karczmarczuk doing automatic differentiation in Haskell before it was cool. Check out Conal Elliott’s stuff after.

Very simple symbolic differentiation example. When I saw this in SICP for the first time, I crapped my pants.

data Expr = X
          | Plus Expr Expr
          | Times Expr Expr
          | Const Double

deriv :: Expr -> Expr
deriv X = Const 1
deriv (Const _) = Const 0
deriv (Plus x y) = Plus (deriv x) (deriv y)
deriv (Times x y) = (Times (deriv x) y) `Plus` (Times x (deriv y))

https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf Why functional Programming Matters by John Hughes

https://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf John Backus emphasizing escaping the imperative mindset in his 1978 Turing Award speech. A call to arms of functional programming

https://www.cs.tufts.edu/~nr/cs257/archive/richard-bird/sudoku.pdf Richard Bird defining sudoku solutions and then using equation reasoning to build a more efficient solver

https://wiki.haskell.org/Research_papers/Functional_pearls – Functional Pearls

I google. I go to hackage (if I’m in a subpage, click on “contents” in the upper right hand corner). Click on a category that seems reasonable (like “web” or something) and then sort by Downloads (DL). This at least tells me what is popular-ish. I look for tutorials if I can find them. Sometimes there is a very useful getting started snippet in the main subfile itself. Some packages are overwhelming, others aren’t.

The Real World Haskell book is kind of intimidating although a lovely resource.

The wiki has a pretty rockin set of tutorials. Has some kind soul been improving it?

https://wiki.haskell.org/Category:Tutorials

I forgot learn you a Haskell has a chapter on basic io

http://learnyouahaskell.com/input-and-output

When you’re ready to sit down with Haskell more, the best intro is currently the Haskell Book

You may also be interested in https://www.edx.org/course/introduction-functional-programming-delftx-fp101x-0 this MOOC

https://github.com/data61/fp-course or this Data61 course

Then there is a *fun* infinitude of things to learn after that.

______

More ideas for simple examples?

This post is intentionally terse.

IO is total infective poison.

standard output io

main = do
  x <- getLine
  putStrLn "Hello"
  print [1,2,3]
  print (Just 19022.32)
  print x

mutation & loops. You probably don’t want these. They are not idiomatic Haskell, and you may be losing out on some of the best lessons Haskell has to offer.

file IO

web requests

http://www.serpentine.com/wreq/tutorial.html

web serving – scotty

image processing

basic data structures

command line arguments

plotting

Parallelism and Concurrency

https://nokomprendo.frama.io/tuto_fonctionnel/posts/tuto_fonctionnel_25/2018-08-25-en-README.html

The post Doing Basic Ass Shit in Haskell appeared first on Hey There Buddo!.

]]>The post CAV 2019 Notes: Probably Nothin Interestin’ for You. A bit of noodling with Liquid Haskell appeared first on Hey There Buddo!.

]]>Calin Belta http://sites.bu.edu/hyness/calin/. Has a new book on control of temporal logic systems. Automata. Optimized. Partition space into abstraction. Bisimulation. https://www.springer.com/gp/book/9783319507620

Control Lyapunov Function (CLF) – guarantees you are going where you want to go

Control Barrier Function – Somehow controls regions you don’t want to go to.

Lyapunov function based trajectory optimization. You somehow have (Ames 2014) http://ames.gatech.edu/CLF_QP_ACC_final.pdf Is this it?

Differential flatness, input-output linearization

Sadradiini worked there.

Temporal logic with

Linear Temporal Logic vs CTL

Fixpoint logic,

Büchi automata – visit an accepting state infinitely many times

equivalency to first order logic

monadic logic: predicates only take 1 argument. Decidable. Löwenheim. Quantifier elimination. Bounded model property

Languages: ForSpec, SVA, LDL, PSL, Sugar

Monadic second order logic (MSO).

method of tableau

Polytopic regions. Can push forward the dynamics around a trajectory and the polytope that you lie in. RRT/LQR polytopic tree. Pick a random point. Run.

Evaluating branching heuristics

branch and prune icp. dreal.

branch and prune. Take set. Propagate constraints until none fire.

branching heuristics on variables

largest first, smearing, lookahead. Try different options, see who has the most pruning. Not clear that helped that much.

QF_NRA. dreal benchmarks. flyspeck, control, robotics, SMT-lib

http://capd.sourceforge.net/capdDynSys/docs/html/index.html

commands: solver aided programming

verify – find an input on which the assertions fail. exists x. not safe

debug – minimal unsat core if you give an unsat query. x = 42 /\ safe(s, P(x)). We know this is unsat because of the previous step.

solve – exists v s.t. safe(v)

synthesis – exists e forall x safe(x,P(x))

define-symbolic, assert, verify, debug, solve, synthesize

Rosette. Alloy is also connected to her. Z Method. Is related to relational logic?

https://homes.cs.washington.edu/~emina/media/cav19-tutorial/index.html

http://emina.github.io/rosette/

Building solver aided programming tool.

symbolic compiler. Reduces all possible paths of the program to a constraint.

KLEE – symbolic execution engine for LLVM

implement interpreter in rosette

Symbolic virtual machine

layering of languages. DSL. library (shallow) embedding. interpreter (deep) embedding.

deep embedding for synthesis.

I can extract coq to rosette?

how does it work?

reverse and filter keeping only positive queries.

symbolic execution vs bounded model checking

symbolic execution checks every possible branch of the program. Cost is exponential.

CBMC.

type driven state merging. Merge instances of primitive types (like BMC), value types structurally.

instance Merge Int, Bool, Real — collect up SMT context

vs. Traversable f => Merge (f c) – do using Traversable

symbolic union: a set of guarded values with disjoint guards.

merging union. at most one of any shape. bounded by number of possible shapes.

puts some branching in rosette and some branch (on primitives) in SMT.
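A toy model of that split (my sketch, not Rosette's actual representation): primitives merge into a single ite term handed to the solver, same-shaped structures merge elementwise, and different shapes stay as a guarded union.

```python
def merge(guard, a, b):
    # primitives: build one solver term, no union needed
    if isinstance(a, int) and isinstance(b, int):
        return a if a == b else f"(ite {guard} {a} {b})"
    # same shape: merge structurally, element by element
    if isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        return [merge(guard, x, y) for x, y in zip(a, b)]
    # different shapes: keep a guarded union, one entry per shape
    return {guard: a, f"(not {guard})": b}

print(merge("g", 1, 2))            # → (ite g 1 2)
print(merge("g", [1, 5], [3, 5]))  # → ['(ite g 1 3)', 5]
print(merge("g", [1], [1, 2]))     # → {'g': [1], '(not g)': [1, 2]}
```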

symbolic profiling. Repair the encoding.

tools people have built.

verify radiation

strategy generation. That’s interesting. builds good rewrite rules.

serval.

CertiKOS, Komodo, Keystone. Finite programs.

Is rosette going to be useful for my work? coooooould be

https://ranjitjhala.github.io/

{-# LANGUAGE GADTs, DataKinds, PolyKinds #-}
{-@ LIQUID "--reflection" @-}
{-@ LIQUID "--short-names" @-}
{-@ LIQUID "--ple" @-}

{-@ type TRUE = {v: Bool | v = True} @-}
{-@ type NonEmpty a = {v : [a] | len v > 0} @-}

{-@ head :: NonEmpty a -> a @-}
head (x : _) = x

{-@ measure f :: Int -> Int @-}
f x = 2 * x

{-@ true :: TRUE @-}
true = True

-- impl x y = x ==> y
-- {-@ congruence :: Int -> Int -> TRUE @-}
-- congruence x y = (x == y) ==> (f x == f y)
-- {-@ (==>) :: {x : Bool |} -> {y : Bool |} -> {z : Bool | z = x ==> y} @-}
-- x ==> y = (not x) || y

-- aws automated reasoning group
{- nullaway uber. sadowski google static analysis. infer facebook
   give programmer early refinement types
   why and how to use
   how to implement

   refinement types
   mostly like floyd hoare logic
   types + predicates = refinement type
   t := {x:b | p} refined b
   x : t -> t -- refined function type

   linear arithmetic, congruence axioms
   emit a bunch of verification conditions (VC)
   p1 => p2 => p3 ...
   SMT can tell if VC is always true -}

{-@ type Zero = {v : Int | v == 0} @-}
{-@ zero :: Zero @-}
zero = (0 :: Int)

-- why?
{-@ type NAT = {v: Int | v >= 0} @-}
{-@ nats :: [Nat] @-}
nats = [0,1,1,1] :: [Int]

{- subtyping in an environment Gamma, t1 is a subtype of t2
   x are variables in scope
   /\x |- {v | q} <= {v | r}
   True => -}

{-@ plus :: x : Int -> y : Int -> {v : Int | v = x + y} @-}
plus :: Int -> Int -> Int
plus x y = x + y

-- measure: an uninterpreted function called a measure
-- {-@ measure vlen :: Vector a -> Int @-}
-- {-@ at :: vec : Vector a -> {i : Nat | i < vlen vec} -> @-}
-- {-@ size :: vec : Vector a -> {n : Nat | n == vlen vec} @-}

{- horn constraints. infer
   Collections and higher order functions
   reduce :: (a -> b -> a) -> a -> [b] -> a
   type a is an invariant that holds on the initial acc and inductively on f
   Huh. That is true.
   -- Huh. I could prove sum formulas this way.
   sum'' vec = let is = range 0 (size vec)
                   add = \tot i -> tot + at vec i

   properties of data structures
   size of a list
   allow uninterpreted functions inside refinements
   {measure length}
   LISTNE a = {v : [a] | 0 < length v}
   measure yields refined constructors
   [] :: {v : [a] | length v = 0} -}

{- Q: measure?
   Q: environment?
   Q: where is the standard library?
   no foralls anywhere? All in the decidable fragment
   p : ([a],[a]) | (fst p) + (snd p) == length xs
   measures fst and snd
   interpreter
   impossible :: {v : String | false} -> a
   imperative language interpolation
   /include folder has the prelude
   basic values {v : Int | lo <= v && v < hi}
   invariant properties of structures. encode invariants in the constructor
   data OrdPair = OP {opX :: Int, opY :: Int}
   {-@ data OrdPair = OP {opX :: Int, opY :: {v : Int | opX < v}} @-}
   class Liquid Int
   class Liquid Bool
   class Liquid ... Real?
   Liquid Relations? {}
   data OList
   {-@ data OList a LB = Empt
                       | :< {oHd :: {v : a | LB = v}, oTl :: OList a oHd} @-}
   {-@ {oHd :: a, oTl :: OList {v : a | oHd < v}} @-}
   GADTs? -}

data MyList a where
  Nil :: MyList a
  {-@ Cons :: v : a -> MyList {x : a | x < v} -> MyList a @-}
  Cons :: a -> MyList a -> MyList a

test :: MyList Int
test = Cons 2 (Cons 1 Nil)

{- abstracting the invariant from the data structure. parametrize by relations
   data [a]<rel :: a -> a -> Bool> where
     = []
     | (:) {hd :: ...}
   rel != is unique list
   {\x y -> x >= y}
   type level lambdas!? .... uh.... maybe. reflecting singletons into liquid?
   termination metrics / [length xs + len ys] -- merge sort
   {-@ Half a s = }@-}
   Once you have termination proofs you have proofs of correctness
   Propositions as Types
   Plus commutes is trivial {n = n + n} -}

{-@ easyProof :: {True} @-}
easyProof = () -- hot damn. I mean this is in its legerdemain. But prettttty sweet.

{-@ commute :: x : Int -> y : Int -> {x + y = y + x} @-}
commute :: Int -> Int -> ()
commute x y = ()

{-@ reflect mysum @-}
{-@ mysum :: Nat -> Nat @-}
mysum :: Int -> Int
mysum 0 = 0 -- if n <= 0 then 0 else 2 * n + (mysum (n - 1))
mysum n = 2 * n + (mysum (n - 1))

-- what is going on here? why do I need _?
{-@ mysumpf :: _ -> {mysum 0 = 0} @-}
-- mysumpf :: Proof
mysumpf _ = let x = mysum 0 in x

{-@ mysumpf' :: {mysum 3 = 12} @-}
-- mysumpf :: Proof
mysumpf' = ()

{-@ reflect fastsum @-}
{-@ fastsum :: Nat -> Nat @-}
fastsum :: Int -> Int
fastsum n = n * (n + 1)

type Proof = ()

{- {-@ pfsum :: x : Nat -> {fastsum x = mysum x} @-}
   pfsum :: Int -> Proof
   pfsum 0 = () -- let _ = fastsum 0 in let _ = mysum 0 in ()
   pfsum n = pfsum (n-1) -}

{-@ pfsum :: x : Nat -> {fastsum x = mysum x} @-}
pfsum :: Int -> Proof
pfsum 0 = () -- let _ = fastsum 0 in let _ = mysum 0 in ()
pfsum n = pfsum (n-1)

{- reflection: reflect takes the precondition of sum and dumps it as the postcondition
   sum3 _ = let s0 = sum 0
                s1 = sum 1
                s2 = sum 3
   -- all are going to be in scope. z3 will connect the dots.
   using proof combinators from Proof Combinators
   long chains of calculations
   reflection of singletons
   data SS s where
     {-@ SZero :: {v : Int | v = 0} -> SS 'Zero @-}
     SZero :: Int -> SS 'Zero
     {-@ SZero :: {v : Int | v = 0} -> SS 'S a @-}
     SZero :: Int -> SS 'Zero
   proof by induction
   sum n = n * (n + 1) / 2
   2 * sum n = n * (n + 1)
   point free liquid types
   (.) :: (a -> b) -> (a -> ) ? Can I abstract over predicates like this?
   ({v:a | p} -> {s:}) ->
   Vectors. cauchy schwartz -}

data V2 a = V2 a a

{-@ reflect dot @-}
dot (V2 x y) (V2 x' y') = x * x' + y * y'

{-@ reflect vplus @-}
vplus (V2 x y) (V2 x' y') = V2 (x + x') (y + y')

{-@ reflect smul @-}
smul s (V2 x' y') = V2 (s * x') (s * y')

{- {-@ cauchy :: x : V2 Int -> y : V2 Int -> {(dot x y) * (dot x y) <= (dot x x) * (dot y y)} @-}
   cauchy :: V2 Int -> V2 Int -> Proof
   cauchy x y = let q = dotpos (vplus x y) in
                let r = dotpos (vplus x (smul (-1 :: Int) y)) in
                (\_ _ -> ()) q r -}

-- {-@ square :: Int -> Nat @-} -- basically the same thing
{-@ reflect square @-}
square :: Int -> Int
square x = x * x

{-@ sqpos :: x : Int -> {square x >= 0} @-}
sqpos :: Int -> ()
sqpos x = ()

{-@ dotpos :: x : V2 Int -> {dot x x >= 0} @-}
dotpos :: V2 Int -> ()
dotpos x = ()

{-@ dotsym :: x : V2 Int -> y : V2 Int -> {dot x y = dot y x} @-}
dotsym :: V2 Int -> V2 Int -> ()
dotsym x y = ()

{-@ vpluscomm :: x : V2 Int -> y : V2 Int -> {vplus x y = vplus y x} @-}
vpluscomm :: V2 Int -> V2 Int -> ()
vpluscomm x y = ()

{-@ dotlin :: x : V2 Int -> y : V2 Int -> z : V2 Int -> {dot (vplus x y) z = dot x z + dot y z} @-}
dotlin :: V2 Int -> V2 Int -> V2 Int -> ()
dotlin x y z = ()

{- What else is interesting to prove?
   verify stuff about ODEs?
   fold [1 .. t] where t = 10
   could give a little spiel about how dynamical systems are like imperative programming
   get some rationals.
   profunctor p a b. a -> b are refined functions
   I should learn how to abstract over typeclasses. Verified typeclasses?
   SMT has built in rationals prob? -}

data Rat = Rat Int Int

{-@ reflect rplus @-}
rplus :: Rat -> Rat -> Rat
rplus (Rat x y) (Rat x' y') = Rat (x*y' + x'*y) (y * y')

{-@ reflect rmul @-}
rmul :: Rat -> Rat -> Rat
rmul (Rat x y) (Rat x' y') = Rat (x*x') (y * y')

data Nat' = S Nat' | Z

{-@ measure nlen @-}
{-@ nlen :: Nat' -> Nat @-}
nlen :: Nat' -> Int
nlen Z = 0
nlen (S x) = 1 + (nlen x)

{- -- failing?
   -- crash: SMTLIB2 respSat = Error "line 31 column 169: unknown sort 'Main.SNat'"
   data SNat a where
     SZ :: SNat 'Z
     SS :: SNat x -> SNat ('S x) -}

{-@ reflect conv @-}
{-@ conv :: x : Nat -> {v : Nat' | nlen v = x} @-}
conv :: Int -> Nat'
conv 0 = Z
conv x = S (conv (x-1))

-- It's an isomorphism
{-@ pfconv :: x : Nat -> {nlen (conv x) = x} @-}
pfconv :: Int -> Proof
pfconv 0 = ()
pfconv x = pfconv (x - 1)

{-@ pfconv' :: x : Nat' -> {conv (nlen x) = x} @-}
pfconv' :: Nat' -> Proof
pfconv' Z = ()
pfconv' (S x) = pfconv' x

{-@ reflect plus' @-}
plus' :: Nat' -> Nat' -> Nat'
plus' Z x = x
plus' (S x) y = S (plus' x y)

{-@ plusz' :: x : Nat' -> {plus' x Z = plus' Z x} @-}
plusz' :: Nat' -> Proof
plusz' Z = ()
plusz' (S x) = plusz' x

{-@ pluscomm' :: x : Nat' -> y : Nat' -> {plus' x y = plus' y x} / [nlen x, nlen y] @-}
pluscomm' :: Nat' -> Nat' -> Proof
pluscomm' Z y = plusz' y
pluscomm' (S x) (S y) = const (pluscomm' (S x) y) $
                        const (pluscomm' x (S y)) $
                        pluscomm' x y
-- const () $ const (plus' (S x) (S y)) $ const (plus' x (S y)) (plus' x y)
-- const (pluscomm' (S x) y) $ const (pluscomm' x (S y)) $ pluscomm' x y
-- flip const is proof combinator .==
{- let q = pluscomm' x (S y) in
   let w = pluscomm' (S x) y in
   let r = pluscomm' x y in
   (\b n m -> ()) q w r -- ? Was this necessary? -}
pluscomm' x Z = plusz' x

-- {-@ data Iso = @-}
data Iso a b = Iso { to :: a -> b, from :: b -> a, p1 :: Proof, p2 :: Proof }

{- We also have type level lambdas. refinement polymorphism
   LH is somewhat like singletons in the sense there is a manual reflection step.
   In singletons the manual reflection is in the Sing type; in LH it is kind of all over the place.
   (+) has a type. Where is it defined? How does it know that the Haskell function +
   is the same as the SMT solver function?
   Coq and Agda and Idris type checking is powered quite a bit by an internal unification engine.
   explicit annotation may lessen the burden somewhat
   SMT solvers as a unification engine
   structural unification vs uninterpreted functions.
   f a ~ Int is not a valid Haskell constraint. Maybe with the unmatchable arrow it is?
   In a funny sense, there is a difference between Just and (+ 1).
   One being a constructor means we can match out of it
   Just :: a ->> b
   (+ 1) :: Int -> Int -}

-- test' :: (f a ~ Int) => ()
-- test' = ()

Liquid Haskell – What is?

Another thing we could do is Galois connections between refinements: Pos, Zero, Neg <-> Int.

Liquid Haskell uses SMT solvers to resolve its type checking requirements.

Agda et al also work very much via unification. Unification is a broad term but it’s true.

It also has a horn clause solver for inference. Every language needs some kind of inference or you’d go insane. Also, it is piggybacking on Haskell.

It’s not as magical as I thought? Like seeing the magician’s trick. It really does understand Haskell code; it isn’t interpreting it. When it knows facts about how (+) works, that is because the refined type was put in by hand in the prelude, connecting it to SMT facts. What is imported by Liquid Haskell?

The typing environment is clutch. You need to realize what variables are in scope and what their types are, because that is all the SMT can use to push through type checking requirements.

Installing the stack build worked for me. It takes a while. I couldn’t get cabal install to work, because I am not l33t.

Uninterpreted functions. Unmatchability?

It wouldn’t be Haskell without a bunch of compiler directives. It is somewhat difficult to find in a single cohesive place what all the syntax and directives from Liquid Haskell are. Poking around is best.

- ple
- reflection
- no-termination
- higherorder – what is this?

https://github.com/ucsd-progsys/230-wi19-web course notes

https://github.com/ucsd-progsys/liquid-sf some of software foundations

https://nikivazou.github.io/publications.html niki vazou’s pubs. Check out refinement reflection

https://nikivazou.github.io/static/Haskell17/law-abiding-instances.pdf draft work? Shows stuff about typeclasses. This is a Haskell 2017 paper though

https://arxiv.org/pdf/1701.03320 intro to liquid haskell. Interesting to see a different author’s take

http://goto.ucsd.edu/~nvazou/presentations/ presentations. They are fairly similar to one another.

Liquid haskell gives us the ability to put types on stuff that wasn’t possible before.

Linearity :: f :: {a -> b | f (s ^* a) == s ^* (f a) }

Pullback. {(a,b) | f a == g b}

Equalizer

Many things in category theory rely on the exists-unique. Do we have functional extensionality in Liquid Haskell?

product : {(a,b) | f q = x, g q = y, => }

Pushing the boundaries on what liquid haskell can do sounds fun.

Equalizer. The equalizer seems prominent in sheaves. Pre-sheaves are basically functors. Sheaves require extra conditions. Restriction maps have to work? Open covers seem important.

type Equalizer f g a b = {(e :: a , eq :: a -> b) | f (eq e) = g (eq e) }

I think both the type a and eq are special. e is like an explicit parametrization.

type Eq f g a = {e :: a | f e = g e} I think this is more in the spirit. Use f and g both as measures.

A presheaf is a functor. But then a sheaf is a functor that additionally satisfies gluing conditions.

(a, Eq (F a) (G a)). typelevel equalizer? All types a that F and G agree on.

https://ncatlab.org/nlab/show/equalizer

https://blog.functorial.com/posts/2012-02-19-What-If-Haskell-Had-Equalizers.html

Records are sheaves – Jon Sterling. Records have subtyping. This gives you a topology-feeling thing.

https://www.slideshare.net/jonsterling/galois-tech-talk-vinyl-records-in-haskell-and-type-theory

What about purescript records?

{foo | a} {bar | a} -> intersection = {foo bar | b} can inhabit either

union is

or do you want closed records? union is union of fields. intersection is intersection of fields.

In this case a cover would be a set of records with possibly overlapping fields whose combined labels cover the whole space we want to talk about. The consistency condition of the sheaf/equalizer is that overlapping record fields have to match. I guess { q.foo = r.foo }? There is a way to combine all the stuff up. This is exactly what Ghrist was getting at with tables. Tables with shared columns.

data R1 = R1 {foo :: Int, bar :: Int}

{ (r1 :: R1, r2 :: R2) | (foo r1) = (foo r2) } — we maintain duplicates across records.

{. }

if you have a “cover” {foo bar |} {bar fred} {gary larry} whose in

https://www.sciencedirect.com/science/article/pii/S1571066108005264

Sheaves. As a model of concurrency? Goguen paper.

sheaves as constraint satisfaction? sheafification. Constraint solving as a way of fusing the local constraints to be globally consistent.

sheaves as records

sheaves as data fusion

http://www.cs.bham.ac.uk/~mhe/papers/barbados.pdf

Escardo. Compact data types are those finitely searchable

Continuous functions are ~computable? Productive?

http://www.paultaylor.eu/ASD/foufct/

http://www.paultaylor.eu/~pt/prafm/

typed recursion theory topology

typed computability theory

Topological notions in computation. Dictionary of related terms: decidable, searchable, semi-decidable.

cs.ioc.ee/ewscs/2012/escardo/slides.pdf

https://en.wikipedia.org/wiki/Computable_topology

Through NDArray overloading, a significant fragment of numpy code is probably verifiable.

Start with functional arithmetic programs.

Need to inspect function annotations to know how to build input type.

@verify() tag

Use (Writer a) style monad.

If statements are branching. We are again approaching inspecting functions via probing. But what if we lazily probe. At every __bool__ point, we run a z3 program to determine if there is an available bool branch still possible (we don’t need to inspect dead code regions. Also would be cool to mention it is a dead region). Curious. We’re reflecting via Z3.

Loops present a problem. Fixed loops are fine. but what about loops that depend on the execution? for i in range(n). I guess again we can hack it…? Maybe. range only takes an integer. we don’t have overload access.

Maybe we need to go into a full eval loop, utterly deconstructing the function and evaluating it statement by statement.

(compare :: a -> a -> Comparison). We could select a choice based on if there is a new one available. Requires access to an external store. We lose the thread. How can we know a choice was made? How can we know what the choice was? Did it ask var1 or var2? We can probably do it in python via access to a global store. But in haskell?

while loops take invariant annotations.

It would be cool to have a program that takes

pre conditions, post conditions, but then also a Parameter keyword to declare const variables as derivable. exists parameter. forall x, precondition x => post condition.

Parameter could be of a type to take a DSL of reasonable computations, perhaps with complexity predicates; and then interpreting the parameter defines the computation.

Or simpler case is parameter is an integer. a magic number.

@pre(lambda x: None)
@post(lambda r: r >= 0)
def square(x):
    return x**2

@verify(pre, post)
# Easier, because now we can also verify the individual function.
# Call Z3 at function definition time.

def pre(f, cond):
    def fnew(x):
        if VERIFICATION_ON:
            if x == VerificationEnv:
                newenv = x.copy
                newenv.add_pre(cond(x.var))
                newVar = Z3.variable()
                newenv.add(newVar == f(x.var))
        else:
            return f(x)
    if VERIFICATION_ON:
        return fnew

def post(f, cond):
    def fnew(x):
        if x == VerificationEnv:
            Z3.findmodel(not cond(x.var), x.env)
            # if it can't find one: we're good.
            x.env.append(cond(x.var))
            return x
        else:
            assert False  # report model, function name, postcondition code

# overloading assignment isn't a problem.
class VerifyArray():
    # numpy Z3 shim.
    pass

# termination requires measure decreasing at every recursive call.
# arbitrary loops? How to deal with those?
# maybe a hierarchy of envs to make it easier on z3. Like it doesn't need to
# give the whole program history if the local use is good enough.

class VerificationEnv():
    self.var = []
    self.pre = []
    self.post = []

The post CAV 2019 Notes: Probably Nothin Interestin’ for You. A bit of noodling with Liquid Haskell appeared first on Hey There Buddo!.

]]>The post Dump of Nonlinear Algebra / Algebraic geometry Notes. Good Links Though appeared first on Hey There Buddo!.

]]>—

Systems of multivariable polynomial equations are more solvable than people realize. There are algebraic and numeric methods. Look at Macaulay, Singular, Sympy for algebraic methods; phcpack, bertini, and homotopycontinuation.jl for numerical ones.

Algebraic methods are fixated on Groebner bases, which are a special equivalent form your set of equations can be manipulated to. You can disentangle the variables using repeated polynomial division (Buchberger’s algorithm), turning your set of equations into an equivalent triangular set where each successive equation introduces one more variable. This is like Gaussian elimination, which is in fact the extremely simple special case of Buchberger for linear equations.
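A tiny concrete version of that, using sympy (one of the algebraic systems mentioned above) — the lex-order Gröbner basis triangularizes the system the way elimination does:

```python
# A lex-order Groebner basis "disentangles" the variables: intersecting
# the circle x^2 + y^2 = 1 with the line x = y leaves a triangular basis
# whose last polynomial involves only y.
from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
basis = list(G)  # the last element is univariate in y
```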

The numerical methods use perturbation theory to take a system of equations you know how to solve and smoothly perturb them to a new system. Each small perturbation only moves the roots a little bit, which you can track with a differential equation solver. Then you can fix it up with some Newton steps. People who really care about this stuff make sure that there are no pathological cases and worry about roots merging or going off to infinity and other things.
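A toy version of that tracking loop for a single univariate family (step counts and the straight-line homotopy are just illustrative choices):

```python
# Deform x^2 - 1 = 0 into x^2 - 2 = 0 along the straight-line homotopy
# H(x, t) = (1 - t)*(x**2 - 1) + t*(x**2 - 2) = x**2 - 1 - t,
# tracking each start root with an Euler predictor plus a Newton corrector.
def track(x, steps=100):
    dt = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        x += dt * (1.0 / (2 * x))        # predictor: dx/dt = -H_t / H_x
        t += dt
        x -= (x * x - 1 - t) / (2 * x)   # corrector: one Newton step on H(., t)
    return x

roots = [track(1.0), track(-1.0)]  # ends near +sqrt(2) and -sqrt(2)
```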

You need to know how many roots to build and track in your solvable system. For that two theorems are important

Bezout theorem – for dense systems, the number of solutions is bounded by the product of the total degrees of the equations.

Bernstein bound – the Newton polytope gives a bound on the number of solutions of a polynomial system. Useful for sparse systems.
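For instance (a quick sympy check, nothing deep): two conics have total degree 2 each, so Bezout caps the count at 2*2 = 4, and a generic pair hits the cap:

```python
from sympy import symbols, solve

x, y = symbols('x y')
# two total-degree-2 curves: Bezout allows at most 2 * 2 = 4 intersections
sols = solve([x**2 + y**2 - 5, x*y - 2], [x, y], dict=True)
```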

One could make an argument that the homotopy continuation methods are the analog of iterative solvers for linear equations, if Groebner bases are Gaussian elimination. Take an equation we know how to solve (~preconditioner) and perform some iterative thing on it.

add enough random linear equations to make system full (points).

Then you have a membership algorithm due to sweeping of planes. Once you have points on actual varieties, pairwise compare them.

The Cox–O’Shea book is often recommended. It’s really good.

https://www.springer.com/us/book/9781441922571

More advanced Cox et al book

https://www.springer.com/us/book/9780387207063

Bernd Sturmfels, Mateusz Michalek (including video lectures)

https://personal-homepages.mis.mpg.de/michalek/ringvorlesung.html

https://personal-homepages.mis.mpg.de/michalek/NonLinearAlgebra.pdf

(Bernd is da man!)

https://math.berkeley.edu/~bernd/math275.html

Macaulay2 book

https://faculty.math.illinois.edu/Macaulay2/Book/

Singular books

https://www.singular.uni-kl.de/index.php/publications/singular-related-publications.html

https://www.springer.com/us/book/9783662049631

https://www.ima.umn.edu/2006-2007

Planning Algorithms, in particular chapter 6

Gröbner bases in Haskell: Part I

Summer school on tensor methods

https://www.mis.mpg.de/calendar/conferences/2018/nc2018.html

Extensions of

https://ieeexplore.ieee.org/document/4399968

Numerical Polynomial Algebra by Hans Stetter

https://epubs.siam.org/doi/book/10.1137/1.9780898717976?mobileUi=0&

Introduction to Non-Linear Algebra V. Dolotin and A. Morozov. A high energy physics perspective

https://arxiv.org/pdf/hep-th/0609022.pdf

Nonlinear algebra can also, surprisingly, be approached via linear algebra. Resultants. As soon as you see any nonlinearity, the linear part of your brain shuts down, but a good question is: linear in WHAT? Consider least squares fitting, which works via linear algebra. Even though you’re fitting nonlinear functions, the expressions are linear in the parameters/coefficients, so you’re all good.

Similarly you can encode root finding as a linear algebra problem. A matrix has the same eigenvalues as its characteristic polynomial has roots, so it is already plausible to go from linear algebra to a polynomial root finding problem. But also, multiplying a polynomial by x can be encoded as a linear operation on the coefficients. In this way we can turn root finding into an eigenvalue problem.

[1 x x^2 x^3 …] dot [a0 a1 a2 a3 …] = p(x)

Multiplying by x is the shift matrix. However, we are also assuming p(x)=0, which gives us the ability to truncate the matrix. x * [1 x x^2 x^3 …] = Shift @ xbar. This is somewhat similar to how it feels to do finite difference equations. The finite difference matrix is rectangular, but then boundary conditions give you an extra row. Multiplication by x returns the same polynomial back only when p(x)=0 or x = 0. The eigenvalues of this x matrix will be the values of x at these positions (the roots). This is the companion matrix https://en.wikipedia.org/wiki/Companion_matrix
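A quick numpy check of the companion-matrix idea, using the orientation with the polynomial coefficients in the last row:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
# The 1s above the diagonal shift powers of x; the last row uses p(x) = 0
# to rewrite x^3 as 6x^2 - 11x + 6, truncating the space.
C = np.array([[0.0,  1.0, 0.0],
              [0.0,  0.0, 1.0],
              [6.0, -11.0, 6.0]])
roots = np.sort(np.linalg.eigvals(C).real)  # ends up near [1, 2, 3]
```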

We can truncate the space by using the zero equation.

It’s a pretty funky construction, I’ll admit.

To take it up to multivariable, we bring in a larger space [1 x y x^2 xy y^2 …] = xbar kron ybar

We now need two equations to reduce it to points. The X matrix is lifted to X kron I. and we can pad it with ?

Multiplying by an entire polynomial. Sylvester matrix for shared roots. Double root testing.

The Sylvester matrix is based on something similar to Bezout’s identity. To find out whether two polynomials p, q have common factors, you can find two polynomials r, s such that r*p + s*q = 0.

https://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor#B%C3%A9zout’s_identity_and_extended_GCD_algorithm
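A sympy check of this: the resultant (the determinant of the Sylvester matrix) vanishes exactly when the two polynomials share a root:

```python
from sympy import symbols, resultant

x = symbols('x')
p = (x - 1) * (x - 2)
q = (x - 2) * (x - 3)  # shares the factor (x - 2) with p
r = (x - 4) * (x - 5)  # shares nothing with p
shared = resultant(p, q, x)    # zero: common factor exists
disjoint = resultant(p, r, x)  # nonzero: no common factor
```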

Sum of Squares is somewhat related material on systems of polynomial inequalities which can be translated to semidefinite matrix constraints. If you want to include equalities, you can use groebner bases to presolve them out.

Parrilo course material on Sum of Squares.

https://learning-modules.mit.edu/materials/index.html?uuid=/course/6/sp16/6.256#materials

Paper on using Groebner bases and CAD (cylindrical algebraic decomposition) for optimization and control

Using Groebner bases for constraint satisfaction problems: x^n=1 gives a root of unity. There are n solutions. This gives a finite set to work with. Then you can add more equations. This is related to the max-cut thing. I saw this on Cox’s webpage.

You can require neighboring vertices to have different colors by 0=(xi^k - xj^k)/(xi - xj). You can encode many constraints using clever algebra.
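The smallest instance of this encoding, with ±1 as the two colors (a sympy sketch, just to see the idea):

```python
from sympy import symbols, solve

a, b = symbols('a b')
# colors are roots of x^2 = 1; (a^2 - b^2)/(a - b) = a + b = 0
# forces the two endpoints of an edge to take different colors.
sols = solve([a**2 - 1, b**2 - 1, a + b], [a, b], dict=True)
# exactly the two proper 2-colorings of a single edge: (1,-1) and (-1,1)
```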

an example using the same technique to solve sudoku

Sympy tutorial solving geometric theorems and map coloring

explicitly mentions toric groebner as integer programming.

other interesting examples

http://www.scholarpedia.org/article/Groebner_basis

Noncommutative Groebner bases have application to solving differential equations? The differential operators are noncommutative. Not just silly quantum stuff; I mean, the simple example of noncommutativity is the Schrödinger momentum operators.

Automatic loop invariant finding

Geometric theorem proving

robotic kinematics

Optics? Envelopes, exchange of coordinates. Legendre transformations. Thermodynamics?

Global optimization? Find all local minima.

Nonlinear finite step.

Dynamic Programming. Add an implicit V variable for the value function. Constrain via equations of motion. Perform extremization keeping x0 v0 fixed. dx0=0 dv0=0 and dV=0. Groebner with an ordering that removes x1 v1 V1. Iterate. Can keep dt as a variable. Power series in t? Other integration schemes.

Probably need some method to simplify that left over relations so that they don’t get too complex. Smoothing? Dropping terms? Minimization may require factoring to find global minimum.

Differentiation. Add to every variable a dx. Collect up first order as a separate set of constraints. Add conditions df=0 and dy=0 for fixed variables to perform partial differentiation and extremization. A very similar feel to automatic differentiation. Functions tend to not be functions, just other variables related by constraints.

Variable ordering

lex – good for elimination

deglex – total degree then a lex to tie break

grevlex – total degree + reverse lexicographic. The cheapest variable is so cheap that it goes last

block ordering – separate variables into blocks and pick orderings inside blocks

general matrix ordering – apply a matrix to the exponent vectors and lex-compare the results. The others are a subset.

Can’t I have a don’t care/ partial order? would be nice for blockwise elimination I feel like.

Non-commutative

http://sheaves.github.io/Noncommutative-Sage/

Physicsy

https://arxiv.org/pdf/hep-th/0609022

CAD book

https://link.springer.com/book/10.1007%2F978-3-7091-9459-1

Rings have addition and multiplication but not necessarily division. Polynomials and integers aren’t guaranteed to have inverses that remain polynomials or integers.

ideal = a subset of a ring that absorbs multiplication. Also closed under addition

All polynomial consequences of a system of equations

Hilbert Basis theorem – all ideals are generated by a finite set

ideal generated from a set – any element of the ring that can be generated via addition and multiplication by arbitrary elements. It is an ideal because if you multiply it by another object, it is still a sum of multiples.

Ideals are sometimes kind of a way of talking about factors without touching factors. Once something is a multiple of 5, no matter what you multiply it with, it is still a multiple of 5. If (x – 7) is a factor of a polynomial, then no matter what you multiply it with, (x-7) is still a factor. Zeros are preserved.

Principal ideal domain – every ideal is generated by a single object

Prime ideal: if a*b is in the ideal then either a or b is in the ideal. Comes from prime number ideals (all numbers divisible by a prime number). If ab has a factor of p then either a or b had a factor of p. Whereas consider all multiples of 4: if a = b = 2 then ab is a multiple of 4, but neither a nor b is a multiple of 4.

1d polynomials. Everything is easy.

Polynomial division is doable. You go power by power. Then you may have a remainder left over. It’s pretty weird.

You can perform the gcd of two polynomials using the Euclidean algorithm.
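Spelled out with plain coefficient lists (highest degree first; Fractions keep the arithmetic exact) — a hand-rolled sketch rather than a library call:

```python
from fractions import Fraction

def poly_rem(p, q):
    # remainder of p divided by q, coefficients listed highest-degree first
    p = [Fraction(c) for c in p]
    q = [Fraction(c) for c in q]
    while len(p) >= len(q):
        f = p[0] / q[0]
        padded = q + [Fraction(0)] * (len(p) - len(q))
        p = [a - f * b for a, b in zip(p, padded)][1:]  # leading term cancels
    while p and p[0] == 0:  # strip leading zeros
        p = p[1:]
    return p

def poly_gcd(p, q):
    # Euclidean algorithm: repeatedly take remainders until one is zero
    while q:  # empty list represents the zero polynomial
        p, q = q, poly_rem(p, q)
    p = [Fraction(c) for c in p]
    return [c / p[0] for c in p]  # normalize to a monic gcd

# (x-1)^2 (x+3) and (x-1)(x+5) share the factor (x - 1)
g = poly_gcd([1, 1, -5, 3], [1, 4, -5])  # -> [1, -1], i.e. x - 1
```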

The ideal generated by a couple of them is generated by the multipolynomial gcd?

a = cx + dy + r

multivariate division: we can do the analog of polynomial division in the multivariate case. But we need an ordering of terms. The remainder is not unique.

But for certain sets of polynomials, remainder is unique.
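This is visible directly in sympy: reducing against a Gröbner basis gives one canonical normal form, whatever order you divide in. A small sketch:

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
F = [x*y - 1, y**2 - 1]
G = groebner(F, x, y, order='lex')  # Groebner basis of the ideal <F>
# reduced returns (quotients, remainder); against a Groebner basis the
# remainder is the unique normal form of x^2*y modulo the ideal.
_, r = reduced(x**2 * y, list(G), x, y, order='lex')
```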

Why the fixation on leading monomials?

The S-polynomial is the analog of one step of the euclidean algorithm. It also has the flavor of a wronskian or an anticommutator.

The bag euclidean algorithm. Grab the two things (biggest?). Take remainder between them, add remainder into bag.

This is the shape of the buchberger algorithm.

Finding homology or cohomology of solutions. Good question. One can see how this could lead to categorical nonsense since Category theory was invented for topological questions.

The variety is where a set of polynomials is 0. Roots and zero surfaces

List gives set of polynomials.

[forall a. Field a => (a,a,a) -> a ]

Or explicit

union and intersection can be achieved via multiplication and combining the sets

Krull dimension – Definition of dimension of algebraic variety. Maximal length of inclusion chain of prime ideals.

Ideals and Varieites have a relation that isn’t quite trivial.

The ideal of a variety

Envelopes – parametrized set of varieties f(x,t)=0 and partial_t f(x,t)=0. Eliminate t basically to draw the thing. Or trace out t?

Wu’s method for geometric theorem proving. You don’t need the full power of a grobner basis.

Polynomial maps. Talk about in similar language to differential geometry.

Boxes are a simple way to talk about subsets. Or lines, planes. Or polytopes.

Also any function that gives a true false value. But this is very limited in what you can actually do.

Varieties give us a concrete way to talk about subsets. Grothendieck schemes give unified languages supposedly using categorical concepts. Sounds like a good fit for Haskell.

class Variety

use powser. Functor composition makes multivariable polynomials. Tuples or V3 with elementwise multiplication

-- give variables names
newtype X a = X [a]
newtype Y a = Y [a]

-- from k^n -> k^m
type family PolyFun n m k where
  PolyFun n (S m) k = (PolyFun n 1 k, PolyFun n m k)
  PolyFun (S n) 1 k = [PolyFun n 1 k]
  PolyFun 1 1 k = k

-- Gonna need to make a typeclass to actually do this. Yikes.
-- It's just not as simple as Cat a b type. You really need to do computation
-- and input a is not same as
class PolyF f where
  pcompose :: PolyFun b c k -> PolyFun a b k -> PolyFun a c k
  pid :: Num k => PolyFun b b k

-- related to ideal of n generators on a space k^m
-- these functions will compose some incoming polynomial
-- or is this better thought of as a variety?
type Idealish = (PolyFun n 1 k) -> PolyFun m 1 k

makeidealish :: PolyFun m n k -> Idealish
makeidealish f = flip pcompose f

-- apply turns a polynomial into a Haskell function
apply :: (PolyFun n m k) -> V n -> V m

-- somehow should be able to replace points with varieties. It's like a whole thing
type VarietyFun = (PolyFun n 1 k) -> (PolyFun m 1 k)
-- (PolyFun n 1 k -> PolyFun m 1 k) -> (PolyFun m 1 k -> PolyFun l)

The polynomial as a type parameter for agda. Regular Functions are functions from one variety to another. They are the same as the polynomial ring quotiented out by the ideal of the variety.

Ring Space and Geometric Space (affine space)

Maximal ideals can be thought of as points. (ideal of x-a, y-b, …).

Free Polynomials ~ Free Num. Sparse representation. Uses Ordering of a. We should not assume that they are powers of a single variable like in http://hackage.haskell.org/package/polynomial-0.7.3/docs/Math-Polynomial.html

Ord is monomial ordering. Think of a as [X,Y,X,X,X]

divmod :: (Integral a, Ord a) => Poly r a -> Poly r a -> Poly r a

newtype Monomial a = Monomial [a]

— different monomial newtype orderings for lex, etc.

Monomial (Either X Y)

divmod as bs = remove bs from as. if can’t remainder = as, div = 0

Intuition pumps: algebraic geometry, differential geometry, category theory, Haskell, Agda.

In differential geometry, embedding sucks. We get around it by defining an atlas and differential maps.

There is a currying notion for polynomials. We can consider a polynomial as having coefficients which are themselves polynomials in other variables, or all at once.

What can be solved linearly? The Nullstellensatz certificate can be solved using linear equations.

Resultants. What are they? Linear sums of monomial powers * the original polynomials. Det = 0 implies that we can find a polynomial combination.

What is the deal with resultants

Toric Varieties. C with hole in it is C*. This is the torus because it is kind of like a circle. (Homologically?). There is some kind of integer lattice lurking and polytopes. Gives discrete combinatorial flavor to questions somehow. Apparently one of the more concrete/constructive arenas to work in.

binomial ideals. the variety will be given by binomials

maps from one space to another which are monomial. can be implicitized into a variety. map is described by integer matrix. Integer programming?

Similar “cones” have been discussed in the tropical setting. Is this related?

Algebraic statistics. Factor graph models. Probabilistic graphical models. Maybe this is why a PGM lady co-taught that course with Parrilo.

Modules

Tropical geometry

http://www.cmap.polytechnique.fr/~gaubert/papers.html

Lots of really intriguing-sounding applications. Real time verification

gfan

How does the polynomial based optimization of the EDA course relate to this stuff? https://en.wikipedia.org/wiki/Logic_optimization

Mixed volume methods? Polytopes.

cdd and other polytopic stuff. Integration of polynomials over polytopes

Software of interest

Sage

Sympy

Singular – Plural non-commutative?

FGb – Faugère’s implementation of Groebner basis algorithms

Macaulay

CoCoa

tensorlab – https://en.wikipedia.org/wiki/Tensor_software

sostools

PolyBori – polynomials over boolean rings http://polybori.sourceforge.net/doc/tutorial/tutorial.html#tutorialli1.html

LattE

4ti2

normaliz

polymake – https://polymake.org/doku.php/tutorial/start slick

http://hep.itp.tuwien.ac.at/~kreuzer/CY/CYpalp.html Calabi Yau Palp????

TOPCOM

frobby – can get Euler characteristics of monomial ideals? http://www.broune.com/frobby/index.html

gfan

https://www.swmath.org/browse/msc

Homotopy continuation:

Bertini

http://homepages.math.uic.edu/~jan/phcpy_doc_html/index.html

phcpy and phcpack

hom4ps

https://www.juliahomotopycontinuation.org/

certification:

http://www.math.tamu.edu/~sottile/research/stories/alphaCertified/

cadenza

Jan

http://homepages.math.uic.edu/~jan/mcs563s14/index.html

www.math.uic.edu/~jan/tutorial.pdf


Suggestion that “linear program” form helps auto differentiation?

local rings. thickening? Infinite power series modded out by local relation. One maximal ideal.

differential geometry on algebraic surfaces.

modules are like vector spaces.

Ring linear

Canonical example, a vector of polynomials.

1-d space of polynomials.

Module morphism – respects linearity with respect to scalar multiplication and addition. Can be specified componentwise, but has to be specified in such a way that it respects the relations.

Basis – Linearly Independent set that spans the whole module. May not exist.

So we are kind of stuck always working in an overcomplete basis to make the vector space analogy. The generators have nontrivial relations that equal zero. These coefficients form their own vector space. The space whose image is zero because of the relations is called the first syzygy module.

But then do we have a complete basis of all the relations? Or is it over complete?

If you ignore that the entries of a vector are polynomials, it becomes a vector space. But because they are, they have secret relations.

even a 1-dimensional vector space has some funky structure because of the polynomial nature of the ring.

Somehow fields save us?

Parametrized vector curves, surfaces.

Parametrized matrices.

Noncommutative polynomials. We could perhaps consider the process of normal ordering as something related to a Groebner basis calculation. Perhaps a multi-polynomial division process? Consider the ordering where dagger is greater than no dagger. The canonical basis also has i<j (more important for fermions).

SOS gives you the exact minimum of a 1-d polynomial. You could also imagine encoding this as a semidefinite program. H-lam>=0. Min lam. Where H is the characteristic matrix.

We can diagonalize to the sos form, and then take each individual term = 0 to solve for x*.

While integer programming does that funky toric variety stuff with the objective vector describing the Groebner basis, binary programming is simple: x^2=x plus linear equations and constraints.

Haskell Groebner

1. Monomials. Exponent vectors. Logarithmic representation. Multiplication is addition. Composition is elementwise multiplication. Type level tag for ordering.

newtype Mon3 ord = V3 Int

data Lex

data DegLex

Ordering of monomials is important. Map is perfect

Map (Mon3 ord) ring
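The same representation in throwaway Python: exponent vectors as dict keys, so multiplying monomials is just adding exponent vectors.

```python
# Sparse polynomials as {exponent_vector: coefficient} dicts; in the
# logarithmic representation, multiplying monomials adds exponent vectors.
def pmul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            r[e] = r.get(e, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}  # drop cancelled terms

# (x + y) * (x - y) = x^2 - y^2, with exponent vectors (e_x, e_y)
prod = pmul({(1, 0): 1, (0, 1): 1}, {(1, 0): 1, (0, 1): -1})
```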

Groebner bases can be used to describe many familiar operations: linear algebra, Gaussian elimination, using commutators, building power series assuming terms are relatively irrelevant.

Can I get a power series solution for x^2 + ax + 1=0 by using a negative ordering for a? I need another equation. x = \sum c_n * a^n. (x+dx)? How do I get both solutions?

Dual numbers for differential equations. dx is in a ring such that dx^n = 0.
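A minimal sketch of that ring for n = 2: carry the dx coefficient along and drop every dx^2 term.

```python
class Dual:
    # represents a + b*dx with the rule dx**2 = 0
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b dx)(c + d dx) = ac + (ad + bc) dx; the dx^2 term vanishes
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def f(x):
    return x * x * x  # x^3

y = f(Dual(2, 1))  # y.a is f(2) = 8, y.b is f'(2) = 3 * 2^2 = 12
```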

Subset sum. Find a subset of numbers that adds up to 0.

sum variables s_i

Solutions obey

s_0 = 0

(s_i - s_{i-1})(s_i - s_{i-1} - a_{i-1}) = 0

s_N = 0
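For a tiny instance, say the numbers [2, 3, -5] (a sympy sketch), the solutions of the system are exactly the zero-sum subsets:

```python
from sympy import symbols, solve

a = [2, 3, -5]
s1, s2 = symbols('s1 s2')
s = [0, s1, s2, 0]  # s_0 = 0 and s_N = 0 pinned
# each step either stays put or adds a_{i-1}
eqs = [(s[i] - s[i-1]) * (s[i] - s[i-1] - a[i-1]) for i in range(1, 4)]
sols = solve(eqs, [s1, s2], dict=True)
# two solutions: all steps zero (the empty subset), and s1=2, s2=5
# (take every number: 2 + 3 - 5 = 0)
```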

Factors give OR clauses. Separate polynomials give AND clauses. Pseudo-CNF form. Can’t always write polys as factors though? This pattern also matches the graph coloring.

More interesting books:

Some fun with algebraic numbers

https://mattpap.github.io/masters-thesis/html/src/algorithms.html

https://en.wikipedia.org/wiki/Factorization_of_polynomials

Numerical vs Symbolic

Numeric

https://en.wikipedia.org/wiki/Root-finding_algorithm

Pick a random point. Then apply Newton’s method. Do this over and over. If you find N unique factors, you’ve done it. A little unsatisfying, right? No guarantee you’re going to find the roots.
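That strategy in a few lines, for x^3 - x (a toy, with a residual check so only converged runs count):

```python
import random

def newton(x, steps=50):
    # Newton iteration on p(x) = x^3 - x, whose roots are -1, 0, 1
    for _ in range(steps):
        x -= (x**3 - x) / (3 * x**2 - 1)
    return x

random.seed(0)
roots = set()
while len(roots) < 3:  # keep restarting until all 3 roots show up
    x = newton(random.uniform(-2.0, 2.0))
    if abs(x**3 - x) < 1e-9:  # only keep converged runs
        roots.add(round(x, 6))
# roots ends up as {-1.0, 0.0, 1.0} -- but with no guarantee of when
```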

2. Perturbation theory / Homotopy continuation. Start with a polynomial with the same number of total roots that you know how to factor. x^N – 1 = 0 seems like an easy choice. Differentiating the homotopy between the start system and the target system gives an ODE for the roots as functions of the deformation parameter. You can use this ODE to track the roots. At every step use Newton’s method to clean up the result. Problems can still arise. Do roots collapse? Do they smack into each other? Do they run off to infinity?

3. The Companion matrix. You can convert finding the roots into an eigenvalue problem. The determinant of (A - \lambda I) is a polynomial with roots at the eigenvalues. So we need to construct a matrix whose determinant equals the one we want. The companion matrix simulates multiplication by x. That is what the 1s above the diagonal do. Then the final row replaces x^(N+1) with the polynomial. In wikipedia, this matrix is written as the transpose. https://en.wikipedia.org/wiki/Companion_matrix

4. Stetter Numerical Polynomial Algebra. We can form representations, basically, of the quotient rings of an ideal. We can make matrices A(j) that implement multiplication by monomials x^j in F[x]/I. Then we can take joint eigensolutions to diagonalize these multiplications. Something something Lagrange polynomials. Then if the solutions respect some kind of symmetry, it makes sense that we can use representation theory proper to possibly solve everything. This might be the technique of Galois theory mentioned in that Lie Algebra book. This is not unconnected with the companion matrix technique above. These matrices are going to grow very high dimensional.

Thought. Could you use homotopy continuation to get roots, then interpolate those roots into a numerical Groebner basis? Are the Lagrange polynomials of the zero set a Groebner basis?

Symbolic

Part of what makes it seem so intimidating is that it isn’t obvious how to brute force the answer. But if we constrain ourselves to certain kinds of factors, they are brute forceable.

Given a suggested factor, we can determine whether it actually is a factor by polynomial division. If the remainder left over from polynomial division is 0, then it is a factor.

If we have an enumerable set of possibilities, even if large, then it doesn’t feel crazy to find them.

Any root of a polynomial with rational coefficients can be converted to integer coefficients by multiplying out all the denominators.

Let’s assume the polynomial has factors of integer coefficients.

Rational Root Test
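The test is completely brute-forceable (a sketch; assumes integer coefficients and a nonzero constant term):

```python
from fractions import Fraction

def rational_roots(coeffs):
    # coeffs[i] is the coefficient of x**i, integers assumed.
    # Any rational root p/q in lowest terms has p dividing the constant
    # term and q dividing the leading coefficient, so try every candidate.
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    found = set()
    for p in divisors(coeffs[0]):
        for q in divisors(coeffs[-1]):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand**i for i, c in enumerate(coeffs)) == 0:
                    found.add(cand)
    return found

# 2x^2 - 3x + 1 = (2x - 1)(x - 1) has rational roots 1/2 and 1
roots = rational_roots([1, -3, 2])
```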

Kronecker’s method

Finite Fields. It is rather remarkable that there exist finite thingies that have the algebraic properties of the rationals, reals, and complex numbers. Typically when discretizing continuum stuff, you end up breaking some of the nice properties, like putting a PDE on a grid screws over rotational symmetry. Questions that may be hard to even see how to approach become easy in finite fields in principle, because finite fields are amenable to brute force search. In addition, solutions in finite fields may simply extend to larger fields, giving you good methods for calculations over the integers or rationals or what have you.

SubResultant. A curious property: if two polynomials share roots/common factors, it is pretty easy to separate that out. The GCD of the polynomials.

Kind of the gold standard of root finding is getting a formula in terms of square roots. This is an old question. Galois Theory is supposedly the answer.

The post Dump of Nonlinear Algebra / Algebraic geometry Notes. Good Links Though appeared first on Hey There Buddo!.

]]>