Categorical LQR Control with Linear Relations

Last time, I described and built an interesting extension of describing systems via linear functions: linear relations. We remove the requirement that mappings be functional (exactly one output vector for any given input vector), which extends our descriptive capability. This is useful for describing “open” pieces of a linear system, like electric circuits. This blog post is about another application: linear dynamical systems and control.

Linear relations are an excellent tool for reachability/controllability. In a control system, it is important to know which states the system can be driven to. With linear dynamics x_{t+1} = Ax_{t} + B u_{t}, an easy controllability question is “can my system reach into some subspace?”. This isn’t quite the most natural question physically, but it is natural mathematically. My first impulse would be to ask “given the state starts in this little region over here, can it get to this little region over there?”, but the subspace question is a bit easier to answer. It’s a little funky but it isn’t useless. It can detect if the control parameter only touches 1 of 2 independent dynamical systems, for example. We can write the equations of motion as a linear relation and iterate composition on them. See Erbele for more.
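
Concretely, the one-step equations of motion become the relation

R = \{ (x, x') \mid \exists u.\ x' = A x + B u \}

and composing R with itself n times gives the n-step reachability relation, so the subspace question becomes a computable question about the composite relation.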

There is a different set of things (and kind of more interesting things) under the purview of linear control theory: LQR controllers and Kalman filters. The LQR controller and Kalman filter are roughly (exactly?) the same thing mathematically. At an abstract mathematical level, they both rely on the fact that optimization of a quadratic objective x^T Q x + r^T x + c with linear constraints Ax=b is solvable in closed form via linear algebra. The cost of the LQR controller could be the squared distance to some goal position, for example. When you optimize a function, you set the derivative to 0. This is a linear equation for quadratic objectives. It is a useful framework because it has such powerful computational teeth.
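
For the unconstrained case this is explicit. Setting the gradient to zero,

\nabla_x (x^T Q x + r^T x + c) = 2 Q x + r = 0 \Longrightarrow x = -\frac{1}{2} Q^{-1} r

assuming Q is invertible (and positive definite, so this is a minimum): a plain linear solve.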

The Kalman filter does a similar thing, except for the problem of state estimation. There are measurements linearly related to the state that you want to match with the prior knowledge of the linear dynamics of the system and the expected errors of measurement and environment perturbations to the dynamics.
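
Schematically, in one standard least-squares form (the post doesn’t spell this out; assume a measurement model y_t = C x_t + \text{noise}, with measurement and process noise covariances \Sigma_y and \Sigma_x):

\min_{x_0 \ldots x_N} \sum_t (y_t - C x_t)^T \Sigma_y^{-1} (y_t - C x_t) + (x_{t+1} - A x_t)^T \Sigma_x^{-1} (x_{t+1} - A x_t)

a quadratic objective in the states, with the same computational teeth as above.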

There are a couple different related species of these filters. We could consider discrete time or continuous time. We can also consider infinite horizon or finite horizon. I feel that discrete time and finite horizon are the simplest and most fundamental, so that is what we’ll stick with. The infinite horizon and continuous time are limiting cases.

We can also consider the dynamics to be fixed for all time steps, or varying with time. Varying with time is useful for the approximation of nonlinear systems, where there is a different good local approximation (Taylor expansion of the dynamics) depending on the current state.

There are a couple ways to solve a constrained quadratic optimization problem. In some sense, the most conceptually straightforward is just to solve for a spanning basis of the constraint set (the “V-Rep” of my previous post), plug that into the quadratic objective, and optimize over your now unconstrained problem. However, it is commonplace and elegant in many respects to use the Lagrange multiplier method. You introduce a new parameter for every linear equality and add a term to the objective \lambda^T (A x - b). At the constrained solution this term is 0, so we haven’t changed the objective in a sense. It’s a funny sleight of hand. If you optimize over x and \lambda, the equality constraint comes out as your gradient conditions on \lambda. Via this method, we convert a linearly constrained quadratic optimization problem into an unconstrained quadratic optimization problem with more variables.
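
Written out for the objective above:

L(x, \lambda) = x^T Q x + r^T x + c + \lambda^T (A x - b)

\nabla_x L = 2 Q x + r + A^T \lambda = 0, \qquad \nabla_\lambda L = A x - b = 0

which is one big unconstrained linear system

\begin{bmatrix} 2Q & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -r \\ b \end{bmatrix}.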

Despite feeling very abstract, the value these Lagrangian variables take on has an interpretation. They are the “cost” or “price” of the equality they correspond to. If you moved the constraint by an amount Ax - b + \epsilon, you would change the optimal cost by an amount \lambda \epsilon. (Pretty sure I have that right. Units check out. See Boyd https://www.youtube.com/watch?v=FJVmflArCXc)

The Lagrange multipliers enforcing the linear dynamics are called the co-state variables. They represent the “price” of the dynamics constraints, and are derivatives of the optimal value function V(s) (the best value that can be achieved from state s) that may be familiar to you from dynamic programming or reinforcement learning. See references below for more.

Let’s get down to some brass tacks. I’ve suppressed some terms for simplicity. You can also add offsets to the dynamics and costs.

A quadratic cost function with Lagrange multipliers. Q is a cost matrix associated with the x state variables, R is a cost matrix for the u control variables.

C = \lambda_0^T (x_0 - \tilde{x}_0) + \sum_t \left[ \frac{1}{2} x_t^T Q x_t + \frac{1}{2} u_t^T R u_t + \lambda_{t+1}^T (A x_t + B u_t - x_{t+1}) \right]

The equations of motion result from optimizing with respect to \lambda, by design.

\nabla_{\lambda_{t+1}} C = A x_t + B u_t - x_{t+1} = 0.

The initial conditions are enforced by the zeroth multiplier.

\nabla_{\lambda_0} C = x_0 - \tilde{x}_0 = 0

Differentiation with respect to the state x has the interpretation of backwards iteration of value derivative, somewhat related to what one finds in the Bellman equation.

\nabla_{x_t} C = Q x_{t} + A^T \lambda_{t+1} - \lambda_{t} = 0 \Longrightarrow \lambda_{t} =  A^T \lambda_{t+1} + Q x_{t}

The final condition on value derivative is the one that makes it the most clear that the Lagrange multiplier has the interpretation of the derivative of the value function, as it sets it to that.

\nabla_{x_N} C = Q x_N - \lambda_{N} = 0

Finally, differentiation with respect to the control picks the best action given knowledge of the value function at the next time step.

\nabla_{u_t} C = R u_{t} + B^T \lambda_{t+1} = 0

Ok. Let’s code it up using linear relations. Each of these conditions is a separate little conceptual box. We can build the optimal control step relation by combining these updates together.

The string diagram for a single time step of control

The following is a bit sketchy. I am confident it makes sense, but I’m not confident that I have all of the signs and other details correct. I also still need to add the linear terms to the objective and offset terms to the dynamics. These details are all kind of actually important. Still, I think it’s nice to set down in preliminary blog form.
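
In that spirit, here is a minimal sketch of what one such conceptual box looks like as raw constraint data, for the scalar case (1-d state and control). lqrStep is a hypothetical helper, not code from the repo; it just packs the three stationarity conditions above into one H-rep system, with u_t as an interior variable that composition would project out.

import Numeric.LinearAlgebra

-- One scalar LQR time step as an H-rep constraint system M z = 0.
-- Columns of M are ordered [x_t, lam_t, u_t, x_{t+1}, lam_{t+1}].
lqrStep :: Double -> Double -> Double -> Double -> (Matrix Double, Vector Double)
lqrStep a b q r =
  ( (3><5) [ a,  0, b, -1, 0     -- dynamics: a*x_t + b*u_t - x_{t+1} = 0
           , q, -1, 0,  0, a     -- costate:  q*x_t - lam_t + a*lam_{t+1} = 0
           , 0,  0, r,  0, b ]   -- control:  r*u_t + b*lam_{t+1} = 0
  , konst 0 3 )

Wrapping this as an HLinRel (in the sense of the circuits post below) between (x_t, \lambda_t) and (x_{t+1}, \lambda_{t+1}) and iterating hcompose chains the time steps.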

Initial conditions and final conditions. Final conditions fix the final value derivative to Qx. Initial conditions set x to some value. Should there be a leg for lambda on the initial conditions?

Bits and Bobbles

  • The code in context https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs
  • Some of the juicier stuff is nonlinear control. Gotta walk before we can run though. I have some suspicions that a streaming library may be useful here, or a lens-related approach. Also ADMM.
  • Reminiscent of Keldysh contour. Probably a meaningless coincidence.
  • In some sense H-Rep is the analog of (a -> b -> Bool) and V-rep [(a,b)]
  • Note that the equations of motion are relational rather than functional for a control system. The control parameters u are undetermined variables under our control.
  • Loopback (trace) the value derivative for infinite horizon control.
  • Can solve the Laplace equation with a Poincare Steklov operator. That should be my next post.
  • There is something a touch unsatisfying even with the reduced goals of this post. Why did I have to write down the quadratic optimization problem and then hand translate it to linear relations? It’s kind of unacceptable. The quadratic optimization problem is really the most natural statement, but I haven’t figured out how to make it compositional. The last chapter of Rockafellar?

References

https://arxiv.org/abs/1405.6881 Baez and Erbele – Categories in Control

http://www.scholarpedia.org/article/Optimal_control

http://www.argmin.net/2016/05/18/mates-of-costate/ – interesting analogy to backpropagation. Lens connection?

https://courses.cs.washington.edu/courses/cse528/09sp/optimality_chapter.pdf

Linear Relation Algebra of Circuits with HMatrix

Oooh this is a fun one.

I’ve talked before about relation algebra and I think it is pretty neat. http://www.philipzucker.com/a-short-skinny-on-relations-towards-the-algebra-of-programming/. In that blog post, I used finite relations. In principle, they are simple to work with. We can perform relation algebra operations like composition, meet, and join by brute force enumeration.

Unfortunately, brute force may not always be an option. First off, finite relations can grow so enormous as to make this infeasible. Secondly, it is not insane to talk about relations or regions with an infinite number of elements, such as some continuous blob in 2D space. In that case, we can’t even in principle enumerate all the points in the region. What are we to do? We need to develop some kind of finite parametrization of regions to manipulate. This parametrization can’t possibly be complete in some sense, and we may choose more or less powerful systems of description for computational reasons.

In this post, we are going to be talking about linear or affine subspaces of a continuous space. These subspaces are flat pieces of the space: points, lines, planes, and hyperplanes. Linear subspaces have to go through the origin, while affine subspaces can have an offset from the origin.

In the previous post, I mentioned that the finite relations formed a lattice, with operations meet and join. These operations were the same as set intersection and union, so the introduction of the extra terminology meet and join felt a bit unwarranted. Now the meet and join aren’t union and intersection anymore. We have chosen not to have the capability to represent the union of two subspaces; instead we can only represent the smallest subspace that contains them both, which is the union closed under vector addition. For example, the join of a line and a point will be the plane that goes through both.
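
Concretely, for linear subspaces the join is the span of the union, U \vee W = U + W = \{ u + w \mid u \in U, w \in W \}, while the meet is the ordinary intersection U \wedge W = U \cap W. For affine subspaces the join is the affine hull of the union, which is what vjoin' computes below.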

Linear/Affine stuff is great because it is so computational. Most questions you want to ask are answerable by readily available numerical linear algebra packages. In this case, we’ll use the Haskell package HMatrix, which is something like a numpy/scipy equivalent for Haskell. We’re going to use type-level indices to denote the sizes and partitioning of these spaces, so we’ll need some helper functions.

Matrices are my patronum. They make everything good. Artwork courtesy of David

In case I miss any extensions, make typos, etc, you can find a complete compiling version here https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs

type BEnum a = (Enum a, Bounded a) 

-- cardinality. `size` was already taken by HMatrix :(
card :: forall a. (BEnum a) => Int
card = (fromEnum (maxBound @a)) - (fromEnum (minBound @a)) + 1

In analogy with sets of tuples for defining finite relations, we partition the components of the linear spaces to be “input” and “output” indices/variables \begin{bmatrix} x_1 & x_2 & x_3 & ... & y_1 & y_2 & y_3 & ... \end{bmatrix}. This partition is somewhat arbitrary and easily moved around, but the weakening of strict notions of input and output as compared to functions is the source of the greater descriptive power of relations.

Relations are extensions of functions, so linear relations are an extension of linear maps. A linear map has the form y = Ax. A linear relation has the form Ax + By = 0. An affine map has the form y = Ax + b and an affine relation has the form Ax + By = b.
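
For example, the graph of a linear map y = Ax is the linear relation

\begin{bmatrix} A & -I \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 0

which is exactly the shape of the identity relation hid below, with A = I.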

There are at least two useful concrete representations for subspaces.

  1. We can write a matrix A and vector b down that corresponds to affine constraints. Ax = b. The subspace described is the nullspace of A plus a solution of the equation. The rows of A are orthogonal to the space.
  2. We can hold onto generators of the subspace. x = A' l + b where l parametrizes the subspace. In other words, the subspace is generated by / is the span of the columns of A'. It is the range of A'.

We’ll call these two representations the H-Rep and V-Rep, borrowing terminology from similar representations in polytopes (describing a polytope by the inequalities that define its faces or as the convex combination of its vertices). https://inf.ethz.ch/personal/fukudak/lect/pclect/notes2015/PolyComp2015.pdf These two representations are dual in many respects.

-- HLinRel holds A x = b constraint
data HLinRel a b = HLinRel (Matrix Double) (Vector Double) deriving Show

-- x = A l + b. Generator constraint. 
data VLinRel a b = VLinRel (Matrix Double) (Vector Double) deriving Show

It is useful to have both reps and interconversion routines, because different operations are easy in the two representations. Any operations defined on one can be defined on the other by sandwiching between these conversion functions. Hence, we basically only need to define operations for one of the reps (if we don’t care too much about efficiency loss which, fair warning, is out the window for today). The bulk of computation will actually be performed by these interconversion routines. The HMatrix function nullspace performs an SVD under the hood and gathers up the space with 0 singular values.

-- if A x = b then x is in the nullspace + a vector b' solves the equation
h2v :: HLinRel a b -> VLinRel a b
h2v (HLinRel a b) = VLinRel a' b' where
        b' = a <\> b -- least squares solution
        a' = nullspace a

-- if x = A l + b, then A' . x = A' A l + A' b = A' b because A' A = 0
v2h :: VLinRel a b -> HLinRel a b
v2h (VLinRel a' b') = HLinRel a b where
        b = a #> b' -- matrix multiply
        a = tr $ nullspace (tr a') -- orthogonal space to range of a.
-- tr is transpose and not trace? A little bit odd, HMatrix.

These linear relations form a category. I’m not using the Category typeclass because I need BEnum constraints hanging around. The identity relation is x = y, aka Ix - Iy = 0.

hid :: forall a. BEnum a => HLinRel a a
hid =  HLinRel (i ||| (- i)) (vzero s) where 
                            s = card @a
                            i = ident s

Composing relations is done by combining the constraints of the two relations and then projecting out the interior variables. Taking the conjunction of constraints is easiest in the H-Rep, where we just need to vertically stack the individual constraints. Projection is easily done in the V-Rep, where you just need to drop the appropriate section of the generator vectors. So we implement this operation by flipping between the two.
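
In symbols: if the first relation is M' \begin{bmatrix} x \\ y \end{bmatrix} = b' and the second is M \begin{bmatrix} y \\ z \end{bmatrix} = b, the stacked system built inside hcompose is

\begin{bmatrix} M_a' & M_b' & 0 \\ 0 & M_b & M_c \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} b' \\ b \end{bmatrix}

after which the y block is projected away by a round trip through the V-Rep.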

hcompose :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c
hcompose (HLinRel m b) (HLinRel m' b') = let a'' = fromBlocks [[       ma',           mb' ,    0       ],
                                                               [         0 ,    mb,        mc          ]] in
                                         let b'' = vjoin [b', b] in 
                                         let (VLinRel q p) = h2v (HLinRel a'' b'') in -- kind of a misuse
                                         let q' = (takeRows ca q)  -- drop rows belonging to @b
                                                       === 
                                                  (dropRows (ca + cb) q) in
                                         let [x,y,z] =  takesV [ca,cb,cc] p in
                                         let p'=  vjoin [x,z] in -- rebuild without rows for @b
                                         v2h (VLinRel q' p') -- reconstruct HLinRel
                                       where 
                                           ca = card @a
                                           cb = card @b 
                                           cc = card @c
                                           sb = size b -- number of constraints in first relation
                                           sb' = size b' -- number of constraints in second relation
                                           ma' = takeColumns ca m'
                                           mb' = dropColumns ca m'
                                           mb = takeColumns cb m
                                           mc = dropColumns cb m

(<<<) :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c
(<<<) = hcompose

We can implement the general cadre of relation operators, meet, join, converse. I feel the converse is the most relational thing of all. It makes inverting a function nearly a no-op.

hjoin :: HLinRel a b -> HLinRel a b -> HLinRel a b
hjoin v w = v2h $ vjoin' (h2v v) (h2v w)

-- hmatrix took vjoin from me :(
-- joining means combining generators and adding a new generator
-- Closed under affine combination l * x1 + (1 - l) * x2 
vjoin' :: VLinRel a b -> VLinRel a b -> VLinRel a b
vjoin' (VLinRel a b) (VLinRel a' b') = VLinRel (a ||| a' ||| (asColumn (b - b'))) b
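
The test block at the end also calls hmeet, which didn’t make it into the snippets above. A minimal sketch of it (check LinRel.hs linked above for the real thing): in the H-Rep, meet just means imposing both constraint systems at once, i.e. stacking them vertically.

-- meet: satisfy both sets of constraints at once.
-- (===) is HMatrix's vertical concatenation.
hmeet :: HLinRel a b -> HLinRel a b -> HLinRel a b
hmeet (HLinRel a b) (HLinRel a' b') = HLinRel (a === a') (vjoin [b, b'])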

-- no constraints, everything
-- trivially true
htop :: forall a b. (BEnum a, BEnum b) => HLinRel a b 
htop = HLinRel (vzero (1,ca + cb)) (konst 0 1) where 
                                      ca = card @a
                                      cb = card @b 

-- hbottom?
                              
hconverse :: forall a b. (BEnum a, BEnum b) => HLinRel a b -> HLinRel b a 
hconverse (HLinRel a b) = HLinRel ( (dropColumns ca a) |||  (takeColumns ca a)) b where 
    ca = card @a
    cb = card @b  

Relational inclusion is the question of subspace inclusion. It is fairly easy to check if a V-Rep is in an H-Rep (just plug the generators into the constraints and see if they obey them) and by using the conversion functions we can define it for arbitrary combos of H and V.


-- forall l. A' ( A l + b) == b'
-- is this numerically ok? I'm open to suggestions.
vhsub :: VLinRel a b -> HLinRel a b -> Bool
vhsub (VLinRel a b) (HLinRel a' b') = (naa' <=  1e-10 * (norm_2 a') * (norm_2 a)  ) && ((norm_2 ((a' #> b) - b')) <= 1e-10 * (norm_2 b')  ) where
          naa' = norm_2 (a' <> a)

hsub :: HLinRel a b -> HLinRel a b -> Bool
hsub h1 h2 = vhsub (h2v h1) h2

heq :: HLinRel a b -> HLinRel a b -> Bool
heq a b = (hsub a b) && (hsub b a)


instance Ord (HLinRel a b) where
  (<=) = hsub
  (>=) = flip hsub 

instance Eq (HLinRel a b) where
  (==) = heq

It is useful to use the direct sum of the spaces as a monoidal product.

hpar :: HLinRel a b -> HLinRel c d -> HLinRel (Either a c) (Either b d)
hpar (HLinRel mab v) (HLinRel mcd v') = HLinRel (fromBlocks [ [mab, 0], [0 , mcd]]) (vjoin [v, v'])

hleft :: forall a b. (BEnum a, BEnum b) => HLinRel a (Either a b)
hleft = HLinRel ( i ||| (- i) ||| (konst 0 (ca,cb))) (konst 0 ca) where 
    ca = card @a
    cb = card @b  
    i = ident ca

hright :: forall a b. (BEnum a, BEnum b) => HLinRel b (Either a b)
hright = HLinRel ( i ||| (konst 0 (cb,ca)) ||| (- i) ) (konst 0 cb) where 
    ca = card @a
    cb = card @b  
    i = ident cb

htrans :: HLinRel a (Either b c) -> HLinRel (Either a b) c 
htrans (HLinRel m v) = HLinRel m v

hswap :: forall a b. (BEnum a, BEnum b) => HLinRel (Either a b) (Either b a)
hswap = HLinRel (fromBlocks [[ia ,0,0 ,-ia], [0, ib,-ib,0]]) (konst 0 (ca + cb)) where 
        ca = card @a
        cb = card @b  
        ia = ident ca
        ib = ident cb


hsum :: forall a. BEnum a => HLinRel (Either a a) a
hsum = HLinRel ( i ||| i ||| - i ) (konst 0 ca)  where 
        ca = card @a 
        i= ident ca

hdup :: forall a. BEnum a => HLinRel a (Either a a)
hdup = HLinRel (fromBlocks [[i, -i,0 ], [i, 0, -i]]) (konst 0 (ca + ca))  where 
        ca = card @a 
        i= ident ca

hdump :: HLinRel a Void
hdump = HLinRel 0 0

hlabsorb ::forall a. BEnum a => HLinRel (Either Void a) a
hlabsorb = HLinRel m v where (HLinRel m v) = hid @a 

A side note: Void causes some consternation. Void is the type with no elements and is the index type of a 0 dimensional space. It is the unit object of the monoidal product. Unfortunately by an accident of the standard Haskell definitions, actual Void is not a BEnum. So, I did a disgusting hack. Let us not discuss it more.

Circuits

Baez and Fong have an interesting paper where they describe building circuits using a categorical graphical calculus. We have the pieces to go about something similar. What we have here is a precise way in which circuit diagrams can be thought of as string diagrams in a monoidal category of linear relations.

An idealized wire has two quantities associated with it, the current flowing through it and the voltage it is at.

-- a 2d space at every wire: current and voltage.
data IV = I | V deriving (Show, Enum, Bounded, Eq, Ord)

When we connect wires, the currents must be conserved and the voltages must be equal. hid and hcompose from above achieve exactly that. Composing two independent circuits in parallel is achieved by hpar.

Independent resistors in parallel.

We will want some basic tinker toys to work with.

A resistor has the same current flowing through both ends and a voltage drop proportional to the current

resistor :: Double -> HLinRel IV IV
resistor r = HLinRel ( (2><4)  [ 1,0,-1,   0,
                                 r, 1, 0, -1]) (konst 0 2) 

Composing two resistors in series adds the resistance: (resistor r1) <<< (resistor r2) == resistor (r1 + r2).

A bridging resistor allows current to flow between the two branches

bridge :: Double -> HLinRel (Either IV IV) (Either IV IV)
bridge r = HLinRel (  (4><8) [ 1,0, 1,  0, -1, 0, -1,  0, -- current conservation
                               0, 1, 0, 0, 0, -1 , 0,  0, --voltage maintained left
                               0, 0, 0, 1, 0,  0,  0, -1, -- voltage maintained right
                               r, 1, 0,-1, -r,  0,  0, 0  ]) (konst 0 4)  
A bridging resistor

Composing two bridge circuits is putting the bridge resistors in parallel. The conductance G=\frac{1}{R} of resistors in parallel adds. hcompose (bridge r1) (bridge r2) == bridge (1 / (1/r1 + 1/r2)).

parallel resistors compose

An open circuit allows no current to flow and ends a wire. open ~ resistor infinity

open :: HLinRel IV Void
open = HLinRel ( (1><2) [ 1, 0 ]) (konst 0 1)

At branching points, the voltage is maintained, but the current splits.

cmerge :: HLinRel (Either IV IV) IV
cmerge = HLinRel (fromLists [[1, 0, 1, 0, -1,  0],
                             [0, 1, 0, 0,  0, -1],
                             [0, 0, 0, 1,  0, -1]]) (konst 0 3)

This cmerge combinator could also be built using a short (== bridge 0), composing a branch with open, and then absorbing the Void away.

We can bend wires up or down by using a composition of cmerge and open.

cap :: HLinRel  (Either IV IV) Void
cap  = hcompose open cmerge

cup :: HLinRel Void (Either IV IV)
cup = hconverse cap

ground :: HLinRel IV Void
ground = HLinRel ( (1><2) [ 0 , 1 ]) (vzero 1) 

Voltage and current sources enforce the current and voltage to be certain values.

vsource :: Double -> HLinRel IV IV
vsource v = HLinRel ( (2><4) [ 1,0,-1,   0,
                               0, 1, 0, -1]) (fromList [0,v])  

isource :: Double -> HLinRel IV IV
isource i = HLinRel (fromLists [[1, 0, -1, 0], -- current conservation
                                [1, 0,  0, 0]]) (fromList [0,i])

Measurements of circuits proceed by probes.

type VProbe = ()
vprobe :: HLinRel IV VProbe
vprobe = HLinRel ( (2><3)  [1,0,0,
                            0,1,-1]) (konst 0 2)  

Inductors and capacitors could be included easily, but would require the entries of the HMatrix values to be polynomials in the frequency \omega, which it does not support (but it could!). We'll leave those off for another day.

We can actually check by computation that the rules suggested above are being followed.

r20 :: HLinRel IV IV
r20 = resistor 20

main :: IO ()
main = do
            print (r20 == (hid <<< r20))
            print (r20 == r20 <<< hid)
            print (r20 == (hmeet r20 r20))
            print $ resistor 50 == r20 <<< (resistor 30)
            print $ (bridge 10) <<< (bridge 10) == (bridge 5)
            print $ v2h (h2v r20) == r20
            print $ r20 <= htop
            print $ hconverse (hconverse r20) == r20
            print $ (open <<< r20) == open

Bits and Bobbles

  • Homogeneous systems are usually a bit more elegant to deal with, although a bit more unfamiliar and abstract.
  • Could make a pandas like interface for linear relations that uses numpy/scipy.sparse for the computation. All the swapping and associating is kind of fun to design, not so much to use. Labelled n-way relations are nice for users.
  • Implicit/Lazy evaluation. We should let the good solvers do the work when possible. We implemented our operations eagerly. We don't have to. By allowing hidden variables inside our relations, we can avoid the expensive linear operations until it is useful to actually compute on them.
  • Relational division = quotient spaces?
  • DSL. One of the beauties of the pointfree/categorical approach is that you avoid the need for binding forms. This makes for a very easily manipulated DSL. The transformations feel like those of ordinary algebra and you don't have to worry about the subtleties of index renaming or substitution under binders.
  • Sparse is probably really good. We have lots of identity matrices and simple rearrangements. It is very wasteful to use dense operations on these.
  • Schur complements https://en.wikipedia.org/wiki/Schur_complement are the name of the game for projecting out pieces of linear problems. We have some overlap.
  • Linear relations -> Polyhedral relations -> Convex Relations. Linear is super computable, polyhedral can blow up. Rearrange a DSL to abuse Linear programming as much as possible for queries.
  • Network circuits. There is an interesting subclass of circuits that is designed to be pretty composable.

https://en.wikipedia.org/wiki/Two-port_network Two port networks are a very useful subclass of electrical circuits. They model transmission lines fairly well, and are easily composable for filter construction.

It is standard to describe these networks by giving a linear function from two of the variables to the other two. Depending on your choice of which variables depend on which, these are called the z-parameters, y-parameters, h-parameters, scattering parameters, or abcd parameters. There are tables of formulas for converting from one form to the others. The different parameters suit different use cases for composition and combining in parallel or series. From the perspective of linear relations this all seems rather silly. The necessity for so many descriptions and the confusing relationship between them comes from the unnecessary and overly rigid requirement of having a linear function-like relationship rather than just a general relation, which depending on the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A function relationship is always a lie (although a sometimes useful one), as there is always back-reaction from new connections.

-- voltage divider
divider :: Double -> Double -> HLinRel (Either IV IV) (Either IV IV)
divider r1 r2 = hcompose (bridge r2) (hpar (resistor r1) hid) 

The relation model also makes clearer how to build lumped models out of continuous ones. https://en.wikipedia.org/wiki/Lumped-element_model

https://en.wikipedia.org/wiki/Transmission_line https://en.wikipedia.org/wiki/Distributed-element_model

  • Because the type indices have no connection to the actual data types (they are phantom) it is a wise idea to use smart constructors that check that the sizes of the matrices makes sense.

-- smart constructors
hLinRel :: forall a b. (BEnum a, BEnum b) => Matrix Double -> Vector Double -> Maybe (HLinRel a b) 
hLinRel m v | cols m == (ca + cb) &&  (size v == rows m)  = Just (HLinRel m v)
            |  otherwise = Nothing  where 
                 ca = card @a
                 cb = card @b  
  • Nonlinear circuits. Grobner Bases and polynomial relations?
  • Quadratic optimization under linear constraints. Can't get it to come out right yet. Clutch for Kalman filters. Nice for many formulations like least power, least action, minimum energy principles. Edit: I did more in this direction here http://www.philipzucker.com/categorical-lqr-control-with-linear-relations/
  • Quadratic Operators -> Convex operators. See last chapter of Rockafellar.
  • Duality of controllers and filters. It is well known (I think) that for every controller algorithm there is a filter algorithm that is basically the same thing.
    • LQR - Kalman
    • Viterbi filter - Value function table
    • particle filter - Monte Carlo control
    • Extended Kalman - iLQR-ish? Use local approximation of dynamics
    • unscented kalman - ?


Failing to Bound Kissing Numbers

https://en.wikipedia.org/wiki/Kissing_number

Cody brought up the other day the kissing number problem. Kissing numbers are the number of equal sized spheres you can pack around another one in d dimensions. It’s fairly self evident that the number is 2 for 1d and 6 for 2d, but 3d isn’t so obvious and in fact puzzled great mathematicians for a while. He was musing that it was interesting that the kissing numbers for some dimensions are not currently known, despite the fact that the first order theory of the real numbers is decidable https://en.wikipedia.org/wiki/Decidability_of_first-order_theories_of_the_real_numbers

I suggested on knee jerk that Sum of Squares might be useful here. I see inequalities and polynomials and then it is the only game in town that I know anything about.

Apparently that knee jerk was not completely wrong:

https://arxiv.org/pdf/math/0608426.pdf

Somehow SOS/SDP was used for bounds here. I had an impulse that the problem feels SOS-y but I do not understand their derivation.

One way the problem can be formulated is by finding or proving there is no solution to the following set of equations constraining the centers x_i of the spheres. Set the central sphere at (0,0,0,\ldots). Make the radii 1. Then \forall i.\ |x_i|^2 = 2^2 and \forall i, j.\ |x_i - x_j|^2 \ge 2^2.
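
As a sanity check, in 2d the hexagonal arrangement x_k = (2 \cos(k \pi / 3), 2 \sin(k \pi / 3)) for k = 0, \ldots, 5 satisfies both: |x_k|^2 = 4, and adjacent centers are |x_k - x_{k+1}| = 2 \cdot 2 \sin(\pi/6) = 2 apart, so the constraints are feasible for n = 6, d = 2.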

I tried a couple different things and have basically failed. I hope maybe I’ll someday have a follow up post where I do better.

So I had one idea on how to approach this via a convex relaxation.

Make a vector x = \begin{bmatrix} x_0 & y_0 & x_1 & y_1 & x_2 & y_2 & ... \end{bmatrix}. Take the outer product of this vector, X = x^T x. Then we can write the above equations as linear equalities and inequalities on X. If we forget that we need X to be the outer product of x (the relaxation step), this becomes a semidefinite program. Fingers crossed, maybe the solution comes back as a rank 1 matrix. Other fingers crossed, maybe the solution comes back and says it’s infeasible. In either case, we have solved our original problem.

import numpy as np
import cvxpy as cvx


d = 2
n = 6
N = d * n 
x = cvx.Variable((N+1,N+1), symmetric=True) # homogenized moment matrix, playing the role of [1; x][1; x]^T
c = []
c += [x >> 0]
c += [x[0,0] == 1]
# x^2 + y^2 + z^2 + ... == 2^2 constraint
x1 = x[1:,1:]
for i in range(n):
    q = 0
    for j in range(d):
        q += x1[d*i + j, d*i + j]
    c += [q == 4] #[ x1[2*i + 1, 2*i + 1] + x[2*i + 2, 2*i + 2] == 4]

#c += [x1[0,0] == 2, x1[1,1] >= 0]
#c += [x1[2,2] >= 2]

# (x_i - x_k)^2 + (y_i - y_k)^2 >= 4
for i in range(n):    
    for k in range(i):
        q = 0
        for j in range(d):
            q += x1[d*i + j, d*i + j] + x1[d*k + j, d*k + j] - 2 * x1[d*i + j, d*k + j] # xk ^ 2 - 2 * xk * xi 
        c += [q >= 4]
print(c)
obj = cvx.Maximize(cvx.trace(np.random.rand(N+1,N+1) @ x ))
prob = cvx.Problem(obj, c)
print(prob.solve(verbose=True))
u, s, vh = np.linalg.svd(x.value)
print(s)

import matplotlib.pyplot as plt
xy = vh[0,1:].reshape(-1,2)
print(xy)
plt.scatter(xy[:,0], xy[:,1])
plt.show()

Didn’t work though. Sigh. It’s conceivable we might do better if we start packing higher powers into x?

Ok Round 2. Let’s just ask z3 and see what it does. I’d trust z3 with my baby’s soft spot.

It solves for 5 and below. Z3 grinds to a halt on N=6 and above. It ran for days doing nothing on my desktop.

from z3 import *
import numpy as np

d = 2 # dimensions
n = 6 # number of spheres

x = np.array([ [ Real("x_%d_%d" % (i,j))     for j in range(d) ] for i in range(n)])
print(x)

c = []
ds = np.sum(x**2, axis=1)
c += [ d2 == 4 for d2 in ds] # centers at distance 2 from origin


ds = np.sum( (x.reshape((-1,1,d)) - x.reshape((1,-1,d)))**2, axis = 2)

c += [ ds[i,j]  >= 4  for i in range(n) for j in range(i)] # spheres greater than dist 2 apart
c += [x[0,0] == 2] # symmetry breaking: pin the first sphere
print(c)
print(solve(c))

Ok. A different tack. Try to use a positivstellensatz proof. If you have a bunch of polynomial inequalities and equalities, and you sum polynomial multiples of these constraints (with the inequalities getting sum-of-squares multiples) in such a way that the total equals -1, it shows that there is no real solution to them. We have the distance from origin as an equality constraint and distance from each other as an inequality constraint. I intuitively think of the positivstellensatz as deriving an impossibility from false assumptions. You can’t add a bunch of 0 and positive numbers and get a negative number, hence there is no real solution.

I have a small set of helper functions for combining sympy and cvxpy for sum of squares optimization. I keep it here along with some other cute little constructs https://github.com/philzook58/cvxpy-helpers

import cvxpy as cvx
from sympy import *
import random
'''
The idea is to use raw cvxpy and sympy as much as possible for maximum flexibility.

Construct a sum of squares polynomial using sospoly. This returns a variable dictionary mapping sympy variables to cvxpy variables.
You are free to do polynomial operations (differentiation, integration, algebra) in pure sympy.
When you want to express an equality constraint, use poly_eq(), which takes the vardict and returns a list of cvxpy constraints.
Once the problem is solved, use poly_value to get back the solution polynomials.

That some polynomial is a sum of squares can be expressed as equality with a fresh polynomial that is explicitly a sum of squares.

With this approach, we get the full unbridled power of sympy (including grobner bases!)

I prefer manually controlling the vardict to having it auto controlled by a class, just as I prefer manually controlling my constraint sets.
Classes suck.
'''


def cvxify(expr, cvxdict): # replaces sympy variables with cvx variables in sympy expr
     return lambdify(tuple(cvxdict.keys()), expr)(*cvxdict.values()) 

def sospoly(terms, name=None):
    ''' returns sum of squares polynomial using terms, and vardict mapping to cvxpy variables '''
    if name is None:
        name = str(random.getrandbits(32))
    N = len(terms)
    xn = Matrix(terms)
    Q = MatrixSymbol(name, N,N)
    p = (xn.T * Matrix(Q) * xn)[0]
    Qcvx = cvx.Variable((N,N), PSD=True)
    vardict = {Q : Qcvx} 
    return p, vardict



def polyvar(terms,name=None):
    ''' builds sympy expression and vardict for an unknown linear combination of the terms '''
    if name is None:
        name = str(random.getrandbits(32))
    N = len(terms)
    xn = Matrix(terms)
    Q = MatrixSymbol(name, N, 1)
    p = (xn.T * Matrix(Q))[0]
    Qcvx = cvx.Variable((N,1))
    vardict = {Q : Qcvx} 
    return p, vardict

def scalarvar(name=None):
    return polyvar([1], name)

def worker(x ):
    (expr,vardict) = x
    return cvxify(expr, vardict) == 0
def poly_eq(p1, p2 , vardict):
    ''' returns a list of cvxpy constraints '''
    dp = p1 - p2
    polyvars = list(dp.free_symbols - set(vardict.keys()))
    print("hey")
    p, opt = poly_from_expr(dp, gens = polyvars, domain = polys.domains.EX) #This is brutalizing me
    print(opt)
    print("buddo")
    return [ cvxify(expr, vardict) == 0 for expr in p.coeffs()]
    '''
    import multiprocessing
    import itertools
    pool = multiprocessing.Pool()

    return pool.imap_unordered(worker, zip(p.coeffs(),  itertools.repeat(vardict)))
    '''
    
def vardict_value(vardict):
    ''' evaluate numerical values of vardict '''
    return {k : v.value for (k, v) in vardict.items()}

def poly_value(p1, vardict):
    ''' evaluate polynomial expressions with vardict'''
    return cvxify(p1, vardict_value(vardict))

if __name__ == "__main__":
    x = symbols('x')
    terms = [1, x, x**2]
    #p, cdict = polyvar(terms)
    p, cdict = sospoly(terms)
    c = poly_eq(p, (1 + x)**2 , cdict)
    print(c)
    prob = cvx.Problem(cvx.Minimize(1), c)
    prob.solve()

    print(factor(poly_value(p, cdict)))

    # global poly minimization
    vdict = {}
    t, d = polyvar([1], name='t')
    vdict.update(d)

    p, d = sospoly([1,x,x**2], name='p')
    vdict.update(d)
    constraints = poly_eq(7 + x**2 - t, p, vdict)
    obj = cvx.Maximize( cvxify(t,vdict) )
    prob = cvx.Problem(obj, constraints)
    prob.solve()
    print(poly_value(t,vdict))

and here is the attempted positivstellensatz.

import sos
import cvxpy as cvx
from sympy import *
import numpy as np

d = 2
N = 7

# the sphere centers. indices = (sphere index, coordinate)
'''xs = [ [symbols("x_%d_%d" % (i,j)) for j in range(d)] for i in range(N) ]
gens = [x for l in xs for x in l ]
xs = np.array([[poly(x,gens=gens, domain=polys.domains.EX) for x in l] for l in xs])
'''
xs = np.array([ [symbols("x_%d_%d" % (i,j)) for j in range(d)] for i in range(N) ])

c1 = np.sum( xs * xs, axis=1) - 1
c2 = np.sum((xs.reshape(-1,1,d) - xs.reshape(1,-1,d))**2 , axis=2) - 1

print(c1)
print(c2)
terms0 = [1]
terms1 = terms0 + list(xs.flatten())
terms2 = [ terms1[i]*terms1[j] for j in range(N+1) for i in range(j+1)]
#print(terms1)
#print(terms2)
vdict = {}
psatz = 0
for c in c1:
    lam, d = sos.polyvar(terms2)
    vdict.update(d)
    psatz += lam*c
for i in range(N):
    for j in range(i):
        c = c2[i,j]
        lam, d = sos.sospoly(terms2)
        vdict.update(d)
        psatz += lam*c
#print(type(psatz))
print("build constraints")
constraints = sos.poly_eq(psatz, -1, vdict)
#print("Constraints: ", len(constraints))
obj = cvx.Minimize(1) #sum([cvx.sum(v) for v in vdict.values()]))
print("build prob")
prob = cvx.Problem(obj, constraints)
print("solve")
prob.solve(verbose=True, solver= cvx.SCS)

It worked in 1-d, but did not work in 2d. At order 3 polynomials N=7, I maxed out my RAM.

I also tried doing it in Julia, since sympy was killing me. Julia already has a SOS package.

using JuMP
using SumOfSquares
using DynamicPolynomials
using SCS
N = 10
d = 2
@polyvar x[1:N,1:d]
X = monomials(reshape(x,d*N), 0:2)
X1 = monomials(reshape(x,d*N), 0:4)

model = SOSModel(with_optimizer(SCS.Optimizer))

acc = nothing
for t in sum(x .* x, dims=2)
    #print(t)
    p = @variable(model, [1:1], Poly(X1))
    #print(p)
    if acc != nothing
        acc += p * (t - 1)
    else
        acc = p * (t - 1)
    end
end

for i in range(1,stop=N)
    for j in range(1,stop=i-1)
        d  = x[i,:] - x[j,:]
        p = @variable(model, [1:1], SOSPoly(X))
        acc += p * (sum(d .* d) - 1)
    end
end

#print(acc)
print(typeof(acc))
@constraint(model, acc[1] == -1 )
optimize!(model)

It was faster to encode, but it’s using the same solver (SCS), so basically the same thing.

I should probably be reducing the system with respect to the equality constraints since they’re already in a Groebner basis. I know that can be really important for reducing the size of your problem.

I dunno.

Blah blah blah blah A bunch of unedited trash

https://github.com/peterwittek/ncpol2sdpa Peter Wittek has probably died in an avalanche? That is very sad.

These notes

https://web.stanford.edu/class/ee364b/lectures/sos_slides.pdf

Positivstellensatz.

kissing number

Review of sum of squares

minimum of samples as LP. ridiculous problem
max t
s.t. f(x_i) - t >= 0

dual -> one dual variable per sample point
The only dual variable that will be nonzero is the one actually selecting the minimum.

Hm. Yeah, that’s a decent analogy.

How does the dual even have a chance of knowing about poly arithmetic?
It must be during the SOS conversion process. In building the SOS constraints,
we build a finite, limited version of polynomial multiplication
x as a matrix. x is a shift matrix.
In producing the characteristic polynomial, x is a shift matrix, with the last line using the polynomial
known to be zero to
eigenvectors of this matrix are zeros of the poly.

SOS does not rely on polynomials per se. It relies on closure of the squaring operation

maybe set one sphere just at x=0 y = 2. That breaks some symmetry

set next sphere in plane something. random plane through origin?

order y components – breaks some of the permutation symmetry.

no, why not order in a random direction. That seems better for symmetry breaking

Learn Coq in Y

Edit: It’s up! https://learnxinyminutes.com/docs/coq/

I’ve been preparing a Learn X in Y tutorial for Coq. https://learnxinyminutes.com/

I’ve been telling people this and been surprised by how few people have heard of the site. It’s super quick intros to syntax and weirdness for a bunch of languages with inline code tutorials.
I think that for me, a short description of the mundane syntactic and programming constructs of Coq is helpful.
Some guidance on the standard library, what is available by default. And dealing with Notation scopes, which is a pretty weird feature that most languages don’t have.
The manual actually has all this now. It’s really good. Like check this section out https://coq.inria.fr/refman/language/coq-library.html . But the manual is an intimidating document. It starts with a BNF description of syntax and things like that. The really useful pedagogical stuff is scattered throughout it.

Anyway here is my draft (also here https://github.com/philzook58/learnxinyminutes-docs/blob/master/coq.html.markdown where the syntax highlighting isn’t so janked up). Suggestions welcome. Or if this gets accepted, you can just make pull requests


---
language: Coq
filename: learncoq.v
contributors:
    - ["Philip Zucker", "http://www.philipzucker.com/"]
---

The Coq system is a proof assistant. It is designed to build and verify mathematical proofs. The Coq system contains the functional programming language Gallina and is capable of proving properties about programs written in this language.

Coq is a dependently typed language. This means that the types of the language may depend on the values of variables. In this respect, it is similar to other related languages such as Agda, Idris, F*, Lean, and others. Via the Curry-Howard correspondence, programs, properties and proofs are formalized in the same language.

Coq is developed in OCaml and shares some syntactic and conceptual similarity with it. Coq is a language containing many fascinating but difficult topics. This tutorial will focus on the programming aspects of Coq, rather than the proving. It may be helpful, but not necessary, to learn some OCaml first, especially if you are unfamiliar with functional programming. This tutorial is based upon its OCaml equivalent.

The standard usage model of Coq is to write it with interactive tool assistance, which operates like a high powered REPL. Two common such editors are the CoqIDE and Proof General Emacs mode.

Inside Proof General `Ctrl+C Ctrl+Enter` will evaluate up to your cursor.


```coq
(*** Comments ***)

(* Comments are enclosed in (* and *). It's fine to nest comments. *)

(* There are no single-line comments. *)

(*** Variables and functions ***)

(* The Coq proof assistant can be controlled and queried by a command language called 
   the vernacular. Vernacular keywords are capitalized and the commands end with a period.
   Variable and function declarations are formed with the Definition vernacular. *)

Definition x := 10.

(* Coq can sometimes infer the types of arguments, but it is common practice to annotate
   with types. *)

Definition inc_nat (x : nat) : nat := x + 1.

(* There exists a large number of vernacular commands for querying information. 
   These can be very useful. *)

Compute (1 + 1). (* 2 : nat *) (* Compute a result. *)

Check tt. (* tt : unit *) (* Check the type of an expression *)

About plus. (* Prints information about an object *)

(* Print information including the definition *)
Print true. (* Inductive bool : Set := true : bool | false : bool *)  

Search nat. (* Returns a large list of nat related values *)
Search "_ + _". (* You can also search on patterns *)
Search (?a -> ?a -> bool). (* Patterns can have named parameters  *)
Search (?a * ?a).

(* Locate tells you where notation is coming from. Very helpful when you encounter
   new notation. *)
Locate "+". 

(* Calling a function with insufficient number of arguments
   does not cause an error, it produces a new function. *)
Definition make_inc x y := x + y. (* make_inc is nat -> nat -> nat *)
Definition inc_2 := make_inc 2.   (* inc_2 is nat -> nat *)
Compute inc_2 3. (* Evaluates to 5 *)

(* Definitions can be chained with the "let ... in" construct.
   This is roughly the same as assigning values to multiple
   variables before using them in expressions in imperative
   languages. *)
Definition add_xy : nat := let x := 10 in
                         let y := 20 in
                         x + y.


(* Pattern matching is somewhat similar to the switch statement in imperative
   languages, but offers a lot more expressive power. *)
Definition is_zero (x : nat) :=
    match x with
    | 0 => true
    | _ => false  (* The "_" pattern means "anything else". *)
    end.


(* You can write recursive function definitions using the Fixpoint vernacular. *)
Fixpoint factorial n := match n with
                        | 0 => 1
                        | (S n') => n * factorial n'
                        end.


(* Function application usually doesn't need parentheses around arguments *)
Compute factorial 5. (* 120 : nat *)

(* ...unless the argument is an expression. *)
Compute factorial (5-1). (* 24 : nat *)

(* You can define mutually recursive functions using "with" *)
Fixpoint is_even (n : nat) : bool := match n with
  | 0 => true
  | (S n) => is_odd n
end with
  is_odd n := match n with
  | 0 => false
  | (S n) => is_even n
              end.

(* As Coq is a total programming language, it will only accept programs when it can
   understand they terminate. It can be most easily seen when the recursive call is
   on a pattern matched out subpiece of the input, as then the input is always decreasing
   in size. Getting Coq to understand that functions terminate is not always easy. See the
   references at the end of the article for more on this topic. *)

(* Anonymous functions use the following syntax: *)

Definition my_square : nat -> nat := fun x => x * x.

Definition my_id (A : Type) (x : A) : A := x.
Definition my_id2 : forall A : Type, A -> A := fun A x => x.
Compute my_id nat 3. (* 3 : nat *)

(* You can ask Coq to infer terms with an underscore *)
Compute my_id _ 3. 

(* An implicit argument of a function is an argument which can be inferred from contextual
   knowledge. Parameters enclosed in {} are implicit by default *)

Definition my_id3 {A : Type} (x : A) : A := x.
Compute my_id3 3. (* 3 : nat *)

(* Sometimes it may be necessary to turn this off. You can make all arguments explicit
   again with @ *)
Compute @my_id3 nat 3.

(* Or give arguments by name *)
Compute my_id3 (A:=nat) 3.

(*** Notation ***)

(* Coq has a very powerful Notation system that can be used to write expressions in more
   natural forms. *)
Compute Nat.add 3 4. (* 7 : nat *)
Compute 3 + 4. (* 7 : nat *)

(* Notation is a syntactic transformation applied to the text of the program before being
   evaluated. Notation is organized into notation scopes. Using different notation scopes allows for a weak notion of overloading. *)

(* Imports the Zarith module containing definitions related to the integers Z *)
Require Import ZArith. 

(* Notation scopes can be opened *)
Open Scope Z_scope.

(* Now numerals and addition are defined on the integers. *)
Compute 1 + 7. (* 8 : Z *)

(* Integer equality checking *)
Compute 1 =? 2. (* false : bool *) 

(* Locate is useful for finding the origin and definition of notations *)
Locate "_ =? _". (* Z.eqb x y : Z_scope *) 
Close Scope Z_scope.

(* We're back to nat being the default interpetation of "+" *)
Compute 1 + 7. (* 8 : nat *)

(* Scopes can also be opened inline with the shorthand % *)
Compute (3 * -7)%Z. (* -21%Z : Z *)

(* Coq declares by default the following interpretation scopes: core_scope, type_scope, 
   function_scope, nat_scope, bool_scope, list_scope, int_scope, uint_scope. You may also
   want the numerical scopes Z_scope (integers) and Q_scope (fractions) held in the ZArith
   and QArith module respectively. *)

(* You can print the contents of scopes *)
Print Scope nat_scope.
(*
Scope nat_scope
Delimiting key is nat
Bound to classes nat Nat.t
"x 'mod' y" := Nat.modulo x y
"x ^ y" := Nat.pow x y
"x ?= y" := Nat.compare x y
"x >= y" := ge x y
"x > y" := gt x y
"x =? y" := Nat.eqb x y
"x  a
                                                  end.

(* A destructuring let is available if a pattern match is irrefutable *)
Definition my_fst2 {A B : Type} (x : A * B) : A := let (a,b) := x in
                                                   a.

(*** Lists ***)

(* Lists are built by using cons and nil or by using notation available in list_scope. *)
Compute cons 1 (cons 2 (cons 3 nil)). (*  (1 :: 2 :: 3 :: nil)%list : list nat *)
Compute (1 :: 2 :: 3 :: nil)%list. 

(* There is also list notation available in the ListNotations modules *)
Require Import List.
Import ListNotations. 
Compute [1 ; 2 ; 3]. (* [1; 2; 3] : list nat *)


(* 
There are a large number of list manipulation functions available, including:

• length
• head : first element (with default) 
• tail : all but first element
• app : appending
• rev : reverse
• nth : accessing n-th element (with default)
• map : applying a function
• flat_map : applying a function returning lists 
• fold_left : iterator (from head to tail)
• fold_right : iterator (from tail to head) 

 *)

Definition my_list : list nat := [47; 18; 34].

Compute List.length my_list. (* 3 : nat *)
(* All functions in coq must be total, so indexing requires a default value *)
Compute List.nth 1 my_list 0. (* 18 : nat *) 
Compute List.map (fun x => x * 2) my_list. (* [94; 36; 68] : list nat *)
Compute List.filter (fun x => Nat.eqb (Nat.modulo x 2) 0) my_list. (*  [18; 34] : list nat *)
Compute (my_list ++ my_list)%list. (*  [47; 18; 34; 47; 18; 34] : list nat *)

(*** Strings ***)

Require Import Strings.String.

Open Scope string_scope.

(* Use double quotes for string literals. *)
Compute "hi"%string.

(* Strings can be concatenated with the "++" operator. *)
Compute String.append "Hello " "World". (* "Hello World" : string *)
Compute "Hello " ++ "World". (* "Hello World" : string *)

(* Strings can be compared for equality *)
Compute String.eqb "Coq is fun!"%string "Coq is fun!"%string. (* true : bool *)
Compute ("no" =? "way")%string. (* false : bool *)

Close Scope string_scope.

(*** Other Modules ***)

(* Other Modules in the standard library that may be of interest:

• Logic : Classical logic and dependent equality
• Arith : Basic Peano arithmetic
• PArith : Basic positive integer arithmetic
• NArith : Basic binary natural number arithmetic
• ZArith : Basic relative integer arithmetic
• Numbers : Various approaches to natural, integer and cyclic numbers (currently axiomatically and on top of 2^31 binary words)
• Bool : Booleans (basic functions and results)
• Lists : Monomorphic and polymorphic lists (basic functions and results), Streams (infinite sequences
defined with co-inductive types)
• Sets : Sets (classical, constructive, finite, infinite, power set, etc.)
• FSets : Specification and implementations of finite sets and finite maps (by lists and by AVL trees)
• Reals : Axiomatization of real numbers (classical, basic functions, integer part, fractional part, limit, derivative, Cauchy series, power series and results,...)
• Relations : Relations (definitions and basic results)
• Sorting : Sorted list (basic definitions and heapsort correctness)
• Strings : 8-bits characters and strings
• Wellfounded : Well-founded relations (basic results)
 *)

(*** User-defined data types ***)

(* Because Coq is dependently typed, defining type aliases is no different than defining
   an alias for a value. *)

Definition my_three : nat := 3.
Definition my_nat : Type := nat.

(* More interesting types can be defined using the Inductive vernacular. Simple enumerations
   can be defined like so *)
Inductive ml := OCaml | StandardML | Coq.
Definition lang := Coq.  (* Has type "ml". *)

(* For more complicated types, you will need to specify the types of the constructors. *)

(* Constructors don't need to be nullary; they can take arguments. *)
Inductive my_number := plus_infinity
                     | nat_value : nat -> my_number.
Compute nat_value 3. (* nat_value 3 : my_number *)


(* Record syntax is sugar for tuple-like types. It defines named accessor functions for
   the components *)
Record Point2d (A : Set) := mkPoint2d { x2 : A ; y2 : A }. 
Definition mypoint : Point2d nat :=  {| x2 := 2 ; y2 := 3 |}.
Compute x2 nat mypoint. (* 2 : nat *)
Compute mypoint.(x2 nat). (* 2 : nat *) 

(* Types can be parameterized, like in this type for "list of lists
   of anything". 'a can be substituted with any type. *)
Definition list_of_lists a := list (list a).
Definition list_list_nat := list_of_lists nat.

(* Types can also be recursive. Like in this type analogous to
   built-in list of naturals. *)

Inductive my_nat_list := EmptyList | NatList : nat -> my_nat_list -> my_nat_list.
Compute NatList 1 EmptyList. (*  NatList 1 EmptyList : my_nat_list *)

(** Matching type constructors **)

Inductive animal := Dog : string -> animal | Cat : string -> animal.

Definition say x :=
    match x with
    | Dog x => (x ++ " says woof")%string
    | Cat x => (x ++ " says meow")%string
    end.

Compute say (Cat "Fluffy"). (* "Fluffy says meow". *)

(** Traversing data structures with pattern matching **)

(* Recursive types can be traversed with pattern matching easily.
   Let's see how we can traverse a data structure of the built-in list type.
   Even though the built-in cons ("::") looks like an infix operator,
   it's actually a type constructor and can be matched like any other. *)
Fixpoint sum_list l :=
    match l with
    | [] => 0
    | head :: tail => head + (sum_list tail)
    end.

Compute sum_list [1; 2; 3]. (* Evaluates to 6 *)


(*** A Taste of Proving ***)
(* Explaining the proof language is out of scope for this tutorial, but here is a taste to
   whet your appetite. Check the resources below for more. *)

(* A fascinating feature of dependently typed theorem provers is that the same
  primitive constructs underlie the proof language as the programming features.
  For example, we can write and prove the proposition A and B implies A in raw Gallina *)

Definition my_theorem : forall A B, A /\ B -> A := fun A B ab => match ab with
                                            | (conj a b) => a
                                                       end.

(* Or we can prove it using tactics. Tactics are a macro language to help build proof terms
   in a more natural style and automate away some drudgery. *)
Theorem my_theorem2 : forall A B, A /\ B -> A.
Proof.
  intros A B ab.  destruct ab as [ a b ]. apply a.
Qed.

(* We can easily prove simple polynomial equalities using the automated tactic ring. *)
Require Import Ring.
Require Import Arith.
Theorem simple_poly : forall (x : nat), (x + 1) * (x + 2) = x * x + 3 * x + 2.
  Proof. intros. ring. Qed.

(* Here we prove the closed form for the sum of all numbers 1 to n using induction *)

Fixpoint sumn (n : nat) : nat :=
  match n with
  | 0 => 0
  | (S n') => n + (sumn n')
  end.

Theorem sum_formula : forall n, 2 * (sumn n) = (n + 1) * n.
Proof. intros. induction n.
       - reflexivity. (* 0 = 0 base case *)
       - simpl. ring [IHn]. (* induction step *)
Qed.
```
                                
With this we have only scratched the surface of Coq. It is a massive ecosystem with many interesting and peculiar topics leading all the way up to modern research.

## Further reading

* [The Coq reference manual](https://coq.inria.fr/refman/)
* [Software Foundations](https://softwarefoundations.cis.upenn.edu/)
* [Certified Programming with Dependent Types](http://adam.chlipala.net/cpdt/)
* [Mathematical Components](https://math-comp.github.io/mcb/)
* [Coq'Art: The Calculus of Inductive Constructions](http://www.cse.chalmers.se/research/group/logic/TypesSS05/resources/coq/CoqArt/)
* [FRAP](http://adam.chlipala.net/frap/)


Bonus. An unedited list of tactics. You’d probably prefer https://pjreddie.com/coq-tactics/



(*** Tactics ***)
 (* Although we won't explain their use in detail, here is a list of common tactics. *)

(* 

    * exact
    * simpl
    * intros
    * apply
    * assert
    * destruct
    * induction
    * reflexivity
    * rewrite
    * inversion
    * injection
    * discriminate
    * fold
    * unfold


        Tacticals
    * try
    * ;
    * repeat

    
 Automatic
    * auto
    * eauto
    * tauto
    * ring
    * ring_simplify
    * psatz
    * lia
    * lra


Ltac is a logic programming scripting language for tactics
     
       From Tactics chapter of LF
        intros: move hypotheses/variables from goal to context
reflexivity: finish the proof (when the goal looks like e = e)
apply: prove goal using a hypothesis, lemma, or constructor
apply... in H: apply a hypothesis, lemma, or constructor to a hypothesis in the context (forward reasoning)
apply... with...: explicitly specify values for variables that cannot be determined by pattern matching
simpl: simplify computations in the goal
simpl in H: ... or a hypothesis
rewrite: use an equality hypothesis (or lemma) to rewrite the goal
rewrite ... in H: ... or a hypothesis
symmetry: changes a goal of the form t=u into u=t
symmetry in H: changes a hypothesis of the form t=u into u=t
unfold: replace a defined constant by its right-hand side in the goal
unfold... in H: ... or a hypothesis
destruct... as...: case analysis on values of inductively defined types
destruct... eqn:...: specify the name of an equation to be added to the context, recording the result of the case analysis
induction... as...: induction on values of inductively defined types
injection: reason by injectivity on equalities between values of inductively defined types
discriminate: reason by disjointness of constructors on equalities between values of inductively defined types
assert (H: e) (or assert (e) as H): introduce a "local lemma" e and call it H
generalize dependent x: move the variable x (and anything else that depends on it) from the context back to an explicit hypothesis in the goal formula
clear H: Delete hypothesis H from the context.
subst x: For a variable x, find an assumption x = e or e = x in the context, replace x with e throughout the context and current goal, and clear the assumption.
subst: Substitute away all assumptions of the form x = e or e = x (where x is a variable).
rename... into...: Change the name of a hypothesis in the proof context. For example, if the context includes a variable named x, then rename x into y will change all occurrences of x to y.
assumption: Try to find a hypothesis H in the context that exactly matches the goal; if one is found, behave like apply H.
contradiction: Try to find a hypothesis H in the current context that is logically equivalent to False. If one is found, solve the goal.
constructor: Try to find a constructor c (from some Inductive definition in the current environment) that can be applied to solve the current goal. If one is found, behave like apply c.
(* Dependent types. Using dependent types for programming tasks tends to be rather unergonomic in Coq.
   We briefly mention as an advanced topic that there exists a more sophisticated match statement
   that is needed for dependently typed programming. See for example the "Convoy" pattern. *)

(*** Other topics ***)

(* Dependently Typed Programming - Most of the above syntax has its equivalents in OCaml. Coq also has the capability for full dependently typed programming. There is an extended pattern matching syntax available for this purpose.
 
   Extraction - Coq can be extracted to OCaml and Haskell code for their more performant runtimes and ecosystems
   Modules / TypeClasses - Modules and Typeclasses are methods for organizing code. They allow a different form of overloading than Notation
   Setoids - 
   Termination - Gallina is a total functional programming language. It will not allow you to write functions that do not obviously terminate. For functions that do terminate but non-obviously, it requires some work to get Coq to understand this.
   Coinductive - Coinductive types such as streams are possibly infinite values that stay productive.


 *)