## Compiling to Categories 3: A Bit Cuter

Ordinary Haskell functions form a cartesian closed category. Category means you can compose functions; cartesian roughly means you can construct and deconstruct tuples; and closed means you have first-class functions you can pass around.

Conal Elliott’s Compiling to Categories is a paradigm for reinterpreting ordinary functions as their equivalents in other categories. At an abstract level, I think you could describe it as a mechanism for building certain law-abiding functors from Hask to other categories. It’s another way to write things once and have them run many ways, like polymorphism or generic programming. Ordinary function syntax is human-friendly compared to writing raw categorical definitions, which look roughly like point-free programming (no named variables). In addition, by embedding it as a compiler pass within GHC, he gets to leverage existing GHC optimizations as optimizations for other categories. Interesting alternative categories include the category of graphs, circuits, and automatically differentiable functions. You can find his implementation here.

I’ve been hesitant to use a GHC plugin, plus I kind of want to do it in a way I understand, so I’ve done different versions of this using relatively normal Haskell (no Template Haskell, no Core passes, but a butt ton of hackery).

The first version used explicit tags: give them to the function and see where they come out. That is one way to probe a simple tuple-rearrangement function.
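Here is a sketch of the tag idea, in Python for brevity (the `probe` helper and its flat-pair restriction are my own illustration, not the Haskell implementation):

```python
def probe(f):
    # feed distinguishable sentinels through f and see where they land;
    # only meaningful when f merely shuffles a flat pair
    tags = ("tag0", "tag1")
    out = f(tags)
    return tuple(tags.index(t) for t in out)

swap = lambda p: (p[1], p[0])
dup_fst = lambda p: (p[0], p[0])
print(probe(swap))     # (1, 0)
print(probe(dup_fst))  # (0, 0)
```

Nested tuples and functions that do more than shuffle need more machinery, which is what the later versions supply.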

The second version worked almost entirely at the type level. It rested on the observation that a completely polymorphic tuple type signature completely specifies the implementation; you don’t have to look at the term level at all. It unified the polymorphic values in the input with a type-level-number-indexed Tag type, then searched through the input type tree to find the piece it needed. I did end up passing stuff to the term level, because I could use this mechanism to embed categorical literals. The typeclass hackery I used to achieve all this was heinous.

I realized today another way related to both that is much simpler and fairly direct. It has some pleasing aesthetic properties and some bad ones. The typeclass hackery is much reduced and the whole thing fits on a screen, so I’m pleased.

Here are the basic categorical definitions. FreeCat is useful for inspection in GHCi of what comes out of toCCC.

{-# LANGUAGE GADTs, StandaloneDeriving, NoImplicitPrelude, FlexibleInstances #-}

module Cat where

import Control.Category
import Prelude hiding ((.), id)

class Category k => Monoidal k where
    parC :: k a c -> k b d -> k (a,b) (c,d)

class Monoidal k => Cartesian k where
    fstC :: k (a,b) a
    sndC :: k (a,b) b
    dupC :: k a (a,a)

fanC :: Cartesian k => k a b -> k a c -> k a (b,c)
fanC f g = (parC f g) . dupC

idC :: Category k => k a a
idC = id

data FreeCat a b where
    Comp :: FreeCat b c -> FreeCat a b -> FreeCat a c
    Id :: FreeCat a a
    Fst :: FreeCat (a,b) a
    Snd :: FreeCat (a,b) b
    Dup :: FreeCat a (a,a)
    Par :: FreeCat a b -> FreeCat c d -> FreeCat (a,c) (b,d)
    Mul :: FreeCat (a,a) a

deriving instance Show (FreeCat a b)

instance Category FreeCat where
    (.) = Comp
    id = Id

instance Monoidal FreeCat where
    parC = Par

instance Cartesian FreeCat where
    fstC = Fst
    sndC = Snd
    dupC = Dup

instance Monoidal (->) where
    parC f g = \(x,y) -> (f x, g y)

instance Cartesian (->) where
    fstC (x,y) = x
    sndC (x,y) = y
    dupC x = (x,x)

class Cartesian k => NumCat k where
    mulC :: Num a => k (a,a) a
    negateC :: Num a => k a a
    addC :: Num a => k (a,a) a
    subC :: Num a => k (a,a) a
    absC :: Num a => k a a

instance NumCat (->) where
    mulC = uncurry (*)
    negateC = negate
    addC = uncurry (+)
    subC = uncurry (-)
    absC = abs

instance NumCat FreeCat where
    mulC = Mul
    negateC = error "TODO"
    addC = error "TODO"
    subC = error "TODO"
    absC = error "TODO"

instance (NumCat k, Num a) => Num (k z a) where
    f + g = addC . (fanC f g)
    f * g = mulC . (fanC f g)
    negate f = negateC . f
    f - g = subC . (fanC f g)
    abs f = absC . f
    signum = error "TODO"
    fromInteger = error "TODO"



And here is the basic toCCC implementation.

{-# LANGUAGE DataKinds,
             AllowAmbiguousTypes,
             TypeFamilies,
             TypeOperators,
             MultiParamTypeClasses,
             FunctionalDependencies,
             PolyKinds,
             FlexibleInstances,
             UndecidableInstances,
             TypeApplications,
             NoImplicitPrelude,
             ScopedTypeVariables #-}

module CCC ( toCCC ) where

import Control.Category
import Prelude hiding ((.), id)
import Cat

class IsTup a b | a -> b
instance {-# INCOHERENT #-} (c ~ 'True) => IsTup (a,b) c
instance {-# INCOHERENT #-} (b ~ 'False) => IsTup a b

class BuildInput tup (flag :: Bool) path where
    buildInput :: path -> tup

instance (Cartesian k,
          IsTup a fa,
          IsTup b fb,
          BuildInput a fa (k x a'),
          BuildInput b fb (k x b'),
          ((k x (a',b')) ~ cat)) => BuildInput (a,b) 'True cat where
    buildInput path = (buildInput @a @fa patha, buildInput @b @fb pathb) where
        patha = fstC . path
        pathb = sndC . path

instance (Category k, a ~ k a' b') => BuildInput a 'False (k a' b') where
    buildInput path = path

class FanOutput out (flag :: Bool) cat | out flag -> cat where
    fanOutput :: out -> cat

instance (Cartesian k,
          IsTup a fa,
          IsTup b fb,
          FanOutput a fa (k x a'),
          FanOutput b fb (k x b'),
          k x (a', b') ~ cat
         ) => FanOutput (a, b) 'True cat where
    fanOutput (x,y) = fanC (fanOutput @a @fa x) (fanOutput @b @fb y)

instance (Category k, kab ~ k a b) => FanOutput kab 'False (k a b) where
    fanOutput f = f

toCCC :: forall k a b a' b' fa fb x. (IsTup a fa, IsTup b fb, Cartesian k, BuildInput a fa (k a' a'), FanOutput b fb (k a' b')) => (a -> b) -> k a' b'
toCCC f = fanOutput @b @fb output where
    input = buildInput @a @fa (idC @k @a')
    output = f input



Here is some example usage

example2 = toCCC @FreeCat (\(x,y)->(y,x))

-- You need to give the type signature unfortunately if you want polymorphism in k. k is too ambiguous otherwise and makes GHC sad.
example3 :: Cartesian k => k _ _
example3 = toCCC (\(z,y)->(z,z))

example4 = toCCC @FreeCat (\((x,y),z) -> x)

example5 = toCCC @FreeCat (\(x,y) -> x + y)

example6 = toCCC @FreeCat (\(x,y) -> y + (x * y))

example7 :: Cartesian k => k _ _
example7 = toCCC (\(x,(y,z)) -> (x,(y,z)))

What we do is generate a tuple to give to your function. The function is assumed to be polymorphic again, but now not necessarily totally polymorphic (this is important because Num typeclass usage will unify variables). Once we hit a leaf of the input tuple, we place the categorical morphism that would extract that piece from the input. For example, for the input type (a,(b,c)) we pass the function the value (fstC, (fstC . sndC, sndC . sndC)). Detecting when we are at a leaf again requires somehow detecting a polymorphic location, which is a weird thing to do. We use the incoherent IsTup instance from last time to do this. It is close to being safe, because we immediately unify the polymorphic variable with a categorical type, so if something goes awry, a type error should result. We could make it more ironclad by unifying immediately with something that contains both the extractor and a user-inaccessible type.

We apply the function to this input. The output is then a tuple tree of morphisms. We recursively traverse down this tree, inserting a fanC for every tuple. This barely requires any typelevel hackery; the typelevel machinery that remains is just there so I can traverse down tuple trees.
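The whole pipeline can be mimicked at the term level. Here is a rough Python analogue, with morphisms represented as strings of projections rather than typeclass-directed values (`build_input`, `fan_output`, and the explicit `shape` argument are my own illustrative inventions; the Haskell version recovers the shape from the types):

```python
def build_input(shape, path="id"):
    # shape mirrors the input tuple, e.g. ("leaf", "leaf") for a pair
    if shape == "leaf":
        return path                      # leaf: the extractor reaching it
    l, r = shape
    return (build_input(l, "fst." + path),
            build_input(r, "snd." + path))

def fan_output(out):
    if isinstance(out, str):             # a lone morphism
        return out
    a, b = out                           # a pair of outputs becomes a fan
    return "fan(" + fan_output(a) + ", " + fan_output(b) + ")"

def to_ccc(f, shape):
    return fan_output(f(build_input(shape)))

print(to_ccc(lambda p: (p[1], p[0]), ("leaf", "leaf")))      # fan(snd.id, fst.id)
print(to_ccc(lambda p: p[0][0], (("leaf", "leaf"), "leaf"))) # fst.fst.id
```

The second example mirrors example4 below: extracting x from ((x,y),z) compiles to a composition of two projections.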

One benefit is that we can now use some ordinary typeclasses. We can make a simple implementation of Num for (k z a), just as we would for (z -> a). This lets us use the regular (+) and (*) operators, for example.

What is not good is the performance. As it stands, the implementation performs many global duplications of the input to create all the fanCs. In many categories, this is very wasteful. This may be a fixable problem, either by passing in more sophisticated objects than just the bare extraction morphisms of the input (CPS-ified? path lists?) or via the GHC rewrite rules mechanism. I have started to attempt the latter, but have not been successful in getting any of my rewrite rules to fire yet, because I have no idea what I’m doing. If anyone could give me some advice, I’d be much obliged. You can check that out here. For more on rewrite rules, check out the GHC user manual and this excellent tutorial by Mark Karpov.

Another possibility is to convert to FreeCat, write regular Haskell function optimization passes over the FreeCat AST and then interpret it. This adds interpretation overhead, which may or may not be acceptable depending on your use case. It would probably not be appropriate for automatically differentiable functions, but may be for outputting circuits or graphs.

interp (Comp f g) = (interp f) . (interp g)
interp Id = id
interp Fst = fstC
interp Snd = sndC
interp Dup = dupC
interp (Par f g) = parC (interp f) (interp g)
-- etc

Another problem is dealing with boolean operations. The boolean operators and comparison operators are not sufficiently polymorphic in the Prelude. We could define new operators that work as drop-in replacements in the original context, but I don’t really have the ability to overload the originals. It is tough, because if we do things like this, it feels like we’re really building a DSL more than we are compiling to categories. We need to write our functions with the DSL in mind and can’t just import and use some function that had no idea about the compiling-to-categories stuff.

I should probably just be using Conal’s concat project. This is all a little silly.

## Bouncing a Ball with Mixed Integer Programming

Edit: A new version.

Here I made a bouncing ball using mixed integer programming in cvxpy. Currently we are just simulating the bouncing ball inside a mixed integer program. We could turn this into a control program by adding the constraint that you have to shoot the ball through a hoop and having it figure out the appropriate initial shooting velocity.

import numpy as np
import cvxpy as cvx
import matplotlib.pyplot as plt

N = 100
dt = 0.05

x = cvx.Variable(N)
v = cvx.Variable(N)
collision = cvx.Variable(N-1, boolean=True)

constraints = []
M = 20 # Big M trick

# initial conditions
constraints += [x[0] == 1, v[0] == 0]

for t in range(N-1):
    predictedpos = x[t] + v[t] * dt
    col = collision[t]
    notcol = 1 - collision[t]
    constraints += [-M * col <= predictedpos, predictedpos <= M * notcol]
    # enforce regular dynamics if col == 0
    constraints += [-M * col <= x[t+1] - predictedpos, x[t+1] - predictedpos <= M * col]
    constraints += [-M * col <= v[t+1] - v[t] + 9.8*dt, v[t+1] - v[t] + 9.8*dt <= M * col]

    # reverse velocity, keep position the same if we would collide with x = 0
    constraints += [-M * notcol <= x[t+1] - x[t], x[t+1] - x[t] <= M * notcol]
    constraints += [-M * notcol <= v[t+1] + 0.8*v[t], v[t+1] + 0.8*v[t] <= M * notcol] # 0.8 restitution coefficient

objective = cvx.Maximize(1)

prob = cvx.Problem(objective, constraints)
res = prob.solve(solver=cvx.GLPK_MI, verbose=True)
print(x.value)
print(v.value)
plt.plot(x.value, label='x')
plt.plot(v.value, label='v')
plt.plot(collision.value, label='collision bool')
plt.legend()
plt.xlabel('time')
plt.show()


Pretty cool.

The trick I used this time is to make boolean indicator variables for whether a collision will happen or not. The big M trick is then used to make the variable reflect whether the predicted position will be outside the wall at x=0. If it isn’t, the regular gravity dynamics apply. If it will be, the velocity-reversing bounce dynamics apply.
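The indicator logic can be sanity-checked by brute force in pure Python, no solver required (the `feasible_cols` helper is just for illustration):

```python
M = 20  # must dominate any reachable |predictedpos|

def feasible_cols(p):
    # which values of the boolean satisfy  -M*col <= p <= M*(1-col)
    return [col for col in (0, 1) if -M*col <= p <= M*(1-col)]

print(feasible_cols(0.7))   # [0]: ball stays above the floor, no collision
print(feasible_cols(-0.3))  # [1]: predicted penetration forces the collision branch
```

Right at p = 0 both branches are feasible, which is fine: the solver just picks one.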

Just gonna dump this draft out there since I’ve moved on (I’ll edit this if I come back to it). You can embed collisions in mixed integer programming.  I did it below using a strong acceleration force that turns on when you enter the floor. What this corresponds to is a piecewise linear potential barrier.

Such a formulation might be interesting for the trajectory optimization of shooting a hoop, playing Pachinko, Beer Pong, or Pinball.

using JuMP
using Cbc
using Plots
N = 50
T = 5
dt = T/N

m = Model(solver=CbcSolver())

@variable(m, x[1:N]) # , Bin
@variable(m, v[1:N]) # , Bin
@variable(m, f[1:N-1])
@variable(m, a[1:N-1], Bin) # , Bin

@constraint(m, x[1] == 1)
@constraint(m, v[1] == 0)
M = 10
for t in 1:N-1
    @constraint(m, x[t+1] == x[t] + dt*v[t])
    @constraint(m, v[t+1] == v[t] + dt*(10*(1-a[t])-1))
    #@constraint(m, v[t+1] == v[t] + dt*(10*f[t]-1))
    @constraint(m, M * a[t] >= x[t+1]) # if on the next step it projects into the earth
    @constraint(m, M * (1-a[t]) >= -x[t+1])
    #@constraint(m, f[t] <= M*(1-a[t])) # we allow a bouncing force
end

k = 10
# @constraint(m, f .>= 0)
# @constraint(m, f .>= - k * x[2:N])

# @constraint(m, x[:] .>= 0)

E = 1 #sum(f) # 1 #sum(x) #sum(f) # + 10*sum(x) # sum(a)

@objective(m, Min, E)
solve(m)

println(x)
println(getvalue(x))
plotly()
plot(getvalue(x))
#plot(getvalue(a))
gui()

More things to consider:

Is this method trash? Yes. You can actually embed the mirror law of collisions directly, without needing to use a funky barrier potential.

You can extend this to a ball trapped in a polygon, or a ball that is restricted from entering obstacle polygons. Check out the IRIS project, which breaks up a region into convex regions.

https://github.com/rdeits/ConditionalJuMP.jl Gives good support for embedding conditional variables.

https://github.com/joehuchette/PiecewiseLinearOpt.jl On a related note, gives a good way of defining piecewise linear functions using Mixed Integer programming.

Pajarito is another interesting Julia project. A mixed integer convex programming solver.

Russ Tedrake papers – http://groups.csail.mit.edu/locomotion/pubs.shtml

Break up obstacle objects into Delaunay-triangulated pieces.

www.mit.edu/~jvielma/presentations/MINLPREPSOLJUL_NORTHE18.pdf

## A Simple Interior Point Linear Programming Solver in Python

This solver is probably not useful for anything. For almost all purposes, let me point you to cvxpy.

If you want an open source solver CBC/CLP and GLPK and OSQP are good.

If you want proprietary, you can get a variable number constrained trial license to Gurobi for free.

Having said that, here we go.

The simplex method gets more press, and certainly has its advantages, but the interior point method makes much more sense to me. What follows is the basic implementation described in Stephen Boyd’s course and book http://web.stanford.edu/~boyd/cvxbook/

In the basic interior point method, you can achieve your inequality constraints $\phi(x) \ge 0$ by using a logarithmic potential $-\gamma \ln (\phi(x))$ to punish getting close to them, where $\gamma$ is a parameter we’ll talk about in a bit. From my perspective, the logarithm is a somewhat arbitrary choice; I believe some properties of the logarithmic potential are necessary for the convergence guarantees.
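A one-dimensional worked example shows how the barrier behaves. For min c·x subject to x ≥ 0 (with c > 0, the true optimum is x = 0), the barrier problem min c·x − γ ln(x) has its minimizer at x = γ/c, which slides toward the constrained optimum as γ shrinks:

```python
import math

def f(x, c, gamma):
    # the barrier objective: c*x minus the log potential keeping x > 0
    return c*x - gamma*math.log(x)

def barrier_argmin(c, gamma):
    # d/dx (c*x - gamma*ln x) = c - gamma/x = 0  =>  x = gamma/c
    return gamma / c

c = 2.0
for gamma in (1.0, 0.1, 0.01):
    xstar = barrier_argmin(c, gamma)
    # confirm it really is a local minimum of the barrier objective
    assert f(xstar, c, gamma) < min(f(0.9*xstar, c, gamma), f(1.1*xstar, c, gamma))
    print(gamma, xstar)  # 0.5, then 0.05, then 0.005
```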

The basic unconstrained Newton step takes a locally quadratic approximation to the function you’re trying to optimize and finds the minimum of that. This basically comes down to taking a step given by the inverse Hessian applied to the gradient.

$\min_{dx} f(x_0+dx) \approx f(x_0) + \nabla f(x_0)dx + \frac{1}{2} dx^T H dx$

$(H)_{ij} = \partial_{ij}f(x_0)$

$\nabla f(x_0) +H dx = 0 \rightarrow dx =- H^{-1}\nabla f$
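As a sanity check of the formula, one Newton step on an exactly quadratic objective lands exactly on its minimizer, since the local quadratic model is then exact:

```python
import numpy as np

# f(x) = 0.5 x^T H x + g^T x, so grad f(x) = H x + g
H = np.array([[2.0, 0.5], [0.5, 1.0]])   # Hessian (positive definite)
g = np.array([1.0, -1.0])
x0 = np.array([3.0, -2.0])

grad = H @ x0 + g
dx = -np.linalg.solve(H, grad)            # dx = -H^{-1} grad f
x1 = x0 + dx
print(np.allclose(H @ x1 + g, 0))         # True: gradient vanishes after one step
```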

We can maintain a linear constraint on the variable x during this newton step. Instead of setting the gradient to zero, we set it so that it is perpendicular to the constraint plane using the Lagrange multiplier procedure.

$\nabla f(x_0) +H dx = -A^T \lambda \rightarrow Hdx + A^T \lambda = - \nabla f$

$A(x_0 + dx) = b$

This is a block linear system

$\begin{bmatrix} H & A^T \\ A & 0 \\ \end{bmatrix} \begin{bmatrix} dx \\ \lambda \end{bmatrix} = \begin{bmatrix} -\nabla f \\ b - Ax_0 \end{bmatrix}$

Despite the logarithm potential, there is no guarantee that the newton step would not take us outside the allowed region. This is why we need a line search on top of the newton step. We scale the newton dx to $\alpha dx$. Because the function we’re optimizing is convex and the region we’re in is convex, there is some step length in that newton direction that will work. So if we keep decreasing the overall step size, we’ll eventually find something acceptable.

As part of the interior point method, once the Newton iteration has converged, we decrease the parameter $\gamma$ applied to the logarithmic potential. Smaller $\gamma$ allows the inequality constraints to be satisfied more and more tightly.

The standard form of an LP is

$\min c^T x$

$A x = b$

$x \ge 0$

This doesn’t feel like the form you’d want. One way you can get there is by adding slack variables and splitting the regular variables into positive and negative pieces

$x = x_+ - x_-$

$Ax \ge b \rightarrow Ax - s = b, s \ge 0$
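Here's a sketch of that conversion in numpy (the helper name `to_standard_form` and the column ordering xp, xm, s are my own choices; note the surplus variable enters with a minus sign since the inequality is Ax ≥ b):

```python
import numpy as np

def to_standard_form(A, b, c):
    # x = xp - xm (both >= 0), surplus s >= 0 with A x - s = b
    m, n = A.shape
    A_std = np.hstack([A, -A, -np.eye(m)])        # columns: xp, xm, s
    c_std = np.concatenate([c, -c, np.zeros(m)])  # objective unchanged on x
    return A_std, b, c_std

A = np.array([[1.0, 2.0]])
b = np.array([3.0])
c = np.array([1.0, 1.0])
A_std, b_std, c_std = to_standard_form(A, b, c)

# spot-check: x = (-1, 3) satisfies A x = 5 >= 3; map it to standard form
x = np.array([-1.0, 3.0])
xp, xm = np.maximum(x, 0), np.maximum(-x, 0)
s = A @ x - b                                  # surplus, nonnegative here
z = np.concatenate([xp, xm, s])
print(np.allclose(A_std @ z, b), np.all(z >= 0))  # equality holds, z >= 0
print(np.isclose(c_std @ z, c @ x))               # objective value agrees
```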

The interior point formulation of this is

$\min c^T x- \gamma \sum_i \ln(x_i)$

$Ax = b$

The Hessian and gradient are quite simple here

$(\nabla f)_i = c_i - \frac{\gamma}{x_i}$

$(H)_{ij} = \delta_{ij} \frac{\gamma}{x_i^2}$

The optimum conditions for this are

$\nabla (c^T x - \gamma \sum_i \ln(x_i)) = c - \gamma \frac{1}{x} = 0$

$Ax=b$

Now in the above, I’m not sure I got all the signs right, but I did implement it in python. The result seems to be correct and does work. I haven’t tested extensively, YMMV. It’s a useful starting point.

# simple lp

import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as linalg
# min cx
# x >= 0
# Ax = b

def newtonDec(df, dx):
    return np.dot(df, dx)

# assumes that x + alpha*dx can be made positive
def linesearch(x, dx):
    alpha = 1.
    while not np.all(x + alpha*dx > 0):
        alpha *= 0.1
    return alpha

# min cx

def solve_lp2(A, b, c, gamma, xstart=None):
    xsize = A.shape[1]
    if xstart is not None:
        x = xstart
    else:
        x = np.ones(xsize)
    while True:
        print("Iterate")
        H = sparse.bmat([[sparse.diags(gamma / x**2), A.T],
                         [A,                          None]])  # None = zero block

        dfdx = c - gamma / x
        dfdlam = A@x - b
        df = np.concatenate((dfdx, dfdlam))
        dxlam = linalg.spsolve(H, df)
        dx = -dxlam[:xsize]
        lam = dxlam[xsize:]

        alpha = linesearch(x, dx)
        x += alpha * dx
        if newtonDec(dfdx, dx) >= -1e-10:
            print("stop")
            break

    return x, lam

def solve_lp(A, b, c, xstart=None):
    gamma = 1.0
    xsize = A.shape[1]
    x = np.ones(xsize)
    for i in range(8):
        x, lam = solve_lp2(A, b, c, gamma, xstart=x)
        gamma *= 0.1
    return x, lam

N = 12
A = np.ones(N).reshape(1,-1)
b = np.ones(1)*2
c = np.zeros(N)
c[0] = -1

#print(solve_lp(A,b,c, 0.000001))
print(solve_lp(A,b,c))

# An unfinished sketch of branch and bound on top of the LP solver
# (pickvar, choice, and lp_solve were never defined; kept for reference):
# def BB(A, b, c, best, xhint=None):
#     picked = np.zeros(xsize)
#     picked[pickvar] = 1
#     Anew = sparse.hstack((A, picked))
#     bnew = np.concatenate((b, choice))
#     if lp_solve(Anew, bnew, c) < best:
#         best, x = BB(Anew, bnew, c, best, xhint)
#     return best, x

'''
# min cx + gamma * ln(x)
# s.t. Ax = b

# cx + gamma * ln(x) + lambda (Ax - b)

delx = c + gamma * 1/x + lambda A
dellam = Ax - b
# hess
dlx = A
dxl = A.T
dxx = - gamma (1/x**2)

H @ (x l) = (delx dell)
'''


Musings:

I wanted to build this because I’ve been getting really into mixed integer programming and have been wondering how much getting deep into the guts of the solver might help. Given my domain knowledge of the problems at hand, I probably have quite good heuristics. In addition, I’ve been curious about a paper that points out an interesting, relatively unexploited territory: combining machine learning with mixed integer programming https://arxiv.org/pdf/1811.06128

For these purposes, I want a really simple optimization solver.

But this is silly. I should use CLP or OSQP as a black box if I really want to worry about the mixed integer aspect.

MIOSQP is interesting.

It is interesting how the different domains of discrete optimization and search seem to have relatively similar sets of methods. Maybe I’m crazy. Maybe at the loose level I’m going to talk at, almost anything is like almost anything else.

Clause learning and Cutting plane addition feel rather similar.

Relaxation to LP and unit propagation are somewhat similar. Or is unit propagation like elimination?

Mixed integer programs build their own heuristics.

Fourier-Motzkin and resolution are similar methods. In Fourier-Motzkin, you eliminate a variable from linear inequalities by using algebra to bring that variable by itself on one side of each inequality and then matching up every <= bound on it with every >= bound on it. There are packages that compute these things; see CDD or Polyhedra.jl.
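A minimal sketch of one Fourier-Motzkin elimination step, with each constraint stored as a coefficient tuple (the `eliminate_x` helper and the (a, b, c) encoding are my own illustration):

```python
from itertools import product

def eliminate_x(cons):
    # each constraint is (a, b, c) meaning a*x + b*y <= c
    uppers = [(b/a, c/a) for a, b, c in cons if a > 0]   # x <= c/a - (b/a)*y
    lowers = [(b/a, c/a) for a, b, c in cons if a < 0]   # x >= c/a - (b/a)*y
    new = [(b, c) for a, b, c in cons if a == 0]         # x-free constraints survive
    for (lb, lc), (ub, uc) in product(lowers, uppers):
        # lc - lb*y <= x <= uc - ub*y  implies  (ub - lb)*y <= uc - lc
        new.append((ub - lb, uc - lc))
    return new

# x >= 1, x + y <= 4, y >= 0, each written as a*x + b*y <= c
cons = [(-1, 0, -1), (1, 1, 4), (0, -1, 0)]
print(eliminate_x(cons))  # [(-1, 0), (1.0, 3.0)]  i.e.  y >= 0 and y <= 3
```

Repeating this per variable is why the method blows up: every lower/upper pair spawns a new constraint.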

Resolution operates on boolean formulas. You can eliminate a variable q from a CNF formula by combining every clause containing the negated instance $\neg q$ with every clause containing a positive instance.
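The same pairing structure as Fourier-Motzkin shows up in a toy resolution step, encoding literals as signed integers (the `eliminate` helper is my own sketch; it skips tautology filtering):

```python
from itertools import product

def eliminate(clauses, q):
    # literals are ints; -q encodes the negation of variable q
    pos = [c for c in clauses if q in c]
    neg = [c for c in clauses if -q in c]
    rest = [c for c in clauses if q not in c and -q not in c]
    # resolvent: union of both clauses minus the resolved literals
    resolvents = [(p - {q}) | (n - {-q}) for p, n in product(pos, neg)]
    return rest + resolvents

# (q or a) and (not q or b) and (c), with q=1, a=2, b=3, c=4
clauses = [frozenset({1, 2}), frozenset({-1, 3}), frozenset({4})]
print(eliminate(clauses, 1))  # [frozenset({4}), frozenset({2, 3})]
```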

## Nand2Tetris in Verilog and FPGA and Coq

Publishing these draft notes because they have some useful info in them and I’m trying to reboot the project. It’s very ambitious. We’ll see where we get with it.

https://github.com/philzook58/nand2coq

Old Stuff (Last Edited 6/23/18):

So my friends and I are starting the nand2tetris course. I feel like I have some amount of familiarity with the topics involved, so I’d like to put it into challenge mode for me.

Week 1 is about basic combinatorial logic gate constructions and sort of the ideas of an HDL.

I was trying to keep up in Verilog and failing. The verilog syntax is a little bit more verbose.

The easiest thing to use was assign statements.  The difference between = and <= in verilog is still a little opaque to me

I compiled them and ran them using Icarus Verilog (iverilog, then vvp on the output file).

I started using MyHDL, but I’m not sure I saw why it was going to be easier. The MyHDL docs did help me understand a bit of why Verilog is the way it is.

Here is a big list of interesting projects:

MyHDL – A Python hardware description language. Can output VHDL and Verilog. Based around Python generators and some decorators.

Icarus Verilog – http://iverilog.wikia.com/wiki/Main_Page. iverilog compiles Verilog into an assembly format which can be run with the vvp command.

example

iverilog myor.v not.v

vvp a.out

Verilator – Compiles Verilog to C for simulation

GTKWave – A Waveform viewer

IceStick – A cheap \$20-ish FPGA USB board that can be programmed easily

IceStorm http://www.clifford.at/icestorm/ – An OpenSource toolchain for compiling to and programming ice40 fpga chips

IceStudio – a graphical block editor. Last I checked it was still a little clunky

EdaPlayground – online web app for writing code and giving to  simulators

Formal tools:

yosys-smtbmc

symbiyosys

http://www.clifford.at/papers/2016/yosys-smtbmc/

http://zipcpu.com/blog/2017/10/19/formal-intro.html

icestick floorplan – https://knielsen.github.io/ice40_viewer/ice40_viewer.html

ZipCPU

https://opencores.org/

Learning Verilog for FPGAs: The Tools and Building an Adder

Upduino – interesting set of boards. Cheap.

http://gnarlygrey.atspace.cc/development-platform.html#upduino

Questionable: Clash?

installing icestick on the mac

https://github.com/ddm/icetools

https://github.com/Homebrew/homebrew-core/issues/9229

Had to pip uninstall enum34. Weird.

Verilog

end lines with semicolons.

You need to name instantiated elements

yosys -p "synth_ice40 -top not1 -blif not.blif" not.v

https://mcmayer.net/first-steps-with-the-icestorm-toolchain/

../icetools/arachne-pnr/bin/arachne-pnr  -d 1k -P tq144 -o not.asc -p not.pcf not.blif

../icetools/icestorm/icepack/icepack not.asc not.bin

iceprog not.bin

sudo kextunload -b com.FTDI.driver.FTDIUSBSerialDriver  

The ftdi isn’t working

module alu(
input [15:0] x
, input [15:0] y
, output [15:0] out
, input zx // zero x
, input zy // zero y
, input nx // negate x
, input ny // negate y
, input f // add if 1, and if 0
, input no // negate result?
, output zr // is it exactly zero
, output ng // is out < 0
);

wire [15:0] zerox;
wire [15:0] zeroy;
wire [15:0] notx;
wire [15:0] noty;
wire [15:0] andplus;

assign zerox = zx ? 0 : x;
assign notx = nx ? ~zerox : zerox;
assign zeroy = zy ? 0 : y;
assign noty = ny ? ~zeroy : zeroy;
assign andplus = f ? notx + noty : notx & noty;
assign out = no ? ~andplus : andplus;

assign zr = out == 0;
assign ng = out[15] == 1; // check sign bit

endmodule
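A quick Python model of the same ALU (with the zeroed/negated intermediates feeding the add/and stage, per the nand2tetris spec) is handy for sanity-checking outputs against the Verilog simulation:

```python
MASK = 0xFFFF  # 16-bit wraparound

def alu(x, y, zx, nx, zy, ny, f, no):
    x = 0 if zx else x
    x = (~x & MASK) if nx else x
    y = 0 if zy else y
    y = (~y & MASK) if ny else y
    out = (x + y) & MASK if f else (x & y)
    out = (~out & MASK) if no else out
    zr = int(out == 0)       # exactly zero?
    ng = out >> 15           # sign bit set?
    return out, zr, ng

# control bits zx nx zy ny f no = 0 1 0 0 1 1 compute x - y in two's complement
print(alu(7, 2, 0, 1, 0, 0, 1, 1))  # (5, 0, 0)
```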

## Trajectory Optimization of a Pendulum with Mixed Integer Linear Programming

There is a reasonable piecewise linear approximation for the pendulum: replace the sinusoidal potential with two quadratic potentials (one around the top and one around the bottom). This translates to a triangle-wave torque.
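One concrete choice of triangle wave (my normalization, matching sin at 0, ±π/2, and ±π) keeps the approximation error modest across the whole period:

```python
import numpy as np

def tri(theta):
    # wrap to [-pi, pi), then two linear pieces per half period;
    # slope 2/pi makes the peaks +-1, like sin
    theta = np.mod(theta + np.pi, 2*np.pi) - np.pi
    return np.where(np.abs(theta) <= np.pi/2,
                    2*theta/np.pi,
                    np.sign(theta)*(2 - 2*np.abs(theta)/np.pi))

thetas = np.linspace(-np.pi, np.pi, 9)
print(np.max(np.abs(tri(thetas) - np.sin(thetas))))  # worst sampled error ~0.21
```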

Cvxpy curiously has support for Mixed Integer Programming.

Cbc is probably better than GLPK_MI. However, GLPK is way easier to get installed: just brew install glpk and pip install cvxopt.

Getting Cbc working was a bit of a journey. I had to actually BUILD CyLP (god forbid) and fix some problems.

Special Ordered Set constraints are useful for piecewise linear approximations. An SOS2 constraint takes a set of variables and makes it so that only two consecutive ones can be nonzero at a time. Solvers often have built-in support for them, which can be much faster than blasting them out as general constraints. I did it by adding a binary variable for every consecutive pair. These binary variables suppress the continuous ones, and setting the sum of the binary variables to 1 means only one pair can be nonzero.

def SOS2(n):
    z = cvx.Variable(n-1, boolean=True)
    x = cvx.Variable(n)
    constraints = []
    constraints += [0 <= x]
    constraints += [x[0] <= z[0]]
    constraints += [x[-1] <= z[-1]]
    constraints += [x[1:-1] <= z[:-1] + z[1:]]
    constraints += [cvx.sum(z) == 1]
    constraints += [cvx.sum(x) == 1]
    return x, z, constraints

One downside is that every evaluation of these nonlinear functions requires a new set of integer and binary variables, which is possibly quite expensive.

For some values of total time steps and step length, the solver can go off the rails and never return.

At the moment, the solve is not fast enough for real-time control with CBC (~2s). I wonder how much a warm start might help, or more fiddling with heuristics, or access to the built-in SOS2 constraints rather than hacking them in. Commercial solvers are also usually faster. Still, it is interesting.

import cvxpy as cvx
import numpy as np
import matplotlib.pyplot as plt

def SOS2(n):
    z = cvx.Variable(n-1, boolean=True)
    x = cvx.Variable(n)
    constraints = []
    constraints += [0 <= x]
    constraints += [x[0] <= z[0]]
    constraints += [x[-1] <= z[-1]]
    constraints += [x[1:-1] <= z[:-1] + z[1:]]
    constraints += [cvx.sum(z) == 1]
    constraints += [cvx.sum(x) == 1]
    return x, z, constraints

N = 20
thetas = cvx.Variable(N)
omegas = cvx.Variable(N)
torques = cvx.Variable(N)
tau = 0.3
c = [thetas[0] == +0.5, omegas[1] == 0, torques <= tau, torques >= -tau]
dt = 0.5

D = 5
thetasN = np.linspace(-np.pi,np.pi, D, endpoint=True)
zs = []
mods = []
xs = []
print(thetasN)
reward = 0
for t in range(N-1):
    c += [thetas[t+1] == thetas[t] + omegas[t]*dt]

    mod = cvx.Variable(1, integer=True)
    mods.append(mod)
    x, z, c1 = SOS2(D)
    c += c1
    xs.append(x)
    c += [thetas[t+1] == thetasN@x + 2*np.pi*mod]
    sin = np.sin(thetasN)@x
    reward += -np.cos(thetasN)@x
    c += [omegas[t+1] == omegas[t] - sin*dt + torques[t]*dt]

objective = cvx.Maximize(reward)
constraints = c

prob = cvx.Problem(objective, constraints)
res = prob.solve(solver=cvx.CBC, verbose=True)

print([mod.value for mod in mods ])
print([thetasN@x.value for x in xs ])
print([x.value for x in xs ])

plt.plot(thetas.value)
plt.plot(torques.value)
plt.show()

# Other things to do: BigM.
# Disjunctive constraints.
# Implication constraints.



Blue is angle, orange is the applied torque. The torque is running up against the limits I placed on it.