Networkx also has its own flow solvers, but cvxpy gives you some interesting flexibility, like making the problem mixed integer, adding quadratic terms, and other goodies. Plus it is very easy to get going, as you’ll see.
So here’s a basic example of putting these two together. Very straightforward and cool.
```python
# a reconstructed sketch: the random graph, source placement, and
# minimum-total-flow objective are placeholders, not the original code
import networkx as nx
import cvxpy as cvx
import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import lil_matrix

# graph is a networkx graph from somewhere
graph = nx.gnm_random_graph(8, 20, seed=0)
A = nx.incidence_matrix(graph, oriented=True)  # node-edge incidence matrix
n, m = A.shape

# split flow into positive and negative parts so we can talk about absolute value.
# Perhaps I should let cvxpy do it for me
fpos = cvx.Variable(m, nonneg=True)
fneg = cvx.Variable(m, nonneg=True)

# just some random source placement.
source = np.zeros(n)
source[0], source[-1] = 1.0, -1.0

# cvxpy needs sparse matrices wrapped.
sourcecvx = cvx.Constant(source)
Acvx = cvx.Constant(lil_matrix(A))

# flow conservation
constraints = [Acvx @ (fpos - fneg) == sourcecvx]
# can put other funky inequality constraints on things.
objective = cvx.Minimize(cvx.sum(fpos + fneg))
prob = cvx.Problem(objective, constraints)
prob.solve()
```
This solver is probably not useful for anything. For almost all purposes, let me point you to cvxpy.
If you want an open source solver, CBC/CLP, GLPK, and OSQP are good.
If you want proprietary, you can get a size-limited trial license to Gurobi for free.
Having said that, here we go.
The simplex method gets more press, and certainly has its advantages, but the interior point method makes much more sense to me. What follows is the basic implementation described in Stephen Boyd’s course and book http://web.stanford.edu/~boyd/cvxbook/
In the basic interior point method, you can achieve your inequality constraints by using a logarithmic potential to punish getting close to them: for constraints h_i(x) >= 0, add -gamma * sum_i log(h_i(x)) to the objective, where gamma is a parameter we’ll talk about in a bit. From my perspective, the logarithm is a somewhat arbitrary choice. I believe some properties of the logarithmic potential are necessary for some convergence guarantees.
The basic unconstrained newton step takes a locally quadratic approximation to the function you’re trying to optimize and finds the minimum of that. This basically comes down to taking a step that is the inverse hessian applied to the gradient, dx = -H^-1 grad f.
We can maintain a linear constraint Ax = b on the variable x during this newton step. Instead of setting the gradient to zero, we set it so that it is perpendicular to the constraint plane using the Lagrange multiplier procedure.

This is a block linear system

    [ H   A^T ] [ dx  ]   [ -grad f ]
    [ A    0  ] [ lam ] = [    0    ]
Despite the logarithm potential, there is no guarantee that the newton step would not take us outside the allowed region. This is why we need a line search on top of the newton step. We scale the newton step dx by a factor t, taking x + t*dx. Because the function we’re optimizing is convex and the region we’re in is convex, there is some step length in that newton direction that will work. So if we keep decreasing the overall step size, we’ll eventually find something acceptable.
As part of the interior point method, once the newton iteration has converged we decrease the parameter gamma applied to the logarithm potential. This allows the inequality constraints to be satisfied more and more tightly as gamma shrinks.
The standard form of an LP is

    minimize c^T x subject to Ax = b, x >= 0

This doesn’t feel like the form you’d want. One way you can construct this is by adding slack variables s >= 0 to turn inequalities Ax <= b into equalities Ax + s = b, and splitting regular variables into a positive and negative piece x = x+ - x-.
The interior point formulation of this is

    minimize c^T x - gamma * sum_i log(x_i) subject to Ax = b

The Hessian and gradient are quite simple here

    grad f = c - gamma / x (elementwise),    H = gamma * diag(1 / x^2)

The optimum conditions for this are

    c - gamma / x + A^T lam = 0,    Ax = b
Now in the above, I’m not sure I got all the signs right, but I did implement it in python. The result seems to be correct and does work. I haven’t tested extensively, YMMV. It’s a useful starting point.
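Here is roughly what such an implementation looks like in numpy. This is my sketch of the method described above, not the post’s exact code: the tiny test LP at the bottom is made up, and a strictly feasible starting point is assumed to be handed in rather than computed.

```python
import numpy as np

def barrier_f(x, c, gamma):
    # the barrier objective: c.x minus the log potential
    return c @ x - gamma * np.sum(np.log(x))

def newton_step(x, c, A, gamma):
    n, m = len(x), A.shape[0]
    g = c - gamma / x                        # gradient of the barrier objective
    H = np.diag(gamma / x ** 2)              # Hessian
    # the block KKT system that keeps A dx = 0
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
    return sol[:n]

def lp_interior_point(c, A, b, x0, gamma=1.0, tol=1e-9):
    assert np.allclose(A @ x0, b)            # x0 must be strictly feasible
    x = x0.astype(float)
    while gamma > tol:
        for _ in range(50):                  # newton iterations at fixed gamma
            dx = newton_step(x, c, A, gamma)
            t = 1.0                          # backtracking line search
            while (np.any(x + t * dx <= 0) or
                   barrier_f(x + t * dx, c, gamma) > barrier_f(x, c, gamma)):
                t *= 0.5
                if t < 1e-12:
                    break
            x = x + t * dx
            if np.linalg.norm(t * dx) < 1e-10:
                break
        gamma *= 0.5                         # tighten the barrier
    return x

# minimize x0 subject to x0 + x1 = 1, x >= 0  ->  x = (0, 1)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
x = lp_interior_point(np.array([1.0, 0.0]), A, b, np.array([0.5, 0.5]))
print(np.round(x, 4))
```

The outer loop halving gamma is the crude version of the barrier parameter schedule; Boyd’s book discusses smarter updates.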
I wanted to build this because I’ve been getting really into mixed integer programming and have been wondering how much getting deep into the guts of the solver might help. Given my domain knowledge of the problems at hand, I can probably supply quite good heuristics. In addition, I’ve been curious about a paper that points out an interesting, relatively unexploited territory: combining machine learning with mixed integer programming https://arxiv.org/pdf/1811.06128
For these purposes, I want a really simple optimization solver.
But this is silly. I should use CLP or OSQP as a black box if I really want to worry about the mixed integer aspect.
MIOSQP is interesting.
It is interesting how the different domains of discrete optimization and search seem to have relatively similar sets of methods. Maybe I’m crazy. Maybe at the loose level I’m gonna talk at, almost anything is like almost anything else.
Clause learning and Cutting plane addition feel rather similar.
Relaxation to LP and unit propagation are somewhat similar. Or is unit propagation like elimination?
Mixed integer programs build their own heuristics.
Fourier–Motzkin and resolution are similar methods. In Fourier–Motzkin, you eliminate variables in linear inequalities by using algebra to bring that variable by itself on one side of the inequality and then matching up every <= bound on that variable with every >= bound on it. There are packages that compute these things. See CDD or Polyhedra.jl.
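A toy sketch of the elimination step (my own illustration, nowhere near as careful as CDD):

```python
import itertools

def fourier_motzkin(ineqs, k):
    """Eliminate variable k from a system of inequalities a.x <= b.
    Each inequality is a pair (a, b) with a a tuple of coefficients."""
    lower, upper, rest = [], [], []
    for a, b in ineqs:
        (upper if a[k] > 0 else lower if a[k] < 0 else rest).append((a, b))
    out = list(rest)
    # every lower bound on x_k pairs with every upper bound on x_k
    for (al, bl), (au, bu) in itertools.product(lower, upper):
        sl, su = au[k], -al[k]          # positive scales that cancel x_k
        a = tuple(sl * p + su * q for p, q in zip(al, au))
        out.append((a, sl * bl + su * bu))
    return out

# x >= 0 and x + y <= 4: eliminating x leaves y <= 4
print(fourier_motzkin([((-1, 0), 0), ((1, 1), 4)], 0))
# → [((0, 1), 4)]
```

Note the pairwise matching is exactly why repeated elimination can blow up quadratically at each step.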
Resolution operates on Boolean formulas. You can eliminate a variable q from a CNF formula by taking all the clauses with negated instances of q and combining them with all the clauses with positive instances.
There is a fun idea of using Linear Programming to do dynamic programming I originally saw in the underactuated robotics textbook.
In my experience reinforcement learning is finicky and depressing. It usually doesn’t work and is very hard to troubleshoot. Do you just need to run it for 10 minutes? 10 years? Is there a bug? God knows. I end up wriggling hyperparameters and praying a lot.
One part of this is the relative finickiness of neural network optimization compared to the technology of convex optimization. Convex optimization solvers are quite reliable and fast.
There is a way of phrasing Q learning as a linear programming problem.
The linear programming approach relaxes the Bellman equations

    Q(s, a) = r(s, a) + gamma * max_a' Q(s', a')

into inequalities

    Q(s, a) >= r(s, a) + gamma * Q(s', a') for all a'

We can approach this forall in a couple ways, one of which is just sampling actions somehow. To make the constraint tight in places, you minimize a weighting of Q

    minimize sum over sampled (s, a) of w(s, a) * Q(s, a)

If Q is written as a linear combination of basis functions

    Q(s, a) = sum_i alpha_i * f_i(s, a)

then all of this put together is a linear program in the variables alpha_i.
For ease, I used cvxpy. I don’t even store my state action pairs, which is quite lazy of me. Even here, compiling the linear program via cvxpy is kind of slow. This preprocessing step takes longer than the actual solve does. You could avoid cvxpy and directly interface a linear programming solver, which would be much faster, if that is your thing.
The whole process is still model free. I didn’t plug in pendulum dynamics anywhere. I run openAI gym and use the resulting state-action-state tuples to add inequalities to my cvxpy model. I weight where I want the inequalities to be tightest by using the actual states experienced.
Unfortunately, it still took a couple hours of hyperparameter tuning and fiddling to get the thing to work. So not a grand success on that point.
I did a lot of guesswork about what seemed reasonable.
I parametrized the dependence of Q on a by a quadratic so that it is easy to maximize analytically. That is what the polyfit stuff is about. The maximum of a*x^2 + b*x + c is at x = -b / (2a). I really should be checking the sign of the a coefficient; I am just assuming it is negative (so the parabola opens downward and that critical point is a maximum). Naughty boy.
Chebyshev polynomials are probably good.
It seemed to help to use a slight perturbation of the actual action used on the right hand side of the Bellman inequality. My reasoning here is that the pendulum is actually a continuous system, so we should be using the differential Bellman equation really.
Should I allow for some kind of slack in the equations? Getting a bad reward or data point or one weird unusual state could ruin things for everyone. Inequalities are unforgiving.
Gamma seemed to matter a decent amount
The regularization of alpha seemed largely irrelevant.
Epsilon greediness seems to not matter much either.
Might be good to replace the sampling of a with a Sum of Squares condition over the variable a.
Should I damp the update in some way? Add a cost to changing alpha from its previous value. A kind of damped update / using a prior.
Least squares fitting is very commonly used and well loved. Sum of squares fitting can be solved using just linear algebra. One of the most convincing use cases to me of linear programming is doing sum of absolute value fits, minimizing sum_i |e_i|, and maximum deviation fits, minimizing max_i |e_i|. These two qualities of fit are basically just as tractable as least squares, which is pretty cool.
The trick to turning an absolute value into an LP is to look at the region above the graph of absolute value.
This region is defined by y >= x and y >= -x. So you introduce a new variable y. Then the LP minimizing y subject to those constraints will minimize the absolute value |x|. For a sum of absolute values, introduce a variable y_i for each absolute value you have. Then minimize sum_i y_i. If you want to do min max optimization, use the same y value for every absolute value function.
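Here is a sketch of a sum-of-absolute-values line fit phrased as an LP with scipy’s linprog; the data and the outlier are made up to show why you’d bother:

```python
import numpy as np
from scipy.optimize import linprog

# least-absolute-deviation line fit as an LP: minimize sum_i y_i
# subject to y_i >= r_i and y_i >= -r_i, where r_i = a*x_i + b - data_i
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
data = 2 * x + 1 + rng.normal(0, 0.05, size=50)
data[10] += 5                       # one wild outlier; L1 fits shrug these off

n = len(x)
# variables: [a, b, y_1..y_n]
c = np.concatenate([[0, 0], np.ones(n)])        # minimize sum of y_i
A = np.zeros((2 * n, n + 2)); b_ub = np.zeros(2 * n)
# a*x_i + b - y_i <= data_i   (y_i >= r_i)
A[:n, 0], A[:n, 1] = x, 1; A[:n, 2:] = -np.eye(n); b_ub[:n] = data
# -a*x_i - b - y_i <= -data_i (y_i >= -r_i)
A[n:, 0], A[n:, 1] = -x, -1; A[n:, 2:] = -np.eye(n); b_ub[n:] = -data

res = linprog(c, A_ub=A, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * n)
a_fit, b_fit = res.x[:2]
print(a_fit, b_fit)   # close to the true slope 2 and intercept 1
```

A least squares fit on the same data gets dragged toward the outlier; the L1 fit mostly ignores it.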
Let’s change topic a bit. Chebyshev polynomials are awesome. They are basically the polynomials you want to use in numerics.
Chebyshev polynomials are sines and cosines in disguise. They inherit tons of properties from them. One very important property is the equioscillation property. The Chebyshev polynomials are the polynomials that stay closest to zero while keeping the x^n coefficient nonzero (2^(n-1) by convention). They oscillate perfectly between -1 and 1 on the interval [-1, 1], just like a sort of stretched out sine. It turns out this equioscillation property defines the Chebyshev polynomials.
We can approximate the Chebyshev polynomials via sampling many points between [-1,1]. Then we do min of the max absolute error optimization using linear programming. What we get out does approximate the Chebyshev polynomials.
# try finding the 3 through 5 chebyshev polynomial
Red is the actual Chebyshev polynomials and green is the solved for polynomials. It does a decent job. More samples will do even better, and if we picked the Chebyshev points it would be perfect.
Can we do better? Yes we can. Let’s go on a little optimization journey.
Semidefinite programming allows you to optimize matrix variables with the constraint that they have all positive eigenvalues. In a way it lets you add an infinite number of linear constraints. Another way of stating the eigenvalue constraint is that

    q^T X q >= 0 for all vectors q

You could sample a finite number of random q vectors and take the conjunction of all these constraints. Once you had enough, this is probably a pretty good approximation of the semidefinite constraint. But semidefinite programming lets you have an infinite number of the constraints, in the sense that q^T X q >= 0 is referencing an infinite number of possible q, which is pretty remarkable.
Finite sampling of the qs has a similarity to the previously discussed sampling method for absolute value minimization.
Sum of Squares optimization allows you to pick optimal polynomials with the constraint that they can be written as a sum of squares polynomials. In this form, the polynomials are manifestly positive everywhere. Sum of Squares programming is a perspective to take on Semidefinite programming. They are equivalent in power. You solve SOS programs under the hood by transforming them into semidefinite ones.
You can write a polynomial as a vector of coefficients a, as in p(x) = sum_i a_i x^i.
Instead we represent the polynomial with a matrix Q, writing p(x) = v(x)^T Q v(x), where v(x) = (1, x, x^2, ...) is a vector of monomials.
If the matrix Q is positive semidefinite, then it can be factored (an eigendecomposition or Cholesky factorization will do), which puts p into the sum of squares form p(x) = sum_i (l_i^T v(x))^2.
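As a tiny concrete check of the Gram matrix idea (the example polynomial is my own choice):

```python
import numpy as np

# p(x) = 1 + 2x + x^2 written as v(x)^T Q v(x) with v(x) = [1, x]
Q = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # positive semidefinite Gram matrix

# the eigendecomposition gives the sum-of-squares form directly
w, U = np.linalg.eigh(Q)
x = 0.7                               # check at an arbitrary point
v = np.array([1.0, x])
sos = sum(wi * (ui @ v) ** 2 for wi, ui in zip(w, U.T))
print(np.isclose(sos, (1 + x) ** 2))  # True: p(x) = (x + 1)^2
```

Here the factorization recovers the single square (x + 1)^2; for generic SOS polynomials you get a sum of several squares, one per nonzero eigenvalue.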
In all honesty, this all sounds quite esoteric, and it kind of is. I struggle to find problems to solve with this stuff. But here we are! We’ve got one! We’re gonna find the Chebyshev polynomials exactly by translating the previous method to SOS.
The formulation is a direct transcription of the above tricks.
by which I mean t - p(x) is SOS and p(x) + t is SOS (the same trick as y >= x and y >= -x above, with t playing the role of y), while minimizing t.
There are a couple packages that already do SOS: some available for Python, SumofSquares.jl for Julia, and SOSTools for Matlab. YALMIP too, I think. Instead of using those packages, I want to roll my own, like a doofus.
Sympy already has very useful polynomial manipulation functionality. What we’re going to do is form up the appropriate expressions by collecting powers of x, and then turn them into cvxpy expressions term by term. The transcription from sympy to cvxpy isn’t so bad, especially with a couple helper functions.
One annoying extra thing we have to do is known as the S-procedure. We don’t care about regions outside of [-1, 1]. We can specify this with a polynomial inequality (1 - x)(1 + x) >= 0. If we multiply this polynomial by any manifestly positive polynomial (a SOS polynomial in particular will work), it will remain positive in the region we care about. We can then add this product into all of our SOS inequalities to make them easier to satisfy. This is very similar to a Lagrange multiplier procedure.
Now all of this seems reasonable. But it is not clear to me that we have the truly best polynomial in hand with this s-procedure business. But it seems to work out.
# build corresponding cvx variable for sympy variable
def cvxify(expr, cvxdict):  # replaces sympy variables with cvx variables
Ooooooh yeah. Those curves are so similar you can’t even see the difference. NICE. JUICY.
There are a couple interesting extensions to this. We could find global under- or over-approximating polynomials. This might be nice for a verified compression of a big polynomial into smaller, simpler polynomials, for example. We could also similarly form the pointwise best approximation of any arbitrary polynomial f(x) rather than the constant 0 like we did above (replace the constant 0 with f(x) in the above). Or perhaps we could use it to find a best polynomial fit for some differential equation according to a pointwise error.
I think we could also extend this method to minimizing the mean absolute value integral just like we did in the sampling case.
I came across an interesting thing: finding the minimizer of the Ising model is encodable as a mixed integer linear program (MILP).
The Ising model is a simple model of a magnet. A lattice of spins that can either be up or down. They want to align with an external magnetic field, but also with their neighbors (or anti align, depending on the sign of the interaction). At low temperatures they can spontaneously align into a permanent magnet. At high temperatures, they are randomized. It is a really great model that contains the essence of many physics topics.
Linear Programs minimize linear functions subject to linear equality and inequality constraints. It just so happens this is a very solvable problem (polynomial time).
MILPs also allow you to add the constraint that variables take on integer values. This takes you into NP territory. Through fiendish tricks, you can encode very difficult problems. MILP solvers use LP solvers as subroutines, giving them clues where to search, letting them stop early if the LP solver returns integer solutions, or bounding branches of the search tree.
How this all works is very interesting (and very, very roughly explained), but barely matters practically since other people have made fiendishly impressive implementations of this that I can’t compete with. So far as I can tell, Gurobi is one of the best available implementations (Hans Mittelman has some VERY useful benchmarks here http://plato.asu.edu/bench.html), and they have a gimped trial license available (2000 variable limit. Bummer.). Shout out to CLP and CBC, the Coin-Or Open Source versions of this that still work pretty damn well.
Interesting Connection: Quantum Annealing (like the D-Wave machine) is largely based around mapping discrete optimization problems to an Ising model. We are traveling that road in the opposite direction.
So how do we encode the Ising model?
Each spin is a binary variable s_i ∈ {0, 1}.
We also introduce a variable z_ij for every edge, which we will constrain to actually be the product of the spins, z_ij = s_i * s_j. This is the big trick.
We can compute the And/Multiplication (they coincide for 0/1 binary variables) of the spins using a couple linear constraints:

    z_ij <= s_i,    z_ij <= s_j,    z_ij >= s_i + s_j - 1

I think this does work for the 4 cases of the two spins.
The xor s_i + s_j - 2 * z_ij is usually what we care about for the Ising model; we want aligned vs unaligned spins to have different energy. It has value 0 if they are aligned and 1 if they are anti-aligned. This is a linear function of the spins and the And.
Then the standard Hamiltonian is

    H = J * sum over edges (s_i + s_j - 2 * z_ij) - B * sum_i s_i

Well, modulo some constant offset. You may prefer making spins ±1, but that leads to basically the same Hamiltonian.
The Gurobi python package actually lets us directly ask for AND constraints, which means I don’t actually have to code much of this.
We are allowed to use spatially varying external field B and coupling parameter J. The Hamiltonian is indeed linear in the variables as promised.
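Since not everyone has a Gurobi license, here is a sketch of the same encoding with the AND constraints written out explicitly, using scipy.optimize.milp; the 4-spin chain and the J, B values are made up for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

edges = [(0, 1), (1, 2), (2, 3)]     # a tiny 4-spin chain
n, m = 4, len(edges)
J, B = 1.0, 0.1                      # ferromagnetic coupling, weak field

# variables: s_0..s_3 (spins), then z_0..z_2 (one AND variable per edge)
# per-edge energy, up to a constant: J * (2*(s_i + s_j) - 4*z_ij)
c = np.zeros(n + m)
c[:n] -= B                           # field term: -B * sum_i s_i
for k, (i, j) in enumerate(edges):
    c[i] += 2 * J
    c[j] += 2 * J
    c[n + k] -= 4 * J

# AND constraints: z <= s_i, z <= s_j, z >= s_i + s_j - 1
rows, lb, ub = [], [], []
for k, (i, j) in enumerate(edges):
    for node in (i, j):
        row = np.zeros(n + m); row[n + k], row[node] = 1, -1
        rows.append(row); lb.append(-np.inf); ub.append(0)   # z - s <= 0
    row = np.zeros(n + m); row[n + k], row[i], row[j] = 1, -1, -1
    rows.append(row); lb.append(-1); ub.append(np.inf)       # z - s_i - s_j >= -1

res = milp(c, constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=np.ones(n + m), bounds=Bounds(0, 1))
print(np.round(res.x[:n]))  # all ones: spins align with the positive field
```

With Gurobi you would replace the three explicit rows per edge with a single `addGenConstrAnd` call and let it handle the encoding.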
After already figuring this out, I found this chapter where they basically do what I’ve done here (and more probably). There is nothing new under the sun. The spatially varying fields B and J are very natural in the field of spin glasses.
Here we’ve got the basic functionality. Getting 10,000 takes about a minute. This is somewhat discouraging when I can see that we haven’t even got to very interesting ones yet, just single spin and double spin excitations. But I’ve got some ideas on how to fix that. Next time baby-cakes.
(A hint: recursion with memoization leads to some brother of a cluster expansion.)
So pytorch does have some capability towards higher derivatives, with the caveat that you have to dot the gradients to turn them back into scalars before continuing. What this means is that you can sample a single application of the Hessian (the matrix of second derivatives) at a time.
One could sample out every column of the hessian for example. Performance-wise I don’t know how bad this might be.
For a banded hessian, which will occur in a trajectory optimization problem (the bandedness being a reflection of the finite difference scheme), you don’t need that many samples. This feels more palatable. You only need to sample the hessian roughly the bandwidth number of times, which may be quite small. Plus, then you can invert that banded hessian very quickly using special purpose banded matrix solvers, which are also quite fast. I’m hoping that once I plug this into the trajectory optimization, I can use a Newton method (or SQP?) which will perform better than straight gradient descent.
If you pulled just a single column using [1,0,0,0,0,0..] for example, that would be wasteful, since there are so many known zeros in the banded matrix. Instead something like [1,0,0,1,0,0,1,0,0..] will not have any zeros in the result. This gets us every 3rd row of the matrix. Then we can sample with shifted versions like [0,1,0,0,1,0,0,1,0,0..], until we have all the rows somewhere. Then there is some index shuffling to put the thing into a sane ordering, especially so that we can use https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solveh_banded.html which requires the banded matrix to be given in a particular form.
An alternative approach might be to use an FFT with some phase twiddling. Also, since the Hessian is symmetric, it feels like we ought to be able to use about half the samples, since half are redundant, but I haven’t figured out a clean way to do this yet. I think that perhaps sampling with random vectors and then solving for the coefficients would work, but again, how to organize the code for such a thing?
Here’s a snippet simulating extracting the band matrix from matrix products.
h = h + h.T  # symmetrize our matrix
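A runnable version of that simulation might look like the following; the sizes and the random tridiagonal test matrix are my own choices:

```python
import numpy as np

def extract_banded(matvec, n, bandwidth):
    """Recover a symmetric banded matrix using only matrix-vector
    products, via shifted comb probe vectors like [1,0,0,1,0,0,..]."""
    stride = 2 * bandwidth + 1       # spacing so band responses don't overlap
    recovered = np.zeros((n, n))
    for shift in range(stride):
        probe = np.zeros(n)
        probe[shift::stride] = 1.0
        response = matvec(probe)     # one Hessian-vector-product "sample"
        for col in range(shift, n, stride):
            # within the band, the response is exactly this column
            lo, hi = max(col - bandwidth, 0), min(col + bandwidth + 1, n)
            recovered[lo:hi, col] = response[lo:hi]
    return recovered

# build a random symmetric tridiagonal test matrix
n, bw = 9, 1
rng = np.random.default_rng(0)
h = rng.normal(size=(n, n))
h = h + h.T                                        # symmetrize our matrix
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= bw
h = h * mask                                       # keep only the band

print(np.allclose(extract_banded(lambda v: h @ v, n, bw), h))  # True
```

Note the number of matvec samples is 2*bandwidth + 1, independent of n, which is the whole payoff for trajectory-optimization-sized Hessians.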
and here is the full pytorch implementation including a linear banded solve.