## System Identification of a Pendulum with scikit-learn

System identification is the problem of determining the dynamics of a physical system from measurements. It is an old topic. For some interesting material on it, check out Steve Brunton’s video here https://www.youtube.com/watch?v=6F2YVsT9dOs

We’ve been building a Raspberry Pi controlled pendulum that can be driven from the internet, and the problem came up of getting a simulation to match the physical pendulum.

We weighed the pendulum and calculated the torque due to gravity $mg\frac{l}{2} \sin(\theta)$ (you can think of it as the full force of gravity acting on the lever arm of the center of the pole, $\frac{l}{2}\sin(\theta)$) and the moment of inertia of a rod about its end, $\frac{1}{3}m L^2$.

However, it is difficult to estimate the torque supplied by the motor; motors have surprisingly complicated behavior. It is also difficult to estimate damping or friction terms from first principles.

There are a couple of different experimental stratagems for a pendulum. One thing we tried was laying the pendulum on its side and setting the motor duty cycle to different values. You can fit a parabola to the resulting angle-versus-time curves and get an acceleration constant for each motor setting. Experimentally speaking, acceleration seemed roughly linear in motor PWM duty cycle.

Another stratagem is to take resonance curves for the pendulum: excite it with sinusoidal torques at a sweep of frequencies. From this curve you can recover a resonance frequency and damping coefficient.
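One way such a sweep could be reduced (with made-up numbers, not our pendulum's): the steady-state amplitude of a driven, damped oscillator peaks near the resonance frequency, so the peak location of the measured curve estimates $\omega_0$, and the peak height and width reflect the damping.

```python
# Sketch: reduce a resonance sweep to a resonance frequency estimate.
# w0, zeta, F are made-up "true" parameters used to synthesize the sweep.
import numpy as np

w0, zeta, F = 4.0, 0.05, 1.0
ws = np.linspace(0.1, 8.0, 2000)  # drive frequencies swept over
# steady-state amplitude of a driven damped oscillator
amps = F / np.sqrt((w0**2 - ws**2)**2 + (2 * zeta * w0 * ws)**2)

w_peak = ws[np.argmax(amps)]
print(w_peak)  # close to w0 (exactly w0*sqrt(1 - 2*zeta^2))
```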

These all make sense as kind of ersatz methods. We’re taking our intuitive understanding of the system and other results from simpler or related systems and combining them together.

An interesting alternative to the above is to drive the pendulum with a random torque and then fit a parameterized model of the equations of motion to the observed acceleration. The model should include at least the gravity term $\beta_1 \sin(\theta)$, a motor torque term $\beta_2 u$, and a damping term $\beta_3 \dot{\theta}$. A simple start is $\alpha = \beta_1 \sin(\theta) + \beta_2 u + \beta_3 \dot{\theta}$. This model is linear in the coefficients $\beta_i$ and can be solved by least squares.
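The idea is easy to sanity check on synthetic data before touching the hardware (the coefficients below are made up, not our pendulum's): simulate $\alpha = \beta_1 \sin(\theta) + \beta_2 u + \beta_3 \dot{\theta}$ plus noise, then recover the betas with plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([-15.0, 3.0, -0.1])  # hypothetical gravity, motor, damping coefficients

# random excitation, like the random torque walk described below
theta = rng.uniform(-np.pi, np.pi, 500)
thetadot = rng.normal(0, 2, 500)
u = rng.uniform(-2, 2, 500)

X = np.column_stack([np.sin(theta), u, thetadot])
alpha = X @ beta_true + rng.normal(0, 0.01, 500)  # "observed" angular acceleration

beta_fit, *_ = np.linalg.lstsq(X, alpha, rcond=None)
print(beta_fit)  # should be close to beta_true
```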

I’ve come to appreciate scikit-learn for fitting. It doesn’t have the hottest, most high-falutin’ fads, but it’s got a lot of good algorithms that just work, are damn easy to use, and make it easy to swap different possibilities in and out. Even though I know how to set up a least squares system manually or solve a LASSO problem via cvxpy, scikit-learn makes it really easy and clean. I’ve started reaching for it for fast attacks on fitting problems.

We mocked out our interface to behave similarly to an OpenAI gym interface. Because of this, the observations already include the cosine and sine of the angle, which might be of interest, and the angular velocity that would be used for a simple damping term $\beta \dot{\theta}$.



```python
import gym
import time
import numpy as np

env = gym.make('pendulum-v0')
observation = env.reset()

action = 0
dt = 0.05
obs = []
rews = []
actions = []
for i in range(1000):
    # A random walk for actions.
    # We need the actions to change slowly enough to see trends,
    # but fast enough to see interesting behavior.
    # Tune this by hand.
    action += np.random.randn() * dt
    action = max(min(action, 2), -2)
    observation, reward, done, info = env.step([action])
    obs.append(observation)
    actions.append(action)
    rews.append(reward)
    time.sleep(0.05)

obs = np.array(obs)  # obs includes cos(theta), sin(theta), thetadot. A good start.
actions = np.array(actions)  # the pwm value used

# Data to predict alpha from. Each row is a data point from one time step.
X = np.hstack((obs[:-1, :], actions[:-1].reshape(-1, 1)))

alphas = (obs[1:, 2] - obs[:-1, 2]) / dt  # angular acceleration by finite difference

# Feel free to swap in LASSO or other regressors.
from sklearn.linear_model import LinearRegression

# Fit the observed angular acceleration as a function of X.
reg = LinearRegression().fit(X, alphas)
print(f"intercept : {reg.intercept_},  coeffs : {reg.coef_} ")
```

The number that came out for the gravity term matched the number calculated from first principles to within 10%. Not bad!

A nice thing about this approach is that you can add terms to the dynamics for which we don’t have good intuitive models, like your good ole Physics I Coulomb friction term $F \propto -\operatorname{sign}(v)$, or other nonlinearities.
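For instance, a Coulomb term can be added as one more feature column (a sketch; `make_features` is a hypothetical helper, and `obs`/`actions` are the arrays collected above). The $-\operatorname{sign}(\dot{\theta})$ term is not differentiable, but as a feature column in a linear fit that doesn't matter at all.

```python
import numpy as np

def make_features(obs, actions):
    # obs rows are [cos(theta), sin(theta), thetadot]; actions are pwm values
    thetadot = obs[:-1, 2]
    return np.hstack((obs[:-1, :],
                      actions[:-1].reshape(-1, 1),
                      -np.sign(thetadot).reshape(-1, 1)))  # Coulomb friction column
```

The fitted coefficient on the new column then estimates the Coulomb friction magnitude directly.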

## Flappy Bird as a Mixed Integer Program

Mixed integer programming is a methodology for specifying convex (usually linear) optimization problems that also include integer/boolean variables.

Flappy Bird is a game about a bird avoiding pipes.

We can use mixed integer programming to make a controller for Flappy Bird. Feel free to put this as a real-world application in your grant proposals, people.

While thinking about writing a MIP for controlling a lunar lander game, I realized how amenable to mixed integer modeling flappy bird is. Ben and I put together the demo on Saturday. You can find his sister blog post here.

The bird is mostly in free fall, following parabolic trajectories. This is linear dynamics, so it can be expressed directly as linear constraints. The bird can discretely flap to give itself an upward impulse; this is a boolean force variable at every time step. Avoiding the ground and sky is a simple linear constraint. The bird has no control over its x motion, so the x positions can be rolled out as concrete values. Because of this, we can check which pipes are relevant at future time points, and keeping the bird in the gap is also a simple linear constraint.

There are several different objectives one might want to consider and weight. Perhaps you want to save the poor bird's energy and minimize the sum of all flaps, cvx.sum(flap). Or perhaps you want to be really sure it doesn’t hit any pipes by maximizing the minimum distance from any pipe. Or perhaps minimize the absolute value of the y velocity, which is a reasonable heuristic for staying in control. All are expressible with linear objectives and constraints (absolute values and min/max reduce to linear form via standard epigraph tricks). Perhaps you might want a weighted combo of these. All things to fiddle with.

There is a pygame flappy bird clone which made this all the more slick. It is well written and easy to understand and modify. Actually figuring out the appropriate bounding boxes for pipe avoidance was finicky. Figuring out the right combo of bird size and pipe size is hard, combined with computer graphics and their goddamn upside down coordinate system.

We run our solver in a model predictive control configuration. Model predictive control is where you roll out a trajectory as an optimization problem and re-solve it at every action step. This turns an open-loop trajectory solve into closed-loop control, at the expense of needing to solve a perhaps very complicated problem in real time. This is not always feasible.

My favorite MIP modeling tool is cvxpy. It gives you vectorized constraints and slicing, which I love. More tools should aspire to achieve numpy-like interfaces. I’ve got lots of other blog posts using this package, which you can find in my big post list in the side-bar 👀.

The github repo for the entire code is here: https://github.com/philzook58/FlapPyBird-MPC

And here’s the guts of the controller:

```python
import cvxpy as cvx
import numpy as np

N = 20  # time steps to look ahead
path = cvx.Variable((N, 2))  # y pos and vel
flap = cvx.Variable(N - 1, boolean=True)  # whether or not the bird should flap in each step
last_solution = [False, False, False]
last_path = [(0, 0), (0, 0)]

PIPEGAPSIZE = 100
PIPEWIDTH = 52
BIRDWIDTH = 34
BIRDHEIGHT = 24
BIRDDIAMETER = np.sqrt(BIRDHEIGHT**2 + BIRDWIDTH**2)
SKY = 0
GROUND = (512 * 0.79) - 1
PLAYERX = 57

def getPipeConstraints(x, y, lowerPipes):
    constraints = []
    for pipe in lowerPipes:
        dist_from_front = pipe['x'] - x - BIRDDIAMETER
        dist_from_back = pipe['x'] - x + PIPEWIDTH
        if (dist_from_front < 0) and (dist_from_back > 0):
            constraints += [y <= (pipe['y'] - BIRDDIAMETER)]  # y above lower pipe
            constraints += [y >= (pipe['y'] - PIPEGAPSIZE)]   # y below upper pipe
    return constraints

def solve(playery, playerVelY, lowerPipes):
    global last_path, last_solution

    pipeVelX = -4
    playerAccY = 1       # player's downward acceleration
    playerFlapAcc = -14  # player's speed on flapping

    # unpack variables
    y = path[:, 0]
    vy = path[:, 1]

    c = []  # constraints
    c += [y <= GROUND, y >= SKY]
    c += [y[0] == playery, vy[0] == playerVelY]

    x = PLAYERX
    xs = [x]
    for t in range(N - 1):
        dt = t // 10 + 1  # coarsen the time step further into the future
        x -= dt * pipeVelX
        xs += [x]
        c += [vy[t + 1] == vy[t] + playerAccY * dt + playerFlapAcc * flap[t]]
        c += [y[t + 1] == y[t] + vy[t + 1] * dt]
        c += getPipeConstraints(x, y[t + 1], lowerPipes)

    # minimize total fuel use plus a penalty on y velocity
    objective = cvx.Minimize(cvx.sum(flap) + 10 * cvx.sum(cvx.abs(vy)))

    prob = cvx.Problem(objective, c)
    try:
        prob.solve(verbose=False, solver="GUROBI")
        last_path = list(zip(xs, y.value))
        last_solution = np.round(flap.value).astype(bool)
        return last_solution[0], last_path
    except Exception:
        # solver failed: follow the tail of the last plan open loop
        last_solution = last_solution[1:]
        last_path = [((x - 4), y) for (x, y) in last_path[1:]]
        return last_solution[0], last_path
```



I think it is largely self-explanatory, but here are some notes. The `dt = t//10 + 1` business decreases the time resolution the further out we look from the current time step. This increases the time horizon without extra computation cost; intuitively, modeling accuracy further out in time should matter less. The `last_solution` stuff is there in case the optimization solver fails for whatever reason, in which case the bird follows the last trajectory it got, open loop.
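Concretely, with N = 20 the stretched step covers a noticeably longer horizon for the same number of decision variables:

```python
# Effective look-ahead (in game frames) of the stretched vs. uniform step,
# for the N = 20 horizon used in the controller above.
N = 20
stretched = sum(t // 10 + 1 for t in range(N - 1))  # 10 steps at dt=1, 9 at dt=2
uniform = sum(1 for t in range(N - 1))
print(stretched, uniform)  # 28 frames of look-ahead vs 19, same solve cost
```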

### Bits and Bobbles

• We changed the dynamics slightly from the python original to make it easier to model. We found this did not change the feel of the game. The old dynamics were piecewise affine though, so are also modelable using mixed integer programming. http://groups.csail.mit.edu/robotics-center/public_papers/Marcucci18.pdf . Generally check out the papers coming out of the Tedrake group. They are sweet. Total fanboy over here.
• The controller as is is not perfect. It fails eventually, and it probably shouldn’t. A bug? Numerical problems? Bad modeling of the pipe collision? A run tends to get through about a hundred pipes before something gets goofy.
• Since we had access to the source code, we could mimic the dynamics very well. How robust is flappy bird to noise and bad modeling? We could add wind, or inaccurate pipe data.
• Unions of Convex Regions. Giving the flappy bird some x position control would change the nature of the problem. We could easily cut the allowable region for the bird into rectangles and represent the total space as a union of convex regions, which is also MIP representable.
• Verification – The finite difference scheme used is crude. It is conceivable for the bird to clip a pipe between time steps. Since we basically know the closed form of the trajectories, we could verify that the parabolas do not intersect the pipe regions. For funzies, maybe use sum of squares optimization?
• Realtime MIP. Our solver isn’t quite realtime. Maybe half real time. One might pursue methods to make the mixed integer program faster. This might involve custom branching heuristics, or early stopping. If one can get the solver fast enough, you might run the solver in parallel and only query a new path plan every so often.
• 3d flappy bird? Let the bird rotate? What about a platformer (Mario) or lunar lander? All are pretty interesting piecewise affine systems.
• Is this the best way to do this? Yes and no. Other ways to do this might involve doing some machine learning, or hardcoding a controller that monitors the pipe locations and has some simple feedback. You can find some among the forks of FlapPyBird. I have no doubt that you could write these quickly, fiddle with them and get them to work better and faster than this MIP controller. However, for me there is a difference between could work and should work. You can come up with a thousand bizarre schemes that could work. RL algorithms fall in this camp. But I have every reason to believe the MIP controller should work, which makes it easier to troubleshoot.

## Casadi – Pretty Damn Slick

Casadi is something I’ve been aware of but not really explored much. It is a C++ / Python / MATLAB library for modeling optimization problems for optimal control, with bindings to Ipopt and other solvers. It can generate C code and has automatic differentiation built in. See below for some examples after I ramble.

I’ve enjoyed cvxpy, but cvxpy is designed specifically for convex problems, which many control problems are not.

Casadi gives you a nonlinear modeling language and easy access to Ipopt, an interior point solver that works pretty well (along with some other solvers, many of which are proprietary).

While the documentation looks very slick visually, I actually found its contents rather confusing at first. I’m not sure why. Something is off.

It also has a bunch of helper classes for DAE building and other things. They honestly really put me off. The documentation is confusing enough that I am not convinced they give you much.

The integrator classes give you access to external smart ODE solvers from the Sundials suite. They give you good methods for difficult ODEs and DAEs (differential algebraic equations, which are ODEs with algebraic constraints like $x^2 + y^2 = 1$). It's not clear to me whether you can plug those into an optimization, other than by a shooting method.

Casadi can also output C which is pretty cool.

I kind of wondered about Casadi vs. Sympy. Sympy has lots of general-purpose symbolic abilities: symbolic solving, polynomial smarts, even some differential equation understanding. There might be big dividends to using it, but it is a little harder to get going. I feel like there is an empty space for a mathematical modeling language that uses Sympy as its underlying representation. I guess monkey patching Sympy expressions into Casadi expressions might not be so hard. Sympy can also output fast C code. Sympy doesn’t really have any support for sparseness that I know of.

As a side note, it can be useful to pull these libraries' objects into numpy arrays if you need extended reshaping abilities. These languages often stop at matrices, which is odd to me.

Hmm. Casadi actually does have access to mixed integer programming via Bonmin (and commercial solvers). That’s interesting. Check out the Lotka-Volterra MINLP example.

The Opti interface makes some of this look better, with opti.minimize and opti.subject_to. Yeah, this is more similar to the interfaces I’m used to. It avoids the manual unpacking of the solution and the funky feel of making everything into implicit == 0 expressions.

Here is a simple harmonic oscillator example using the more raw Casadi interface. x is position, v is velocity, u is a control force. I’m using a very basic leapfrog integration. You tend to have to stack everything into a single vector with vertcat when building the final problem.

```python
from casadi import *
import matplotlib.pyplot as plt

N = 100

x = SX.sym('x', N)
v = SX.sym('v', N)
u = SX.sym('u', N - 1)

dt = 0.1
constraints = [x[0] - 1, v[0]]  # expressions that must be zero
for i in range(N - 1):
    constraints += [x[i + 1] - (x[i] + dt * v[i])]
    constraints += [v[i + 1] - (v[i] - dt * x[i + 1] + u[i] * dt)]

cost = sum([x[i] * x[i] for i in range(N)]) + sum([u[i] * u[i] for i in range(N - 1)])

nlp = {'x': vertcat(x, v, u), 'f': cost, 'g': vertcat(*constraints)}
S = nlpsol('S', 'ipopt', nlp)
r = S(lbg=0, ubg=0)  # can also give an initial solution hint, among other things
x_opt = r['x']
x = x_opt[:N]
v = x_opt[N:2 * N]
u = x_opt[2 * N:]
print('x_opt: ', x_opt)
plt.plot(x)
plt.plot(u)
plt.plot(v)
plt.show()
```

Let’s use the Opti interface, which is pretty slick. Here is a basic cartpole, following https://web.casadi.org/blog/opti/

```python
from casadi import *
import matplotlib.pyplot as plt

N = 100
opti = Opti()  # this line was missing above; everything hangs off the Opti object

x = opti.variable(N)
v = opti.variable(N)
theta = opti.variable(N)
dtheta = opti.variable(N)
u = opti.variable(N - 1)

opti.subject_to(u <= 1)
opti.subject_to(-1 <= u)
opti.subject_to(x <= 2)
opti.subject_to(-2 <= x)
opti.subject_to(x[0] == 0)
opti.subject_to(v[0] == 0)
opti.subject_to(theta[0] == 0)
opti.subject_to(dtheta[0] == 0)

dt = 0.05
for i in range(N - 1):
    opti.subject_to(x[i + 1] == x[i] + dt * v[i])
    opti.subject_to(v[i + 1] == v[i] + dt * (x[i + 1] + u[i]))
    opti.subject_to(theta[i + 1] == theta[i] + dt * dtheta[i])
    opti.subject_to(dtheta[i + 1] == dtheta[i] + dt * (u[i] * cos(theta[i + 1]) - sin(theta[i + 1])))

opti.minimize(sum1(sin(theta)))

opti.solver("ipopt")  # can also pass p_opts, s_opts dicts here
sol = opti.solve()
print(sol.value(x))
plt.plot(sol.value(x), label="x")
plt.plot(sol.value(u), label="u")
plt.plot(sol.value(theta), label="theta")
plt.legend()
plt.show()
# Parameters can be added like:
# p = opti.parameter()
# opti.set_value(p, 3)
```

Very fast. Very impressive. Relatively readable code. I busted this out in like 15 minutes. Ipopt solves the thing in the blink of an eye (about 0.05 s self-reported). It might be even faster if I warm start it with a good solution, as I would in online control (which may be feasible at this speed). The initial condition can be added as a parameter to the problem.

I should try this on an openai gym.

## Model Predictive Control of CartPole in OpenAI Gym using OSQP

A continuation of this post http://www.philipzucker.com/osqp-sparsegrad-fast-model-predictive-control-python-inverted-pendulum/

I was having difficulty getting the model predictive control from a previous post working on an actual cartpole. There are a lot more unknown variables in that case, and other issues (the thing has a tendency to destroy itself). I was kind of hoping it would just work. So I realized that I should get it working in simulation first.

I did not copy the simulation code of the openai cartpole https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py, which lends some small amount of credence to the idea that the MPC might generalize to a real system.

For the sake of honesty, I’m still at the point where my controller freaks out about 1/3 of the time. I do not really understand why.

Looks damn good here though huh.

A problem I had for a while was that the length of my pole was off by a factor of 2. It still kind of worked, especially when nearly balanced (although with a lot of oscillation, which in hindsight was a clue that something wasn’t tuned right).

There are some useful techniques for manipulating gym. You can set some parameters from the outside, like starting positions and thresholds. You can also mangle your way into continuous force control rather than just left/right commands (wouldn’t it be cool to use integer programming for that? Silly, but cool).

There is still a bunch of trash in here from me playing around with parameters. I like to keep it real (and lazy).

```python
import gym
from mpc import MPC
import numpy as np

env = gym.make('CartPole-v0')
env.env.x_threshold = 5

start_theta = 0  # e.g. -np.pi + 0.4 for a swing-up test

mpc = MPC(0.5, 0, start_theta, 0)
action = 0
for i_episode in range(1):
    observation = env.reset()
    env.env.state[2] = start_theta - np.pi
    for t in range(500):
        env.render()
        observation, reward, done, info = env.step(action)
        # shift coordinates so the MPC's conventions line up with gym's
        a = mpc.update(observation[0] + 0.5, observation[1], observation[2] + np.pi, observation[3])
        env.env.force_mag = abs(a)  # min(100, abs(a))
        action = 0 if a < 0 else 1
        if done:
            pass  # keep going; we set our own thresholds
print(mpc.calcQ().reshape((5, 50)))
print(observation)
```

One problem was that originally I had the pole just want to go to pi. But if it swings the other direction, or swings around many times, that is bad and it will freak out. So I changed it to target the current nearest multiple of pi, which helps.
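The wrap trick used in calcQ above can be sketched on its own: target the odd multiple of π in the pole's current 2π window, which is never more than π away from the current angle.

```python
import numpy as np

def upright_target(theta):
    # floor theta to the nearest lower multiple of 2*pi, then add pi:
    # the "upright" angle in the same winding as theta
    return theta - theta % (2 * np.pi) + np.pi

print(upright_target(0.1))            # pi
print(upright_target(2 * np.pi + 3))  # 3*pi
```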

Fiddling with the size of the regularization has a significant effect, as does the relative size of the regularization for x, v, f, and omega. I am doing a lot of that search dumbly. I should probably do some kind of automatic search.

Loosening the constraints on v and x seems to help stability.

I think weighting the angle at the end of the episode slightly more helps. That’s why I used linspace for the weight on the angle.

I’ve had a lot of problems with the answer coming back as infeasible from OSQP. I feel like it probably shouldn’t be, and that it’s the solver’s problem?

Two things help. First, sometimes the cart does go out of the allowable range: the optimization will try to go all the way to the boundaries since that is useful, and since there is some mismatch between the actual dynamics and my model, it will overshoot them. So I heavily loosen the constraints for the first couple of time steps. It takes a couple; 4 seems to work OK. The controller should want to apply forces during those four steps to get back in range anyhow.

Even then it still goes infeasible sometimes and I don’t know why. In that case, I reduce the required accuracy, to hopefully at least get something that makes sense. That is what the eps_rel stuff is about. Maybe it helps; not super clear. I could try increasing the eps more gradually?

```python
import sparsegrad.forward as forward
import numpy as np
import osqp
import scipy.sparse as sparse

class MPC():
    def __init__(self, x0, v0, theta0, thetadot0):
        self.N = 50
        self.NVars = 5
        T = 8.0
        self.dt = T / self.N
        dt = self.dt
        self.dtinv = 1. / dt
        # The different weighting matrices for x, v, theta, thetadot, and force
        reg = sparse.eye(self.N) * 0.05
        # weight the angle more toward the end of the episode
        pp = sparse.diags(np.linspace(1, 7, self.N))
        P = sparse.block_diag([reg, 10 * reg, pp, reg, 10 * reg])
        self.P = P
        THETA = 2
        q = np.zeros((self.NVars, self.N))
        q[THETA, :] = np.pi
        q[0, :] = 0.5
        q = q.flatten()
        q = -P @ q

        self.x = np.random.randn(self.N, self.NVars).flatten()

        A, l, u = self.getAlu(self.x, x0, v0, theta0, thetadot0)
        self.m = osqp.OSQP()
        self.m.setup(P=P, q=q, A=A, l=l, u=u, time_limit=0.1, verbose=False)
        self.results = self.m.solve()

    def calcQ(self):
        THETA = 2
        q = np.zeros((self.NVars, self.N))
        thetas = np.reshape(self.x, (self.NVars, self.N))[THETA, :]
        # target the odd multiple of pi nearest the current angle
        thetas = thetas - thetas % (2 * np.pi) + np.pi
        q[THETA, :] = thetas[0]
        q[0, :] = 0.5
        q = q.flatten()
        q = -self.P @ q
        return q

    def update(self, x0, v0, theta0, thetadot0):
        A, l, u = self.getAlu(self.x, x0, v0, theta0, thetadot0)
        q = self.calcQ()
        self.m.update(q=q, Ax=A.data, l=l, u=u)
        self.results = self.m.solve()
        if self.results.x[0] is not None:
            self.x = np.copy(self.results.x)
            self.m.update_settings(eps_rel=1e-3)
        else:
            # infeasible: loosen the accuracy and hope for the best
            self.m.update_settings(eps_rel=1.1)
            print("failed")
            return 0
        return self.x[4 * self.N + 1]

    def constraint(self, var, x0, v0, th0, thd0):
        g = 9.8
        L = 1.0
        gL = g * L
        m = 1.0  # doesn't matter
        I = L**2 / 3
        Iinv = 1.0 / I
        dtinv = self.dtinv
        N = self.N

        x = var[:N]
        v = var[N:2 * N]
        theta = var[2 * N:3 * N]
        thetadot = var[3 * N:4 * N]
        a = var[4 * N:5 * N]
        dynvars = (x, v, theta, thetadot)
        xavg, vavg, thetavg, thdotavg = map(lambda z: (z[0:-1] + z[1:]) / 2, dynvars)
        dx, dv, dthet, dthdot = map(lambda z: (z[1:] - z[0:-1]) * dtinv, dynvars)
        vres = dv - a[1:]
        xres = dx - vavg
        torque = (-gL * np.sin(thetavg) + a[1:] * L * np.cos(thetavg)) / 2
        thetdotres = dthdot - torque * Iinv
        thetres = dthet - thdotavg

        return x[0:1] - x0, v[0:1] - v0, theta[0:1] - th0, thetadot[0:1] - thd0, xres, vres, thetdotres, thetres

    def getAlu(self, x, x0, v0, th0, thd0):
        N = self.N
        gt = np.zeros((2, N))
        gt[0, :] = -0.1  # lower bound on x
        gt[1, :] = -3    # lower bound on v
        control_n = max(3, int(0.1 / self.dt))  # I dunno. 4 seems to help
        # loosen the box constraints for the first few steps; see notes above
        gt[:, :control_n] = -100
        gt = gt.flatten()
        lt = np.zeros((2, N))
        lt[0, :] = 1  # upper bound on x
        lt[1, :] = 3  # upper bound on v
        lt[:, :control_n] = 100
        lt = lt.flatten()

        z = sparse.bsr_matrix((N, N))
        ineqA = sparse.bmat([[sparse.eye(N), z, z, z, z], [z, sparse.eye(N), z, z, z]])
        cons = self.constraint(forward.seed_sparse_gradient(x), x0, v0, th0, thd0)
        A = sparse.vstack(map(lambda z: z.dvalue, cons))  # sparse Jacobian
        totval = np.concatenate(tuple(map(lambda z: z.value, cons)))
        temp = A @ x - totval  # l = u = A x0 - phi(x0) for the equality rows

        A = sparse.vstack((A, ineqA)).tocsc()

        u = np.concatenate((temp, lt))
        l = np.concatenate((temp, gt))
        return A, l, u
```

(The original listing had lost the `def update` line and the `thetadot`/`dynvars` definitions; they are reconstructed here from how the class is called above, so treat them as an inference.)



## OSQP and Sparsegrad: Fast Model Predictive Control in Python for an inverted pendulum

OSQP is a quadratic programming solver that seems to be designed for control problems. It has the ability to warm start, which should make it faster.

I heard about it in this Julia talk

They have some really cool technology over there in Julia land.

The problem is set up as a sequential quadratic program. I have a quadratic cost that tries to bring the pendulum to the upright position.

The equations of motion are roughly

$\ddot{\theta}I=-mgL\sin(\theta)+mfL\cos(\theta)$

$\ddot{x}=f$

$I=\frac{1}{3}mL^2$

We don’t include the back reaction of the pole on the cart. It is basically irrelevant for our systems and overly complicated for no reason. The interesting bit is the nonlinear dynamics and influence of the cart acceleration.

I write down obeying the equations of motion as a linear constraint between timesteps. I use sparsegrad to get a sparse Jacobian of the equations of motion. The upper and lower (l and u) bounds are set equal to make an equality constraint.

Our setup has a finite track about a meter long. Our motors basically are velocity control and they can go about +-1m/s. Both of these can be encapsulated as linear constraints on the position and velocity variables. $l \le Ax \le u$

$\phi(x)=0$

$\phi(x) \approx \phi(x_0) + \partial \phi(x_0) (x - x_0)$

$A= \partial \phi(x_0)$

$l=u=\partial \phi(x_0)\, x_0 - \phi(x_0)$
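A quick numeric check of that linearization, using a made-up scalar constraint $\phi(x) = x^2 - 1$: the row $Ax = l$ with $l = u = \partial\phi(x_0)\,x_0 - \phi(x_0)$ reproduces $\phi(x_0) + \partial\phi(x_0)(x - x_0) = 0$, i.e. one Newton step toward the root.

```python
def phi(x):
    return x**2 - 1

def dphi(x):
    return 2 * x

x0 = 3.0
A = dphi(x0)
l = A * x0 - phi(x0)  # l = u = dphi(x0) x0 - phi(x0)

# Solving A x = l gives the Newton step x0 - phi(x0)/dphi(x0)
x1 = l / A
print(x1)  # 3 - 8/6 = 1.666...
```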

Similarly for finding the quadratic cost function in terms of the setpoint $x_s$: $\frac{1}{2}x^T P x + q^Tx = \frac{1}{2}(x-x_s)^T P(x-x_s)$ up to a constant. Expanding this out we get

$q = - Px_s$
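The $q = -P x_s$ identity is easy to check numerically: the unconstrained minimizer of $\frac{1}{2}x^T P x + q^T x$ solves $Px = -q$, which lands exactly on the setpoint (weights below are made up).

```python
import numpy as np

P = np.diag([1.0, 2.0, 5.0])        # any positive definite weight matrix
x_s = np.array([0.5, -1.0, np.pi])  # setpoint
q = -P @ x_s

x_opt = np.linalg.solve(P, -q)  # minimizer of (1/2) x^T P x + q^T x
print(x_opt)
```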

I run for multiple iterations to hopefully converge to a solution (it does). The initial linearization might be quite poor, but it gets better.

The OSQP program seems to be able to run in under 1ms. Nice! Initial tests of our system seem to be running at about 100Hz.

Could probably find a way to make those A matrices faster than constructing them entirely anew every time. We’ll see if we need that.

I very inelegantly smash all the variables into x. OSQP and scipy sparse don’t support multi-index objects well, as best I can figure: vectors should be one-dimensional and matrices two-dimensional.
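The flat-vector layout can be made explicit with a small unpacking helper (a sketch; the names and slices match the constraint function in the code below):

```python
import numpy as np

N = 1000  # time steps, as in the code below

def unpack(var):
    # slots, in order: position, velocity, angle, angular velocity, force
    return {"x": var[:N], "v": var[N:2 * N],
            "theta": var[2 * N:3 * N], "thetadot": var[3 * N:4 * N],
            "a": var[4 * N:5 * N]}

parts = unpack(np.arange(5 * N))
```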

Still to be seen if it works on hardware. I was already having infeasibility problems. Adjusting the regularization on P seemed to help.

```python
import sparsegrad.forward as forward
import numpy as np
import osqp
import scipy.sparse as sparse
import matplotlib.pyplot as plt
import time

N = 1000
NVars = 5
T = 5.0
dt = T / N
dtinv = 1. / dt
# The different weighting matrices for x, v, theta, thetadot, and force
reg = sparse.eye(N) * 0.01
P = sparse.block_diag([reg, reg, sparse.eye(N), 1 * reg, 1 * reg])
THETA = 2
q = np.zeros((NVars, N))
q[THETA, :] = np.pi  # setpoint: pole upright
q = q.flatten()
q = -P @ q

x = np.random.randn(N, NVars).flatten()

g = 9.8
L = 0.5
gL = g * L
m = 1.0  # doesn't matter
I = L**2 / 3
Iinv = 1.0 / I

def constraint(var, x0, v0, th0, thd0):
    x = var[:N]
    v = var[N:2 * N]
    theta = var[2 * N:3 * N]
    thetadot = var[3 * N:4 * N]
    a = var[4 * N:5 * N]
    dynvars = (x, v, theta, thetadot)
    xavg, vavg, thetavg, thdotavg = map(lambda z: (z[0:-1] + z[1:]) / 2, dynvars)
    dx, dv, dthet, dthdot = map(lambda z: (z[1:] - z[0:-1]) * dtinv, dynvars)
    vres = dv - a[1:]
    xres = dx - vavg
    torque = -gL * np.sin(thetavg) + a[1:] * L * np.cos(thetavg)
    thetdotres = dthdot - torque * Iinv
    thetres = dthet - thdotavg

    return x[0:1] - x0, v[0:1] - v0, theta[0:1] - th0, thetadot[0:1] - thd0, xres, vres, thetdotres, thetres

def getAlu(x, x0, v0, th0, thd0):
    gt = np.zeros((2, N))
    gt[0, :] = 0.1  # x is greater than 0.1
    gt[1, :] = -1   # velocity is greater than -1 m/s
    gt = gt.flatten()
    lt = np.zeros((2, N))
    lt[0, :] = 0.8  # x is less than 0.8 (roughly our meter of track)
    lt[1, :] = 1    # velocity is less than 1 m/s
    lt = lt.flatten()

    z = sparse.bsr_matrix((N, N))
    ineqA = sparse.bmat([[sparse.eye(N), z, z, z, z], [z, sparse.eye(N), z, z, z]])
    cons = constraint(forward.seed_sparse_gradient(x), x0, v0, th0, thd0)
    A = sparse.vstack(map(lambda z: z.dvalue, cons))  # sparse Jacobian
    totval = np.concatenate(tuple(map(lambda z: z.value, cons)))
    temp = A @ x - totval  # l = u = A x0 - phi(x0) for the equality rows

    A = sparse.vstack((A, ineqA)).tocsc()

    u = np.concatenate((temp, lt))
    l = np.concatenate((temp, gt))
    return A, l, u

A, l, u = getAlu(x, 0.5, 0, 0, 0)
m = osqp.OSQP()
m.setup(P=P, q=q, A=A, l=l, u=u)
results = m.solve()
print(results.x)

start = time.time()
iters = 100
for i in range(iters):
    # relinearize around the last solution and re-solve
    A, l, u = getAlu(results.x, 0.5, 0, 0, 0)
    m.update(Ax=A.data, l=l, u=u)
    results = m.solve()
end = time.time()
print("Solve in :", iters / (end - start), "Hz")

plt.plot(results.x[:N], label="x")
plt.plot(results.x[N:2 * N], label="v")
plt.plot(results.x[2 * N:3 * N], label="theta")
plt.plot(results.x[4 * N:5 * N], label="force")
plt.legend()
plt.show()
```


## PyTorch Trajectory Optimization Part 2: Work in Progress

I have actually started plotting the trajectories, which is something it's insane I wasn't already doing in part 1. There was clearly funky behavior.

Alternating the Lagrange multiplier steps and the state variable steps seems to have helped with convergence. Adding a cost to the dynamical residuals seems to have helped clean them up also.

I should attempt some kind of analysis rather than just making shit up. Assuming quadratic costs (and linear dynamics), the problem is tractable; the training procedure is basically a dynamical system.

I changed the code a bit to use more variables, and am actually trying the cartpole problem now. The results seem plausible: a noisy but balanced dynamical residual around zero, and the force appears to flip its direction as the pole crosses the horizontal.

Polyak’s step length

The idea is that if you know the optimal value you’re trying to achieve, that gives you a scale of gradient to work with. Not as good as a Hessian maybe, but it’s somethin’. If you use a gradient step of $x - (f-f^*)\frac{\nabla f}{\|\nabla f\|^2}$, the step at least has the same units as $x$ and not $f/x$. In some simple models of $f$, this might be exactly the step size you’d need. If you know you’re far away from optimal, you should be taking some big step sizes.

The Polyak step length has not been useful so far. Interesting idea though.
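On a toy quadratic where $f^* = 0$ is known, the Polyak step $x \leftarrow x - (f - f^*)\frac{\nabla f}{\|\nabla f\|^2}$ looks like this (a minimal sketch, not the trajectory optimizer below):

```python
import numpy as np

def f(x):
    return 0.5 * np.dot(x, x)

def grad(x):
    return x

x = np.array([3.0, -4.0])
f_star = 0.0  # known optimal value
for _ in range(50):
    g = grad(x)
    # on this quadratic the Polyak step is exactly x/2, so it converges geometrically
    x = x - (f(x) - f_star) * g / np.dot(g, g)
print(f(x))  # ~0
```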

import torch
import matplotlib.pyplot as plt
import numpy as np
batch = 1
N = 50
T = 7.0
dt = T/N
NVars = 4

# state, force, and multiplier variables
# (this initialization is reconstructed -- the original listing omitted it;
#  shapes are inferred from how x, f, l are used below)
x = torch.zeros(batch, N, NVars, requires_grad=True)
f = torch.zeros(batch, N-1, requires_grad=True)
l = torch.zeros(batch, N-1, NVars, requires_grad=True)

#print(x)

#	x[0,0,2] = np.pi

'''
class Vars():
    def __init__(self, N=10):
        self.data = torch.zeros(batch, N, 2)
        self.data1 = torch.zeros(batch, N-1, 3)
        self.lx = self.data1[:,:,0]
        self.lv = self.data1[:,:,1]
        self.f = self.data1[:,:,2]
        self.x = self.data[:,:,0]
        self.v = self.data[:,:,1]
'''
def calc_loss(x, f, l):
    delx = (x[:,1:,:] - x[:,:-1,:]) / dt

    xbar = (x[:,1:,:] + x[:,:-1,:]) / 2  # midpoint values
    dxdt = torch.zeros(batch, N-1, NVars)
    THETA = 2
    X = 0
    V = 1
    dxdt[:,:,X] = xbar[:,:,V]
    dxdt[:,:,V] = f

    xres = delx - dxdt  # residual of the discretized equations of motion

    lagrange_mult = torch.sum(l * xres, dim=1).sum(1)

    cost = 0.1*torch.sum((x[:,:,X]-1)**2, dim=1) + .1*torch.sum(f**2, dim=1) + torch.sum(torch.abs(xres), dim=1).sum(1)*dt + torch.sum((x[:,:,THETA]-np.pi)**2, dim=1)*0.01

    total_cost = cost + lagrange_mult
    return total_cost, lagrange_mult, cost, xres

import torch.optim as optim
'''
optimizer = optim.SGD(net.parameters(), lr=0.01)

output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
'''

#Could interleave an ODE solve step - stinks that I have to write dyanmics twice
#Or interleave a sepearate dynamic solving
# Or could use an adaptive line search. Backtracking
# Goal is to get dyn_err quite small

'''
learning_rate = 0.001
for i in range(40):
    total_cost = calc_loss(x, v, f)
    total_cost.backward()
    while dyn_loss > 0.01:
        dyn_loss.backward()
        x[:,1:] -= learning_rate * x.grad[:,1:] # Do not change starting conditions
    reward.backward()
'''
learning_rate = 0.005
costs = []
for i in range(4000):
    total_cost, lagrange, cost, xres = calc_loss(x, f, l)
    costs.append(total_cost[0])
    print(total_cost)

    total_cost.backward()

    with torch.no_grad():
        x[:,1:,:] -= learning_rate * x.grad[:,1:,:]  # Do not change starting conditions
        x.grad.zero_()

# one more evaluation to get the final residuals for plotting
total_cost, lagrange, cost, xres = calc_loss(x, f, l)
costs.append(total_cost[0])
print(total_cost)

total_cost.backward()

print(x)
print(f)

plt.plot(xres[0,:,0].detach().numpy(), label='Xres')
plt.plot(xres[0,:,1].detach().numpy(), label='Vres')
plt.plot(xres[0,:,2].detach().numpy(), label='THeres')
plt.plot(x[0,:,0].detach().numpy(), label='X')
plt.plot(x[0,:,1].detach().numpy(), label='V')
plt.plot(x[0,:,2].detach().numpy(), label='Theta')
plt.plot(f[0,:].detach().numpy(), label='F')

#plt.plot(costs)
#plt.plot(l[0,:,0].detach().numpy(), label='Lx')

plt.legend(loc='upper left')
plt.show()

Problems:

1. The step size is ad hoc.
2. The Lagrange multiplier technique does not seem to work.
3. It takes a lot of iterations.
4. It diverges.
5. It does not seem to be reaching an actual solution.

On the table: Better integration scheme. Hermite collocation?

Be more careful with scaling, what are the units?

Multiplier smoothing. Temporal derivative of Lagrange multiplier in cost?

alternate more complete solving steps

Huber loss on theta position cost. Square too harsh? Punishes swing away too much?

more bullshit optimization strats as alternatives to grad descent

weight sooner more than later. Care more about earlier times since want to do model predictive control

Just solve eq of motion don’t worry about control as simpler problem

Pole up balancing

logarithm squeezing method – nope

The lambda * x model of Lagrange multiplier. Leads to oscillation

Damping term?

This learning rate is more like a discretization time step than a decay parameter. Well the product of both actually.

Heat equation model. Kind of relaxing everything into place

______________________________

Switched to using PyTorch optimizers. Adam seems to work the best, maybe 5x as fast convergence as my handwritten gradient descent. Adagrad and Adadelta aren't quite as good. Should still try momentum. I have to reset the initial conditions after every iteration. Is there a better way? Maybe pass x0 in to calc_loss separately?
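One possible answer to that question: keep the initial state out of the parameter tensor entirely and concatenate it back on when building the trajectory, so the optimizer can never move it. This is a sketch, not what the post's code does; shapes follow the post, `full_state` is a made-up helper name.

```python
import torch

batch, N, NVars = 1, 50, 4
x0 = torch.zeros(batch, 1, NVars)                  # fixed initial conditions
xfree = torch.zeros(batch, N - 1, NVars, requires_grad=True)

def full_state():
    # the optimizer only ever sees xfree, so x0 can never drift
    return torch.cat([x0, xfree], dim=1)

x = full_state()  # shape (batch, N, NVars), row 0 pinned at x0
```

An optimizer would then be built over `[xfree, f]` and the reset lines in the loop could be deleted.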

Switched over to using the method of multipliers http://www.cs.cmu.edu/~pradeepr/convexopt/Lecture_Slides/Augmented-lagrangian.pdf

The idea is to increase the quadratic constraint cost slowly over time, while also adjusting a Lagrange multiplier term to compensate. Seems to be working better. The scheduling of the increase is still fairly ad hoc.
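The update rule is easiest to see on a toy problem where the inner minimization can be done in closed form. This is just an illustration of the method of multipliers with the same `l += 2*rho*xres` / growing-`rho` schedule as the code below; the problem itself is made up.

```python
# Minimize x^2 subject to x = 1 via the augmented Lagrangian
#   x^2 + lam*(x - 1) + rho*(x - 1)^2
lam, rho = 0.0, 0.1
for _ in range(30):
    # inner step: exact argmin of the augmented Lagrangian in x
    x = (2 * rho - lam) / (2 + 2 * rho)
    lam += 2 * rho * (x - 1)   # multiplier update (gradient of rho*(x-1)^2)
    rho += 0.5                 # slowly increase the penalty weight
```

Each outer iteration shrinks the multiplier error by a factor of 1/(1+rho), so even a modest rho schedule drives x to the constrained optimum x = 1 with lam = -2.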

import torch
import matplotlib.pyplot as plt
import numpy as np
import torch.optim
batch = 1
N = 50
T = 10.0
dt = T/N
NVars = 4

def getNewState():
    # (body reconstructed -- the original listing omitted it;
    #  shapes are inferred from how x, f, l are used below)
    x = torch.zeros(batch, N, NVars, requires_grad=True)
    f = torch.zeros(batch, N-1, requires_grad=True)
    l = torch.zeros(batch, N-1, NVars, requires_grad=True)
    return x, f, l

def calc_loss(x, f, l, rho):
    delx = (x[:,1:,:] - x[:,:-1,:]) / dt

    xbar = (x[:,1:,:] + x[:,:-1,:]) / 2  # midpoint values
    dxdt = torch.zeros(batch, N-1, NVars)
    THETA = 2
    X = 0
    V = 1
    dxdt[:,:,X] = xbar[:,:,V]
    dxdt[:,:,V] = f

    xres = delx - dxdt  # residual of the discretized equations of motion

    lagrange_mult = torch.sum(l * xres, dim=1).sum(1)

    cost = 1.0*torch.sum(torch.abs(x[:,:,THETA]-np.pi), dim=1)
    penalty = rho*torch.sum(xres**2, dim=1).sum(1)  # quadratic constraint cost
    cost += 0.01*torch.sum(f**2, dim=1)
    # log barriers keeping position and velocity in (-1, 1)
    cost += 0.1*torch.sum(-torch.log(xbar[:,:,X] + 1) - torch.log(1 - xbar[:,:,X]), dim=1)
    cost += 0.1*torch.sum(-torch.log(xbar[:,:,V] + 1) - torch.log(1 - xbar[:,:,V]), dim=1)

    total_cost = cost + lagrange_mult + penalty
    return total_cost, lagrange_mult, cost, xres

import torch.optim as optim

learning_rate = 0.001

x, f, l = getNewState()
# (second list entry reconstructed -- the original listing was truncated;
#  the text above says Adam converged best, hence optimizers[1])
optimizers = [torch.optim.SGD([x, f], lr=learning_rate),
              torch.optim.Adam([x, f])]
optimizer = optimizers[1]
#optimizer = torch.optim.SGD([x,f], lr= learning_rate)
costs= []
path = []
mults = []
rho = 0.1
prev_cost = 0
for j in range(15):
    prev_cost = None
    for i in range(1, 10000):
        total_cost, lagrange, cost, xres = calc_loss(x, f, l, rho)
        costs.append(total_cost[0])
        if i % 5 == 0:
            print(total_cost)

        optimizer.zero_grad()
        total_cost.backward()
        optimizer.step()

        with torch.no_grad():
            x[0,0,2] = 0  #np.pi+0.3 # Initial Conditions
            x[0,0,0] = 0
            x[0,0,1] = 0
            x[0,0,3] = 0
        #if (x.grad.norm()).detach().numpy()/N < 0.1: #Put Convergence condition here

        if i > 2000:
            break
        if prev_cost:
            if ((total_cost - prev_cost).abs() / total_cost).detach().numpy() < 0.000001:
                pass  #break
        prev_cost = total_cost

    total_cost, lagrange, cost, xres = calc_loss(x, f, l, rho)
    costs.append(total_cost[0])

    # method of multipliers updates
    with torch.no_grad():
        l += 2 * rho * xres
    rho = rho + 0.5
    print(rho)

plt.subplot(131)
plt.plot(xres[0,:,0].detach().numpy(), label='Xres')
plt.plot(xres[0,:,1].detach().numpy(), label='Vres')
plt.plot(xres[0,:,2].detach().numpy(), label='THeres')

plt.legend(loc='upper right')
#plt.figure()
plt.subplot(132)
plt.plot(x[0,:,0].detach().numpy(), label='X')
plt.plot(x[0,:,1].detach().numpy(), label='V')
plt.plot(x[0,:,2].detach().numpy(), label='Theta')
plt.plot(f[0,:].detach().numpy(), label='F')
#plt.plot(cost[0,:].detach().numpy(), label='F')
plt.legend(loc='upper right')
#plt.figure()
plt.subplot(133)
plt.plot(costs)

plt.show()

The left is the residuals of obeying the equations of motion, the middle is the force and the trajectories themselves, and the right is cost vs. iteration. It is not entirely clear that a residual of 0.04 is sufficient; integrated over time this could be an error of 0.2-ish I'd guess, and that's probably optimistic. That is on the edge of making me uncomfortable. Increase rho more? Also that force schedule seems funky and overly complex. Still, an improvement from before. Feels like we're cookin' with gas.

## CartPole WORKIN’ BOYEEE

We have been fighting a problem for weeks: the serial port was just not reliable; it had sporadic failures. The problem ended up being a surprising thing. We were using threading to receive the messages and to check for limit switches. It is not entirely clear why, but this was totally screwing up the serial port updates in an unpredictable manner. Yikes. What a disaster.

After that though smoooooooth sailing.

With a slight adaptation of the previous OpenAI gym LQR cartpole code and a little fiddling with parameters, we have a VERY stable balancer. We removed the back reaction of the pole dynamics on the cart itself for simplicity. This should be accurate when the cart vastly outweighs the pole.

We did find that the motor acts as exact velocity control in steady state, with a linear response. There is a zero-point offset (you need to ask for 100 out of 2046 before you get any movement at all).
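That observed behavior (linear steady-state response plus a dead band) inverts to a simple command map. This is a hypothetical sketch, not code from the repo: `GAIN` is a made-up constant, and `speed_to_command` is an invented helper name; only the 100-of-2046 dead band comes from the measurement above.

```python
DEADBAND = 100    # commands below this (out of 2046) produce no motion
MAX_CMD = 2046
GAIN = 0.5        # mm/s per command count -- assumed, not measured

def speed_to_command(speed_mm_s):
    """Invert the linear-with-deadband motor response."""
    if speed_mm_s == 0:
        return 0
    cmd = abs(speed_mm_s) / GAIN + DEADBAND  # offset past the dead band
    cmd = min(cmd, MAX_CMD)                  # saturate at full scale
    return int(cmd) if speed_mm_s > 0 else -int(cmd)
```

For example, with these assumed constants a request of 100 mm/s maps to a command of 300, and anything beyond the motor's range saturates at 2046.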

We’ll see where we can get with the Lyapunov control next time.

https://github.com/philzook58/cart_pole/blob/master/lqr_controller.py

from sabretooth_command import CartCommand
from cart_controller import CartController
from encoder_analyzer import EncoderAnalyzer
import time
import numpy as np
import serial.tools.list_ports
import scipy.linalg as linalg
lqr = linalg.solve_continuous_are

ports = list(serial.tools.list_ports.comports())
for p in ports:
    print(p.device)
    if "Sabertooth" in p.description:
        sabreport = p.device
    else:
        ardPort = p.device

print("Initializing Commander")
comm = CartCommand(port= sabreport) #"/dev/ttyACM1")
print("Initializing Analyzer")
analyzer = EncoderAnalyzer(port=ardPort) #"/dev/ttyACM0")
print("Initializing Controller.")
cart = CartController(comm, analyzer)
print("Starting Zero Routine")
cart.zeroAnalyzer()

gravity = 9.8
mass_pole = 0.1
length = 0.5

moment_of_inertia = (1./3.) * mass_pole * length**2
print(moment_of_inertia)

A = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,1],
[0,0,length * mass_pole * gravity / (2 * moment_of_inertia) ,0]
])
B = np.array([0,1,0,length * mass_pole / (2 * moment_of_inertia)]).reshape((4,1))
Q = np.diag([1.0, 1.0, 1.0, 0.01])
R = np.array([[0.001]])

P = lqr(A,B,Q,R)
Rinv = np.linalg.inv(R)
K = np.dot(Rinv,np.dot(B.T, P))
print(K)
def ulqr(x):
    x1 = np.copy(x)
    x1[2] = np.sin(x1[2] + np.pi)
    return -np.dot(K, x1)

cart.goTo(500)
command_speed = 0
last_time = time.time()
while True:
    observation = cart.analyzer.getState()
    x, x_dot, theta, theta_dot = observation
    a = ulqr(np.array([(x-500)/1000, x_dot/1000, theta-0.01, theta_dot]))
    t = time.time()
    dt = t - last_time
    last_time = t
    command_speed += 1 * a[0] * dt
    #command_speed -= (x - 500) * dt * 0.001 * 0.1
    #command_speed -= x_dot * dt * 0.001 * 0.5
    cart.setSpeedMmPerS(1000 * command_speed)
    print("theta {}\ttheta_dot {}\taction {}\tspeed {}".format(theta, theta_dot, a, command_speed))