A Smattering of Physics in Sympy

Sympy is fun. I’ve been enjoying trying out some simple physics problems and seeing what kind of fun angles sympy brings to the table. It does pretty good on concrete problems, not so good at abstract derivations.

Kinematics

Ah such fond memories! In high school, I was taught by Ric Thompson “the big four”.

x_f = x_i + v_i t + \frac{1}{2} a t^2

v_f = v_i + a t

v_f^2 = v_i^2 + 2 a d

d = \frac{v_f + v_i}{2} t

The equations are of course overcomplete. They are all implied by \frac{d^2}{dt^2}x = a, but even with only algebra the second two are derivable from the first two.

Of course a natural way of deriving the equations is to solve one equation for a variable and substitute it into the other equation. sympy makes this pretty dang easy.

from sympy import *
init_printing()
t,a,d,vf,vi = symbols("t a d vf vi")
e1 = Eq(d , vi * t + 1/2 * a * t ** 2)
tsub = solve(Eq(vf , vi + a * t),t)[0]
print(tsub) # This is assuming a is nonzero though.
expand(simplify(e1.subs(t,tsub)))
(vf - vi)/a
Eq(d, 0.5*vf**2/a - 0.5*vi**2/a)

However, there is a more automated approach.

It turns out that a decent chunk of physics equations are or can be well approximated by a system of polynomial equations. There are systematic methods that are guaranteed to solve the problem (albeit maybe not in the lifetime of the universe).

A Gröbner basis is an equivalent set of polynomial equations that has useful properties. For some simple purposes, all you need to know is that if you give the variables you want to eliminate first, the Gröbner basis will contain equations without those variables. Here we specify t as one to eliminate, so we get an equation without t in it.


G = groebner(  [vi * t + 1/2 * a * t ** 2 - d,  
                vi + a * t - vf] , 
                 t,vf,d,a,vi  )
for e in G:
    print(e)
-2.0*d + 1.0*t*vf + 1.0*t*vi
1.0*a*t - 1.0*vf + 1.0*vi
-2.0*a*d + 1.0*vf**2 - 1.0*vi**2

I’ve actually been pleasantly surprised at how many physics problems reduce ultimately to systems of polynomial constraints. Energy and momentum conservation are polynomial constraints (classical Feynman diagrams, kind of). Special relativity questions can be reduced to polynomial constraints using the proper time.

#elephant problem
# elephants give birth at 21 months. On a rocket at velocity v
# how long T until you see it give birth? 
tau , t1, t2, x1, v, c, T = symbols("tau t1 t2 x1 v c T")

eqs = [
    tau**2 - (t1**2 - x1**2 / c**2), # proper time
    x1 - v * t1, # distance away
    c * t2 - x1, # time for light to travel back
    T - t1 - t2, # total time
    tau - 21 # proper time is 21 months
    
]

groebner(eqs, tau , t1, t2, x1, v, T)

Lagrangian Mechanics

The Structure and Interpretation of Classical Mechanics is an interesting book.

It points out that the notation we use is extremely imprecise and implicit. This is a source of great confusion.

A great benefit of programming up such examples is that it makes explicit (sometimes painfully so) steps that were implicit before.

In the Euler-Lagrange equation, one first partially differentiates considering q and \dot{q} to be independent parameters. Then a substitution is made for a function x(t) and then we proceed with a differentiation with respect to time.

# simple harmonic oscillator lagrangian style
m, k = symbols("m k", real = True, positive=True)
v, q = symbols("v q")
K = Rational(1,2) * m * v ** 2 #kinetic energy
V = Rational(1,2) * k * q ** 2 # potential energy
L =  K - V  #Lagrangian
F = diff(L,q) # force
p = diff(L,v) # momentum

x_ = Function("x")
t = symbols("t")

x = x_(t)

subst = { v : diff(x,t),
         q : x} # replacement to turn q into a function x

# euler-lagrange equations of motion
eq = F.subs( subst ) - diff( p.subs(subst)  , t )
dsolve(eq) # general solution cosine and sines

Here’s an analogous thing for a pendulum

# pendulum lagrangian style
m, g, l = symbols("m g l", real = True, positive=True) # l is the pendulum length
theta, w = symbols("theta omega")
K = Rational(1,2) * m * (w * l) ** 2 #kinetic energy
V = - m * g * l * cos(theta) # potential energy. angle is defined as 0 = hanging down
L =  K - V  #Lagrangian
F = diff(L,theta) # generalized force
p = diff(L,w) # momentum
F
p

theta_ = Function("theta")
t = symbols("t")

theta_t = theta_(t)

subst = { w : diff(theta_t,t),
         theta : theta_t} # replacement to turn theta into a function of time

# euler-lagrange equations of motion
eq = F.subs( subst ) - diff( p.subs(subst)  , t )
eq
#dsolve(eq) 

Another place where an implicitly stated substitution is absolutely vital is in the Legendre transform going from the Lagrangian to the Hamiltonian.

# legendre transformation to hamiltonian
# (L and v here are from the harmonic oscillator example above)
p = symbols( "p" )
H_ = p * v - L # hamiltonian but we haven't solved out v yet
v_of_pq = solve(diff(H_, v), v)[0] # set derivative to 0 to solve for v.
H = simplify(H_.subs(v, v_of_pq )) # substitute back in. Here is the actual hamiltonian
H

Statistical Mechanics

Sympy can do Gaussian integrals! How convenient. It can also do power series expansions. And differentiate. So it takes the drudgery out of some simple calculations.

# ideal gas partition function
beta, m, V, N, kb, T  = symbols("beta m V N k_b T", real=True, positive=True)
p = symbols("p", real=True)
Z = integrate( exp( - beta * Rational(1,2) * p ** 2 / m ), (p,-oo,oo))**(3*N) * V**N #partition function
def avg_energy(Z):
    return - diff(ln(Z), beta).subs(beta, 1/ kb / T)
print(avg_energy(Z)) #
F = (-ln(Z) / beta).subs(beta, 1 / kb / T) #helmholtz free energy
S = -diff(F , T) # entropy is minus the derivative of helmholtz wrt T
S # the functional dependence on T and V I think is correct
P = -diff(F , V) # pressure is minus the derivative of F wrt V
P
# Neato
# harmonic oscillator partition function
beta, m, k = symbols("beta m k ", real=True, positive=True)
p, x = symbols("p x", real=True)
E = Rational(1,2) * p ** 2 / m  + Rational(1,2) * k * x ** 2
Z = integrate( integrate( exp( - beta * E ), (p,-oo,oo)) , (x,-oo, oo))**N 
diff(-ln(Z),beta)

Perturbation theory of the partition function of an anharmonic oscillator. Pretty easy. It is interesting to note that this is the very simplest schematic of how perturbation theory can be approached for quantum field theory.

# perturbation theory of anharmonic oscillator
beta, m, k, g = symbols("beta m k g ", real=True, positive=True)
p, x = symbols("p x", real=True)
E = Rational(1,2) * ( p ** 2 / m  +  k * x ** 2) + g * x ** 4
series(exp( - beta * E ), g).removeO()
Z = integrate( integrate( series(exp( - beta * E ), g, n=2).removeO(), (p,-oo,oo)) , (x,-oo, oo))
simplify(diff(-ln(Z),beta)) #E
simplify(diff(-ln(Z),k)/beta) # <x**2>/2

Other things that might be interesting: 2 oscillators, a chain of oscillators, the virial expansion.

Thermo and Legendre Transformations

Thermodynamics is a poorly communicated topic. Which variables remain in expressions and what things are held constant when differentiating are crucial and yet poorly communicated and the notation is trash. Sympy helps make some things explicit. It’s fun.

u,s,t,p,v,n,r = symbols("u s t p v n r")

du,ds,dt,dp,dv = symbols("du ds dt dp dv")
# taylor series in stuff?

e1 = p * v - n * r * t
e2 = u - Rational(3 , 2) * n * r * t

state = [  (u,du), (s,ds), (t,dt) , (p,dp) , (v,dv) ]

def differential(e):
    return sum( [ diff(e,x) * dx  for x,dx in state]   )


de1 = differential(e1 )
de2 = differential(e2 )

e3 = du - (t * ds - p * dv)

eqs = [e1,e2,de1,de2,e3]
print(eqs)
G = groebner( eqs, u , du,  t, dt, p, dp, v, dv,  ds )
for e in G:
    print(e)
R = Rational
U,S,T,P,V,N, k = symbols("U S T P V N k")

cv = R(3,2) * N * k
e1 = U - cv * T
e2 = P * V - N * k * T
e3 = S - cv * ln(T) - N * k * ln(V) # ideal gas entropy (up to a constant)

elim = [P,T]
Ps = solve([e1,e2,e3], P)
Ps
es = [ e.subs(Ps) for e in [e1,e2,e3] ]
Ts = solve(e3, T)[0]
es = [  e.subs(T,Ts) for e in es ]
Usv = solve(es[0],U)[0]
psv = -diff(Usv,V) # pressure is -dU/dV at constant S
tsv = diff( Usv , S ) # temperature is dU/dS at constant V

#solve(es[0], V)

Hsv = Usv + P * V  # enthalpy and legendre transformation
Vps = solve(diff(Hsv, V) , V) 
H =  Hsv.subs(V, Vps[0]) 
simplify(H)

There are so many other things!

What about a Van der Waals equation? Optics (geometrical and wave, paraxial ~ Schrödinger, Fourier optics), GR (exterior derivatives), Quantum (wave matching problems. What can we do about hydrogen? WKB, QHE), Rutherford scattering, Weiss mean field, canonical transformations, rotations, Clebsch-Gordan coefficients.

Computational Category Theory in Python III: Monoids, Groups, and Preorders

Parts 1 and 2 are found here and here

From one perspective, categories are just another algebraic structure, like groups, monoids and rings. They are these abstract things that have some abstract equational axioms and operations. They are the next stop on our magnificent category journey.

A monoid is a thing that has an associative operation with a unit. Addition and 0 make numbers a monoid. Multiplication and 1 are a separate monoid for numbers. Concatenation and empty lists make lists a monoid. Union and empty set make sets a monoid. We can encode this in python like so:
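
Here is a minimal sketch of such an encoding: each monoid is a python class holding a unit and an associative binary operation.

class MonoidAdd:
    ''' numbers under addition '''
    @staticmethod
    def unit():
        return 0
    @staticmethod
    def op(a, b):
        return a + b

class MonoidList:
    ''' lists under concatenation '''
    @staticmethod
    def unit():
        return []
    @staticmethod
    def op(a, b):
        return a + b

assert MonoidAdd.op(3, MonoidAdd.unit()) == 3  # unit law
assert MonoidList.op([1], MonoidList.op([2], [3])) == MonoidList.op(MonoidList.op([1], [2]), [3])  # associativity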

What does this have to do with categories? Well, if some thing is a category, it obeys the axioms that define what it means to be a category. It has morphisms and objects. The morphisms compose if head meets tail on an object. There are always identity morphisms.

The morphisms in a category with 1 object automatically obey the monoid axioms. In this case, the category axioms imply the monoid axioms. Everything composes because there is only one object. It’s a kind of degenerate case where we are not using the partiality of the composition operator. There is automatically a unit for composition because the identity morphism is a unit. Composition is already required to be associative. Boom. The thing is a monoid.

Continuing with our representation from previous posts, we make a python class for each category. An instance of this class is a morphism in this category. If you ask for the domain or codomain of any morphism, you always get back () because it is a single object category. Compare these classes with the above classes.
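
For example, a sketch of the addition monoid again, now packaged as a category whose only object is ():

class PlusIntCat:
    ''' the monoid (integers, +, 0) viewed as a category with a single object () '''
    def __init__(self, n):
        self.n = n
    def compose(f, g):
        return PlusIntCat(f.n + g.n)  # composition is addition
    @staticmethod
    def idd():
        return PlusIntCat(0)  # the identity morphism is the unit 0
    def dom(self):
        return ()  # there is only one object
    def cod(self):
        return ()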

Some monoids are also groups if there is a natural inverse operation. The integers are a group under addition where the negative gives you the inverse. Some aren’t though. The natural numbers (0,1,2…) aren’t a group under addition.

Similarly groups can be thought of as a category with one object, with the additional requirement that every morphism is invertible, that there is always a f^{-1} such that f \circ f^{-1} = id.

Sympy has groups in it. We can make a wrapper of that functionality that looks like a categorical interface. To match our pattern of using python classes to represent categories, it is convenient to do the slightly uncommon thing of making a class definition generator function fp_group_cat. Every time you call this function, it makes a different class and a different category. I have only here wrapped the finitely presented group functionality, but there are also free groups, permutation groups, and named groups available in sympy.
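
Here is a sketch of what fp_group_cat might look like (treat the details, like the identity attribute on FpGroup, as my guesses):

from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

def fp_group_cat(G):
    ''' generate a fresh single-object category class from a sympy
        finitely presented group G '''
    class GroupCat:
        def __init__(self, elem):
            self.elem = elem
        def compose(f, g):
            return GroupCat(f.elem * g.elem)
        def inv(self):
            return GroupCat(self.elem ** -1)  # every morphism is invertible
        @staticmethod
        def idd():
            return GroupCat(G.identity)
        def dom(self):
            return ()
        def cod(self):
            return ()
    return GroupCat

F, a, b = free_group("a, b")
S3 = fp_group_cat(FpGroup(F, [a**2, b**3, (a*b)**2]))  # a presentation of S3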

Many objects, at most one arrow per pair: Preorders

We can simplify the power of a category in a different direction. Instead of having only 1 object, we’ll have few arrows.

A category with many objects but at most a single morphism between a pair of them obeys the axioms of a preorder. In categorical terminology this is sometimes called a thin category. Any actual order like \le on numbers is also a preorder, but preorders have slightly weaker requirements. Here is a categorical representation of the ordering on integers (although really the same implementation will work for any python type that implements <= and ==).
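
A sketch of the idea: a morphism from a to b exists exactly when a <= b, enforced by an assert in the constructor.

class OrderCat:
    ''' the preorder (int, <=) as a thin category: at most one
        morphism between any pair of objects '''
    def __init__(self, dom, cod):
        assert dom <= cod  # the morphism dom -> cod exists only if dom <= cod
        self.dom = dom
        self.cod = cod
    def compose(f, g):
        assert g.cod == f.dom  # head meets tail
        return OrderCat(g.dom, f.cod)  # transitivity of <=
    @staticmethod
    def idd(a):
        return OrderCat(a, a)  # reflexivity of <=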

An example of a partial order is the subset relationship, which we can represent using python sets. This is an important but perhaps confusing example. Haven’t we already defined FinSet? Yes, but these are different categories. In FinSet, morphisms are functions. In SubSetCat a morphism is the subset relationship (of which there can either be one or not one). They just plain are not the same thing even though there are sets in the mix for both. The situation is made even more confusing by the fact that the subset relationship can be talked about indirectly inside FinSet using monic morphisms, which have as their image the subset of interest.
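
A sketch of SubSetCat along the same lines:

class SubSetCat:
    ''' the partial order of frozensets under inclusion, as a thin category '''
    def __init__(self, dom, cod):
        assert dom.issubset(cod)  # the morphism exists only for subsets
        self.dom = dom
        self.cod = cod
    def compose(f, g):
        assert g.cod == f.dom
        return SubSetCat(g.dom, f.cod)
    @staticmethod
    def idd(s):
        return SubSetCat(s, s)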

Preorders are related to directed acyclic graphs (DAG), the directed graphs that have no loops. If you give me a DAG, there is a preorder that is generated by that DAG. Exercise for the reader (AKA I’m lazy): Can you turn a Networkx DAG into a category?

Thoughts

This is nice and all just to explain categories in terms of some perhaps more familiar concepts. It feels a little ho-hum to me. We are not getting really any benefit from the concept of a category from this post. However, the examples of monoids, groups and preorders are always something you should think about when presented with a new categorical concept, because it probably reduces to something more familiar in this case. In addition, mappings to/from these simple objects to more complicated categories can be very interesting.

The methods of computational group theory are intriguing. It seems like some of them should extend to category theory. See this book by RFC Walters for example https://www.cambridge.org/core/books/categories-and-computer-science/203EBBEE29BEADB035C9DD80191E67B1 A very interesting book in other ways too. (Thanks to Evan Patterson for the tip)

Next time I think we’ll talk about finite categories and the finite Yoneda lemma.

Artwork courtesy of David

Edit: Hacker News discussion: https://news.ycombinator.com/item?id=23058551

Computational Category Theory in Python II: Numpy for FinVect

Linear algebra seems to be the place where any energy you put in to learning it seems to pay off massively in understanding other subjects and applications. It is the beating heart of numerical computing. I can’t find the words to overstate the importance of linear algebra.

Here’s some examples:

  • Least Squares Fitting – Goddamn is this one useful.
  • Quadratic optimization problems
  • Partial Differential Equations – Heat equations, electricity and magnetism, elasticity, fluid flow. Differential equations can be approximated as finite difference matrices acting on vectors representing the functions you’re solving for.
  • Linear Dynamical systems – Solving, frequency analysis, control, estimation, stability
  • Signals – Filtering, Fourier transforms
  • Quantum mechanics – Eigenvalues for energy, evolving in time, perturbation theory
  • Probability – Transition matrices, eigenvectors for steady state distributions.
  • Multidimensional Gaussian integrals – A canonical model in quantum mechanics and probability because they are solvable in closed form. Gaussian integrals are linear algebra in disguise. Their solution is describable in terms of the matrices and vectors in the exponent. More on this another day.

Where does category theory come into this?

On one side, exploring what categorical constructions mean concretely and computationally in linear algebra land helps explain the category theory. I personally feel very comfortable with linear algebra. Matrices make me feel good and safe and warm and fuzzy. You may or may not feel the same way depending on your background.

In particular, understanding what the categorical notion of a pullback means in the context of matrices is the first time the concept clicked for me thanks to discussions with James Fairbanks and Evan Patterson.

But the other direction is important too. A categorical interface to numpy has the promise of making certain problems easier to express and solve. It gives new tools for thought and programming. The thing that seems the most enticing to me about the categorical approach to linear algebra is that it gives you a flexible language to discuss gluing together rectangular subpieces of a numerical linear algebra problem and it gives a high level algebra for manipulating this gluing. Down this road seems to be an actionable, applicable, computational, constructible example of open systems.

Given how important linear algebra is, given that I’ve been tinkering and solving problems (PDEs, fitting problems, control problems, boundary value problems, probabilistic dynamics, yada yada ) using numpy/scipy for 10 years now and given that I actually have a natural reluctance towards inscrutable mathematics for its own sake, I hope that lends some credence to when I say that there really is something here with this category theory business.

It frankly boggles my mind that these implementations aren’t available somewhere already! GAH!

Uh oh. I’m foaming. I need to take my pills now.

FinVect

The objects in the category FinVect are the vector spaces. We can represent a vector space by its dimensionality n (an integer). The morphisms are linear maps which are represented by numpy matrices. ndarray.shape basically tells you what are the domain and codomain of the morphism. We can get a lot of mileage by subclassing ndarray to make our FinVect morphisms. Composition is matrix multiplication (which is associative) and identity morphisms are identity matrices. We’ve checked our category theory boxes.
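
Here is a sketch of such a subclass; the number of columns is the domain dimension and the number of rows is the codomain dimension.

import numpy as np

class FinVect(np.ndarray):
    ''' morphisms in FinVect: numpy matrices with dom/cod read off the shape '''
    def __new__(cls, arr):
        return np.asarray(arr).view(cls)
    @property
    def dom(self):
        return self.shape[1]  # domain dimension = number of columns
    @property
    def cod(self):
        return self.shape[0]  # codomain dimension = number of rows
    def compose(f, g):
        assert g.cod == f.dom
        return f @ g  # composition is matrix multiplication
    @staticmethod
    def idd(n):
        return FinVect(np.eye(n))  # identity morphisms are identity matrices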

A part of the flavor of category theory comes from taking the focus away from the objects and putting focus on the morphisms.

One does not typically speak of the elements of a set, or subsets of a set in category theory. One takes the slight indirection of using the map whose image is that subset or the element in question when/if you need to talk about such things.

This actually makes a lot of sense from the perspective of numerical linear algebra. Matrices are concrete representations of linear maps. But also sometimes we use them as data structures for collections of vectors. When one wants to describe a vector subspace concretely, you can describe it either as the range of a matrix or the nullspace of a matrix. This is indeed describing a subset in terms of a mapping. In the case of the range, we are describing the subspace as all possible linear combinations of the columns \lambda_1 c_1 + \lambda_2 c_2 + ... . It is a matrix mapping from the space of parameters \lambda to the subspace (1 dimension for each generator vector / column). In the case of the nullspace it is a matrix mapping from the subspace to the space of constraints (1 dimension for each equation / row).

The injectivity or surjectivity of a matrix is easily detectable as a question about its rank. These kinds of morphisms are called monomorphisms and epimorphisms respectively. They are characterized by whether you can “divide” out by the morphism on the left or on the right. In linear algebra terms, whether there is a left or right inverse to these possibly rectangular, possibly ill-posed matrices. I personally can never remember which is which (surj/inj, left/right, mono/epi) without careful thought, but then again I’m an ape.

Some categorical constructions are very simple structural transformations that correspond to merely stacking matrices, shuffling elements, or taking transposes. The product and coproduct are examples of this. The product is an operation that takes in 2 objects and returns a new object, two projections \pi_1, \pi_2, and a function implementing the universal property that constructs f from f_1, f_2.

The diagram for a product

Here is the corresponding python program. The matrix e (called f in the diagram; sorry about mixed conventions) is the unique morphism that makes those triangles commute, which is checked numerically in the assert statements.
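
A sketch of it, using the FinVect class sketched above:

def product(a, b):
    ''' categorical product of vector spaces of dimension a and b '''
    p1 = FinVect(np.hstack([np.eye(a), np.zeros((a, b))]))  # project out the first factor
    p2 = FinVect(np.hstack([np.zeros((b, a)), np.eye(b)]))  # project out the second factor
    def univ(f, g):
        e = FinVect(np.vstack([f, g]))  # the unique morphism into the product
        assert np.allclose(p1 @ e, f)  # the triangles commute
        assert np.allclose(p2 @ e, g)
        return e
    return a + b, p1, p2, univ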

The coproduct proceeds very similarly. Give it a shot. The coproduct is more similar to the product in FinVect than it is in FinSet.

The initial and terminal objects are 0 dimensional vector spaces. Again, these are more similar to each other in FinVect than they are in FinSet. A matrix with one dimension as 0 really is unique. You have no choice for entries.

Where the real meat and potatoes lives is in the pullback, pushout, equalizer, and co-equalizer. These are the categorical constructions that hold equation solving capabilities. There is a really nice explanation of the concept of a pullback and the other above constructions here.

Vector subspaces can be described as the range of a matrix or the nullspace of a matrix. These representations are dual to each other in some sense: if R spans the subspace that N annihilates, NR=0. Converting from one representation to the other is a nontrivial operation.

In addition to thinking of these constructions as solving equations, you can also think of a pullback/equalizer as converting a nullspace representation of a subspace into a range representation of a subspace and the coequalizer/pushout as converting the range representation into a nullspace representation.

The actual heart of the computation lies in the scipy routines null_space and orth. Under the hood these use an SVD decomposition, which seems like the most reasonable numerical approach to questions about nullspaces. (An aside: nullspaces are not a very numerical question. The dimensionality of a nullspace of a collection of vectors is pretty sensitive to perturbations. This may or may not be an issue. Not sure. )

Let’s talk about how to implement the pullback. The input is the two morphisms f and g. The output is an object P, two projections p1 p2, and a universal property function that given q1 q2 constructs u. This all seems very similar to the product. The extra feature is that the squares are required to commute, which corresponds to the equation f p_1 = g p_2 and is checked in assert statements in the code. This is the equation that is being solved. Computationally this is done by embedding this equation into a nullspace calculation \begin{bmatrix} F & -G \end{bmatrix} \begin{bmatrix} x  \\ y \end{bmatrix} = 0. The universal morphism is calculated by projecting q1 and q2 onto the calculated orthogonal basis for the nullspace. Because q1 and q2 are required to be in a commuting square with f and g by hypothesis, their columns live in the nullspace of the FG stacked matrix. There is extra discussion with James and Evan and some nice derivations as mentioned before here
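
In sketch form, again with the FinVect class from above:

from scipy import linalg

def pullback(f, g):
    ''' pullback of f : A -> C and g : B -> C, computed as the nullspace
        of the stacked matrix [F | -G] '''
    assert f.cod == g.cod  # f and g must share a codomain
    null = linalg.null_space(np.hstack([f, -g]))  # columns solve f x = g y
    P = null.shape[1]  # dimension of the pullback object
    p1 = FinVect(null[:f.dom, :])  # the x block
    p2 = FinVect(null[f.dom:, :])  # the y block
    assert np.allclose(f @ p1, g @ p2)  # the square commutes
    def univ(q1, q2):
        # q1, q2 are assumed to commute with f and g, so their columns lie in
        # the nullspace; project onto its orthonormal basis
        u = FinVect(null.T @ np.vstack([q1, q2]))
        assert np.allclose(p1 @ u, q1)
        assert np.allclose(p2 @ u, q2)
        return u
    return P, p1, p2, univ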

The equalizer, coequalizer, and pushout can all be calculated similarly. A nice exercise for the reader (AKA I’m lazy)!

Thoughts

I think there are already some things here for you to chew on. Certainly a lot of polish and filling out of the combinators is required.

I am acutely aware that I haven’t shown any of this being used. So I’ve only shown the side where the construction helps teach us category theory and not entirely fulfilled the lofty promises I set in the intro. I only have finite endurance. I’m sure the other direction, where this helps us formulate problems, will show up on this blog at some point. For what I’m thinking, it will be something like this post http://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ but in a different pullback-y style. Mix together FinSet and FinVect. Something something decorated cospans? https://arxiv.org/abs/1609.05382

One important thing is we really need to extend this to affine maps rather than linear maps (affine maps allow an offset y = Ax + b). This is important for applications. The canonical linear algebra problem is Ax=b, which we haven’t yet shown how to represent.

One common approach to embed the affine case in the linear case is to use homogeneous coordinates. https://en.wikipedia.org/wiki/Homogeneous_coordinates
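
As a quick sketch of that embedding:

def affine_to_linear(A, b):
    ''' embed the affine map y = A x + b as a linear map acting on
        homogeneous coordinates (x, 1) '''
    m, n = A.shape
    return np.block([[A, b.reshape(m, 1)],
                     [np.zeros((1, n)), np.ones((1, 1))]])

Applying the embedded matrix to the homogeneous vector (x, 1) gives back (Ax + b, 1).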

Alternatively, we could make a new python class FinAff that just holds the b vector as a separate field. Which approach will be more elegant is not clear to me at the moment.

Another very enticing implementation on the horizon is a nice wrapper for compositionally calculating gaussian integrals + linear delta functions. Gaussian integrals + delta functions are solved by basically a minimization problem over the exponent. I believe this can be formulated by describing the linear subspace we are in as a span over the input and output variables, associating a quadratic form with the vertex of the span. You’ll see.


Blah Blah Blah

Whenever I write a post, I just let it flow, because I am entranced by the sound of my own keyboard clackin’. But it would deeply surprise me if you are equally entranced, so I take sections out that are just musings and not really on the main point. So let’s toss em down here if you’re interested.

I like to draw little schematic matrices sometimes so I can visually see which dimensions match with which dimensions.

Making the objects just the dimension is a structural approach and you could make other choices. It may also make sense to not necessarily identify two vector spaces of the same dimensionality. It is nonsensical to consider a vector of dog’s nose qualities to be interchangeable with a vector of rocket ship qualities just because they both have dimensionality 7.

High Level Linear Algebra

Linear algebra already has some powerful high level abstractions in common use.

Numpy indexing and broadcasting can sometimes be a little cryptic, but it is also very, very powerful. You gain both concision and speed.

Matrix notation is the most commonly used “pointfree” notation in the world. Indexful expressions can be very useful, but the calculus of matrices lets us use intuition built about algebraic manipulation of ordinary numbers to manipulate large systems of equations in a high level way. There are simple rules governing matrix inverse, transpose, addition, multiplication, identity.

Another powerful notion in linear algebra is that of block matrices. Block matrices are the standard high level notation to talk about subpieces of a numerical linear algebra problem. For example, you might be solving the heat equation on two hunks of metal attached at a joint. It is natural to consider this system in block form with the off diagonal blocks corresponding to the coupling of the two hunks. https://en.wikipedia.org/wiki/Domain_decomposition_methods

One does not typically speak of the elements of a set, or subsets of a set in category theory. One takes the slight indirection of using the map whose image is that subset or the element in question when/if you need to talk about such things. This pays off in a couple ways. There is a nice minimalism in that you don’t need a whole new entity (python class, data structure, what have you) when you already have morphisms lying around. More importantly though the algebraic properties of what it means to be an element or subset are more clearly stated and manipulated in this form. On the flipside, given that we often return to subset or element based thinking when we’re confused or explaining something to a beginner shows that I think it is a somewhat difficult game to play.

The analogy is that a beginner will often write for loops for a numpy calculation that an expert knows how to write more concisely and efficiently using broadcasting and vectorization. And sometimes the expert just can’t figure out how to vectorize some complicated construction and defeatedly writes the dirty feeling for loop.

What about in a language where the for loops are fast, like Julia? Then isn’t the for loop version just plain better, since any beginner can read and write it and it runs fast too? Yes, I think learning some high level notation or interface is a harder sell here. Nevertheless, there is utility. High level formulations enable optimizing compilers to do fancier things. They open up opportunities for parallelism. They aid reasoning about code. See query optimization for databases. Succinctness is a surprising virtue in and of itself.

Aaron Hsu (who is an APL beast) said something that has stuck with me. APL has a reputation for being incredibly inscrutable. It uses characters you can’t type, each of which is a complex operation on arrays. It is the epitome of concision. A single word in APL is an entire subroutine. A single sentence is a program. He says that being able to fit your entire huge program on a single screen puts you in a different domain of memory and mindspace. That it is worth the inscrutability because once trained, you can hold everything in your extended mind at once. Sometimes I feel when I’m writing stuff down on paper that it is an extension of my mind, that it is part of my short term memory. So too the computer screen. I’m not on board for APL yet, but food for thought ya know?

Differences between the pure mathematical perspective of Linear Algebra, and the Applied/Numerical Linear Algebra

I think there are a couple conceptual points of disconnect between the purely mathematical conception of vector spaces and the applied numerical perspective.

First off, the numerical world is by and large focused on full rank square matrices. The canonical problem is solving the matrix equation Ax=b for the unknown vector x. If the matrix isn’t full rank or square, we find some way to make it square and full rank.

The mathematical world is more fixated on the concept of a vector subspace, which is a set of vectors.

It is actually extremely remarkable and I invite you for a moment to contemplate that a vector subspace over the real numbers is a very very big set. Completely infinite. And yet it is tractable because it is generated by only a finite number of vectors, which we can concretely manipulate.

Ok. Here’s another thing. I am perfectly willing to pretend unless I’m being extra careful that machine floats are real numbers. This makes some mathematicians vomit blood. I’ve seen it. Cody gave me quite a look. Floats upon closer inspection are not at all the mathematical real numbers though. They’re countable first off.

From a mathematical perspective, many people are interested in precise vector arithmetic, which requires going to somewhat unusual fields. Finite fields are discrete mathematical objects that just so happen to actually have a division operation like the rationals or reals. Quite the miracle. In pure mathematics they more often do linear algebra over these things rather than the rationals or reals.

The finite basis theorem. This was brought up in conversation as a basic result in linear algebra. I’m not sure I’d ever even heard of it. It is so far from my conceptualization of these things.

Monoidal Products

The direct sum of matrices is represented by taking the block diagonal. It is a monoidal product on FinVect. Monoidal products are binary operations on morphisms in a category that play nice with it in certain ways. For example, the direct sum of two identity matrices is also an identity matrix.

The kronecker product is another useful piece of FinVect. It is a second monoidal product on the category FinVect. It is useful for probability and quantum mechanics: when you pair up pieces of state to make a combined state, you take the kronecker product of their vector spaces.

    def par(f,g):
        ''' One choice of monoidal product is the direct sum '''
        r, c = f.shape
        rg, cg = g.shape
        return Vect(np.block( [ [f       ,           np.zeros((r,cg))]  ,
                                [np.zeros((rg,c))  , g              ]]  ))
    def par2(f,g):
        '''  another choice is the kronecker product'''
        return np.kron(f,g)

We think about column vectors as being matrices where the number of columns is 1, and row vectors as being matrices where the number of rows is 1. These can be considered as mappings from/to the 1 dimensional vector space. These morphisms are points.

The traditional focus of category theory in linear algebra has been on the kronecker product, string diagrams as quantum circuits/ penrose notation, and applications to quantum mechanics.

However, the direct sum structure and the limit/co-limit structures of FinVect are very interesting and more applicable to everyday engineering. I associate bringing more focus to this angle with John Baez’s group and his collaborators.

Anyway, that is enough blithering.

Computational Category Theory in Python I: Dictionaries for FinSet

Category theory is a mathematical theory with reputation for being very abstract.

Category theory is an algebraic theory of functions. It has the flavor of connecting up little pipes and ports that is reminiscent of dataflow languages or circuits, but with some hearty mathematical underpinnings.

So is this really applicable to programming at all? Yes, I think so.

Here’s one argument. Libraries present an interface to their users. One of the measures of the goodness or badness of an interface is how often you are inclined to peek under the hood to get it to do the thing that you need. Designing these interfaces is hard. Category theory has taken off as a field because it has been found to be a useful and uniform interface to a surprising variety of very different mathematics. I submit that it is at least plausible that software interfaces designed with tasteful mimicry of category theory may achieve similar uniformity across disparate software domains. This is epitomized for me in Conal Elliott’s Compiling to Categories. http://conal.net/papers/compiling-to-categories/

I think it is easy to have the miscomprehension that a fancy language like Haskell or Agda is necessary to even begin writing software that encapsulates category theory based ideas, but this is simply not the case. I’ve been under this misapprehension before.

It just so happens that category theory is especially useful in those languages for explaining some programming patterns especially those concerning polymorphism. See Bartosz Milewski’s Category theory for Programmers.

But this is not the only way to use category theory.

There’s a really delightful book by Rydeheard and Burstall called Computational Category Theory. The first time I looked at it, I couldn’t make heads or tails of it, going on the double uphill battle of category theory and Standard ML. But looking at it now, it seems extremely straightforward and well presented. It’s a cookbook of how to build category theoretic interfaces for software.

So I think it is interesting to perform some translation of its concepts and style into python, the lingua franca of computing today.

In particular, there is a dual opportunity to both build a unified interface between some of the most commonly used powerful libraries in the python ecosystem and also use these implementations to help explain categorical concepts in concrete detail. I hope to have the attention span to do the following:

A very simple category is that of finite sets. The objects in the category can be represented by python sets. The morphisms can be represented by python dictionaries. Nothing abstract here. We can rip and tear these things apart any which way we please.

The manipulations are made even more pleasant by the python features of set and dictionary comprehension which will mimic the definitions you’ll find on the wikipedia page for these constructions quite nicely.

Composition is defined as making a new dictionary by feeding the output of the first dictionary into the second. The identity dictionary over a set is one that has the same values as keys. The definition of products and coproducts (disjoint union) are probably not too surprising.
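
In sketch form:

def compose(f, g):
    ''' feed the output of the first dictionary into the second '''
    return {x: g[f[x]] for x in f}

def idd(s):
    ''' the identity dictionary: same values as keys '''
    return {x: x for x in s}

def product(A, B):
    ''' the product object and its two projections '''
    AxB = {(a, b) for a in A for b in B}
    fst = {p: p[0] for p in AxB}
    snd = {p: p[1] for p in AxB}
    return AxB, fst, snd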

One really interesting thing about the Rydeheard and Burstall presentation is noticing what are the inputs to these constructions and what are the outputs. Do you need to hand it objects? morphisms? How many? How can we represent the universal property? We do so by outputting functions that construct the required universal morphisms. They describe this as a kind of skolemization. The constructive programmatic presentation of the things is incredibly helpful to my understanding, and I hope it is to yours as well.

Here is a python class for FinSet. I’ve implemented a couple of interesting constructions, such as pullbacks and detecting monomorphisms and epimorphisms.
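
A sketch of the flavor of those pieces:

class FinSet:
    @staticmethod
    def pullback(f, g):
        ''' pairs from the domains of f and g that agree after mapping '''
        P = {(a, b) for a in f for b in g if f[a] == g[b]}
        p1 = {p: p[0] for p in P}
        p2 = {p: p[1] for p in P}
        return P, p1, p2
    @staticmethod
    def is_monic(f):
        ''' injective: distinct keys map to distinct values '''
        return len(set(f.values())) == len(f)
    @staticmethod
    def is_epic(f, cod):
        ''' surjective: every element of the codomain is hit '''
        return set(f.values()) == cod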

I’m launching you into the deep end here if you have never seen category theory before (although goddamn does it get deeper). Do not be surprised if this doesn’t make that much sense. Try reading Rydeheard and Burstall chapter 3 and 4 first or other resources.

Here’s some fun exercises (Ok. Truth time. It’s because I got lazy). Try to implement exponential and pushout for this category.

Rough Ideas on Categorical Combinators for Model Checking Petri Nets using Cvxpy

Petri nets are a framework for modeling dynamical systems that is very intuitive to some people. The vanilla version is that there are discrete tokens at nodes on a graph representing resources of some kind and tokens can be combined according to the firing of transition rules into new tokens in some other location.

This is a natural generalization of chemical reaction kinetics, where tokens are particular kinds of atoms that need to come together. It also is a useful notion for computer systems, where tokens represent some computational resource.

To me, this becomes rather similar to a flow problem or circuit problem. Tokens feel a bit like charge, transitions a bit like current (although not necessarily conservative). In a circuit, one can have such a small current that the particulate nature of electric current in terms of electrons is important. This happens for shot noise or for coulomb blockade for example.

If the number of tokens is very large, it seems intuitively sensible to me that one can well approximate the behavior by relaxing to a continuum. Circuits have discrete electrons and yet are very well approximated by ohm’s laws and the like. Populations are made of individuals, and yet in the thousands their dynamics are well described by differential equations.

It seems to me that mixed integer programming is a natural fit for this problem. Mixed integer programming has had its implementations and theory heavily refined for over 70 years so now very general purpose and performant off the shelf solvers are available. The way mixed integer programs are solved is by considering their quickly solved continuous relaxation (allowing fractional tokens and fractional transitions more akin to continuous electrical circuit flow) and using this information to systematically inform a discrete search process. This seems to me a reasonable starting approximation. Another name for petri nets is Vector Addition Systems, which has more of the matrix-y flavor.

We can encode a bounded model checking for reachability of a petri net into a mixed integer program fairly easily. We use 2-index variables, the first of which will be used for time step. We again turn to the general purpose functional way of encoding pointful dsls into pointfree ones as I have done here and here. The key point is that you need to be careful where you generate fresh variables. This is basically copy catting my posts here. http://www.philipzucker.com/categorical-combinators-for-graphviz-in-python/ http://www.philipzucker.com/a-sketch-of-categorical-relation-algebra-combinators-in-z3py/

I’m like 70% sure what I did here makes sense, but I’m pretty sure the general idea makes sense with some fiddling.

The big piece is the weighted_block function. It lets you build a combinator with an internal state and input and output flow variables. You give matrices with entries for every possible transition. Whether transitions occurred between t and t+1 is indicated by integer variables. There is also possible accumulation of tokens at nodes, which also requires integer variables. Perhaps we’d want to expose the token state of the nodes to the outside too?
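
A hypothetical sketch of the shape of weighted_block (T, the names, and the weight matrix conventions here are guesses, not the original code):

import cvxpy as cvx
import numpy as np

T = 10  # number of time steps for the bounded check

def weighted_block(w_in, w_state, w_out):
    ''' hypothetical sketch: rows index transitions, columns index
        input flows / internal nodes / output flows respectively '''
    n_trans = w_in.shape[0]
    fired = cvx.Variable((T - 1, n_trans), integer=True)  # firings between t and t+1
    tokens = cvx.Variable((T, w_state.shape[1]), integer=True)  # tokens accumulated at nodes
    constraints = [fired >= 0, tokens >= 0,
                   tokens[1:, :] == tokens[:-1, :] + fired @ w_state]
    inflow = fired @ w_in    # flow drawn from the input ports at each step
    outflow = fired @ w_out  # flow pushed to the output ports at each step
    return inflow, outflow, tokens, constraints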

Weighted block schematically looks something like this

We can also get out a graphical representation of the net by reinterpreting our program into GraphCat. This is part of the power of the categorical interface. http://www.philipzucker.com/categorical-combinators-for-graphviz-in-python/

When I talked to Zach about this, he seemed skeptical that MIP solvers wouldn’t die a horrible death if you threw a moderately large petri net at them. Hard to say without trying.

Thoughts

There is an interesting analogy to be found with quantum field theory in that if you lift up to considering distributions of tokens, it looks like an occupation number representation. See Baez. http://math.ucr.edu/home/baez/stoch_stable.pdf

If you relax the integer constraint and the positivity constraints, this really is a finite difference formulation for capacitor circuits. The internal states would then be the charge of the capacitor. Would the positivity constraint be useful for diodes?

I wonder how relevant the chunky nature of petri nets might be for considering superconducting circuits, which have quantization of quantities from quantum mechanical effects.

Cvxpy lets you describe convex regions. We can use this to implement the convex subcategory of Rel, of which you can ask reachability questions. Relational division won’t work probably. I wonder if there is something fun there with respect to the integerizing operation and galois connections.

Edit: I should have googled a bit first. https://www.sciencedirect.com/science/article/pii/S0377221705009240 Mathematical programming techniques for petri net reachability. So it has been tried, and it sounds like the results weren’t insanely bad.

Categorical Combinators for Graphviz in Python

Graphviz is a graph visualization tool https://www.graphviz.org/. In Conal Elliott’s Compiling to categories http://conal.net/papers/compiling-to-categories/, compiling code to its corresponding graphviz representation was one very compelling example. These graphs are very similar to the corresponding string diagram of the monoidal category expression, but they also look like boolean circuit diagrams. Besides in Conal Elliott’s Haskell implementation, there is an implementation in the Julia Catlab.jl project https://epatters.github.io/Catlab.jl/stable/

I’ve basically implemented a toy version of a similar thing in python. It lets you do things like this

Why python?

  • Python is the lingua franca of computing these days. Many people encounter it, even people whose main thing isn’t computers.
  • The python ecosystem is nutso good.
  • Julia is kind of an uphill battle for me. I’m fighting the battle ( https://github.com/philzook58/Rel.jl ) , but I already know python pretty well. I can rip this out and move on.

What I did was implement some wrappers around the graphviz python package. It exposes a not very feature rich stateful interface. It is pretty nice that it prints the graphs inline in jupyter notebooks though.

The code is written in a style very similar (and hopefully overloadable with) to that of z3py relation algebra. http://www.philipzucker.com/a-sketch-of-categorical-relation-algebra-combinators-in-z3py/ There is a fairly general cookbook method here for categorifying dsls. Since graphviz does not directly expose fresh node creation as far as I can tell, I made my own using a random number generator. The actual combinators are graphviz object processors, so we build up a giant functional chain of processors and then actually execute it with run. The inputs and outputs are represented by lists of node names. There is some design space to consider here.

I also had to use class based wrappers. Based on the precedent set by the python 3 matrix multiplication operator @, I think it is a requirement that this also be used for category composition. id is a keyword or something in python unfortunately. For monoidal product, I feel like overloading power ** looks nice even if it is a nonsensical analogy; * is also not too bad. I went with * for now.

The graphviz graphs aren’t quite string diagrams. They make no promise to preserve the ordering of your operations, but they seem to tend to.

Here’s some example usage

GC = GraphCat
f = GraphCat.named_simple("f")
g = GraphCat.named_simple("g")
I = GraphCat.idd()
dump = GC.dump()
dup = GC.dup()
diagram = ((f * I) @ dup @ g @ (dump * I)  @ (I * f) @ (f * f)) * g
diagram.run()

cup = GraphCat.cup()
cap = GraphCat.cap()
d =  cap @ (I * I) @ cup  #(I * cap) @ (I * I * I) @ (cup * I)
d.run()

d = plus @ (GC.const(1) * GC.const(2)) # assumes a `plus` node defined elsewhere
d = d.run()

Class based overloading is the main paradigm of overloading in python. You overload a program into different categories, by making a program take in the appropriate category class as a parameter.

# by passing in different category classes, we can make polymorphic functions
# They have to have a uniform interface though, which is hard to constrain in python.
def polymorphic_prog(M):
    idd = M.idd()
    dump = M.dump()
    dup = M.dup()
    return (idd * dump) @ dup
polymorphic_prog(GraphCat).run()

For example something like this ought to work. Then you can get the graph of some matrix computation to go along with its numpy implementation.

import numpy as np

class FinVect(np.ndarray):

    def compose(f,g):
        return f @ g
    def idd(n):
        return np.eye(n)
    def par(f,g):
        return np.kron(f,g)
    def __mul__(self,rhs):
        return np.kron(self,rhs)
# and so on. 

Maybe outputting tikz is promising? https://github.com/negrinho/sane_tikz

Stupid is as Stupid Does: Floating Point in Z3Py

Floating points are nice and all. You can get pretty far pretending they are actually numbers. But they don’t obey some mathematical properties that feel pretty obvious. A classic to glance through is “What Every Computer Scientist Should Know About Floating-Point Arithmetic” https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

We can check some properties with z3py. Here are a couple simple properties that succeed for mathematical integers and reals, but fail for floating point.
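
For example, a sketch of the kind of check (associativity of addition):

from z3 import *

x, y, z = Reals("x y z")
prove((x + y) + z == x + (y + z))  # proved: real addition is associative

a, b, c = FPs("a b c", Float16())
rm = RNE()  # round nearest, ties to even
prove(fpEQ(fpAdd(rm, fpAdd(rm, a, b), c),
           fpAdd(rm, a, fpAdd(rm, b, c))))  # counterexample: float addition is not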

I recently saw on twitter a reference to a Sylvie Boldo paper https://hal.archives-ouvertes.fr/hal-01148409/ “Stupid is as Stupid Does: Taking the Square Root of the Square of a Floating-Point Number”.

In it, she uses FlocQ and Coq to prove a somewhat surprising result that the naive formula \sqrt{x^2} = |x| actually is correct for the right rounding mode of floating point, something I wouldn’t have guessed.

Z3 confirms for Float16. I can’t get Float32 to come back after even a day on a fairly beefy computer. If I use FPSort(ebits,sbits) rather than a standard size, it just comes back unknown, so I can’t really see where the cutoff size is. This does not bode well for checking properties of floating point in z3 in general. I think a brute force for loop check of 32 bit float properties is feasible. It might even be pretty fast. To some degree, if z3 is taking forever to find a counterexample, I wonder to what degree the property is probably true.
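
Here’s a sketch of the query for Float16. I’ve added a guard excluding overflow/underflow of the square, which are the hypotheses of the theorem:

from z3 import *

x = FP("x", Float16())
rm = RNE()
x2 = fpMul(rm, x, x)
s = Solver()
s.add(fpIsNormal(x2))  # guard: the square neither overflows nor underflows
s.add(Not(fpEQ(fpSqrt(rm, x2), fpAbs(x))))
print(s.check())  # unsat: no counterexample under the guard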

If anyone has suggestions, I’m all ears.

A Sketch of Categorical Relation Algebra Combinators in Z3Py

I’ve discussed relation algebra before. Relations are sets of tuples. There, I implemented the relations naively using lists for sets. This is very simple, and very clean especially with list comprehension syntax. It is however horrifically inefficient, and we could only deal with finitely enumerable domains. The easiest path to fixing these problems is to cash out to an external solver, in this case z3.

There are many beautifully implemented solvers out there and equally beautiful DSL/modeling languages. Examples in mind include sympy, cvxpy, and z3. These modeling languages require you to instantiate variable objects and build expressions out of them and then hand it off to the solver. This is a reasonable interface, but there are advantages to a more categorical/point-free style DSL.

Point-free languages are ones that do not include binding forms that introduce bound/dummy variables. Examples of binding forms like this are \lambda \sum \max \min \int \sup \lim \forall \exists. One problem lies in the fact that the names of bound variables don’t matter, and that they end up accidentally smashing into each other. You may have experienced this in physics or math class as the dummy indices or dummy variable problem causing you to screw up your calculation of some cross product identity or some complicated tensor sum. These are surprisingly subtle problems, very difficult to diagnose and get right. de Bruijn indices are a technique for giving the bound variables canonical names, but they suck to implement in their own way. When you make a DSL point free, it is a joy to manipulate, optimize, and search. I think this may be the core of why category theory is a good language for mathematics and programming.

Point-free style also tends to have significant economy of size, for better or worse. Sometimes it is better to have an expression very dense in information. This is important if you are about to algebraically manipulate an expression with paper and pencil. Every manipulation requires a great deal of mind numbing copying as you proceed line by line, so it can be excruciating if your notation has a lot of unnecessary bulk.

Relations are like functions. The two pieces of the tuple can be roughly thought of as the “input” and the “output”. Relations are only loosely directional though. Part of the point of relations is that the converse (inverse) of a relation is easy to define.

When we implement relations, we have a choice. Do we want the relation to produce its variables, accept its variables, or accept one and produce the other? There are advantages to each. When relations were [(a,b)], a -> b -> Bool, and a -> [b], converting between these forms was a rather painful enumeration process. The sting of converting between them is taken out by the fact that the conversion is no longer a very computationally expensive process, since we’re working at the modeling layer.

When you’re converting a pointful DSL to pointfree DSL, you have to be careful where you instantiate fresh variables or else you’ll end up with secret relations that you didn’t intend. Every instantiation of id needs to be using fresh variables for example. You don’t want the different ids talking to each other. Sometimes achieving this involves a little currying and/or thunking.

There is a pattern that I have noticed when I’m using modeling languages. When you have a function or relation on variables, there are constraints produced that you have to keep a record of. The pythonic way is to have a Model or Solver object, and then have that object mutate an internal record of the set of constraints. I don’t particularly enjoy this style though. It feels like too much boilerplate.

In Haskell, I would use something like a Writer monad to automatically record the constraints that are occurring. Monads are not really all that pleasant even in Haskell, and especially a no go in python without “do” syntax.

However, because we are going point free it is no extra cost at all to include this pipework along for the ride in the composition operation.

Here are implementations of the identity and composition for three different styles. Style 1 is fully receptive, style 2 is mixed (function feeling) and style 3 is fully productive of variables.
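
In sketch form, for integer relations (FreshInt hands back a fresh z3 variable each call, which is what keeps the different ids from talking to each other):

from z3 import *

# Style 1: fully receptive -- a relation is a function from both variables to a formula.
def idd1():
    return lambda x, y: x == y

def compose1(r, s):
    def res(x, z):
        y = FreshInt()
        return Exists([y], And(r(x, y), s(y, z)))
    return res

# Style 2: mixed (function feeling) -- accept the input variable,
# produce the output variable plus constraints.
def idd2(x):
    y = FreshInt()
    return y, [x == y]

def compose2(r, s):
    def res(x):
        y, c1 = r(x)
        z, c2 = s(y)
        return z, c1 + c2  # the constraint pipework rides along with composition
    return res

# Style 3: fully productive -- produce both variables and constraints.
def idd3():
    x, y = FreshInt(), FreshInt()
    return x, y, [x == y]

def compose3(r, s):
    x, y1, c1 = r()
    y2, z, c2 = s()
    return x, z, c1 + c2 + [y1 == y2]  # glue the middle variables together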

Fair warning, I’m being sketchy here. I haven’t really tried this stuff out.

z3 is a simply typed language. You can get away with some polymorphism at the python level (for example the == dispatches correctly accord to the object) but sometimes you need to manually specify the sort of the variables. Given these types, the different styles are interconvertible

We can create the general cadre of relation algebra operators. Here is a somewhat incomplete list.
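
A sketch of a few of them, continuing in style 1:

def converse(r):
    return lambda x, y: r(y, x)

def meet(r, s):
    return lambda x, y: And(r(x, y), s(x, y))

def join(r, s):
    return lambda x, y: Or(r(x, y), s(x, y))

top = lambda x, y: BoolVal(True)      # the total relation
bottom = lambda x, y: BoolVal(False)  # the empty relation

def rsub(r, s, sort=IntSort()):
    # relational inclusion r <= s as a forall question
    x, y = Consts("x y", sort)
    return ForAll([x, y], Implies(r(x, y), s(x, y)))

Then for example prove(rsub(meet(r, s), r)) ought to come back proved for any relations r and s.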

Questions about relation algebra expressions are often phrased in terms of relational inclusion. You can construct a relation algebra expression, use the rsub in the appropriate way and ask z3’s prove function if it is true.

Z3 has solvers for

  • Combinatorial Relations
  • Linear Relations
  • Polyhedral Relations
  • Polynomial Relations
  • Interval Relations – A point I was confused on. I thought interval relations were not interesting. But I was interpreting the term incorrectly. I was thinking of relations on AxB that are constrained to take the form of a product of intervals. In this case, the choice of A has no effect on the possible B whatsoever, so it feels non relational. However, there is also I_A x I_B, relations over the intervals of A and B. This is much closer to what is actually being discussed in interval arithmetic.

Applications we can use this for:

  • Graph Problems. The edges can be thought of as a relation between vertices. Relation composition using the starn operator is a way to ask for paths through the graph.
  • Linear Relations – To some degree this might supplant my efforts on linear relations. http://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ Z3 is fully capable of understanding linear relations.
  • Safety and liveness of control systems. Again, a transition relation is natural here. It is conceivable that the state space can be heterogeneous in time, which is the interesting power of the categorical style. I feel like traditional control systems usually maintain the same state space throughout.
  • Program verification
  • Games? Nash equilibria?

Other Thoughts

  • Maybe we are just building a shitty version of alloy. https://alloytools.org/
  • What about uninterpreted relations? What about higher order relations? What about reflecting into a z3 ADT for a relational language. Then we could do relational program synthesis. This is one style, just hand everything off to smt. https://github.com/nadia-polikarpova/cse291-program-synthesis/tree/master/lectures
  • I should try to comply with python conventions, in particular numpy and pandas conventions. @ should be composition for example, since relation composition has a lot of flavor of matrix composition. I should overload a lot of operators, but then I’d need to wrap in a class 🙁
  • Z3 has special support for some relations. How does that play in? https://theory.stanford.edu/~nikolaj/programmingz3.html#sec-special-relations https://z3prover.github.io/api/html/ml/Z3.Relation.html
  • As long as you only use composition, there is a chaining of existentials, which really isn’t so bad.
  • What we’ve done here is basically analogous/identical to what John Wiegley did compiling to the category of z3. Slightly different in that he only allowed for existential composition rather than relational division. http://newartisans.com/2017/04/haskell-and-z3/
  • We can reduce the burden on z3 if we know the constructive proof objects that witness our various operations. Z3 is gonna do better if we can tell it exactly which y witnesses the composition of operators, or give clues as to which branch of an Or it should use.
  • It’s a bummer, but when you use quantifiers, you don’t see countermodels? Maybe there is some hook where you can, or in the dump of the proof object.
  • What about recursion schemes? The exact nature of z3’s ability to handle unbounded problems is fuzzy to me. It does have the support to define recursive functions. Also explicit induction predicates can go through sometimes. Maybe look at the Cata I made in the fancy relation algebra post.
  • I think most proof assistants have implementations of relation algebra available. I find you can do a surprising amount in z3.

Stupid Z3Py Tricks: Verifying Sorting Networks off of Wikipedia

Sorting networks are a circuit flavored take on sorting. Although you can build circuits for any size input, any particular circuit works for a fixed sized input. They are like an unrolling of the loops or recursion of more familiar sorting algorithms. They come up also in the context of parallel and gpu sorting.

Here’s an interesting thing. We can go to Wikipedia and get a little python snippet for the comparison order of a Batcher odd-even mergesort. Kind of a confusing algorithm. Why does it even work? Is it even right? It’s written in some kind of funky, indexful generator style.

Source: Wikipedia

Well we can confirm this relatively straightforwardly using z3 by replacing the implementation of compare_and_swap with its z3 equivalent. We then ask z3 whether any input can come out unsorted.
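
A sketch of the check, assuming pair_to_compare holds the (i, j) comparison order generated by the Wikipedia snippet:

from z3 import *

n = 8
xs = [Int(f"x_{i}") for i in range(n)]
out = list(xs)
for i, j in pair_to_compare:  # hypothetical: the comparison order from the snippet
    lo, hi = out[i], out[j]
    out[i] = If(lo <= hi, lo, hi)  # z3 equivalent of compare_and_swap
    out[j] = If(lo <= hi, hi, lo)

s = Solver()
s.add(Not(And([out[i] <= out[i + 1] for i in range(n - 1)])))
print(s.check())  # unsat means every execution comes back sorted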

This comes back unsat, hence there are no inputs or executions that do not come back sorted. If I delete some elements from pair_to_compare, it comes back sat, showing that it doesn’t always sort.

The trick here is that the circuit is fixed size, so we have no need for induction, one of the main things z3 is rather finicky at.

It’s somewhat interesting to note that the output of odd_even_merge is a sequence of instructions; we can think of this as interpreting a very small 1 instruction programming language.

We can also confirm similarly a simple odd-even bubblesort and other similar algorithms.

Q: What about using uninterpreted sorts rather than integers? Integers is pretty convincing to me.

same_elems is slightly weaker than a permutation predicate. Wasn’t super obvious to me the best way to do a permutation predicate in z3. Would I want to internalize the array?

Edit: Upon further thought, the sort actually IS a nice predicate for permutation. How do we check whether two things are permutations of each other? By sorting them both and forcing a zipped equality. Alternatively, count the number of occurrences of each element (a piece of bucket sort). Since this sort is built by composing swaps, it is somewhat intrinsically a permutation.
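For the counting alternative, here's a quick sketch (mine, not the post's) of a permutation predicate over two equal-length python lists of z3 integer terms:

from z3 import *

def is_permutation(xs, ys):
    # equal-length lists are permutations iff every element of xs
    # occurs in ys exactly as often as it occurs in xs
    assert len(xs) == len(ys)
    count = lambda x, zs: Sum([If(x == z, 1, 0) for z in zs])
    return And([count(x, xs) == count(x, ys) for x in xs])

xs = [Int(f'a{i}') for i in range(3)]
ys = [Int(f'b{i}') for i in range(3)]
prove(Implies(is_permutation(xs, ys), Sum(xs) == Sum(ys)))  # proved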

As a bummer though, I think randomized testing on arrays would be equally or perhaps more convincing of the correctness of the algorithm. Oh well.

Programming and Interactive Proving With Z3Py

I’ve been fiddling with z3py, figuring out some functionality and realizing some interesting things you could do with it. I think I’m at a point where it is nice to checkpoint myself with a blog post.

I’m a little surprised z3py doesn’t overload the & and | operators, and some kind of implies operator, for BoolRef. You can add them yourself like this:

from z3 import *
# useful non default operator definitions for z3 bools
BoolRef.__and__ = lambda self, rhs: And(self,rhs)
BoolRef.__or__ = lambda self, rhs: Or(self,rhs)
BoolRef.__xor__ = lambda self, rhs: Xor(self,rhs)
BoolRef.__invert__ = lambda self: Not(self)
BoolRef.__rshift__ = lambda self, rhs: Implies(self,rhs)
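With those in place, infix formulas read the way you'd hope:

a, b = Bools('a b')
print((a & b) >> b)  # Implies(And(a, b), b)
print(~a | b)        # Or(Not(a), b)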

Functional Programming

Python is not the best functional programming environment imo. And by functional programming I implicitly mean roughly ML-like FP a la Haskell or OCaml. I don’t venture much into lisp land.

The lack of good algebraic datatypes (the class syntax is so ungainly) and a type system hurts. The lack of pattern matching hurts. The lambda keyword is so long it makes me sad.

But you have full access to z3 from the python bindings. Z3 does have algebraic data types, and a type system. It has built in substitution mechanisms and evaluation. And it has insane search procedures and the ability to prove things. Pretty damn cool!

Unfortunately the type system is rather simplistic, being basically simply typed rather than polymorphic or something fancier. But using python as a schema/macro system for z3 seems a plausible way forward.

To build templated types, you can have constructor functions in python for the appropriate types.

def Tuple(a,b):
    Type = Datatype(f'Tuple({a.name()},{b.name()})') # name the sort after its parameters
    Type.declare('pair', ('fst', a), ('snd', b))
    Type = Type.create()
    return Type
def Either(a,b):
    Type = Datatype(f'Either({a.name()},{b.name()})')
    Type.declare('left', ('getLeft', a))
    Type.declare('right', ('getRight', b))
    Type = Type.create()
    return Type 
def Maybe(a):
    Type = Datatype(f'Maybe({a.name()})')
    Type.declare('Just', ('fromJust', a))
    Type.declare("Nothing")
    Type = Type.create()
    return Type
def List(a):
    Type = Datatype(f'List({a.name()})') # recursive: cdr refers back to Type
    Type.declare('Cons', ('car', a), ('cdr', Type))
    Type.declare("Nil")
    Type = Type.create()
    return Type
A note in regards to List: Z3 has a built-in type Seq that I think it has built-in smarts about. You might be better off using that rather than a custom List. YMMV.
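As a quick sanity check of the generated constructors and accessors (my own usage example, assuming the definitions above):

IntPair = Tuple(IntSort(), IntSort())
p = IntPair.pair(1, 2)        # build a concrete pair
prove(IntPair.fst(p) == 1)    # proved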

You can access the constructors from the returned types. Check this out. You get recognizer functions is_Nothing and is_Just, the accessor function fromJust, and constructor functions Nothing and Just. I do a lot of dir() exploration with z3py; it's hard to know what's available sometimes.

# dir(Maybe(IntSort())) returns
[
'Just',
 'Nothing',
... underscore junk ... ,
 'accessor',
 'as_ast',
 'ast',
 'cast',
 'constructor',
 'ctx',
 'ctx_ref',
 'eq',
 'fromJust',
 'get_id',
 'hash',
 'is_Just',
 'is_Nothing',
 'kind',
 'name',
 'num_constructors',
 'recognizer',
 'sexpr',
 'subsort',
 'translate',
 'use_pp']

It’s possible to build a general purpose match combinator on these types since you can introspect the number of constructors of the ADT using num_constructors, constructor, recognizer, and accessor. There might be a match inside z3py somewhere? I think it’s part of the SMTLIB standard now.

def match(x, **kwargs):
    t = x.sort()
    nc = t.num_constructors()
    acc = kwargs["_"] # default argument
    for c in range(nc):
        con = t.constructor(c)
        rec = t.recognizer(c)
        nfields = con.arity()
        if nfields == 0:
            res = kwargs[con.name()]
        else:
            res = kwargs[con.name()](  *[t.accessor(c,a)(x) for a in range(nfields)] )
        acc = If(rec(x), res, acc)
    return acc

Example usage:

match(Const("x", Maybe(IntSort())), Just=lambda y : y + 1, Nothing = IntVal(3), _=IntVal(10))
# returns If(is(Nothing, x), 3, If(is(Just, x), fromJust(x) + 1, 10))

Z3 has a substitution mechanism built in. This is useful for instantiating a ForAll and for evaluating a Lambda. The substitute_vars function is what you want, used like substitute_vars(f.body(), x, y, z).
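A quick illustration on a one-variable ForAll (with multiple binders, watch out: substitute_vars works on de Bruijn indices, which run in the reverse order of the binder list):

x, y = Ints('x y')
f = ForAll([x], x + 1 > x)
print(substitute_vars(f.body(), y))  # y + 1 > y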

It is possible to reflect the syntax in a fairly straightforward way back into python via a lambdify function, mimicking the equivalent very useful function from sympy. Lambdify is basically an interp function. Here is a start for such a function. I by no means have implemented interpretation of the entirety of z3. Also I feel like this implementation is very clunky. Some kind of CPS?

def lift1(f,x):
    return lambda *args: f(x(*args))

def lift2(op,l,r):
    return lambda *args: op(l(*args), r(*args))

# interp is useful for transferring expressions into numpy, sympy
# but also for program extraction

from functools import reduce
import operator as op
def interp(a, *args):
    if is_true(a):
        return lambda *args: True
    elif is_false(a):
        return lambda *args: False
    elif is_int_value(a):
        return lambda *args: a.as_long()
    elif is_rational_value(a):
        n = a.numerator_as_long()
        d = a.denominator_as_long()
        return lambda *args: n / d
    #elif is_algebraic_value(a):
    #    pass
    elif is_const(a): # is free variable
        loc = [ind for ind, b in enumerate(args) if eq(a,b)]
        if len(loc) == 0:
            return a # not one of the supplied variables; left unevaluated
        else:
            ind = loc[0]
            return lambda *args2: args2[ind]   
    b = [interp(c, *args) for c in a.children()]
    if is_and(a):
        return lambda *args: reduce(op.and_, [f(*args) for f in b])
    elif is_or(a):
        return lambda *args: reduce(op.or_, [f(*args) for f in b])
    elif is_app_of(a, Z3_OP_XOR):
        return lambda *args: reduce(op.xor, [f(*args) for f in b])
    elif is_add(a):
        return lambda *args: reduce(op.add, [f(*args) for f in b])
    elif is_mul(a):
        return lambda *args: reduce(op.mul, [f(*args) for f in b])
    elif len(b) == 1:
        n = b[0]    
        if is_not(a):
            return lift1(op.invert , n)# lambda *args: ~n
    elif len(b) == 2:
        l,r = b
        if is_eq(a):
            return lift2(op.eq, l,r) #lambda *args: l == r
        elif is_distinct(a): # This can be multi_argument
            return lift2(op.ne, l,r) # lambda *args: l != r
        elif is_sub(a):
            return lift2(op.sub, l,r) # lambda *args: l - r
        elif is_app_of(a, Z3_OP_POWER):
            return lift2(op.pow, l,r) #lambda *args: l ** r
        elif is_div(a):
            return  lift2(op.truediv, l,r)# lambda *args: l / r
        elif is_idiv(a):
            return lift2(op.floordiv, l,r) # lambda *args: l // r
        elif is_mod(a):
            return lift2(op.mod, l,r) # lambda *args: l % r
        elif is_le(a):
            return lift2(op.le, l,r) # lambda *args: l <= r
        elif is_lt(a):
            return lift2(op.lt, l,r) # lambda *args: l < r
        elif is_ge(a):
            return lift2(op.ge, l,r) # lambda *args: l >= r
        elif is_gt(a):
            return lift2(op.gt, l,r) # lambda *args: l > r
        elif is_implies(a):
            return lambda *args: (~ l(*args)) | r(*args) # implies = not l, or r
    print("unrecognized constructor: ", type(a))
    assert(False)
#example usage
a = Bool('a')
interp(Xor(a & a | a, a), a)(True)
x, y = Reals('x y')
interp(x + y + y + y * x, x ,y)(3,2)

There is the ability to define recursive functions in z3. Since the solver will happily run such definitions backwards, searching for inputs that produce a given output, you get a kind of reversible functional programming language, maybe some subset of mercury / curry's power.

fac = RecFunction('fac', IntSort(), IntSort())
n = Int('n')
RecAddDefinition(fac, n, If(n == 0, 1, n*fac(n-1)))

s = Solver()
s.add(fac(n) == 6)
s.check()
s.model()
#  returns [n = 3, fac = [0 → 1, else → fac(-1 + ν0)·ν0]]

Interactive Theorem Proving

Z3 is awesome at theorem proving. But some things it just doesn't handle right and needs human guidance.

Searching around turns up a couple of interesting python interactive theorem prover projects. Cody pointed me to a project he worked on a while back, Boole https://github.com/avigad/boole . It has a dependently typed lambda calculus in it, with the purpose of gluing together many systems, I think. He implemented a lot of stuff from scratch. I think I want to try to get less and do less. There is also holpy https://arxiv.org/abs/1905.05970 which appears to be actively developed. It's roughly a translation of HOL to python, I think. It's available from a strange Chinese github on the author's website if you go looking for it.

This suggests an interesting approach. Most interactive theorem provers start unautomated and add automation later. Instead we can iteratively build an interface to de-automate z3.

Altogether, this approach is more HOL flavored than Coq/Agda flavored. z3 terms are our logic and python is our manipulation metalanguage. Ideally, one would want to verify that every manipulation step preserves soundness.

Python is so unprincipled that I can’t imagine that you could ever build a system up to the trustworthiness of the other theorem provers. But this is freeing in many ways. Since that is off the table, we can just do the best we can.

Using the z3 syntax tree and the z3 proof automation and z3 substitution mechanisms gives us a HUGE step up from implementing them from scratch. Ideally, we’d want to write as little python as possible, and especially as little python as possible that has to be trusted to be implemented correctly.

One big concern is accidental mutation of the proof under our feet by python. Perhaps using hashes and checking them might be a way to at least detect this. I need to have a good think about how to factor out a trusted core from all possible tactics.

I think it helps a little that z3 often will be able to verify the equivalence of small steps in proofs even if it can’t do the entire proof itself.

I think induction principles will need to be injected by hand; Z3 doesn't really have that built in. There are definitely situations where, after you introduce the induction, z3 can slam all the cases no problem. For example, check out the sketch below.
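Here's a hedged sketch of that pattern (my example, not the post's): hand z3 the induction principle for one specific claim as an axiom, and it can discharge the base case, the inductive step, and the final instantiation on its own.

from z3 import *

sum_ = RecFunction('sum', IntSort(), IntSort())
n, m = Ints('n m')
RecAddDefinition(sum_, n, If(n <= 0, 0, n + sum_(n - 1)))

claim = lambda k: 2 * sum_(k) == k * (k + 1)

# the induction principle, instantiated by hand for this particular claim
induction_ax = Implies(
    And(claim(IntVal(0)),
        ForAll([n], Implies(And(n >= 0, claim(n)), claim(n + 1)))),
    ForAll([n], Implies(n >= 0, claim(n))))

s = Solver()
s.add(induction_ax, m >= 0, Not(claim(m)))
print(s.check())  # unsat, hopefully: z3 unfolds sum and proves base and step itself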

Another thing that might be nice is integration/translation to sympy. Sympy has a ton of useful functionality, at the very least differentiation.

Translation and integration with cvxpy for sum of squares proofs would also be quite neat. I already did something with this using sympy. I'm not super sure how you extract exact proofs from the floating point solutions SCS returns; I think there is a way. I've heard the LLL algorithm can be used for this somehow (finding likely algebraic number matches to floating point numbers)?

So here are some sketched out ideas for tactics.
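One caveat: the class below calls two helpers, prove2 and instantiate, that aren't shown. Minimal versions consistent with how they're used (my assumptions about their behavior, not necessarily the post's definitions):

from z3 import *

def prove2(claim):
    # True iff z3 can refute the negation of the claim
    s = Solver()
    s.add(Not(claim))
    return s.check() == unsat

def instantiate(f, *terms):
    # replace the bound variables of a quantifier body with concrete terms
    # (substitute_vars works on de Bruijn indices: innermost binder first,
    # so with multiple binders the order may need reversing)
    return substitute_vars(f.body(), *terms)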

class Proof():
    def __init__(self, goal, name=None): # Takes a name for the theorem?
        self.goals = [([],goal)]
        self.proven = False
        self.name = name
    #def intros(self): #intro_all        
    #    self.goals.append( (ctx, goal.intros())  )
    #    return self
    def equiv(self, goal2):
        ctx, goal1 = self.goals.pop()
        if prove2(Implies(And(*ctx), goal1 == goal2)):
            g = goal2
        else:
            g = goal1
        self.goals.append( (ctx, g))
        return self
    def __eq__(self,rhs):
        return self.equiv(rhs)
    #def assert(): #put new goal in stack with current context. Put into context of 1 below top
    #def assume(): #just put crap in the context.
    def intro_all(self): #name = hint maybe later
        ctx, goal = self.goals.pop()
        assert(goal.is_forall())
        vs = [FreshConst(goal.var_sort(i) , prefix=goal.var_name(i)) for i in range(goal.num_vars())]
        g = instantiate(goal,*vs) 
        self.goals.append( (ctx + vs, g)) # wait, I should keep propositions and variables separate
        return self
    def intro_imp(self): #intro_impl
        ctx, goal = self.goals.pop()
        if is_implies(goal):
            a, b = goal.children()
            ctx.append(a)
            self.goals.append((ctx,b))
        else:
            self.goals.append((ctx,goal))
        return self
    def split(self): #z3 tactic split-clauses?
        ctx, goal = self.goals.pop()
        if is_and(goal):
            for c in goal.children():
                self.goals.append((ctx,c))
        else:
            self.goals.append((ctx,goal))
        return self
    def z3_tactic(self,t):
        t = Tactic(t)
        ctx, goal = self.goals.pop()
        #g = t(Implies(And(*ctx), goal)).as_expr()
        g = t(goal).as_expr()
        self.goals.append(([],g))
        return self
    def simpl(self):
        return self.z3_tactic("simplify")
    def congruence(self):
        #maybe search for equalities. And put them in the goal
        return self.z3_tactic("solve-eqs")
    def smt(self):
        ctx, goal = self.goals.pop()
        s = Solver()
        #s.set(**keywords)
        claim = Implies(And(*ctx), goal)
        s.add(Not(claim))
        r = s.check()
        if r  == sat:
            print("Countermodel : " + str(s.model()))
        assert(r == unsat)
        return self
    def destruct(self):
        ctx, goal = self.goals.pop()
        if is_bool(goal):
            ctx1 = ctx.copy()
            ctx2 = ctx.copy()
            ctx1.append(goal == True)
            ctx2.append(goal == False)
            self.goals.append((ctx2, BoolVal(False) ))
            self.goals.append((ctx1, BoolVal(True) ))
        else:
            self.goals.append((ctx, goal))    
        return self
    def forget(self,n):
        ctx, goal = self.goals.pop()
        ctx.pop(n)
        self.goals.append((ctx, goal))  
        return self
    def qed(self):
        if len(self.goals) == 0:
            self.proven = True
            # add self to global proof context if self.name is not None
    def get_ctx(self,n):
        return self.goals[-1][0][n]
    def __str__(self):
        if len(self.goals) >= 1:
            ctx, goal = self.goals[-1]
            return "".join([f"[{i}] {str(c)} : {str(c.sort())}\n" for i, c in enumerate(ctx)]) + "----------------\n" + f"{str(goal)} : {str(goal.sort())}"
        else:
            return "No Goals Left"
    def __repr__(self):
        return str(self)
x = Real("x")
Proof(x**2 - 1 == 0).equiv((x+1)*(x-1) == 0).equiv((x == 1) | (x == -1))
a, b = Bools('a b')
p = Proof((a & b) >> b)
p.intro_imp().destruct() \
 .smt() \
 .smt() \
 .qed()

Another question is how to implement an apply tactic gracefully. Fully deconstructing syntax trees and unifying ourselves is not utilizing z3 well. If you have a good idea how to get unification out of z3, I’d be interested to hear from you here. https://stackoverflow.com/questions/59398955/getting-z3-instantiations-of-quantified-variables/59400838#59400838

Here’s an idea, though in the cold light of day I am still not sure the reasoning makes much sense. Suppose we’re trying to apply forall x. a(x) -> b(x) to a goal c(y). If forall x. b(x) -> c(y), we’re good, and by assumption that is obvious for some reason, like the syntactic instantiation of b gives c. We can ask z3 to prove that, and hopefully it will be easy. If we could prove forall x. a(x) in the current context, that would be sufficient, but it typically isn't true; it is an overly difficult request. We really only need to prove a(x) for values pertinent to the proof of c(y). Here’s a suspicious stratagem. Any a -> b can be weakened to (q -> a) -> (q -> b). In particular we can choose to weaken forall x. a(x) -> b(x) to forall x. ((c(y) -> b(x)) -> a(x)) -> ((c(y) -> b(x)) -> b(x)). Then we can replace the goal with forall x. ((c(y) -> b(x)) -> a(x)) after we prove that (forall x. (c(y) -> b(x)) -> b(x)) -> c(y). Maybe c(y) -> b(x) is sufficient to restrict the values of x? Not sure.

Another rough sketch, of induction on Nat. The step has to be guarded at n >= 0 (not n > 0) or the base case never connects to it.

def inductionNat(self):
    # hypotheses of the induction principle for self = ForAll([n:Int], P(n)):
    # P(0) and forall n >= 0. P(n) -> P(n+1)
    assert(self.num_vars() == 1 and self.var_sort(0) == IntSort() and self.is_forall())
    n = FreshInt()
    return And(instantiate(self, IntVal(0)),
               ForAll([n], Implies(And(n >= 0, instantiate(self, n)),
                                   instantiate(self, n + 1))))

We could also make a simple induction for ADTs based on the similar introspection we used for match above. It’s ugly but I think it works.

def induction(self):
    assert(is_quantifier(self) and self.is_forall() and self.num_vars() == 1) # we can eventually relax vars = 1
    t = self.var_sort(0)
    nc = t.num_constructors()
    th = []
    for i in range(nc):
        con = t.constructor(i)
        nfields = con.arity()
        if nfields == 0:
            th += [substitute_vars(self.body(), con())]
        else:
            hyp = []
            args = []
            for d in range(nfields):
                td = con.domain(d)
                x = FreshConst(td)
                if td == t: # recursive field: add an induction hypothesis for it
                    hyp += [substitute_vars(self.body(), x)]
                args += [x]
            th += [ForAll(args, Implies(And(*hyp), substitute_vars(self.body(), con(*args))))]
    return And(*th)
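An untested usage sketch, reusing the List constructor from earlier, just to see the shape of the generated hypotheses:

IntList = List(IntSort())
l = Const('l', IntList)
goal = ForAll([l], Or(IntList.is_Nil(l), IntList.is_Cons(l)))
print(induction(goal))
# a conjunction: the Nil case, and ForAll([car, cdr], hypothesis on cdr -> Cons case)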

I haven’t really thought much about tacticals yet.

# describe_tactics() gives a list of all z3 tactics
ackermannize_bv : A tactic for performing full Ackermannization on bv instances.
subpaving : tactic for testing subpaving module.
horn : apply tactic for horn clauses.
horn-simplify : simplify horn clauses.
nlsat : (try to) solve goal using a nonlinear arithmetic solver.
qfnra-nlsat : builtin strategy for solving QF_NRA problems using only nlsat.
nlqsat : apply a NL-QSAT solver.
qe-light : apply light-weight quantifier elimination.
qe-sat : check satisfiability of quantified formulas using quantifier elimination.
qe : apply quantifier elimination.
qsat : apply a QSAT solver.
qe2 : apply a QSAT based quantifier elimination.
qe_rec : apply a QSAT based quantifier elimination recursively.
psat : (try to) solve goal using a parallel SAT solver.
sat : (try to) solve goal using a SAT solver.
sat-preprocess : Apply SAT solver preprocessing procedures (bounded resolution, Boolean constant propagation, 2-SAT, subsumption, subsumption resolution).
ctx-solver-simplify : apply solver-based contextual simplification rules.
smt : apply a SAT based SMT solver.
psmt : builtin strategy for SMT tactic in parallel.
unit-subsume-simplify : unit subsumption simplification.
aig : simplify Boolean structure using AIGs.
add-bounds : add bounds to unbounded variables (under approximation).
card2bv : convert pseudo-boolean constraints to bit-vectors.
degree-shift : try to reduce degree of polynomials (remark: :mul2power simplification is automatically applied).
diff-neq : specialized solver for integer arithmetic problems that contain only atoms of the form (<= k x) (<= x k) and (not (= (- x y) k)), where x and y are constants and k is a numeral, and all constants are bounded.
eq2bv : convert integer variables used as finite domain elements to bit-vectors.
factor : polynomial factorization.
fix-dl-var : if goal is in the difference logic fragment, then fix the variable with the most number of occurrences at 0.
fm : eliminate variables using fourier-motzkin elimination.
lia2card : introduce cardinality constraints from 0-1 integer.
lia2pb : convert bounded integer variables into a sequence of 0-1 variables.
nla2bv : convert a nonlinear arithmetic problem into a bit-vector problem, in most cases the resultant goal is an under approximation and is useul for finding models.
normalize-bounds : replace a variable x with lower bound k <= x with x' = x - k.
pb2bv : convert pseudo-boolean constraints to bit-vectors.
propagate-ineqs : propagate ineqs/bounds, remove subsumed inequalities.
purify-arith : eliminate unnecessary operators: -, /, div, mod, rem, is-int, to-int, ^, root-objects.
recover-01 : recover 0-1 variables hidden as Boolean variables.
bit-blast : reduce bit-vector expressions into SAT.
bv1-blast : reduce bit-vector expressions into bit-vectors of size 1 (notes: only equality, extract and concat are supported).
bv_bound_chk : attempts to detect inconsistencies of bounds on bv expressions.
propagate-bv-bounds : propagate bit-vector bounds by simplifying implied or contradictory bounds.
propagate-bv-bounds-new : propagate bit-vector bounds by simplifying implied or contradictory bounds.
reduce-bv-size : try to reduce bit-vector sizes using inequalities.
bvarray2uf : Rewrite bit-vector arrays into bit-vector (uninterpreted) functions.
dt2bv : eliminate finite domain data-types. Replace by bit-vectors.
elim-small-bv : eliminate small, quantified bit-vectors by expansion.
max-bv-sharing : use heuristics to maximize the sharing of bit-vector expressions such as adders and multipliers.
blast-term-ite : blast term if-then-else by hoisting them.
cofactor-term-ite : eliminate term if-the-else using cofactors.
collect-statistics : Collects various statistics.
ctx-simplify : apply contextual simplification rules.
der : destructive equality resolution.
distribute-forall : distribute forall over conjunctions.
dom-simplify : apply dominator simplification rules.
elim-term-ite : eliminate term if-then-else by adding fresh auxiliary declarations.
elim-uncnstr : eliminate application containing unconstrained variables.
injectivity : Identifies and applies injectivity axioms.
snf : put goal in skolem normal form.
nnf : put goal in negation normal form.
occf : put goal in one constraint per clause normal form (notes: fails if proof generation is enabled; only clauses are considered).
pb-preprocess : pre-process pseudo-Boolean constraints a la Davis Putnam.
propagate-values : propagate constants.
reduce-args : reduce the number of arguments of function applications, when for all occurrences of a function f the i-th is a value.
reduce-invertible : reduce invertible variable occurrences.
simplify : apply simplification rules.
elim-and : convert (and a b) into (not (or (not a) (not b))).
solve-eqs : eliminate variables by solving equations.
special-relations : detect and replace by special relations.
split-clause : split a clause in many subgoals.
symmetry-reduce : apply symmetry reduction.
tseitin-cnf : convert goal into CNF using tseitin-like encoding (note: quantifiers are ignored).
tseitin-cnf-core : convert goal into CNF using tseitin-like encoding (note: quantifiers are ignored). This tactic does not apply required simplifications to the input goal like the tseitin-cnf tactic.
qffd : builtin strategy for solving QF_FD problems.
pqffd : builtin strategy for solving QF_FD problems in parallel.
smtfd : builtin strategy for solving SMT problems by reduction to FD.
fpa2bv : convert floating point numbers to bit-vectors.
qffp : (try to) solve goal using the tactic for QF_FP.
qffpbv : (try to) solve goal using the tactic for QF_FPBV (floats+bit-vectors).
qffplra : (try to) solve goal using the tactic for QF_FPLRA.
default : default strategy used when no logic is specified.
sine-filter : eliminate premises using Sine Qua Non
qfbv-sls : (try to) solve using stochastic local search for QF_BV.
nra : builtin strategy for solving NRA problems.
qfaufbv : builtin strategy for solving QF_AUFBV problems.
qfauflia : builtin strategy for solving QF_AUFLIA problems.
qfbv : builtin strategy for solving QF_BV problems.
qfidl : builtin strategy for solving QF_IDL problems.
qflia : builtin strategy for solving QF_LIA problems.
qflra : builtin strategy for solving QF_LRA problems.
qfnia : builtin strategy for solving QF_NIA problems.
qfnra : builtin strategy for solving QF_NRA problems.
qfuf : builtin strategy for solving QF_UF problems.
qfufbv : builtin strategy for solving QF_UFBV problems.
qfufbv_ackr : A tactic for solving QF_UFBV based on Ackermannization.
ufnia : builtin strategy for solving UFNIA problems.
uflra : builtin strategy for solving UFLRA problems.
auflia : builtin strategy for solving AUFLIA problems.
auflira : builtin strategy for solving AUFLIRA problems.
aufnira : builtin strategy for solving AUFNIRA problems.
lra : builtin strategy for solving LRA problems.
lia : builtin strategy for solving LIA problems.
lira : builtin strategy for solving LIRA problems.
skip : do nothing tactic.
fail : always fail tactic.
fail-if-undecided : fail if goal is undecided.
macro-finder : Identifies and applies macros.
quasi-macros : Identifies and applies quasi-macros.
ufbv-rewriter : Applies UFBV-specific rewriting rules, mainly demodulation.
bv : builtin strategy for solving BV problems (with quantifiers).
ufbv : builtin strategy for solving UFBV problems (with quantifiers).