This morning I got a DM saying that a third party had seen some kind of plugin installation popup show up on my website, specifically on my about me page.
If you experience anything like this PLEASE PLEASE LET ME KNOW. (comments below, or DM me @sandmouth https://twitter.com/SandMouth )
I haven’t been able to replicate this behavior in any way, nor can anyone else I’ve asked. Upon cursory examination of the code of my blog I can’t find anything; everything is up to date, and scans aren’t finding anything.
Maybe it’s time to finally transfer out of WordPress. I liked that it was easy, but the internet is too scary.
Also, if you’re a hacker, please pick on someone else. I really like my blog.
I’ve discussed relation algebra before. Relations are sets of tuples. There, I implemented the relations naively using lists for sets. This is very simple, and very clean especially with list comprehension syntax. It is however horrifically inefficient, and we could only deal with finitely enumerable domains. The easiest path to fixing these problems is to cash out to an external solver, in this case z3.
There are many beautifully implemented solvers out there and equally beautiful DSL/modeling languages. Examples in mind include sympy, cvxpy, and z3. These modeling languages require you to instantiate variable objects and build expressions out of them and then hand it off to the solver. This is a reasonable interface, but there are advantages to a more categorical/point-free style DSL.
Point-free languages are ones that do not include binding forms that introduce bound/dummy variables. Examples of binding forms like this are sums, integrals, lambdas, and quantifiers. One problem lies in the fact that the names of bound variables don’t matter, and that they end up accidentally smashing into each other. You may have experienced this in physics or math class as the dummy indices or dummy variable problem causing you to screw up your calculation of some cross product identity or some complicated tensor sum. These are surprisingly subtle problems, very difficult to diagnose and get right. de Bruijn indices are a technique for giving the bound variables canonical names, but they suck to implement in their own way. When you make a DSL point free, it is a joy to manipulate, optimize, and search. I think this may be the core of why category theory is a good language for mathematics and programming.
Point-free style also tends to have significant economy of size, for better or worse. Sometimes it is better to have an expression very dense in information. This is important if you are about to algebraically manipulate an expression with paper and pencil. Every manipulation requires a great deal of mind-numbing copying as you proceed line by line, so it can be excruciating if your notation has a lot of unnecessary bulk.
Relations are like functions. The two pieces of the tuple can be roughly thought of as the “input” and the “output”. Relations are only loosely directional though. Part of the point of relations is that the converse (inverse) of a relation is easy to define.
When we implement relations, we have a choice. Do we want the relation to produce its variables, accept its variables, or accept one and produce the other? There are advantages to each. When relations were [(a,b)], a -> b -> Bool, or a -> [b], converting between these forms was a rather painful enumeration process. The sting of converting between them is taken out by the fact that the conversion is no longer a computationally expensive process, since we’re working at the modeling layer.
When you’re converting a pointful DSL to pointfree DSL, you have to be careful where you instantiate fresh variables or else you’ll end up with secret relations that you didn’t intend. Every instantiation of id needs to be using fresh variables for example. You don’t want the different id talking to each other. Sometimes achieving this involves a little currying and/or thunking.
There is a pattern that I have noticed when I’m using modeling languages. When you have a function or relation on variables, there are constraints produced that you have to keep a record of. The pythonic way is to have a Model or Solver object, and then have that object mutate an internal record of the set of constraints. I don’t particularly enjoy this style though. It feels like too much boilerplate.
In Haskell, I would use something like a Writer monad to automatically record the constraints that are occurring. Monads are not really all that pleasant even in Haskell, and especially a no go in python without “do” syntax.
However, because we are going point free it is no extra cost at all to include this pipework along for the ride in the composition operation.
Here are implementations of the identity and composition for three different styles. Style 1 is fully receptive, style 2 is mixed (function feeling) and style 3 is fully productive of variables.
Fair warning, I’m being sketchy here. I haven’t really tried this stuff out.
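To make that concrete, here is a rough sketch of the three styles for relations over a single z3 sort (reals for definiteness). The names id1/compose1 and so on are just labels for this sketch, not anything official.

from z3 import *

# Style 1: fully receptive. A relation takes both variables and returns a z3 constraint.
def id1():
    return lambda x, y: x == y

def compose1(f, g, sort=RealSort()):
    def res(x, z):
        y = FreshConst(sort, 'y')          # fresh intermediate variable
        return Exists([y], And(f(x, y), g(y, z)))
    return res

# Style 2: mixed (function feeling). Takes the input variable, produces (output, constraint).
def id2():
    return lambda x: (x, BoolVal(True))

def compose2(f, g):
    def res(x):
        y, c1 = f(x)
        z, c2 = g(y)
        return z, And(c1, c2)
    return res

# Style 3: fully productive. Produces both variables along with the constraint.
def id3(sort=RealSort()):
    def res():
        x = FreshConst(sort, 'x')
        return x, x, BoolVal(True)
    return res

def compose3(f, g):
    def res():
        x, y1, c1 = f()
        y2, z, c2 = g()
        return x, z, And(c1, c2, y1 == y2)
    return res

Note that in styles 2 and 3 the constraint set is threaded along through the composition, which is exactly the pipework mentioned above.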
z3 is a simply typed language. You can get away with some polymorphism at the python level (for example == dispatches correctly according to the object) but sometimes you need to manually specify the sort of the variables. Given these types, the different styles are interconvertible.
We can create the general cadre of relation algebra operators: converse, inclusion (rsub), meet, join, and so on. A sketch of a couple of them appears below.
Questions about relation algebra expressions are often phrased in terms of relational inclusion. You can construct a relation algebra expression, use rsub in the appropriate way, and ask z3’s prove function if it is true.
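For example, here is a hedged sketch of converse and relational inclusion in the fully receptive style, and of handing an inclusion question to z3's prove; the spellings converse and rsub are my own.

from z3 import *

def converse(r):
    return lambda x, y: r(y, x)

def rsub(r, s, sort=RealSort()):
    # relational inclusion r <= s as a quantified z3 formula
    x, y = FreshConst(sort, 'x'), FreshConst(sort, 'y')
    return ForAll([x, y], Implies(r(x, y), s(x, y)))

# tiny sanity check: the converse of the converse is included in the original
succ = lambda x, y: y == x + 1
prove(rsub(converse(converse(succ)), succ))   # prints "proved"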
Z3 has solvers for
Combinatorial Relations
Linear Relations
Polyhedral Relations
Polynomial Relations
Interval Relations – A point I was confused on. I thought interval relations were not interesting, but I was interpreting the term incorrectly. I was thinking of relations on AxB that are constrained to take the form of a product of intervals. In this case, the choice of A has no effect on the possible B whatsoever, so it feels non-relational. However, there is also I_A x I_B, relations over the intervals of A and B. This is much closer to what is actually being discussed in interval arithmetic.
Applications we can use this for:
Graph Problems. The edges can be thought of as a relation between vertices. Relation composition, or the starn operator, is a way to ask for paths through the graph; see the sketch after this list.
Safety and liveness of control systems. Again, a transition relation is natural here. It is conceivable that the state space can be heterogeneous in time, which is the interesting power of the categorical style. I feel like traditional control systems usually maintain the same state space throughout.
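Here is a minimal sketch of the graph idea (my own toy example): the edge relation of a three-vertex path graph, composed with itself to ask for two-step paths.

from z3 import *

# edges of a tiny directed graph 0 -> 1 -> 2, as a fully receptive relation over integers
edge = lambda x, y: Or(And(x == 0, y == 1), And(x == 1, y == 2))

def compose(r, s):
    def res(x, z):
        y = FreshConst(IntSort(), 'y')
        return Exists([y], And(r(x, y), s(y, z)))
    return res

x, z = Ints('x z')
solver = Solver()
solver.add(compose(edge, edge)(x, z), x == 0, z == 2)
print(solver.check())   # sat: there is a two-step path from 0 to 2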
I should try to comply with python conventions, in particular numpy and pandas conventions. @ should be composition for example, since relation composition has a lot of flavor of matrix composition. I should overload a lot of operators, but then I’d need to wrap in a class 🙁
As long as you only use composition, there is a chaining of existentials, which really isn’t so bad.
What we’ve done here is basically analogous/identical to what John Wiegley did compiling to the category of z3. Slightly different in that he only allowed for existential composition rather than relational division. http://newartisans.com/2017/04/haskell-and-z3/
We can reduce the burden on z3 if we know the constructive proof objects that witness our various operations. Z3 is gonna do better if we can tell it exactly which y witnesses the composition of operators, or give it clues to which branch of an Or it should use.
It’s a bummer, but when you use quantifiers, you don’t see countermodels? Maybe there is some hook where you can, or in the dump of the proof object.
What about recursion schemes? The exact extent of z3’s ability to handle unbounded problems is fuzzy to me. It does have the support to define recursive functions. Also explicit induction predicates can go through sometimes. Maybe look at the Cata I made in the fancy relation algebra post.
I think most proof assistants have implementations of relation algebra available. I find you can do a surprising amount in z3.
As I have gotten more into the concerns of formal methods, I’ve become unsure that ODEs actually exist. These are concerns that did not bother me much when I defined myself as being more in the physics game. How times change. Here’s a rough cut.
A difficulty with ODE error analysis is that it is very confusing how to get the error on something you are having difficulty approximating in the first place.
If I wanted to know the error of using a finite step size dt vs a size dt/10, great. Just compute both and compare. However, no amount of this seems to bootstrap you down to the continuum. And so I thought, you’re screwed in regards to using numerics in order to get true hard facts about the true solution. You have to go to paper and pencil considerations of equations and variables and epsilons and deltas and things. It is now clearer to me that this is not true. There is a field of verified/validated numerics.
A key piece of this seems to be interval arithmetic. https://en.wikipedia.org/wiki/Interval_arithmetic An interval can be concretely represented by its left and right point. If you use rational numbers, you can represent the interval precisely. Interval arithmetic over approximates operations on intervals in such a way as to keep things easily computable. One way it does this is by ignoring dependencies between different terms. Check out Moore et al’s book for more.
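As a tiny illustration of the flavor (not from Moore's book, just a minimal sketch): intervals as pairs of exact rationals, with addition done endpoint-wise and multiplication over-approximated by taking the min and max of the endpoint products.

from fractions import Fraction as F

# a minimal sketch of interval arithmetic with exact rational endpoints
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

x = (F(1, 3), F(1, 2))   # the interval [1/3, 1/2]
y = (F(-1), F(2))        # the interval [-1, 2]
print(add(x, y), mul(x, y))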
What switching over to intervals does is make you think of sets, rather than points, as the things you’re operating on. For ODEs (and other things), this shift of perspective is to no longer consider individual functions, but instead sets of functions. And not arbitrary extremely complicated sets, only those which are concretely manipulable and storable on a computer, like intervals. Taylor models are a particular choice of function sets. You are manipulating an interval tube around a finite polynomial. If during integration / multiplication you get higher powers, truncate the polynomials by dumping the excess into the interval term. This keeps the complexity under wraps and closes the loop of the descriptive system.
If we have an iterative, contractive process for getting better and better solutions of a problem (like a newton method or some iterative linear algebra method), we can get definite bounds on the solution if we can demonstrate that a set maps into itself under this operation. If this is the case and we know there is a unique solution, then it must be in this set.
It is wise, if at all possible, to convert an ODE into integral form: $\dot{x}(t) = f(x(t)), \; x(0) = x_0$ is the same as $x(t) = x_0 + \int_0^t f(x(s)) ds$.
For ODEs, the common example of such an operation is known as Picard iteration. In physical terms, this is something like the impulse approximation / born approximation. One assumes that the ODE evolves according to a known trajectory as a first approximation. Then one plugs in the trajectory to the equations of motion to determine the “force” it would feel and integrate up all this force. This creates a better approximation (probably) which you can plug back in to create an even better approximation.
If we instead do this iteration on an intervally function set / taylor model thing, and can show that the set maps into itself, we know the true solution lies in this interval. The term to search for is Taylor Models (also some links below).
I was tinkering with whether sum of squares optimization might tie in to this. I have not seen SOS used in this context, but it probably has been, or else it is worthless.
But that isn’t really what makes Sum of squares special. There are other methods by which to do this.
There are very related methods called DSOS and SDSOS https://arxiv.org/abs/1706.02586 which are approximations of the SOS method. They replace the SDP constraint at the core with a more restrictive constraint that can be expressed with LP and SOCP respectively. These methods lose some of the universality of the SOS method and become basis dependent on your choice of polynomials. DSOS in fact is based around the concept of a diagonally dominant matrix, which means that you should know roughly what basis your certificate should be in.
This made me realize there is an even more elementary version of DSOS that perhaps should have been obvious to me from the outset. Suppose we have a set of functions we already know are positive everywhere on a domain of interest. A useful example is the raised chebyshev polynomials. https://en.wikipedia.org/wiki/Chebyshev_polynomials The appropriate chebyshev polynomials oscillate between [-1,1] on the interval [-1,1], so if you add 1 to them they are positive over the whole interval [-1,1]. Then nonnegative linear sums of them are also positive. Bing bang boom. And that compiles down into a simple linear program (inequality constraints on the coefficients) with significantly fewer variables than DSOS. What we are doing is restricting ourselves to completely positive diagonal matrices again, which are of course positive semidefinite. It is less flexible, but it also has more obvious knobs to throw in domain specific knowledge. You can use a significantly overcomplete basis, and finding this basis is where you can insert your prior knowledge.
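Here is a small self-contained sketch of that idea using numpy and cvxpy directly (the target polynomial is an arbitrary example of mine): certify that p(t) = 1/2 + t^2 is nonnegative on [-1,1] by writing it as a nonnegative combination of raised Chebyshev polynomials, which is just an LP feasibility problem.

import numpy as np
import cvxpy as cp
from numpy.polynomial import chebyshev as C

deg = 2
p = np.array([0.5, 0.0, 1.0])   # power-basis coefficients of p(t) = 1/2 + t^2

# power-basis coefficients of the raised Chebyshev polynomials T_k(t) + 1,
# each of which is nonnegative on [-1, 1]
cols = []
for k in range(deg + 1):
    e = np.zeros(deg + 1)
    e[k] = 1.0
    c = C.cheb2poly(e)          # T_k written in the power basis
    c[0] += 1.0                 # raise it
    cols.append(c)
B = np.column_stack(cols)

lam = cp.Variable(deg + 1, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum(lam)), [B @ lam == p])
prob.solve()
print(prob.status, lam.value)   # optimal, lam is roughly [0.25, 0, 0.5]

Getting back a nonnegative lam is the certificate: p is a nonnegative combination of functions that are themselves nonnegative on the interval.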
It is not at all clear there is any benefit over interval based methods.
Here is a sketch I wrote for a simple linear ODE whose exact solution is transcendental. I used raised chebyshev polynomials to enforce positive polynomial constraints and tossed in a little taylor model / interval arithmetic to truncate off the highest terms.
I’m using my helper functions for translating between sympy and cvxpy expressions. https://github.com/philzook58/cvxpy-helpers Sympy is great for collecting up the coefficients on terms and polynomial multiplication integration differentiation etc. I do it by basically creating sympy matrix variables corresponding to cvxpy variables which I compile to cvxpy expressions using lambdify with an explicit variable dictionary.
Seems to work, but I’ve been burned before.
man, LP solvers are so much better than SDP solvers
Random junk and links: Should I be more ashamed of dumps like this? I don’t expect you to read this.
Functional analysis by and large analyzes functions by analogy with more familiar properties of finite dimensional vector spaces. In ordinary 2d space, it is convenient to work with rectangular regions or polytopic regions.
Suppose I had a damped oscillator converging to some unknown point. If we can show that every point in a set maps within the set, we can conclude that the unknown convergence point lies somewhere in that set.
One model of a program is that it is some kind of kooky complicated hyper nonlinear discrete time dynamical system. And vice versa, dynamical systems are continuous time programs. The techniques for analyzing either have analogs in the other domain. Invariants of programs are essential for determining correctness properties of loops. Invariants like energy and momentum are essential for determining what physical systems can and cannot do. Lyapunov functions demonstrate that control systems are converging to the set point. Termination metrics show that loops and recursion must eventually end.
If instead you use interval arithmetic for a bound on your solution rather than your best current solution, and if you can show the interval maps inside itself, then you know that the iterative process must converge inside of the interval, hence that is where the true solution lies.
A very simple bound for an integral: if $l \le f(t) \le u$ on $[0,T]$, then $l T \le \int_0^T f(t) dt \le u T$.
The integral is a very nice operator. The result of the integral is a positive linear sum of the values of a function. This means it plays nice with inequalities.
Rigorously Bounding ODE solutions with Sum of Squares optimization – Intervals
Intervals – Moore book. Computational functional analysis. Tucker book. CoqInterval. Fixed point theorems. Hardware acceleration? Interval valued functions. Interval extensions.
Banach fixed point – contraction mapping
Brouwer fixed point
Schauder
Knaster Tarski
Picard iteration vs? Allowing flex on boundary conditions via an interval?
Interval book had an interesting integral form for the 2-D
sympy has cool stuff
google scholar search z3, sympy brings up interesting things
Lyapunov functions. Piecewise affine Lyapunov functions. Are Lyapunov functions kind of like a PDE? Value functions are PDEs. If the system is piecewise affine we can define a grid on the same piecewise affine thingo. Compositional convexity. Could we use compositional convexity + Relu style piecewise affinity to get complicated Lyapunov functions? Lyapunov functions don’t have to be continuous, they just have to be decreasing. The Lie derivative wrt the flow is always negative, i.e. the gradient of the function points roughly against the direction of the flow. Triangulate around the equilibrium if you want to avoid quadratic Lyapunov functions. For a guarded system, you can relax the Lyapunov constraint outside of the guard if you tighten it inside the guard. Ax >= 0 is the guard. It’s the S-procedure.
Idea: Approximate invariants? At least this ought to make a good coordinate system to work in where the dynamics are slow. Like action-angle and adiabatic transformations. Could also perhaps bound the
Picard Iteration
I have a method that I’m not sure is ultimately sound. The idea is to start with a guess trajectory and Picard iterate on it.
Error analysis most often uses an appeal to Taylor’s theorem, and Taylor’s theorem is usually derived from the mean value theorem or intermediate value theorem. Maybe that’s fine. But the mean value theorem is some heavy stuff. There are computational doo dads that use these bounds + interval analysis to rigorously integrate ODEs. See https://github.com/JuliaIntervals/TaylorModels.jl
The beauty of sum of squares certificates is that they are very primitive proofs of positivity for a function on a domain of infinitely many values. If I give you a way to write an expression as a sum of square terms, it is then quite obvious that it has to be always positive. This is algebra rather than analysis.
Sum of squares is a kind of quantifier elimination method. The reverse direction of the above implication is the subject of the Positivstellensatz, a theorem of real algebraic geometry. At the very least, we can use the SOS constraint as a relaxation of the quantified constraint.
So, I think by using sum of squares, we can turn a differential equation into a differential inequation. If we force the highest derivative to be larger than the required differential equation, we will get an overestimate of the required function.
A function that is dominated by another in derivative will be dominated in value also. You can integrate over inequalities (I think. You have to be careful about such things.): if $\dot{x}(t) \ge \dot{y}(t)$ then $x(t) - x(0) \ge y(t) - y(0)$.
The derivative of a polynomial can be thought of as a completely formal operation, with no necessarily implied calculus meaning. It seems we can play a funny kind of shell game to avoid the bulk of calculus.
As an example, let’s take a simple linear ODE whose solution is a transcendental function, so no polynomial matches it exactly.
The S-procedure is a trick by which you can relax a sum of squares inequality so that it only needs to be enforced in a domain. If you build a polynomial that describes the domain, in that it is positive inside the domain and negative outside the domain, you can add a positive multiple of that polynomial to your SOS inequalities. Inside the domain you care about, you’ve only made them harder to satisfy, not easier. But outside the domain you have made it easier because you can have negative slack.
For an interval domain $[a, b]$, the polynomial $(t - a)(b - t)$ works as our domain polynomial: positive inside the interval, negative outside.
We parametrize our solution as an explicit polynomial $x(t) = \sum_i c_i t^i$. It is important to note that what follows is always linear in the coefficients $c_i$.
A constraint of the form $\forall t \in [a,b].\ p(t) \ge 0$ can be relaxed to $\forall t.\ p(t) - \lambda(t) d(t) \ge 0$ with $\lambda(t) \ge 0$, where $d$ is the domain polynomial above.
So with that we get a reasonable formulation of finding a polynomial upper bound solution of the differential equation: a linear program in the coefficients that minimizes the bound, subject to it dominating the initial condition and the differential equation on the domain, with the domain constraints handled by the relaxation above.
And here it is written out in python using my cvxpy-helpers which bridge the gap between sympy polynomials and cvxpy.
We can go backwards to figure out sufficient conditions for a bound. We want the candidate to dominate the true solution, $x(t) \ge y(t)$. It is sufficient that $x(0) \ge y(0)$ and $\dot{x}(t) \ge \dot{y}(t)$. For the derivative inequality it is in turn sufficient that the next derivative up dominates, and so on. We follow this down in derivative until we get to the lowest derivative in the differential equation. Then we can use the linear differential equation itself to rewrite the derivatives of the true solution in terms of known quantities.
This accounts for the possibility of terms changing signs. Or you could separate the terms into regions of constant sign.
The minimization characterization of the bound is useful. For any class of functions that contains our degree-d polynomial, we can show that the minimum of the same optimization problem is less than or equal to our value.
Is the dual value useful? The lower bound on the least upper bound
Doesn’t seem like the method will work for nonlinear odes. Maybe it will if you relax the nonlinearity. Or you could use perhaps a MIDSP to make piecewise linear approximations of the nonlinearity?
It is interesting to investigate linear programming models. It is simpler and more concrete to examine how well different step sizes approximate each other rather than worry about the differential case.
We can explicitly compute a finite difference solution in the LP, which is a power that is difficult to achieve in general for differential equations.
We can instead replace the exact solution by a conservative bound.
While we can differentiate through an equality, we can’t differentiate through an inequality. Differentiation involves negation, which plays havoc with inequalities. We can however integrate through inequalities.
$x(t) \ge \int_0^t f(x(s)) ds + a$
As a generalization, we can integrate over inequalities as long as the weights we integrate against are nonnegative.
In particular, if $\dot{x}(t) \ge \dot{y}(t)$ then $x(t) - x(0) \ge y(t) - y(0)$.
We can convert a differential equation into a differential inequation. It is not entirely clear to me that there is a canonical way to do this. But it works to take the biggest.
Neural networks are all the rage these days. One mundane way of looking at neural networks is that they are a particular class of parametrized functions $f_\theta(x)$. What makes them useful is:
They can be used at insane scale due to their simplicity and excellent available implementations and tooling
There are intuitive ways to input abstract structure and symmetry expected of a problem, for example translation symmetry, or a hierarchy of small scale pattern recognition combining into large scale structures. How this all works is very peculiar.
Inspirational analogies can be drawn from nature.
Neural networks made out of just relus (rectified linear units, relu(x) = max(0,x)) and linear layers are particularly amenable to formal analysis. Regarding the weights as fixed (once a network has been trained), the complete neural network is a piecewise linear function. The regions on which it is linear are polyhedra (they are defined by the hyperplanes of the relu transitions as inequality constraints). Such functions are among the most amenable to automated rigorous analysis.
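As a cartoon of what that buys you, here is a tiny hand-weighted relu "network" (weights invented for this sketch) and a bound on its output over a box, proved with z3:

from z3 import *

def relu(v):
    return If(v > 0, v, 0)

x = Real('x')
# two hidden relu units with made-up fixed weights, then a linear output layer
h1 = relu(2*x + 1)
h2 = relu(-x + 3)
out = 0.5*h1 - 1.0*h2 + 2

# the output is piecewise linear in x, and z3 can bound it over a box
prove(Implies(And(x >= -1, x <= 1), And(out >= -2, out <= 10)))   # proved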
A relu
Most machine learning tasks don’t have a mathematically precise specification. What is the mathematically precise definition of a picture of a horse? We could try to come up with something (this is sort of what good old fashioned AI tried to do), but in my opinion it would be rather suspect.
Tasks that do have a precise spec are questionable areas for machine learning techniques, because how can you know that the network meets the spec? Also, one would suspect such a problem has structure that means you might be better off with a more traditional algorithmic approach.
However, there are a couple of areas where one does have reasonable formal questions one might want to ask about a neural network:
Robustness around training and validation data. Finding Adversarial examples or proving there are none.
Games like Go. Alpha Go is a marriage of more traditional algorithmic approaches and machine learning. There is a core of traditional game tree search to it.
Control Systems – There are many control systems which we do have a reasonable formal spec one could write, such as walking robots. These systems are so high dimensional that it is difficult to derive a good controller from the spec, and hence reinforcement learning may be a reasonable option. But we would like to confirm the controller is good and not dangerous
Neural networks as computational accelerators. There are problems which we know how to solve, but are too slow. Neural networks can be evaluated very quickly and easily thanks to modern frameworks. It may be useful to presolve a large number of examples offline using the slow algorithm and train a neural network to give good estimates. We may be able to replace the slow thing entirely if we can verify the neural network always is good enough.
We’re going to use a neural network to fit a chebyshev polynomial. Here we’re picking a Chebyshev polynomial as our base truth because Chebyshev polynomials have some pleasant waviness to them. Why not. I like ’em. Also polynomials are easily understood by z3 as a base spec.
This example of course is a complete toy. How often do you see 1-D input space neural networks? Not often I bet. But it’s nice for a couple reasons:
Because we can visualize it.
It drives home the point about neural nets being a space of piecewise linear function approximators, and how similar training is to curve fitting.
It’s simple enough that z3 can crush it. There is a big question if z3 or other methods can scale to realistic neural nets. Modern top of the line neural nets are insanely huge. As we’ve done it here, I highly doubt it. There are special purpose SMT solvers being built for this task. Also the slightly different technology of mixed integer programming can be used and seems very promising. So this is an area of research. See links at the bottom for more.
Generally speaking, the combination of the capabilities of sympy and z3 gives us access to some very intriguing possibilities. I’m not going to investigate this in detail in this post, but I am showing how you can convert a sympy derived polynomial into a python expression using lambdify, which can then be in turn used on z3 variables.
Our actual chebyshev polynomial
Here we build a very small 3-layer network using Keras. We use a least squares error and an adam optimizer. Whatever. I actually had difficulty getting nice results out for higher order chebyshev polynomials. Neural networks are so fiddly.
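Something along these lines (a sketch; the hidden sizes and the particular target Chebyshev polynomial here are arbitrary stand-ins):

import numpy as np
from tensorflow import keras

x = np.linspace(-1, 1, 400).reshape(-1, 1)
y = np.polynomial.chebyshev.chebval(x, [0, 0, 0, 0, 1])   # T_4(x) as the target

model = keras.Sequential([
    keras.layers.Dense(20, activation='relu', input_shape=(1,)),
    keras.layers.Dense(20, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=500, verbose=0)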
Our neural fit of the polynomial
And here we extract the weights and reinterpret them into z3. We could also have used z3 floating point capabilities rather than reals if you’re concerned about numerical issues. It was convenient to have my layers be different sizes, so that size mismatch would throw a python error. That’s how I found out the weights are transposed by default. The code at the end extracts a found countermodel and evaluates it. If you want to feel fancier, you can also use the prove function rather than an explicit Solver() instance. Saying you proved the neural network matches the polynomial to a certain tolerance feels really good. If you look at the graphs, the edges at 1 and -1 actually have pretty bad absolute error, around 0.5.
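The extraction looks roughly like this (a sketch, reusing the model and the T_4 target from the snippet above; tol is a knob to play with):

from z3 import *

def net_to_z3(model, x):
    # keras stores each Dense weight matrix with shape (inputs, outputs),
    # i.e. it is applied "transposed" relative to the usual Wx + b convention
    v = [x]
    ws = model.get_weights()
    layers = list(zip(ws[0::2], ws[1::2]))
    for k, (W, b) in enumerate(layers):
        v = [Sum([float(W[i, j]) * v[i] for i in range(W.shape[0])]) + float(b[j])
             for j in range(W.shape[1])]
        if k < len(layers) - 1:                   # relu on the hidden layers only
            v = [If(z > 0, z, 0) for z in v]
    return v[0]

x = Real('x')
out = net_to_z3(model, x)
true_poly = 8*x**4 - 8*x**2 + 1                   # T_4(x)
tol = 0.5

s = Solver()
s.add(x >= -1, x <= 1, Or(out - true_poly > tol, true_poly - out > tol))
print(s.check())   # unsat means the net matches the polynomial within tol on [-1,1]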
Z3 can be used for proofs. The input language isn’t anywhere near as powerful as interactive theorem provers like Coq, Isabelle, or Agda, but you can ask Z3 to prove pretty interesting things. Although the theorems that follow aren’t hard in interactive theorem provers, they would take beyond complete novice level skills to state or prove.
I like to think of the z3 proving process as “failing to find a counterexample”. Z3py supplies a function prove which is implemented like this.
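Roughly (a paraphrase from memory, not the verbatim z3py source):

from z3 import *

def prove(claim, **keywords):
    s = Solver()
    s.set(**keywords)
    s.add(Not(claim))       # look for a way to make the claim false
    r = s.check()
    if r == unsat:
        print("proved")
    elif r == unknown:
        print("failed to prove")
    else:
        print("counterexample")
        print(s.model())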
Basically, it negates the thing you want to prove. It then tries to find a way to instantiate the variables in the expression to make the statement false. If it comes back unsat, then there is no variable assignment that does it. Another way to think about this is as rewriting $\forall x. P(x)$ as $\neg \exists x. \neg P(x)$. The first negation lives at sort of a meta level, where we consider unsat as a success, but the inner negation is the one appearing in s.add(Not(claim)).
We can prove some simple facts. This is still quite cool, let’s not get too jaded. Manually proving these things in Coq does suck (although it is easy if you use the ring, psatz, and lra tactics https://coq.inria.fr/refman/addendum/micromega.html, which you DEFINITELY should. It is a great irony of learning coq that you cut your teeth on theorems that you shouldn’t do by hand).
Ok, here’s our first sort of interesting example. Some properties of even and odd numbers. Even and Odd are natural predicates. What are possible choices to represent predicates in z3? We can either choose python functions IntSort -> BoolSort() as predicates, or we can make internal z3 functions Function(IntSort(), BoolSort()).
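With the python-function choice it looks something like this (a sketch); the additive facts below should go through:

from z3 import *

def even(n):
    m = FreshInt('m')
    return Exists([m], n == 2 * m)

def odd(n):
    m = FreshInt('m')
    return Exists([m], n == 2 * m + 1)

x, y = Ints('x y')
prove(Implies(And(even(x), even(y)), even(x + y)))
prove(Implies(And(odd(x), odd(y)), even(x + y)))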
All well and good, but try to prove facts about the multiplicative properties of even and odd. Doesn’t go through. 🙁
Here’s a simple inductive proof. Z3 can do induction, but you sort of have to do it manually, or with a combinator. Given a predicate f, inductionNat returns the induction principle for f over the naturals: if f(0) holds, and f(n) implies f(n+1) for all n >= 0, then f(n) holds for all n >= 0.
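One plausible way to write that combinator (a sketch, using z3 integers restricted to n >= 0 as the naturals):

from z3 import *

def inductionNat(f):
    n = FreshInt('n')
    return Implies(And(f(IntVal(0)),
                       ForAll([n], Implies(And(n >= 0, f(n)), f(n + 1)))),
                   ForAll([n], Implies(n >= 0, f(n))))

The intended use is to add this implication as a hypothesis and let z3 discharge the base case and the inductive step.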
Here’s another cute and stupid trick. Z3 doesn’t have a built in sine or cosine. Perhaps you would want to look into dreal if you think you might be heavily looking into such things. However, sine and cosine are actually defined implicitly via a couple of their functional equations, so we can instantiate uninterpreted functions and assert those defining equations as axioms. A slightly counterintuitive thing is that we can’t use this to directly compute sine and cosine values. That would require returning a model, which would include a model of sine and cosine, which z3 cannot express. However, we can try to assert false facts about sine and cosine and z3 can prove they are in fact unsatisfiable. In this way we can narrow down values by bisection guessing. This is very silly.
A trick that I like to use sometimes is embedding objects in numpy arrays. Numpy slicing is the best thing since sliced bread. A lot, but not all, of numpy operations come for free, like matrix multiply, dot, sum, indexing, slicing, reshaping. Only some are implemented in terms of overloadable operations. Here we can prove the Cauchy-Schwarz inequality for a particular vector and some axioms of vector spaces.
Defining and proving simple properties of Min and Max functions
Proving the Babylonian method for calculating square roots is getting close to the right answer. I like to think of the Babylonian method very roughly this way: if your current guess is low for the square root, x/guess is high. If your guess is high, x/guess is low. So if you take the average of the two, it seems plausible you’re closer to the real answer. We can also see that if you are precisely at the square root, (x/res + res)/2 stays the same. Part of the trick here is that z3 can understand square roots directly as a specification. Also note that because of python overloading, babylonian will work on regular numbers and symbolic z3 numbers. We can also prove that babylon_iter is contractive, which is interesting in its own right.
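A sketch of the flavor (iteration count and tolerance picked loosely; z3's nonlinear real arithmetic should be able to handle a small univariate claim like this):

from z3 import *

def babylonian(x, iters=3):
    # thanks to overloading, this works on python numbers and z3 reals alike
    res = x
    for _ in range(iters):
        res = (res + x / res) / 2
    return res

x = Real('x')
approx = babylonian(x)
# starting from the guess x itself, three iterations get the square of the
# result to within 10% of x on the interval [1, 4]
prove(Implies(And(x >= 1, x <= 4),
              And(approx**2 - x <= 0.1 * x, x - approx**2 <= 0.1 * x)))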
A funny thing we can do is define interval arithmetic using z3 variables. Interval arithmetic is very cool. Checkout Moore’s book, it’s good. This might be a nice way of proving facts related to real analysis. Not sure. This is funny because z3 internally uses interval arithmetic. So what we’re doing is either very idiotically circular or pleasantly self-similar. We could use a similar arrangement to get complex numbers, which z3 does not natively support
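For instance, the soundness of interval addition is a one-liner for z3 (a sketch; multiplication would take the min and max of the four endpoint products):

from z3 import *

x, y, l1, u1, l2, u2 = Reals('x y l1 u1 l2 u2')

def in_interval(v, lo, hi):
    return And(lo <= v, v <= hi)

# if x and y lie in their intervals, then x + y lies in the endpoint-wise sum
prove(Implies(And(in_interval(x, l1, u1), in_interval(y, l2, u2)),
              in_interval(x + y, l1 + l2, u1 + u2)))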
Sorting networks are a circuit flavored take on sorting. Although you can build circuits for any size input, any particular circuit works for a fixed sized input. They are like an unrolling of the loops or recursion of more familiar sorting algorithms. They come up also in the context of parallel and GPU sorting.
Here’s an interesting thing. We can go to Wikipedia and get a little python snippet for the comparison order of a Batcher odd-even mergesort. Kind of a confusing algorithm. Why does it even work? Is it even right? It’s written in some kind of funky, indexful generator style.
Source: Wikipedia
Well we can confirm this relatively straightforwardly using z3 by replacing the implementation of compare_and_swap with its z3 equivalent. We then ask z3 whether there is any input for which the resulting output fails to be sorted.
This comes back unsat, hence there are no inputs or executions that do not come back sorted. If I delete some elements from pair_to_compare, it comes back sat, showing that it doesn’t always sort.
The trick here is that the circuit is fixed size, so we have no need for induction, one of the main things z3 is rather finicky at.
It’s somewhat interesting to note that the output of odd_even_merge is a sequence of instructions; we can think of this as interpreting a very small one-instruction programming language.
We can also confirm similarly a simple odd-even bubblesort and other similar algorithms.
Q: What about using uninterpreted sorts rather than integers? Integers is pretty convincing to me.
same_elems is slightly weaker than a permutation predicate. Wasn’t super obvious to me the best way to do a permutation predicate in z3. Would I want to internalize the array?
Edit: Upon further thought, actually the sort IS a nice predicate for permutation. How do we compute if two things are permutations of each other? By sorting them and forcing a zipped equality. Alternatively count the number of each element (a piece of bucket sort). Since this sort is done by composing swaps, it is somewhat intrinsically a permutation
As a bummer though, I think randomized testing on arrays would be equally or perhaps more convincing of the correctness of the algorithm. Oh well.
I’ve been fiddling with z3py, figuring out some functionality and realizing some interesting things you could do with it. I think I’m at a point where it is nice to checkpoint myself with a blog post.
I’m a little surprised z3py doesn’t overload the & and | operators and some kind of implies operator for BoolRef. You can insert them later using this.
from z3 import *
# useful non default operator definitions for z3 bools (overload &, |, ~, ^ and use >> for implies)
BoolRef.__and__ = lambda self, other: And(self, other)
BoolRef.__or__ = lambda self, other: Or(self, other)
BoolRef.__xor__ = lambda self, other: Xor(self, other)
BoolRef.__invert__ = lambda self: Not(self)
BoolRef.__rshift__ = lambda self, other: Implies(self, other)
Python is not the best functional programming environment imo. And by functional programming I implicitly mean roughly ML-like FP a la Haskell or OCaml. I don’t venture much into lisp land.
The lack of good algebraic datatypes (the class syntax is so ungainly) and a type system hurts. The lack of pattern matching hurts. The lambda keyword is so long it makes me sad.
But you have full access to z3 from the python bindings. Z3 does have algebraic data types, and a type system. It has built in substitution mechanisms and evaluation. And it has insane search procedures and the ability to prove things. Pretty damn cool!
Unfortunately the type system is rather simplistic, being basically simply typed rather than polymorphic or something else. But using python as a schema/macro system for z3 seems a plausible way forward.
To build templated types, you can have constructor functions in python for the appropriate types.
def Tuple(a, b):
    Type = Datatype(f'Tuple({a.name()}, {b.name()})')
    Type.declare('pair', ('fst', a), ('snd', b))
    Type = Type.create()
    return Type
def Either(a, b):
    Type = Datatype(f'Either({a.name()}, {b.name()})')
    Type.declare('left', ('getLeft', a))
    Type.declare('right', ('getRight', b))
    Type = Type.create()
    return Type
def Maybe(a):
    Type = Datatype(f'Maybe({a.name()})')
    Type.declare('Just', ('fromJust', a))
    Type.declare("Nothing")
    Type = Type.create()
    return Type
def List(a):
    Type = Datatype(f'List({a.name()})')
    Type.declare('Cons', ('car', a), ('cdr', Type))
    Type.declare("Nil")
    Type = Type.create()
    return Type
'''
Note in regards to List. Z3 has a built in type Seq that I think it has built in smarts about. You might be better off using that rather than a custom List. YMMV
'''
You can access the constructors from the returned types. Check this out. You get detector functions is_Nothing and is_Just, the extractor function fromJust, and constructor functions Nothing and Just. I do a lot of dir exploration with z3py. It’s hard to know what’s available sometimes.
# dir(Maybe(IntSort())) returns
[
'Just',
'Nothing',
...underscore junk...,
'accessor',
'as_ast',
'ast',
'cast',
'constructor',
'ctx',
'ctx_ref',
'eq',
'fromJust',
'get_id',
'hash',
'is_Just',
'is_Nothing',
'kind',
'name',
'num_constructors',
'recognizer',
'sexpr',
'subsort',
'translate',
'use_pp']
It’s possible to build a general purpose match combinator on these types since you can introspect the number of constructors of the ADT using num_constructors, constructor, recognizer, and accessor. There might be a match inside z3py somewhere? I think it’s part of the SMTLIB standard now.
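Here is a sketch of such a match combinator built from that introspection (cases are given in constructor declaration order):

from z3 import *

def match(x, *cases):
    sort = x.sort()
    assert sort.num_constructors() == len(cases)
    result = None
    for i in reversed(range(sort.num_constructors())):
        fields = [sort.accessor(i, j)(x) for j in range(sort.constructor(i).arity())]
        branch = cases[i](*fields)
        # the last constructor acts as the default branch
        result = branch if result is None else If(sort.recognizer(i)(x), branch, result)
    return result

# example, using the Maybe constructor defined above
MaybeInt = Maybe(IntSort())
m = Const('m', MaybeInt)
val = match(m,
            lambda j: j + 1,     # Just j  -> j + 1
            lambda: IntVal(0))   # Nothing -> 0
prove(Implies(m == MaybeInt.Just(41), val == 42))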
Z3 has a substitution mechanism built in. This is useful for instantiating ForAll and for evaluating Lambda. The substitute_vars function is what you want, used like so: substitute_vars(f.body(), x, y, z).
It is possible to reflect the syntax in a fairly straightforward way back into python via a lambdify function, mimicking the equivalent very useful function from sympy. Lambdify is basically an interp function. Here is a start for such a function. I by no means have implemented interpretation of the entirety of z3. Also I feel like this implementation is very clunky. Some kind of CPS?
import operator as op
from z3 import *

def lift1(f, x):
    return lambda *args: f(x(*args))

def lift2(f, l, r):
    return lambda *args: f(l(*args), r(*args))

# interp is useful for transferring expressions into numpy, sympy.
# It compiles a z3 expression into a python function of the given variables.
# Only a subset of the z3 operators is handled here.
def interp(a, *vs):
    if is_const(a):
        for i, v in enumerate(vs):
            if eq(a, v):
                return lambda *args, i=i: args[i]
    kids = [interp(c, *vs) for c in a.children()]
    l, r = (kids + [None, None])[:2]
    if is_add(a):
        return lambda *args: sum(k(*args) for k in kids)
    elif is_mul(a):
        return lift2(op.mul, l, r)
    elif is_and(a):
        return lift2(op.and_, l, r)
    elif is_or(a):
        return lift2(op.or_, l, r)
    elif is_app_of(a, Z3_OP_XOR):
        return lift2(op.xor, l, r)
    elif is_idiv(a):
        return lift2(op.floordiv, l, r)  # lambda *args: l // r
    elif is_mod(a):
        return lift2(op.mod, l, r)       # lambda *args: l % r
    elif is_le(a):
        return lift2(op.le, l, r)        # lambda *args: l <= r
    elif is_lt(a):
        return lift2(op.lt, l, r)        # lambda *args: l < r
    elif is_ge(a):
        return lift2(op.ge, l, r)        # lambda *args: l >= r
    elif is_gt(a):
        return lift2(op.gt, l, r)        # lambda *args: l > r
    elif is_implies(a):
        return lift2(lambda p, q: (not p) or q, l, r)
    print("unrecognized constructor: ", type(a))
    assert False
# example usage
a = Bool('a')
interp(Xor(a & a | a, a), a)(True)
x, y = Reals('x y')
interp(x + y + y + y*x, x, y)(3, 2)
There is the ability to define recursive functions in z3. It is also plausible to define them via relations. In this way you can get a reversible functional programming language, maybe some subset of Mercury / Curry’s power.
Z3 is awesome at theorem proving. But some things it just doesn’t handle right and it needs human guidance.
Through searching, there are a couple of interesting python interactive theorem prover projects. Cody pointed me to a project he worked on a while back, Boole https://github.com/avigad/boole . It has a dependently typed lambda calculus in it with the purpose of gluing together many systems, I think. He implemented a lot of stuff from scratch. I think I want to try to get less and do less. There is also holpy https://arxiv.org/abs/1905.05970 which appears to be being actively developed. It’s roughly a translation of HOL to python, I think. It’s available from a strange Chinese github linked on the author’s website if you go looking for it.
This suggests an interesting approach. Most interactive theorem provers start unautomated and add automation later. Instead we can iteratively build an interface to de-automate z3.
Altogether, this approach is more HOL flavored than Coq/Agda flavored. z3 terms are our logic and python is our manipulation metalanguage. Ideally, one would want to verify that every step is sound.
Python is so unprincipled that I can’t imagine that you could ever build a system up to the trustworthiness of the other theorem provers. But this is freeing in many ways. Since that is off the table, we can just do the best we can.
Using the z3 syntax tree and the z3 proof automation and z3 substitution mechanisms gives us a HUGE step up from implementing them from scratch. Ideally, we’d want to write as little python as possible, and especially as little python as possible that has to be trusted to be implemented correctly.
One big concern is accidental mutation of the proof under our feet by python. Perhaps using hashes and checking them might be a way to at least detect this. I need to have a good think about how to factor out a trusted core from all possible tactics.
I think it helps a little that z3 often will be able to verify the equivalence of small steps in proofs even if it can’t do the entire proof itself.
I think induction principles will need to be injected by hand. Z3 doesn’t really have that built in. There are definitely situations that after you introduce the induction, z3 can slam all the cases no problem. For example, check this out.
Another thing that might be nice is integration/translation to sympy. Sympy has a ton of useful functionality, at the very least differentiation.
Translation and integration with cvxpy for sum of squares proofs would also be quite neat. I already did something with this using sympy. I’m not super sure how you extract exact proofs from the floating point solutions SCS returns? I think there is a thing. I’ve heard the LLL algorithm can be used for this somehow (finding likely algebraic number matches to floating point numbers)?
So here are some sketched out ideas for tactics.
class Proof():
    def __init__(self, goal, name=None):  # take a name for the theorem?
        self.goals = [([], goal)]
        self.proven = False
        self.name = name
    #def intros(self): # intro_all
    #    self.goals.append( (ctx, goal.intros()) )
    #    return self
    def equiv(self, goal2):
        ctx, goal1 = self.goals.pop()
        if prove2(Implies(And(*ctx), goal1 == goal2)):  # prove2: a prove variant assumed to return a bool
            g = goal2
        else:
            g = goal1
        self.goals.append((ctx, g))
        return self
    def __eq__(self, rhs):
        return self.equiv(rhs)
    #def assert(): # put new goal on the stack with current context. Put into context of 1 below top
Here’s an idea though. In the cold light of day, I am still not sure this reasoning makes much sense. Suppose we’re trying to apply forall x. a(x) -> b(x) to a c(y). If forall x. b(x) -> c(y) we’re good, and by assumption that is obvious for some reason, like the syntactic instantiation of b gives c. We can ask z3 to prove that and it will hopefully be easy. If we can prove forall x. a(x) in the current context, that would be sufficient, but it is not true typically. It is an overly difficult request. We really only need to prove a(x) for values pertinent to the proof of c(y). Here’s a suspicious stratagem. Any a -> b can be weakened to (q -> a) -> (q -> b). In particular we can choose to weaken forall x. a(x) -> b(x) to forall x. ((c(y) -> b(x)) -> a(x)) -> ((c(y) -> b(x)) -> b(x)). Then we can replace the goal with forall x. ((c(y) -> b(x)) -> a(x)) after we prove that (forall x. (c(y) -> b(x)) -> b(x)) -> c(y). Maybe c(y) -> b(x) is sufficient to restrict the values of x? Not sure.
Another rough sketch of induction on Nat. Not right yet.
degree-shift: try to reduce degree of polynomials (remark: mul2power simplification is automatically applied).
diff-neq: specialized solver for integer arithmetic problems that contain only atoms of the form (<= k x) (<= x k) and (not (= (- x y) k)), where x and y are constants and k is a numeral, and all constants are bounded.
eq2bv: convert integer variables used as finite domain elements to bit-vectors.
factor: polynomial factorization.
fix-dl-var: if the goal is in the difference logic fragment, then fix the variable with the most number of occurrences at 0.
fm: eliminate variables using fourier-motzkin elimination.
nla2bv: convert a nonlinear arithmetic problem into a bit-vector problem, in most cases the resultant goal is an under approximation and is useful for finding models.
solve-eqs: eliminate variables by solving equations.
special-relations: detect and replace by special relations.
split-clause: split a clause in many subgoals.
symmetry-reduce: apply symmetry reduction.
tseitin-cnf: convert goal into CNF using tseitin-like encoding (note: quantifiers are ignored).
tseitin-cnf-core: convert goal into CNF using tseitin-like encoding (note: quantifiers are ignored). This tactic does not apply required simplifications to the input goal like the tseitin-cnf tactic.
The Laplace equation $\nabla^2 \phi = 0$ and slight variants of it describe electrostatics, magnetostatics, steady state heat flow, elastic flex, pressure, and velocity potentials.
There are a couple reasons for that.
It results from minimizing the squared gradient of a field which can make sense from an energy minimization perspective.
Similarly it results from the combination of a flow conservation law and a linear constitutive relation connecting flow and field (such as Ohm’s law, Fick’s law, or Hooke’s law).
It also gets used even if not particularly appropriate because we know how to mathematically deal with it, for example in image processing.
There are a couple of questions we may want to ask about a Laplace equation system
Given the field on the boundary, determine the field in the interior (Dirichlet problem)
Given the normal derivative of the field on the boundary determine the field in the interior (Neumann problem)
Given sources in the interior and 0 boundary condition, determine the field. The Laplace equation is called the Poisson equation when you allow a source term on the right hand side: $\nabla^2 \phi = \rho$.
Given the field at the boundary, determine the derivative at the boundary. Dirichlet-to-Neumann map or Poincare-Steklov operator.
Given the Dirichlet to Neumann map, you do not have to consider the interior of a region to use it. The Dirichlet to Neumann map is sort of the same thing as an effective resistance or scattering matrix. It gives you a black box representation of a region based solely on the variables at its boundary.
This linear relation algebra is useful for many things that I’d have considered a use case for the Schur complement. The Schur complement arises when you do Gaussian elimination on a blocked matrix. It is good that this pattern has a name, because once you know about it, you’ll see it in many places. Domain decomposition, marginalized gaussian distributions, low-rank update, Scattering matrices.
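Concretely, eliminating one block of a block linear system leaves the Schur complement acting on the other block. A quick numpy check of that identity (made-up matrices):

import numpy as np

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [2.0, 1.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[5.0, 0.0], [0.0, 5.0]])
M = np.block([[A, B], [C, D]])

S = A - B @ np.linalg.solve(D, C)    # Schur complement of D in M

# solving M [x; y] = [f; 0] gives the same x as solving S x = f
f = np.array([1.0, 2.0])
x_full = np.linalg.solve(M, np.concatenate([f, np.zeros(2)]))[:2]
x_schur = np.linalg.solve(S, f)
print(np.allclose(x_full, x_schur))  # True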
By composing the linear relations corresponding to the Dirichlet-to-Neumann relations of regions, we can build the Dirichlet-to-Neumann relations of larger regions.
To make this more concrete, let us take the example of electrical circuits like before. A grid of resistors is a finite difference approximation to the continuous problem (a small numerical sketch follows the equations below):
Electric field is the (negative) gradient of the potential: $E = -\nabla \phi$
continuum Ohm’s law: $j = \sigma E$
current conservation: $\nabla \cdot j = 0$
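Here is the finite difference picture in its most pedestrian form (a sketch, separate from the categorical combinators below): on a grid, the discrete Laplace equation says every interior node sits at the average of its neighbors, which is the voltage condition at an interior node of a grid of equal resistors. Jacobi iteration just relaxes toward that.

import numpy as np

N = 20
phi = np.zeros((N, N))
phi[0, :] = 1.0                 # Dirichlet boundary: top edge held at potential 1
for _ in range(2000):
    # each interior node relaxes to the average of its four neighbors;
    # the boundary rows/columns are never overwritten
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
print(phi[N // 2, N // 2])      # potential at the center of the square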
In this post, I mentioned how you can make reasonable 2 dimensional diagrams out of a monoidal category, by sort of arbitrarily flipping one wire up and one wire down as in the diagram below. This defines a horizontal and vertical composition which have to do the required book-keeping (associations) to keep an arrow in canonical form. I had considered this for the management method of weights in neural networks, but it is way more natural as the actual geometrical layout of a finite difference grid of a laplace equation.
So we can reuse our categorical circuit combinators to build a finite difference Laplace equation.
Just showing how you can bend a 4-wire monoidal box into a 2-d diagram. Ignore the labels.
This can be implemented in Haskell doing the following. Neato.
Bits and Bobbles
Not the tightest post, but I needed to get it out there. I have a short attention span.
Homology and defining simplices as categories. One way of describing Homology is about linear operators that are analogues of finite difference operators (or better yet, discrete differential geometry operators / exterior derivatives). To some degree, it is analyzing the required boundary conditions to fully define differential equations on weirdo topological surfaces, which correspond to geometrical loops/holes. You can figure this out by looking at subspaces and quotients of the difference operators. Here we have here a very category theory way of looking at partial differential equations. How does it all connect?
Cody mentioned that I could actually build circuits and measure categorical identities in a sense. That’s kind of cool. Or I could draw conductive ink on carbon paper and actually make my string diagrams into circuits? That is also brain tickling
Network circuits
I really want to get coefficients that aren’t just doubles. Allowing rational functions of a frequency would allow analysis of capacitor/inductor circuits, but also tight binding model systems for fun things like topological insulators and the Haldane model http://www.philipzucker.com/topologically-non-trivial-circuit-making-haldane-model-gyrator/ . I may need to leave Haskell. I’m not seeing quite the functionality I need. Use Sympy? https://arxiv.org/abs/1605.02532 HLinear. Flint bindings for haskell? Looks unmaintained. Could also use a Gröbner basis package as dynamite for a mouse.
Blah Blah blah: The subtlest aspect of differential equations is that of boundary conditions. It is more correct usually to consider the system of the interior differential equation and the boundary conditions as equally essential parts of the statement of the problem you are considering.