Theorem Proving For Catlab 2: Let’s Try Z3 This Time. Nope.

Welp, you win some, you lose some.

When I left off last time, I had realized that my encoding of the equations of Catlab was unsound.

As an example, look at the following suggested axioms.

fof( axiom2, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constid(VarB)) = Varf).
fof( axiom3, axiom, ![Varf, VarA, VarB]: constcompose(constid(VarA), Varf) = Varf).

It is a theorem from these axioms that compose(id(A), id(B)) = id(A) = id(B), which should not be a theorem: instantiate axiom2 with f = id(A) to get compose(id(A), id(B)) = id(A), and axiom3 with f = id(B) to get compose(id(A), id(B)) = id(B). Evan made the interesting point that this is the standard proof that the identity of a group is unique, so we’re reducing our magnificent category to a pitiful monoid. How sad. And inspecting the traces of some “proofs” returned by eprover shows that the solver was actually using this fact. Oh well.

An approach that I feel more confident is correct is using “type guards” as preconditions for the equations. In the very useful paper https://people.mpi-inf.mpg.de/~jblanche/mono-trans.pdf this technique is described as well-known folklore, albeit in a slightly different context. The type guard is an implication whose hypothesis collects the typing predicates from the typing context required for the equation to even make sense. For example, composition associativity looks like forall A B C D f g h, (type(f) = Hom A B /\ type(g) = Hom B C /\ type(h) = Hom C D /\ type(A) = Ob /\ type(B) = Ob /\ type(C) = Ob /\ type(D) = Ob) => compose(f, compose(g, h)) = compose(compose(f, g), h).
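Rendered in the TPTP syntax from last time (a sketch, using the Var/const prefixing convention and the typo typing predicate from my earlier encoding), the guarded associativity axiom would look something like:

fof( axiom_assoc_guarded, axiom, ![VarA, VarB, VarC, VarD, Varf, Varg, Varh]:
    ((typo(Varf, constHom(VarA, VarB)) & typo(Varg, constHom(VarB, VarC)) &
      typo(Varh, constHom(VarC, VarD)) & typo(VarA, constOb) & typo(VarB, constOb) &
      typo(VarC, constOb) & typo(VarD, constOb))
    => constcompose(constcompose(Varf, Varg), Varh) = constcompose(Varf, constcompose(Varg, Varh)))).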

Adding the guards seems to work, but slows the provers to a crawl for fairly trivial queries. My running example is pair(proj1(A,B), proj2(A,B)) = otimes(id(A),id(B)). In Catlab, proj1, proj2, and pair are defined in terms of mcopy and delete, which makes this theorem not as trivial as it would appear. Basically it involves unfolding the definitions and then applying, seemingly out of nowhere, some identities involving braiding.

I decided to give Z3, an SMT solver, a go since I’m already familiar with it and its python bindings. There are native Julia bindings https://github.com/ahumenberger/Z3.jl which may be useful for a more high performance situation, but they don’t appear to have quantifier support yet.

Julia has the library PyCall https://github.com/JuliaPy/PyCall.jl which was a sheer joy to use. I could copy and paste some python3 z3 code and run it with very few modifications, and I couldn’t imagine going into and out of Julia data types being more seamless.

Z3 does a better job than I expected. I thought this problem was more appropriate to eprover or vampire, but z3 seemed to consistently outperform them.

At first I tried using a single z3 sort z3.DeclareSort("Gat"), but eventually I switched to a multisorted representation, z3.DeclareSort("Ob") and z3.DeclareSort("Hom"), as this gets a step closer to accurately representing the types of the GAT in the simply sorted smtlib language. Which of these sorts to use can be determined by looking at the head symbol of the inferred Catlab type. I wrote custom type inference code just so I could try stuff out, but after asking on the Zulip, it turns out Catlab has this built in as well.
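Concretely the sort dispatch is just a couple of lines (a minimal sketch; the sortify function in the code below is the real version):

Ob = z3.DeclareSort("Ob")
Hom = z3.DeclareSort("Hom")
# the head symbol of the inferred Catlab type picks the Z3 sort:
# :Ob maps to the Ob sort, :(Hom(A,B)) maps to the Hom sort
headsort(ty::Symbol) = z3.DeclareSort(String(ty))
headsort(ty::Expr) = z3.DeclareSort(String(ty.args[1]))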

Some Z3 debugging tips:

I tend to make my z3 programs in python, dump the s.sexpr() in a file and then run that via the z3 command line. It’s easier to fiddle with the smtlib2 file to try out ideas fast. Take stuff out, put stuff in, make simpler questions, etc. Be aware most ideas do not work.
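Something like this, where s is the solver object and query.smt2 is a scratch file name of my choosing:

# dump the solver state to a file and poke at it from the command line
open("query.smt2", "w") do io
    write(io, s.sexpr())
    write(io, "\n(check-sat)\n")
end
run(`z3 query.smt2`)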

Z3 appears to be inferring pretty bad triggers. The main way z3 handles quantifiers is E-matching https://rise4fun.com/z3/tutorialcontent/guide#h28: it looks for patterns from the quantified expression in the currently known assertion set and instantiates the quantified expression accordingly. Hence I kind of think of quantified expressions as a kind of macro for formulas. Running z3 with the -v:10 flag lets you see the triggers. Z3 tries to find very small pieces of expressions that contain the quantified variables. I think we don’t really want an equation instantiated unless the solver finds the full right or left hand side plus the context types. In addition, the triggers inferred for the typing predicates were not good: we mostly want z3 to run the typing predicate forward, basically as a type inference function. So I tried adding all this, and I think it helped, but not enough to actually get my equation to prove. Only simpler problems.
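For reference, the python bindings let you hand z3 explicit triggers through the patterns keyword argument (and z3.MultiPattern for conjunctions of terms), which is the mechanism I lean on below. A toy sketch with a made-up idempotence axiom:

S = z3.DeclareSort("S")       # toy sort, nothing to do with the GAT encoding
a = z3.Const("a", S)
f = z3.Function("f", S, S)
# only instantiate the axiom when the term f(f(a)) itself shows up in the goal
idem = z3.ForAll([a], f(f(a)) == f(a), patterns = [f(f(a))])

The typing axioms with my hand-chosen triggers came out like this: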


(assert (forall ((A Ob)) (! (= (typo (id A)) (Hom A A)) :pattern ((id A)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom B C)))
         (= (typo (compose f g)) (Hom A C)))
     :pattern ((compose f g) (Hom A B) (Hom B C)))))
(assert (forall ((A Ob) (B Ob)) (! (= (typo (otimes A B)) Ob) :pattern ((otimes A B)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (D Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom C D)))
         (= (typo (otimes f g)) (Hom (otimes A C) (otimes B D))))
     :pattern ((otimes f g) (Hom A B) (Hom C D)))))
;(assert (forall ((A Ob) (B Ob) (C Ob) (D Ob) (f Hom) (g Hom))
;  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom C D)))
;         (= (typo (otimes f g)) (Hom (otimes A C) (otimes B D))))
;     :pattern ((= (typo f) (Hom A B)) (= (typo g) (Hom C D))))))
(assert (= (typo munit) Ob))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (braid A B)) (Hom (otimes A B) (otimes B A)))
     :pattern ((braid A B)))))
(assert (forall ((A Ob))
  (! (= (typo (mcopy A)) (Hom A (otimes A A))) :pattern ((mcopy A)))))
(assert (forall ((A Ob)) (! (= (typo (delete A)) (Hom A munit)) :pattern ((delete A)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom A C)))
         (= (typo (pair f g)) (Hom A (otimes B C))))
     :pattern ((pair f g) (Hom A B) (Hom A C)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (proj1 A B)) (Hom (otimes A B) A)) :pattern ((proj1 A B)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (proj2 A B)) (Hom (otimes A B) B)) :pattern ((proj2 A B)))))

I tried the axiom profiler to get some insight. http://people.inf.ethz.ch/summersa/wiki/lib/exe/fetch.php?media=papers:axiomprofiler.pdf https://github.com/viperproject/axiom-profiler I do see some quantifiers with an insane number of instantiations. This may be because of my multipattern approach of using the Hom type and the term as separate patterns: the trigger will fire on Homs unrelated to the term they’re connected to. That’s awful. The associativity axioms also seem to be triggering too much, which is somewhat expected.

Z3 debugging is similar to prolog debugging since it’s declarative. https://www.metalevel.at/prolog/debugging Take out asserts. Eventually, if you take out enough, an unsat problem should turn sat. That may help you isolate the problematic axioms.
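A crude sketch of automating that bisection, assuming axioms is the list of z3 assertions and goal is the negated query (both names are placeholders):

# drop one axiom at a time; if the problem flips from unsat to sat (or stops proving),
# the dropped axiom was doing real work
for i in eachindex(axioms)
    s = z3.Solver()
    s.add([axioms[j] for j in eachindex(axioms) if j != i])
    s.add(goal)
    println(i, " => ", s.check())
end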

Another thing I tried was to manually expand out each step of the proof to see where z3 was getting hung up. Most simple steps were very fast, but some hung, apparently due to bad triggers? Surprisingly, some things I consider trivial one-step rewrites aren’t quite. Often this is because a single equation step involves associating and absorbing munit in the type predicates. The interchange law was difficult to get to fire for this reason, I think.
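Unfolding just the definition of pair in my running example is one such step. It can be checked in isolation with the prove helper from the code below (the intermediate term is my hand-unfolding, not Catlab output):

ctx = Dict(:A => :Ob, :B => :Ob)
# pair(f,g) == compose(mcopy(C), otimes(f,g)), instantiated at f = proj1(A,B), g = proj2(A,B)
prove(ctx, :(pair(proj1(A,B), proj2(A,B))),
           :(compose(mcopy(otimes(A,B)), otimes(proj1(A,B), proj2(A,B)))))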

Trimming the axioms available to only the ones needed really helps, but doesn’t seem practical as an automated thing.

Code

Here’s the Julia code I ended up using to generate the z3 query from the catlab axioms. It’s very hacky. My apologies. I was thrashing.

# here we're trying to use Z3 sorts to take care of some of the typing
using Catlab
using Catlab.Theories
using PyCall
z3 = pyimport("z3")

# my ersatz unnecessary type inference code for Cartesian category terms

function type_infer(x::Symbol; ctx = Dict())
    if x == :Ob
        return :TYPE
    elseif x == :munit
        return :Ob
    else
        return ctx[x]
    end
end

function type_infer(x::Expr; ctx = Dict())
        @assert x.head == :call
        head = x.args[1]
        if head == :compose
            t1 = type_infer(x.args[2], ctx=ctx)
            @assert t1.args[1] == :Hom
            obA = t1.args[2] 
            t2 = type_infer(x.args[3], ctx=ctx)
            @assert t2.args[1] == :Hom
            obC = t2.args[3] 

            if t1.args[3] != t2.args[2]
                # println("HEY CHECK THIS OUT ITS WEIRD")
                # println(t1)
                # println(t2)
            end

            return :(Hom($obA, $obC))
        elseif head == :otimes
            t1 = type_infer(x.args[2], ctx=ctx)
            #@assert t1.args[1] == :Hom
            if t1 isa Symbol && t1 == :Ob
                return :Ob
            end
            @assert t1.args[1] == :Hom
            obA = t1.args[2] 
            obC = t1.args[3] 
            t2 = type_infer(x.args[3], ctx=ctx)
            @assert t2.args[1] == :Hom
            obB = t2.args[2] 
            obD = t2.args[3] 
            return :(Hom(otimes($obA,$obB),otimes($obC, $obD)))
        elseif head == :pair
            t1 = type_infer(x.args[2], ctx=ctx)
            @assert t1.args[1] == :Hom
            obA = t1.args[2] 
            obB = t1.args[3] 
            t2 = type_infer(x.args[3], ctx=ctx)
            @assert t2.args[1] == :Hom
            obC = t2.args[3] 
            @assert t1.args[2] == t2.args[2]
            return :(Hom($obA, otimes($obB,$obC)))
        elseif head == :mcopy
            ob = x.args[2]
            return :(Hom($ob, otimes($ob,$ob)))
        elseif head == :id
            ob = x.args[2]
            return :(Hom($ob, $ob))
        elseif head == :delete
            ob = x.args[2]
            return :(Hom($ob, munit))
        elseif head == :proj1
            obA = x.args[2]
            obB = x.args[3]
            return :(Hom(otimes($obA, $obB), $obA))
        elseif head == :proj2
            obA = x.args[2]
            obB = x.args[3]
            return :(Hom(otimes($obA, $obB), $obB))
        elseif head == :braid
            obA = x.args[2]
            obB = x.args[3]
            return :(Hom(otimes($obA, $obB), otimes($obB, $obA)))
        elseif head == :Hom
            return :TYPE
        elseif head == :munit
            return :Ob
        else
            println(x, ctx)
            @assert false
        end
end

TYPE = z3.DeclareSort("TYPE")

# sortify takes a type expression, grabs the head, and returns the corresponding Z3 sort.
function sortify(ty) 
    if ty isa Symbol
        return z3.DeclareSort(String(ty))
    elseif ty isa Expr
        @assert ty.head == :call
        return z3.DeclareSort(String(ty.args[1]))
    end
end

# z3ify takes an Expr or Symbol in a dictionary typing context and returns the z3 equivalent
z3ify( e::Symbol , ctx) = z3.Const(String(e), sortify(type_infer(e,ctx=ctx)))

function z3ify( e::Expr , ctx)
    @assert e.head == :call
    out_sort = sortify(type_infer(e,ctx=ctx))
    arg_sorts = [sortify(type_infer(x,ctx=ctx)) for x in e.args[2:end]]
    f = z3.Function(e.args[1], arg_sorts..., out_sort)
    f(map(x -> z3ify(x,ctx), e.args[2:end])...)
end

# typo is a helper routine that takes an Expr or Symbol term and returns the Z3 function typo applied to the z3ified term
function typo(x, ctx)
    f = z3.Function("typo" , sortify(type_infer(x,ctx=ctx))  , TYPE ) 
    f(z3ify(x,ctx))
end

# a helper function to z3ify an entire context into the hypotheses of an implication
function build_ctx_predicate(ctx)
    # we don't need typo predicates for simple types like Ob
    typed = filter(kv -> kv[2] isa Expr, collect(ctx))
    map(kv -> typo(kv[1], ctx) == z3ify(kv[2], ctx), typed)
end

# converts the typing axioms of a GAT into the equivalent z3 axioms
# This is quite close to unreadable I think
function build_typo_z3(terms)
    map(myterm -> begin
        ctx = myterm.context
        conc = length(myterm.params) > 0 ? Expr(:call, myterm.name, myterm.params...) : myterm.name
        preconds = build_ctx_predicate(ctx)
        if length(ctx) > 0 && length(preconds) > 0
            z3.ForAll(map(x -> z3ify(x, ctx), collect(keys(ctx))),
                z3.Implies(z3.And(preconds),
                           typo(conc, ctx) == z3ify(myterm.typ, ctx)),
                patterns = [z3.MultiPattern(z3ify(conc, ctx),
                    # not super sure this is a valid way of filtering generally
                    [z3ify(x, ctx) for x in collect(values(ctx)) if x isa Expr]...)])
        elseif length(ctx) > 0
            z3.ForAll(map(x -> z3ify(x, ctx), collect(keys(ctx))),
                typo(conc, ctx) == z3ify(myterm.typ, ctx),
                patterns = [z3ify(conc, ctx)])
        else
            typo(conc, ctx) == z3ify(myterm.typ, ctx)
        end
    end, terms)
end

# convert the equation axioms of a GAT into the equivalent z3 terms
function build_eqs_z3(axioms)
    map(axiom -> begin
        @assert axiom.name == :(==)
        ctx = axiom.context
        l = z3ify(axiom.left, ctx)
        r = z3ify(axiom.right, ctx)
        preconds = build_ctx_predicate(ctx)
        ctx_patterns = [z3ify(x, ctx) for x in collect(values(ctx)) if x isa Expr]
        println([z3.MultiPattern(l, ctx_patterns...), z3.MultiPattern(r, ctx_patterns...)])
        if length(ctx) > 0 && length(preconds) > 0
            try
                z3.ForAll(map(x -> z3ify(x, ctx), collect(keys(ctx))),
                    z3.Implies(z3.And(preconds), l == r),
                    patterns = [z3.MultiPattern(l, ctx_patterns...),
                                z3.MultiPattern(r, ctx_patterns...)])
            catch e
                # MultiPattern can throw on patterns z3 rejects; fall back to no explicit pattern
                println(e)
                z3.ForAll(map(x -> z3ify(x, ctx), collect(keys(ctx))),
                    z3.Implies(z3.And(preconds), l == r))
            end
        elseif length(ctx) > 0 && length(preconds) == 0
            z3.ForAll(map(x -> z3ify(x, ctx), collect(keys(ctx))), l == r, patterns = [l, r])
        else
            l == r
        end
    end, axioms)
end

# just trying some stuff out
sortify( :Ob )
sortify( :(Hom(a,b)))
ctx = Dict(:A => :Ob, :B => :Ob)
z3ify(:(id(A)) , ctx)
#=typing_axioms = build_typo_z3(theory(CartesianCategory).terms)
eq_axioms = build_eqs_z3(theory(CartesianCategory).axioms)

s = z3.Solver()
s.add(typing_axioms)
s.add(eq_axioms)
#print(s.sexpr())
=#

# assert the inferred type of a term and of every subterm
inferall(e::Symbol, ctx) = [typo(e,ctx) == z3ify(type_infer(e,ctx=ctx),ctx)]
inferall(e::Expr, ctx) = Iterators.flatten([[typo(e,ctx) == z3ify(type_infer(e,ctx=ctx),ctx)], Iterators.flatten(map(z -> inferall(z,ctx), e.args[2:end]))])


function prove(ctx, l,r; pr = false)
    typing_axioms = build_typo_z3(theory(CartesianCategory).terms)
    eq_axioms = build_eqs_z3(theory(CartesianCategory).axioms)
    s = z3.Solver()
    s.add(typing_axioms)
    s.add(eq_axioms)
    s.add(collect(inferall(l,ctx)))
    s.add(collect(inferall(r,ctx)))
    s.add(z3.Not( z3ify(l,ctx) == z3ify(r,ctx)))
    if pr
        println(s.sexpr())  # dump the smtlib2 query for offline fiddling
    else
        println(s.check())
    end
end
ctx = Dict(:A => :Ob, :B => :Ob)
prove(ctx, :(pair(proj1(A,B), proj2(A,B))), :(otimes(id(A),id(B))), pr=true)

The returned smtlib2 query, with a (check-sat) manually added at the end:

(declare-sort Ob 0)
(declare-sort TYPE 0)
(declare-sort Hom 0)
(declare-fun id (Ob) Hom)
(declare-fun Hom (Ob Ob) TYPE)
(declare-fun typo (Hom) TYPE)
(declare-fun compose (Hom Hom) Hom)
(declare-fun otimes (Ob Ob) Ob)
(declare-fun Ob () TYPE)
(declare-fun typo (Ob) TYPE)
(declare-fun otimes (Hom Hom) Hom)
(declare-fun munit () Ob)
(declare-fun braid (Ob Ob) Hom)
(declare-fun mcopy (Ob) Hom)
(declare-fun delete (Ob) Hom)
(declare-fun pair (Hom Hom) Hom)
(declare-fun proj1 (Ob Ob) Hom)
(declare-fun proj2 (Ob Ob) Hom)
(declare-fun B () Ob)
(declare-fun A () Ob)
(assert (forall ((A Ob)) (! (= (typo (id A)) (Hom A A)) :pattern ((id A)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom B C)))
         (= (typo (compose f g)) (Hom A C)))
     :pattern ((compose f g) (Hom A B) (Hom B C)))))
(assert (forall ((A Ob) (B Ob)) (! (= (typo (otimes A B)) Ob) :pattern ((otimes A B)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (D Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom C D)))
         (= (typo (otimes f g)) (Hom (otimes A C) (otimes B D))))
     :pattern ((otimes f g) (Hom A B) (Hom C D)))))
(assert (= (typo munit) Ob))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (braid A B)) (Hom (otimes A B) (otimes B A)))
     :pattern ((braid A B)))))
(assert (forall ((A Ob))
  (! (= (typo (mcopy A)) (Hom A (otimes A A))) :pattern ((mcopy A)))))
(assert (forall ((A Ob)) (! (= (typo (delete A)) (Hom A munit)) :pattern ((delete A)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom A C)))
         (= (typo (pair f g)) (Hom A (otimes B C))))
     :pattern ((pair f g) (Hom A B) (Hom A C)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (proj1 A B)) (Hom (otimes A B) A)) :pattern ((proj1 A B)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (typo (proj2 A B)) (Hom (otimes A B) B)) :pattern ((proj2 A B)))))
(assert (forall ((A Ob) (B Ob) (C Ob) (D Ob) (f Hom) (g Hom) (h Hom))
  (! (=> (and (= (typo f) (Hom A B))
              (= (typo g) (Hom B C))
              (= (typo h) (Hom C D)))
         (= (compose (compose f g) h) (compose f (compose g h))))
     :pattern ((compose (compose f g) h) (Hom A B) (Hom B C) (Hom C D))
     :pattern ((compose f (compose g h)) (Hom A B) (Hom B C) (Hom C D)))))
(assert (forall ((A Ob) (B Ob) (f Hom))
  (! (=> (and (= (typo f) (Hom A B))) (= (compose f (id B)) f))
     :pattern ((compose f (id B)) (Hom A B))
     :pattern (pattern f (Hom A B)))))
(assert (forall ((A Ob) (B Ob) (f Hom))
  (! (=> (and (= (typo f) (Hom A B))) (= (compose (id A) f) f))
     :pattern ((compose (id A) f) (Hom A B))
     :pattern (pattern f (Hom A B)))))
(assert (forall ((A Ob) (B Ob) (C Ob))
  (! (= (otimes (otimes A B) C) (otimes A (otimes B C)))
     :pattern ((otimes (otimes A B) C))
     :pattern ((otimes A (otimes B C))))))
(assert (forall ((A Ob))
  (! (= (otimes A munit) A) :pattern ((otimes A munit)) :pattern (pattern A))))
(assert (forall ((A Ob))
  (! (= (otimes munit A) A) :pattern ((otimes munit A)) :pattern (pattern A))))
(assert (forall ((A Ob) (B Ob) (C Ob) (X Ob) (Y Ob) (Z Ob) (f Hom) (g Hom) (h Hom))
  (! (=> (and (= (typo f) (Hom A X))
              (= (typo g) (Hom B Y))
              (= (typo h) (Hom C Z)))
         (= (otimes (otimes f g) h) (otimes f (otimes g h))))
     :pattern ((otimes (otimes f g) h) (Hom A X) (Hom B Y) (Hom C Z))
     :pattern ((otimes f (otimes g h)) (Hom A X) (Hom B Y) (Hom C Z)))))
(assert (forall ((A Ob)
         (B Ob)
         (C Ob)
         (X Ob)
         (Y Ob)
         (Z Ob)
         (f Hom)
         (h Hom)
         (g Hom)
         (k Hom))
  (! (=> (and (= (typo f) (Hom A B))
              (= (typo h) (Hom B C))
              (= (typo g) (Hom X Y))
              (= (typo k) (Hom Y Z)))
         (= (compose (otimes f g) (otimes h k))
            (otimes (compose f h) (compose g k))))
     :pattern ((compose (otimes f g) (otimes h k))
               (Hom A B)
               (Hom B C)
               (Hom X Y)
               (Hom Y Z))
     :pattern ((otimes (compose f h) (compose g k))
               (Hom A B)
               (Hom B C)
               (Hom X Y)
               (Hom Y Z)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (id (otimes A B)) (otimes (id A) (id B)))
     :pattern ((id (otimes A B)))
     :pattern ((otimes (id A) (id B))))))
(assert (forall ((A Ob) (B Ob))
  (! (= (compose (braid A B) (braid B A)) (id (otimes A B)))
     :pattern ((compose (braid A B) (braid B A)))
     :pattern ((id (otimes A B))))))
(assert (forall ((A Ob) (B Ob) (C Ob))
  (! (= (braid A (otimes B C))
        (compose (otimes (braid A B) (id C)) (otimes (id B) (braid A C))))
     :pattern ((braid A (otimes B C)))
     :pattern ((compose (otimes (braid A B) (id C)) (otimes (id B) (braid A C)))))))
(assert (forall ((A Ob) (B Ob) (C Ob))
  (! (= (braid (otimes A B) C)
        (compose (otimes (id A) (braid B C)) (otimes (braid A C) (id B))))
     :pattern ((braid (otimes A B) C))
     :pattern ((compose (otimes (id A) (braid B C)) (otimes (braid A C) (id B)))))))
(assert (forall ((A Ob) (B Ob) (C Ob) (D Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom A B)) (= (typo g) (Hom C D)))
         (= (compose (otimes f g) (braid B D))
            (compose (braid A C) (otimes g f))))
     :pattern ((compose (otimes f g) (braid B D)) (Hom A B) (Hom C D))
     :pattern ((compose (braid A C) (otimes g f)) (Hom A B) (Hom C D)))))
(assert (forall ((A Ob))
  (! (= (compose (mcopy A) (otimes (mcopy A) (id A)))
        (compose (mcopy A) (otimes (id A) (mcopy A))))
     :pattern ((compose (mcopy A) (otimes (mcopy A) (id A))))
     :pattern ((compose (mcopy A) (otimes (id A) (mcopy A)))))))
(assert (forall ((A Ob))
  (! (= (compose (mcopy A) (otimes (delete A) (id A))) (id A))
     :pattern ((compose (mcopy A) (otimes (delete A) (id A))))
     :pattern ((id A)))))
(assert (forall ((A Ob))
  (! (= (compose (mcopy A) (otimes (id A) (delete A))) (id A))
     :pattern ((compose (mcopy A) (otimes (id A) (delete A))))
     :pattern ((id A)))))
(assert (forall ((A Ob))
  (! (= (compose (mcopy A) (braid A A)) (mcopy A))
     :pattern ((compose (mcopy A) (braid A A)))
     :pattern ((mcopy A)))))
(assert (forall ((A Ob) (B Ob))
  (! (let ((a!1 (compose (otimes (mcopy A) (mcopy B))
                         (otimes (otimes (id A) (braid A B)) (id B)))))
       (= (mcopy (otimes A B)) a!1))
     :pattern ((mcopy (otimes A B)))
     :pattern ((compose (otimes (mcopy A) (mcopy B))
                        (otimes (otimes (id A) (braid A B)) (id B)))))))
(assert (forall ((A Ob) (B Ob))
  (! (= (delete (otimes A B)) (otimes (delete A) (delete B)))
     :pattern ((delete (otimes A B)))
     :pattern ((otimes (delete A) (delete B))))))
(assert (= (mcopy munit) (id munit)))
(assert (= (delete munit) (id munit)))
(assert (forall ((A Ob) (B Ob) (C Ob) (f Hom) (g Hom))
  (! (=> (and (= (typo f) (Hom C A)) (= (typo g) (Hom C B)))
         (= (pair f g) (compose (mcopy C) (otimes f g))))
     :pattern ((pair f g) (Hom C A) (Hom C B))
     :pattern ((compose (mcopy C) (otimes f g)) (Hom C A) (Hom C B)))))
(assert (forall ((A Ob) (B Ob))
  (! (= (proj1 A B) (otimes (id A) (delete B)))
     :pattern ((proj1 A B))
     :pattern ((otimes (id A) (delete B))))))
(assert (forall ((A Ob) (B Ob))
  (! (= (proj2 A B) (otimes (delete A) (id B)))
     :pattern ((proj2 A B))
     :pattern ((otimes (delete A) (id B))))))
(assert (forall ((A Ob) (B Ob) (f Hom))
  (! (=> (and (= (typo f) (Hom A B)))
         (= (compose f (mcopy B)) (compose (mcopy A) (otimes f f))))
     :pattern ((compose f (mcopy B)) (Hom A B))
     :pattern ((compose (mcopy A) (otimes f f)) (Hom A B)))))
(assert (forall ((A Ob) (B Ob) (f Hom))
  (=> (and (= (typo f) (Hom A B))) (= (compose f (delete B)) (delete A)))))
(assert (= (typo (pair (proj1 A B) (proj2 A B))) (Hom (otimes A B) (otimes A B))))
(assert (= (typo (proj1 A B)) (Hom (otimes A B) A)))
(assert (= (typo A) Ob))
(assert (= (typo B) Ob))
(assert (= (typo (proj2 A B)) (Hom (otimes A B) B)))
(assert (= (typo A) Ob))
(assert (= (typo B) Ob))
(assert (= (typo (otimes (id A) (id B))) (Hom (otimes A B) (otimes A B))))
(assert (= (typo (id A)) (Hom A A)))
(assert (= (typo A) Ob))
(assert (= (typo (id B)) (Hom B B)))
(assert (= (typo B) Ob))
(assert (not (= (pair (proj1 A B) (proj2 A B)) (otimes (id A) (id B)))))
(check-sat)

Other junk

One could use z3 as glue for simple steps of proofs as-is, but it doesn’t appear to scale to even intermediately complex proofs. Maybe this could be used for a semi-automated (aka interactive) proof system for Catlab? That seems misguided though. You’re better off using one of the many interactive proof assistants if that’s the way you wanna go. Maybe one could generate the queries to those systems?

I tried the type tagging version, where every term t is recursively replaced with tag(t, typo_t). This lets us avoid the guards, and the axioms of the GAT take the form of pure equations again, albeit equations over complicated tagged terms. This did not work well. I was surprised. It’s kind of interesting that type tagging is in some sense internalizing another piece of Catlab syntax into the logic, just as type guards internalized the turnstile as an implication and the context as the guard. In this case we are internalizing the inline type annotations (f::Hom(A,B)) into the logic, where I write the infix notation :: as the function tag().
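A sketch of the translation, reusing the type_infer function from above (tag is just an uninterpreted binary function symbol; this is the flavor of thing I tried, not verbatim what I ran):

tagit(e::Symbol, ctx) = :(tag($e, $(type_infer(e, ctx=ctx))))
function tagit(e::Expr, ctx)
    @assert e.head == :call
    args = map(x -> tagit(x, ctx), e.args[2:end])
    # tag the rebuilt application with its inferred type
    :(tag($(Expr(:call, e.args[1], args...)), $(type_infer(e, ctx=ctx))))
end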

Notebook here https://github.com/philzook58/thoughtbooks/blob/master/catlab_gat.ipynb

file:///home/philip/Downloads/A_Polymorphic_Intermediate_Verification_Language_D.pdf The 3.1 method. If we have an extra argument to every function for the type of that argument inserted, then quantifier instantiation can only work when the

We could make it semi-interactive (I guess semi-interactive is just interactive though).

https://hal.inria.fr/hal-01322328/document TLA+ encoding. Encoding to SMT solvers is a grand tradition

Wait, could it be that id really is the only problem? It’s the only equation with a raw variable in an equality. And that poisons all of Hom. Fascinating. I thought the problem was compose, but it’s id?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7324017/ vampire now supports polymorphism.

I realized that things that felt like a single step were in fact not, again because of the extra associating and absorbing of munit in the typing predicates.

Asserting the types of all subexpressions sometimes helped the solver and sometimes hurt.

Solvers often use a heuristic where they look at the oldest generated inferences first. This means that the deeper you make your proof, the harder it is for the solver to find it (well, that’s true anyway). Making the proof deeper just for trivial type inference purposes is foolish.

Of course, taken to some extreme, at a certain point we’re asserting so many derived facts to the solver we have written a fraction of a solver ourselves.

I wonder what the recent burst of higher order capabilities of zipperposition, eprover, and vampire might do for me? The thing is we’re already compiling to combinators. That’s what categories are. https://matryoshka-project.github.io/

Functor example http://page.mi.fu-berlin.de/cbenzmueller/papers/J22.pdf THF is the higher-order format of TPTP.

Exporting to Isabelle in particular is a viable approach, as it is well known to have good automation. I mean, I’m reading the sledgehammer guy’s papers for tips. Also, exporting to an interactive theorem prover of any kind seems kind of useful.

Notes on Synthesis and Equation Proving for Catlab.jl

Catlab is a library and growing ecosystem (I guess the ecosystem is called AlgebraicJulia now) for computational or applied category theory, whatever that may end up meaning.

I have been interested to see if I could find low hanging fruit by applying off the shelf automated theorem proving tech to Catlab.jl.

There are a couple of problems where it seems like some headway might be made this way:

  • Inferring the type of expressions. Catlab category syntax is pretty heavily annotated by objects so this is relatively easy. (id is explicitly tagged by the object at which it is based for example)
  • Synthesizing morphisms of a given type.
  • Proving equations

In particular, two promising candidates for these problems are eprover/vampire-style automated theorem provers and prolog/kanren logic programming.

Generalized Algebraic Theories (GATs)

Catlab is built around something known as a Generalized Algebraic Theory. https://algebraicjulia.github.io/Catlab.jl/dev/#What-is-a-GAT? In order to use more conventional tooling, we need to understand GATs in a way that is acceptable to these tools. Basically, can we strip the GAT down to first order logic?

I found GATs rather off-putting at first glance. Who ordered that? The nlab article is 1/4 enlightening and 3/4 obscuring. https://ncatlab.org/nlab/show/generalized+algebraic+theory But at the end of the day, I think it’s not such a crazy thing.

Because of time invested and natural disposition, I understand things much better when they are put in programming terms. As seems to be not uncommon in Julia, one defines a theory in Catlab using some specialized macro mumbo jumbo.

@theory Category{Ob,Hom} begin
  @op begin
    (→) := Hom
    (⋅) := compose
  end

  Ob::TYPE
  Hom(dom::Ob, codom::Ob)::TYPE

  id(A::Ob)::(A → A)
  compose(f::(A → B), g::(B → C))::(A → C) ⊣ (A::Ob, B::Ob, C::Ob)

  (f ⋅ g) ⋅ h == f ⋅ (g ⋅ h) ⊣ (A::Ob, B::Ob, C::Ob, D::Ob,
                                f::(A → B), g::(B → C), h::(C → D))
  f ⋅ id(B) == f ⊣ (A::Ob, B::Ob, f::(A → B))
  id(A) ⋅ f == f ⊣ (A::Ob, B::Ob, f::(A → B))
end

Ok, but this macro boils down to a data structure describing the syntax, typing relations, and axioms of the theory. This data structure is not necessarily meant to be used by end users, and may change in its specifics, but I find it clarifying to see it.

Just like my python survival toolkit involves calling dir on everything, my Julia survival toolkit involves hearty application of dump and @macroexpand on anything I can find.

We can see three slots, for types, terms, and axioms. The types describe the signature of the types: how many parameters they have and of what type. The terms describe the appropriate functions and constants of the theory. It’s all kind of straightforward, I think. Try to come up with a data structure for this yourself and you’ll probably end up with something similar.

I’ve cut some stuff out of the dump because it’s so huge. I’ve placed the full dump at the end of the blog post.

>>> dump(theory(Category))

Catlab.GAT.Theory
  types: Array{Catlab.GAT.TypeConstructor}((2,))
    1: Catlab.GAT.TypeConstructor
      name: Symbol Ob
      params: Array{Symbol}((0,))
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((0,))
        vals: Array{Union{Expr, Symbol}}((0,))
        ndel: Int64 0
        dirty: Bool false
      doc: String " Object in a category "
    2: ... More stuff
  terms: Array{Catlab.GAT.TermConstructor}((2,))
    1: Catlab.GAT.TermConstructor
      name: Symbol id
      params: Array{Symbol}((1,))
        1: Symbol A
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol A
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((1,))
          1: Symbol A
        vals: Array{Union{Expr, Symbol}}((1,))
          1: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: ... More stuff
  axioms: Array{Catlab.GAT.AxiomConstructor}((3,))
    1: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol f
              3: Symbol g
          3: Symbol h
      right: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol g
              3: Symbol h
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[5, 0, 0, 0, 1, 0, 4, 0, 2, 7, 0, 6, 0, 0, 0, 3]
        keys: Array{Symbol}((7,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol D
          5: Symbol f
          6: Symbol g
          7: Symbol h
        vals: Array{Union{Expr, Symbol}}((7,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Symbol Ob
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          6: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
          7: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol C
              3: Symbol D
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: ... More stuff
  aliases: ... Stuff

This infrastructure is not necessarily for category theory alone despite being in a package called Catlab. You can describe other algebraic theories, like groups, but you won’t need the full flexibility in typing relations that the “Generalized” of the GAT gets you. The big hangup of category theory that needs this extra power is that categorical composition is a partial function. It is only defined for morphisms whose types line up correctly, whereas any two group elements can be multiplied.

@theory Group{G} begin

  G::TYPE

  id()::G
  mul(f::G, g::G)::G
  inv(x::G)::G

  mul(mul(f, g), h) == mul(f,  mul(g , h)) ⊣ ( f::G, g::G, h::G)
   # and so on
end

Back to the first order logic translation. If you think about it, the turnstile ⊣ separating off the context in the Catlab theory definition is basically an implication. The definition id(A)::Hom(A,A) ⊣ (A::Ob) can be read like so: for all A, A having type Ob implies that id(A) has type Hom(A,A). We can write this in first order logic using a predicate for the typing relation: \forall A, type(A,Ob) \implies type(id(A), Hom(A,A)).

The story I tell about this is that the way it deals with the partiality of compose is: when everything is well typed, compose behaves as it axiomatically should, but when something is not well typed, compose can return total garbage. This is one way to make a partial function total. Just define it to return random trash on the undefined domain values, or rather, be unwilling to commit to what it does in that case.

Even though they are the same thing, I have great difficulty getting over the purely syntactical barrier of _::_ vs type(_,_). Infix punctuation never feels like a predicate to me. Maybe I’m crazy.

Turnstiles in general are usually interchangeable with or reflections of implication in some sense. So are the big horizontal lines of inference rules for that matter. I find this all very confusing.

Everything I’ve said above is a bold claim that could actually be proven by demonstrating a rigorous correspondence, but I don’t have enough interest to close the tremendous skill gap needed to do so. It could very easily be that I’m missing subtleties.

Automated Theorem Provers

While the term automated theorem prover could describe any theorem prover that is automated, it happens to connote a particular class of first order logic automated provers, of which the E prover and Vampire are canonical examples.

In a previous post, I tried axiomatizing category theory to these provers in a different way https://www.philipzucker.com/category-theory-in-the-e-automated-theorem-prover/ , with a focus on the universal properties of categorical constructions. Catlab has a different flavor and a different encoding seems desirable.

What is particularly appealing about this approach is that these systems are hard-wired to handle equality efficiently, so they can handle the equational specification of a Catlab theory. I don’t currently know how to interpret the proofs they output into something more human comprehensible.

Also, I wasn’t originally aware of this, but eprover has a mode --conjectures-are-questions that will return the answers to existential queries. In this way, eprover can be used as a synthesizer for morphisms of a particular type. This flag gives eprover query capabilities similar to a prolog.

eprover cartcat.tptp --conjectures-are-questions --answers=1 --silent

One small annoying hiccup is that TPTP syntax takes the prolog convention of capitalizing quantified variables. This is not the Catlab convention. A simple way to fix this is to prepend Var to quantified variables and const to constant function symbols.

All of the keys in the context dictionary are the quantified variables in a declaration. We can build a map to symbols where they are prefixed with Var.

varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(myterm.context )))

And then we can use this map to prefixify other expressions.

prefixify(x::Symbol, varmap) = haskey(varmap,x) ?  varmap[x] : Symbol( "const$x")
prefixify(x::Expr, varmap) = Expr(x.head, map(y -> prefixify(y, varmap),  x.args)... )
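A quick check of what these do:

varmap = Dict(:f => :Varf, :B => :VarB)
prefixify(:(compose(f, id(B))), varmap)
# returns :(constcompose(Varf, constid(VarB)))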

Given these, it’s just some string interpolation hackery to port a Catlab typing definition into a TPTP-syntax axiom about the typing relation.

function build_typo(terms)
    map(myterm ->  begin
                varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(myterm.context )))
                prefix_context = Dict(map(kv -> kv[1] => prefixify(kv[2] , varmap) , collect(myterm.context )))
                context_terms = map( kv -> "typo($(varmap[kv[1]]), $(kv[2]))", collect(prefix_context))
                conc = "typo( const$(myterm.name)($(join(map(p -> prefixify(p,varmap) , myterm.params), ", "))) , $(prefixify(myterm.typ, varmap)) )"
                if length(myterm.context) > 0
                    "
                    ![$(join(values(varmap),","))]: 
                        ($conc <=
                            ($(join( context_terms , " &\n\t"))))"
                else # special case for empty context
                    "$conc"
                end
                    end
    , terms)
end

You can spit out the axioms for a theory like so

query = join(map(t -> "fof( axiom$(t[1]) , axiom, $(t[2]) ).", enumerate(build_typo(theory(CartesianCategory).terms))), "\n")
fof( axiom1 , axiom, 
![VarA]: 
    (typo( constid(VarA) , constHom(VarA, VarA) ) <=
        (typo(VarA, constOb))) ).
fof( axiom2 , axiom, 
![Varf,VarA,VarB,Varg,VarC]: 
    (typo( constcompose(Varf, Varg) , constHom(VarA, VarC) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarB, VarC)) &
	typo(VarC, constOb))) ).
fof( axiom3 , axiom, 
![VarA,VarB]: 
    (typo( constotimes(VarA, VarB) , constOb ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom4 , axiom, 
![Varf,VarA,VarD,VarB,Varg,VarC]: 
    (typo( constotimes(Varf, Varg) , constHom(constotimes(VarA, VarC), constotimes(VarB, VarD)) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarD, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarC, VarD)) &
	typo(VarC, constOb))) ).
fof( axiom5 , axiom, typo( constmunit() , constOb ) ).
fof( axiom6 , axiom, 
![VarA,VarB]: 
    (typo( constbraid(VarA, VarB) , constHom(constotimes(VarA, VarB), constotimes(VarB, VarA)) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom7 , axiom, 
![VarA]: 
    (typo( constmcopy(VarA) , constHom(VarA, constotimes(VarA, VarA)) ) <=
        (typo(VarA, constOb))) ).
fof( axiom8 , axiom, 
![VarA]: 
    (typo( constdelete(VarA) , constHom(VarA, constmunit()) ) <=
        (typo(VarA, constOb))) ).
fof( axiom9 , axiom, 
![Varf,VarA,VarB,Varg,VarC]: 
    (typo( constpair(Varf, Varg) , constHom(VarA, constotimes(VarB, VarC)) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarA, VarC)) &
	typo(VarC, constOb))) ).
fof( axiom10 , axiom, 
![VarA,VarB]: 
    (typo( constproj1(VarA, VarB) , constHom(constotimes(VarA, VarB), VarA) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom11 , axiom, 
![VarA,VarB]: 
    (typo( constproj2(VarA, VarB) , constHom(constotimes(VarA, VarB), VarB) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).

% example synthesis queries
%fof(q , conjecture, ?[F]: (typo( F, constHom(a , a) )  <=  ( typo(a, constOb)  )   ) ).
%fof(q , conjecture, ?[F]: (typo( F, constHom( constotimes(a,b) , constotimes(b,a)) )  <=  ( typo(a, constOb) & typo(b,constOb) )   ) ).
%fof(q , conjecture, ?[F]: (typo( F, constHom( constotimes(a,constotimes(b,constotimes(c,d))) , d) )  <=  ( typo(a, constOb) & typo(b,constOb) & typo(c,constOb) & typo(d,constOb) )   ) ). % this one hurts already without some axiom pruning

For dealing with the equations of the theory, I believe we can just ignore the typing relations. Each equation axiom preserves well-typedness, and as long as our query is also well typed, I don’t think anything will go awry. Here it would be nice to have the proof output of the tool be more human readable, but I don’t know how to do that yet. Edit: It went awry. I currently think this is completely wrong.

function build_eqs(axioms)
        map(axiom -> begin
            @assert axiom.name == :(==)
            varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(axiom.context )))
            l = prefixify(axiom.left, varmap)
            r = prefixify(axiom.right, varmap)
            "![$(join(values(varmap), ", "))]: $l = $r" 
            end,
        axioms)
end

t = join( map( t -> "fof( axiom$(t[1]), axiom, $(t[2]))."  , enumerate(build_eqs(theory(CartesianCategory).axioms))), "\n")
print(t)
fof( axiom1, axiom, ![Varf, VarA, VarD, VarB, Varh, Varg, VarC]: constcompose(constcompose(Varf, Varg), Varh) = constcompose(Varf, constcompose(Varg, Varh))).
fof( axiom2, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constid(VarB)) = Varf).
fof( axiom3, axiom, ![Varf, VarA, VarB]: constcompose(constid(VarA), Varf) = Varf).
fof( axiom4, axiom, ![Varf, VarA, VarB, Varg, VarC]: constpair(Varf, Varg) = constcompose(constmcopy(VarC), constotimes(Varf, Varg))).
fof( axiom5, axiom, ![VarA, VarB]: constproj1(VarA, VarB) = constotimes(constid(VarA), constdelete(VarB))).
fof( axiom6, axiom, ![VarA, VarB]: constproj2(VarA, VarB) = constotimes(constdelete(VarA), constid(VarB))).
fof( axiom7, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constmcopy(VarB)) = constcompose(constmcopy(VarA), constotimes(Varf, Varf))).
fof( axiom8, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constdelete(VarB)) = constdelete(VarA)).

% silly example query
fof( q, conjecture, ![Varf, Varh, Varg, Varj ]: constcompose(constcompose(constcompose(Varf, Varg), Varh), Varj) = constcompose(Varf, constcompose(Varg, constcompose(Varh,Varj)) )).

It is possible, and perhaps desirable, to fully automate the call to eprover as an external process and then parse the results back into Julia. Julia has some slick external process facilities https://docs.julialang.org/en/v1/manual/running-external-programs/
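A sketch of what that would look like, writing the generated query to cartcat.tptp and capturing eprover’s output as a string for later parsing:

open("cartcat.tptp", "w") do io
    write(io, query)
end
out = read(`eprover cartcat.tptp --conjectures-are-questions --answers=1 --silent`, String)
println(out)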

Prolog and Kanrens

It was an interesting revelation to me that the typing relations for morphisms as described in Catlab are already basically in a form amenable to prolog or a kanren. The variables are universally quantified and there is only one term to the left of the turnstile (which is basically prolog’s :-). These are Horn clauses.

In a recent post I showed how to implement something akin to a minikanren in Julia https://www.philipzucker.com/yet-another-microkanren-in-julia/. I built that with this application in mind.

Here’s an example I wrote by hand in minikanren.

(define (typo f t)
(conde
  [(fresh (a) (== f 'id) (== t `(hom ,a ,a))) ]
  [(== f 'f) (== t '(hom a c))]
  [(fresh (a b) (== f 'snd) (== t `(hom ( ,a ,b) ,b)))]
  [(fresh (a b) (== f 'fst) (== t `(hom ( ,a ,b) ,a)))]
  [(fresh (g h a b c) (== f `(comp ,g ,h))
                       (== t `(hom ,a ,c)) 
                       (typo g `(hom ,a ,b ))
                       (typo h `(hom ,b ,c)))]
  [ (fresh (g h a b c) (== f `(fan ,g ,h))
                       (== t `(hom ,a (,b ,c))) 
                       (typo g `(hom ,a ,b ))
                       (typo h `(hom ,a ,c)))  ]
  )
  )

;queries
; could lose the hom
;(run 3 (q) (typo  q '(hom (a b) a)))
;(run 3 (q) (typo  q '(hom ((a b) c) a)))
(run 3 (q) (typo  q '(hom (a b) (b a))))

And here is a similar thing written in my Julia minikanren. I had to depth limit it because I goofed up the fair interleaving in my implementation.

function typo(f, t, n)
    fresh2( (a,b) -> (f ≅ :fst) ∧ (t  ≅ :(Hom(tup($a,$b),$a)))) ∨
    fresh2( (a,b) -> (f ≅ :snd) ∧ (t  ≅ :(Hom(tup($a,$b),$b)))) ∨
    freshn( 6, (g,h,a,b,c,n2) -> (n ≅ :(succ($n2))) ∧ (f ≅ :(comp($g, $h)))  ∧ (t  ≅ :(Hom($a,$c))) ∧ @Zzz(typo(g, :(Hom($a,$b)), n2))  ∧ @Zzz(typo(h, :(Hom($b,$c)), n2))) ∨
    fresh(a -> (f ≅ :(id($a))) ∧ (t  ≅ :(Hom($a,$a))))
end


run(1, f ->  typo( f  , :(Hom(tup(a,tup(b,tup(c,d))),d)), nat(5)))

Bits and Bobbles

Discussion on the Catlab zulip. Some interesting discussion here such as an alternative encoding of GATs to FOL https://julialang.zulipchat.com/#narrow/stream/230248-catlab.2Ejl/topic/Automatic.20Theorem.20Proving/near/207919104

Of course, it’d be great if these solvers were bulletproof. But they aren’t. They are solving very hard questions more or less by brute force, so how far they scale can only be determined by experimentation. It may be that using these solvers is a dead end. They do have a number of knobs to turn; the command line argument list to eprover is enormous.

These solvers all face some bad churn problems:

  • Morphism composition is known to be a thing that makes dumb search go totally off the rails.
  • The identity morphism can be composed an arbitrary number of times. This also makes solvers churn.
  • Some catlab theories are overcomplete.
  • Some catlab theories are capable of building up and breaking down the same thing over and over (complicated encodings of id like pair(fst,snd)).

Use SMT? https://github.com/ahumenberger/Z3.jl SMT is capable of encoding the equational problems if you use quantifiers (which, last I checked, these bindings do not yet export). Results may vary. SMT with quantifiers is not where SMT shines most. Is there anything else that can be fruitfully encoded to SMT? SAT?

Custom heuristics for search. Purely declarative is too harsh a goal. Having a pure Julia solution is important here.

GAP.jl https://github.com/oscar-system/GAP.jl has facilities for Knuth-Bendix completion. This might be useful for finitely presented categories. It would be interesting to explore which pieces of computational group theory are applicable or analogous to computational category theory.

>>> dump(theory(Category))

Catlab.GAT.Theory
  types: Array{Catlab.GAT.TypeConstructor}((2,))
    1: Catlab.GAT.TypeConstructor
      name: Symbol Ob
      params: Array{Symbol}((0,))
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((0,))
        vals: Array{Union{Expr, Symbol}}((0,))
        ndel: Int64 0
        dirty: Bool false
      doc: String " Object in a category "
    2: Catlab.GAT.TypeConstructor
      name: Symbol Hom
      params: Array{Symbol}((2,))
        1: Symbol dom
        2: Symbol codom
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0]
        keys: Array{Symbol}((2,))
          1: Symbol dom
          2: Symbol codom
        vals: Array{Union{Expr, Symbol}}((2,))
          1: Symbol Ob
          2: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: String " Morphism in a category "
  terms: Array{Catlab.GAT.TermConstructor}((2,))
    1: Catlab.GAT.TermConstructor
      name: Symbol id
      params: Array{Symbol}((1,))
        1: Symbol A
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol A
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((1,))
          1: Symbol A
        vals: Array{Union{Expr, Symbol}}((1,))
          1: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: Catlab.GAT.TermConstructor
      name: Symbol compose
      params: Array{Symbol}((2,))
        1: Symbol f
        2: Symbol g
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol C
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[4, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 5, 0, 0, 0, 3]
        keys: Array{Symbol}((5,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol f
          5: Symbol g
        vals: Array{Union{Expr, Symbol}}((5,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
  axioms: Array{Catlab.GAT.AxiomConstructor}((3,))
    1: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol f
              3: Symbol g
          3: Symbol h
      right: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol g
              3: Symbol h
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[5, 0, 0, 0, 1, 0, 4, 0, 2, 7, 0, 6, 0, 0, 0, 3]
        keys: Array{Symbol}((7,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol D
          5: Symbol f
          6: Symbol g
          7: Symbol h
        vals: Array{Union{Expr, Symbol}}((7,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Symbol Ob
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          6: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
          7: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol C
              3: Symbol D
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((2,))
              1: Symbol id
              2: Symbol B
      right: Symbol f
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((3,))
          1: Symbol A
          2: Symbol B
          3: Symbol f
        vals: Array{Union{Expr, Symbol}}((3,))
          1: Symbol Ob
          2: Symbol Ob
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    3: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((2,))
              1: Symbol id
              2: Symbol A
          3: Symbol f
      right: Symbol f
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((3,))
          1: Symbol A
          2: Symbol B
          3: Symbol f
        vals: Array{Union{Expr, Symbol}}((3,))
          1: Symbol Ob
          2: Symbol Ob
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
  aliases: Dict{Symbol,Symbol}
    slots: Array{UInt8}((16,)) UInt8[0x01, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]
    keys: Array{Symbol}((16,))
      1: Symbol ⋅
      2: #undef
      3: #undef
      4: #undef
      5: #undef
      ...
      12: #undef
      13: #undef
      14: #undef
      15: #undef
      16: #undef
    vals: Array{Symbol}((16,))
      1: Symbol compose
      2: #undef
      3: #undef
      4: #undef
      5: #undef
      ...
      12: #undef
      13: #undef
      14: #undef
      15: #undef
      16: #undef
    ndel: Int64 0
    count: Int64 2
    age: UInt64 0x0000000000000002
    idxfloor: Int64 1
    maxprobe: Int64 0

Yet Another MicroKanren in Julia

Minikanren is a relational logic programming language, similar in many respects to prolog. It’s designed to be lightweight and embeddable in other host languages.

There is a paper about a minimal implementation called MicroKanren that has spawned many derivatives. It’s impressively short. http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf

I’m intrigued about such things and have my reasons for building a version of this in Julia (perhaps as an inference engine for Catlab stuff? More on that another day). There are already some implementations, but I’m opinionated and I really wanted to be sure I know how the guts work. Best way is to DIY.

There are at least 3 already existing implementations in Julia alone.

Logic programming consists of basically two pieces: search and unification. The search shows up as a stream. MiniKanren does a kind of clever search, interleaving between the different branches it is looking at. This stops it from getting stuck in a bad infinite branch, in principle. The interleaving is kind of like a riffled list append.

interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs

But then the actual streams used in Kanren have thunks lying around in them that also need to get forced. These thunk positions are where it chooses to switch over to another branch of the search.

Unification is comparing two syntax trees with variables in them. As you scan down them, you can identify which variables correspond to which subtrees in the other structure. You may find a contradictory assignment, or only a partial assignment. I talked more about unification here. Kanren uses triangular substitutions to record the variable assignments. These substitutions are very convenient to make, but when you want to access a variable, you have to walk through the substitution. It’s a tradeoff.

Here we start describing my Julia implementation. Buyer beware. I’ve been finding very bad bugs very recently.

I diverged from microKanren in a couple of ways. I wanted to not use a list-based structure for unification. I feel like the most Julian thing to do is to use the Expr data structure that is built by Julia quotation :( ). You can see here that I tried to use a more imperative style where I could figure out how to, which I think is more idiomatic Julia.

struct Var 
    x::Symbol
end

function walk(s,u) 
    while isa(u,Var) && haskey(s,u)
        u = s[u] # look up the binding and keep chasing
    end
    return u
end

function unify(u,v,s) # basically transcribed from the microkanren paper
    u = walk(s,u)
    v = walk(s,v)
    if isa(u,Var) && isa(v,Var) && u === v # do nothing if same
        return s
    elseif isa(u,Var)
        return assoc(s,u,v)
    elseif isa(v,Var)
        return assoc(s,v,u)
    elseif isa(u, Expr) && isa(v,Expr)
        # Only function call expressions are implemented at the moment 
        @assert u.head === :call && v.head === :call 
        if u.args[1] === v.args[1] && length(u.args) == length(v.args) #heads match
            for (u,v) in zip( u.args[2:end] , v.args[2:end] )  # unify subpieces
                s = unify(u,v,s)
                if s === nothing
                    return nothing
                end
            end
            return s
        else # heads don't match or different arity
            return nothing 
        end
    else # catchall for Symbols, Integers, etc
        if u === v
            return s
        else
            return nothing
        end
    end
end
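
To make the triangular substitutions concrete, here is a small usage sketch. It uses a throwaway assoc helper on a plain Dict, purely for illustration; the actual runs below use a persistent dictionary instead.

# hypothetical assoc on a plain Dict, just for this sketch
assoc(s::Dict, k, v) = begin s2 = copy(s); s2[k] = v; s2 end

s = Dict{Var,Any}()
s = unify(Var(:x), :(succ($(Var(:y)))), s) # records x -> succ(y)
s = unify(Var(:y), :zero, s)               # records y -> zero
walk(s, Var(:x)) # => succ(Var(:y)): one step only, y's binding is not substituted through

The binding for y never gets pushed into the binding for x; fully resolving x is deferred to the walk_star function further below.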

I decided to use the gensym facility of Julia to produce new variables. That way I don’t have to thread around a variable counter like microKanren does (Julia is already doing this somewhere under the hood). Makes things a touch simpler. I made a couple of fresh combinators for convenience. Basically you pass them an anonymous function and you get fresh logic variables to use.


fresh(f) = f(Var(gensym()))
fresh2(f) = f(Var(gensym()), Var(gensym()))
fresh3(f) = f(Var(gensym()), Var(gensym()), Var(gensym()))
freshn(n, f) = f([Var(gensym()) for i in 1:n ]...) # fishy lookin, but works. Not so obvious the evaluation order here.

Kanren is based around composing goals with disjunction and conjunction. A goal is a function that accepts a current substitution dictionary s and outputs a stream of possible new substitution dictionaries. If the goal fails, it outputs an empty stream. If the goal succeeds only one way, it outputs a singleton stream. I decided to attempt to use iterators to encode my streams. I’m not sure I succeeded. I also decided to forgo separating out mplus and unit from the microKanren notation and inlined their definitions here. The simplest implementations of conjunction and disjunction look like this.

# unification goal
eqwal(u,v) = s -> begin   
                     s = unify(u,v,s)
                     (s == nothing) ? () : (s,)
                  end

# concatenate them
disj(g1,g2) = s -> Iterators.flatten(  (g1(s)  , g2(s)) ) 
# bind = "flatmap". flatten ~ join
conj(g1,g2) = s -> Iterators.flatten( map( g2 ,  g1(s) ))
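
As a quick sanity check (reusing the throwaway Dict-based assoc from the sketch above), a disjunction of two unification goals yields a two-element stream of substitutions:

g = disj(eqwal(Var(:q), :one), eqwal(Var(:q), :two))
collect(g(Dict{Var,Any}())) # two substitutions: q => :one and q => :two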

However, the next level throws thunks in the mix. I think I got it to work with a special thunk Iterator type. It mutates the iterator to unthunkify it upon first forcing. I have no idea what the performance characteristics of this are.

# Where do these get forced. Not obvious. Do they get forced when flattened? 
mutable struct Thunk #{I}
   it # Union{I,Function}
end

function pull(x) # Runs the trampoline
    while isa(x,Function)
        x = x()
    end
    x
end

function Base.length(x::Thunk) 
    x.it = pull(x.it)
    Base.length(x.it)
end

function Base.iterate(x::Thunk) 
    x.it = pull(x.it)
    Base.iterate(x.it)
end

function Base.iterate(x::Thunk, state) 
    x.it = pull(x.it) # Should we assume forced?
    Base.iterate(x.it, state)
end

# does this have to be a macro? Yes. For evaluation order. We want g 
# evaluating after Zzz is called, not before
macro Zzz(g) 
    return :(s -> Thunk(() -> $(esc(g))(s)))
end

Then the fancier conjunction and disjunction are defined like so. I think conjunction does not need to be changed since iterate takes care of the trampoline. (Edit: No this is fundamentally busted insofar as it was intended to be a miniKanren style complete search. It is instead doing something closer to depth first. I might as well not even do the swapping. I suspect one cannot use flatten as is if one wants minikanren style search. )

disj(g1,g2) = s -> begin
     s1 = g1(s)
     s2 = g2(s)
     if isa(s1,Thunk)  && isa(s1.it, Function) #s1.forced == false  
        Iterators.flatten(  (s2  , s1) )
     else
        Iterators.flatten(  (s1  , s2) )
     end
end

conj(g1,g2) = s -> Iterators.flatten( map( g2 ,  g1(s) )) # eta expansion

Nice operator forms of these expressions. It’s a bummer that operator precedence is not user definable. ≅ binds more weakly than ∧ and ∨, which is not what you want.


∧ = conj # \wedge
∨ = disj # \vee
≅ = eqwal #\cong

I skipped using the association list representation of substitutions (although assoc lists are in Base). I’ve seen recommendations that one just use persistent dictionaries, and it’s just as easy to drop that in. I’m just using a stock persistent dictionary from FunctionalCollections.jl https://github.com/JuliaCollections/FunctionalCollections.jl .


using FunctionalCollections
function call_empty(n::Int64, c) # gets back the iterator
    collect(Iterators.take(c( @Persistent Dict() ), n))
end

function run(n, f)
    q = Var(gensym())
    res = call_empty(n, f(q))
    return map(s -> walk_star(q,s), res)    
end

# walk_star uses the substitution to normalize an expression
function walk_star(v,s)
        v = walk(s,v)
        if isa(v,Var)
            return v
        elseif isa(v,Expr)
            @assert v.head == :call
            return Expr(v.head ,vcat( v.args[1], 
                        map(v -> walk_star(v,s), v.args[2:end]))...)
        else
            return v
        end
end
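
Continuing the Dict-based sketch from earlier, walk_star chases the triangular bindings all the way down:

s = Dict{Var,Any}(Var(:x) => :(succ($(Var(:y)))), Var(:y) => :zero)
walk_star(Var(:x), s) # => :(succ(zero))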

Here we define an append relation and an addition relation. They can be used in reverse and in all sorts of funny ways!

function nat(n) # helper to build peano numbers
    s = :zero
    for i in 1:n
        s = :(succ($s))
    end
    return s
end

function pluso(x,y,z)
      (( x ≅ :zero ) ∧ (y ≅ z) ) ∨
      fresh2( (n,m) -> (x ≅ :(succ($n))) ∧ (z ≅ :(succ($m))) ∧ @Zzz(pluso( n, y, m)))
end

function appendo(x,y,z)
    (x ≅ :nil) ∧ (y ≅ z) ∨
    fresh3( (hd, xs ,zs) ->  (x ≅ :(cons($hd,$xs)) )  ∧ (z ≅ :(cons($hd, $zs)))  ∧ @Zzz( appendo( xs,y,zs )))
end

Here we actually run them and see the results of queries.

# add 2 and 2. Only one answer
>>> run(5, z -> pluso(nat(2), nat(2), z))
1-element Array{Expr,1}:
 :(succ(succ(succ(succ(zero)))))

>>> run(5, z -> fresh2( (x,y) -> (z ≅ :( tup($x , $y))) ∧ pluso(x, :(succ(zero)), y)))
5-element Array{Expr,1}:
 :(tup(zero, succ(zero)))
 :(tup(succ(zero), succ(succ(zero))))
 :(tup(succ(succ(zero)), succ(succ(succ(zero)))))
 :(tup(succ(succ(succ(zero))), succ(succ(succ(succ(zero))))))
 :(tup(succ(succ(succ(succ(zero)))), succ(succ(succ(succ(succ(zero)))))))

>>> run(3, q ->  appendo(   :(cons(3,nil)), :(cons(4,nil)), q )  )
1-element Array{Expr,1}:
 :(cons(3, cons(4, nil)))

# subtractive append
>>> run(3, q ->  appendo(   q, :(cons(4,nil)), :(cons(3, cons(4, nil))) )  )
1-element Array{Expr,1}:
 :(cons(3, nil))

# generate partitions
>>> run(10, q -> fresh2( (x,y) ->  (q ≅ :(tup($x,$y))) ∧ appendo( x, y, :(cons(3,cons(4,nil)))  )))
3-element Array{Expr,1}:
 :(tup(nil, cons(3, cons(4, nil))))
 :(tup(cons(3, nil), cons(4, nil)))
 :(tup(cons(3, cons(4, nil)), nil))

Thoughts & Links

I really should implement the occurs check

Other things that might be interesting: Using Async somehow for the streams. Store the substitutions with mutation or do union find unification. Constraint logic programming. How hard would it be to get JuMP to tag along for the ride?

It would probably be nice to accept Expr for tuples and arrays in addition to function calls.

http://minikanren.org/ You may also want to check out the book The Reasoned Schemer.

http://io.livecode.ch/ online interactive minikanren examples

http://tca.github.io/veneer/examples/editor.html more minikanren examples.

Microkanren implementation tutorial https://www.youtube.com/watch?v=0FwIwewHC3o . Also checkout the Kanren online meetup recordings https://www.youtube.com/user/WilliamEByrd/playlists

Efficient representations for triangular substitutions – https://users.soe.ucsc.edu/~lkuper/papers/walk.pdf

https://github.com/ekmett/guanxi https://www.youtube.com/watch?v=D7rlJWc3474&ab_channel=MonadicWarsaw

Could it be fruitful to work natively with Catlab’s GATExpr? Synquid makes it seem like extra typing information can help the search sometimes.

LogicT http://okmij.org/ftp/Computation/LogicT.pdf

Seres Spivey http://www.jucs.org/jucs_6_4/functional_reading_of_logic

Hinze backtracking https://dl.acm.org/doi/abs/10.1145/357766.351258

Ray Tracing Algebraic Surfaces

Ray tracing is a natural way of producing computer images. One takes a geometrical ray that connects the pinhole of the camera to a pixel of the camera and finds where it hits objects in the scene. You then color the pixel the color of the object it hit.

You can add a great deal of complexity to this with more sophisticated sampling and lighting, multiple bounces, and strange surfaces, but that’s it in a nutshell.

A very popular tutorial on this is Ray Tracing in One Weekend https://raytracing.github.io/

There are a couple of ways to do the geometrical collision detection part. One is to consider simple shapes like triangles and spheres and find closed form algorithms for the collision point. This is a fast and simple approach and the rough basis of the standard graphics pipeline. Another is to describe shapes via signed distance functions that tell you how far from the object you are and use ray-marching, which is a variant of Newton’s method that iteratively finds a position on the surface along the ray. ShaderToys very often use this technique.
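
For contrast with the root-finding approach used below, here is a tiny sphere-tracing sketch of that SDF/ray-marching idea. The particular SDF, step cap, and tolerance here are my own made-up choices, not anything from a real renderer.

using LinearAlgebra

# signed distance to a sphere of radius 1 centered at (0,0,3)
sdf_sphere(p) = norm(p .- [0.0, 0.0, 3.0]) - 1.0

function raymarch(dir; maxsteps = 100, eps = 1e-4)
    t = 0.0
    for _ in 1:maxsteps
        d = sdf_sphere(t .* dir) # distance from the current point to the surface
        d < eps && return t      # close enough: call it a hit
        t += d                   # safe step: we cannot pass through the surface
    end
    return nothing # miss, or ran out of steps
end

raymarch(normalize([0.1, 0.0, 1.0])) # distance along the ray to the sphere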

If you describe your objects using algebraic (polynomial) equations, like x^2 + y^2 + z^2 - 1 describes a sphere, there is the possibility of using root finding algorithms, which are readily available. I thought this was kind of neat. Basically the ray hitting the concrete pixel (x_0, y_0) can be parameterized by a univariate polynomial (x,y,z) = (\lambda x_0, \lambda y_0, \lambda) , which can be plugged into the multivariate polynomial (\lambda x_0)^2 + (\lambda y_0)^2 + \lambda^2 - 1. This is a univariate polynomial which can be solved for all possible collision points via root finding. We filter for the collisions that are closest and in front of the camera. We can also use partial differentiation of the surface equations to find normal vectors at that point for the purposes of simple directional lighting.

As is, it really isn’t very fast but it’s short and it works.

The key packages are

using Images
using LinearAlgebra
using TypedPolynomials
using Polynomials

function raytrace(x2,y2,p)
    z = Polynomials.Polynomial([0,1])
    
    # The ray parameterized by z through the origin and the point [x2,y2,1] 
    x3 = [z*x2, z*y2, z]

    # get all the roots after substitution into the surface equation 
    r = roots(p(x=>x3)) 
    

    # filter to use values of z that are real and in front of the camera
    hits = map(real, filter( x -> isreal(x) & (real(x) > 0.0)  , r)) 

    if length(hits) > 0
        l = minimum(hits) # closest hit only
        x3 = [z(l) for z in x3]
        # get normal vector of surface at that point
        dp = differentiate(p, x) 
        normal = normalize([ z(x=> x3)  for z in dp])
        # a little directional and ambient shading
        return max(0,0.5*dot(normal,normalize([0,1,-1]))) + 0.2 
    else 
        return 0 # Ray did not hit surface
    end
end

@polyvar x[1:3]

# a sphere of radius 1 with center at (0,0,3)
p = x[1]^2 + x[2]^2 + (x[3] - 3)^2 - 1 

box = -1:0.01:1
Gray.([ raytrace(x,y,p) for x=box, y=box ])

Sphere.

@polyvar x[1:3]
R = 2
r = 1

# another way of doing offset
x1 = x .+ [ 0, 0 , -5 ] 

# a torus at (0,0,5)
# equation from https://en.wikipedia.org/wiki/Torus
p = (x1[1]^2 + x1[2]^2 + x1[3]^2 + R^2 - r^2)^2 - 4R^2 * (x1[1]^2 + x1[2]^2) 

box = -1:0.005:1
img = Gray.([ raytrace(x,y,p) for x=box, y=box ])
save("torus.jpg",img)
Torus

Some thoughts on speeding up: Move polynomial manipulations out of the loop. Perhaps partially evaluate with respect to the polynomial? That’d be neat. And of course, parallelize.

Checkpoint: Implementing Linear Relations for Linear Time Invariant Systems

I’m feeling a little stuck on this one so I think maybe it is smart to just write up a quick checkpoint for myself and anyone who might have advice.

The idea is to reimplement the linear relation computations from this post: https://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ . There is a lot more context written in that post, and it is probably necessary background for this one.

Linear relations algebra is a refreshing perspective for me on systems of linear equations. It has a notion of composition that seems, dare I say, almost as useful as matrix multiplication. Very high praise. This composition has a more bidirectional flavor than matrix multiplication, which makes it a good fit for describing physical systems, in which interconnection always influences both sides.

In the previous post, I used nullspace computations as my workhorse. The nullspace operation allows one to switch between a constraint (nullspace) and a generator (span) picture of a vector subspace. The generator view is useful for projection and linear union, and the constraint view is useful for partial-composition and intersection. The implementation of linear relation composition requires flipping between both views.
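
In numeric terms, the flip between the two views is a one-liner. A throwaway LinearAlgebra illustration (not the module-flavored version developed below):

using LinearAlgebra

A = [1.0 1.0 0.0] # constraint view: the plane x + y = 0 in R^3
V = nullspace(A)  # generator view: columns spanning the same plane
A * V             # ≈ zeros, since the generators satisfy the constraints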

I’m reimplementing it in Julia for two reasons:

  • To use the Julia ecosystem’s implementations of module operations
  • To get a little of that Catlab.jl magic to shine on it.

It was a disappointment of the previous post that I could only treat resistor-like circuits. The new twist of using module packages allows treatment of inductor/capacitor circuits and signal flow diagrams.

When you transform into Fourier space, systems of linear differential equations become systems of polynomial equations \frac{d}{dx} \rightarrow i \omega. From this perspective, modules seem like the appropriate abstraction rather than vector spaces. Modules are basically vector spaces where one doesn’t assume the operation of scalar division; in other words, the scalars form a ring rather than a field. Polynomials are rings, not fields. In order to treat the new systems, I still need to be able to do linear algebraic-ish operations like nullspaces, except where the entries of the matrix are polynomials rather than floats.

Syzygies are basically the module analog of nullspaces. Syzygies are the combinations of generators that combine to zero. Considering the generators of a submodule as being column vectors, stacking them together makes a matrix. Taking linear combinations of the columns is what happens when you multiply a matrix by a vector. So the syzygies are the space of vectors for which this matrix multiplication gives 0, the “nullspace”.

Computer algebra packages offer syzygy computations. Julia has bindings to Singular, which does this. I have been having a significant and draining struggle to wrangle these libraries though. Am I going against the grain? Did the library authors go against the grain? Here’s what I’ve got, trying to match the Catlab naming conventions:

using Singular

import Nemo

using LinearAlgebra # : I

CC = Nemo.ComplexField(64)
P, (s,) = PolynomialRing(CC, ["s"])
i = Nemo.onei(CC) # P(i) ? The imaginary number

#helpers to deal with Singular.jl
eye(m) = P.(Matrix{Int64}(I, m, m)) # There is almost certainly a better way of doing this. Actually dispatching Matrix?
zayro(m,n) = P.(zeros(Int64,m,n)) #new zeros method?
mat1(m::Int64) = fill(P(m), (1,1) )
mat1(m::Float64) = fill(P(m), (1,1) )
mat1(m::spoly{Singular.n_unknown{Nemo.acb}}) = fill(m, (1,1))

# Objects are the dimensionality of the vector space
struct DynOb
    m::Int
end

# Linear relations represented by constraint matrices: (x,y) is in the relation when input*x + output*y = 0
struct DynMorph
  input::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
  output::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
end

dom(x::DynMorph) = DynOb(size(x.input)[2])
codom(x::DynMorph) = DynOb(size(x.output)[2])
id(X::DynOb) = DynMorph(eye(X.m), -eye(X.m))

# add together inputs
plus(X::DynOb) = DynMorph( [eye(X.m) eye(X.m)] , - eye(X.m) )
mmerge(X::DynOb) = plus(X) # alias matching the Catlab-style name used in dcounit below


mcopy(X::DynOb) = DynMorph( [eye(X.m) ; eye(X.m)] , -eye(2*X.m) ) # copy input

delete(A::DynOb) = DynMorph( fill(P.(0),(0,A.m)) , fill(P.(0),(0,0)) )   
create(A::DynOb) = DynMorph( fill(P.(0),(0,0)) , fill(P.(0),(0,A.m)) )
dagger(x::DynMorph) = DynMorph(x.output, x.input)

# cup and cap operators
dunit(A::DynOb) = compose(create(A), mcopy(A))
dcounit(A::DynOb) = compose(mmerge(A), delete(A))


scale(M) = DynMorph( mat1(M),mat1(-1))
diff =  scale(i*s) # differentiation = multiplying by i omega
integ = dagger(diff)
#cupboy = DynMorph( [mat1(1) mat1(-1)] , fill(P.(0),(1,0)) )
#capboy = transpose(cupboy)

#terminal

# relational operations
# The meet
# Inclusion

# I think this is a nullspace calculation?
# almost all the code is trying to work around Singular's interface into one I can understand
function quasinullspace(A)
   rows, cols = size(A)
   vs = Array(gens(Singular.FreeModule(P, rows)))
   q = [sum(A[:,i] .* vs) for i in 1:cols]
   M = Singular.Module(P,q...)
   S = Singular.Matrix(syz(M)) # syz is the only meat of the computation
   return Base.transpose([S[i,j] for j=1:Singular.ncols(S), i=1:Singular.nrows(S) ])
end
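
As a hypothetical sanity check (assuming the Singular setup above cooperates): since s*s - s^2 = 0, the coefficient vector (s, -1) is a syzygy of the columns (s, s^2), and it should span all of them:

A = [s s^2]
quasinullspace(A) # expect one column proportional to [s, -1], up to sign and scaling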

function compose(x::DynMorph,y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    A = [ x.input                x.output P.(zeros(Int64,nx,yo)) ;
          P.(zeros(Int64,ny,xi)) y.input  y.output    ]
    B = quasinullspace(A)
    projB = [B[1:xi       ,:] ;
             B[xi+yi+1:end,:] ]
    C = Base.transpose(quasinullspace(Base.transpose(projB)))
    return DynMorph( C[:, 1:xi] ,C[:,xi+1:end] )
end

# basically the direct sum. The monoidal product of linear relations
function otimes( x::DynMorph, y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    return DynMorph( [ x.input                P.(zeros(Int64,nx,yi));
                       P.(zeros(Int64,ny,xi)) y.input               ],
                      [x.output                P.(zeros(Int64,nx,yo));
                       P.(zeros(Int64,ny,xo))  y.output               ])
    
end

I think this does basically work but it’s clunky.

Thoughts

I need to figure out Catlab’s diagram drawing abilities enough to show some circuits and some signal flow diagrams. Wouldn’t that be nice?

I should show concrete examples of composing passive filter circuits together.

There is a really fascinating paper by Jan Willems where he digs into a beautiful picture of this that I need to revisit https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2007.1.pdf

https://golem.ph.utexas.edu/category/2018/06/the_behavioral_approach_to_sys.html

Is all this module stuff stupid? Should I just use rational polynomials and be done with it? Sympy? \frac{d^2}{dx^2}y = 0 and \frac{d}{dx}y = 0 are different equations, describing different behaviors. Am I even capturing that though? Is my syzygy powered composition even right? It seemed to work on a couple small examples and I think it makes sense. I dunno. Open to comments.

Because univariate polynomials form a principal ideal domain (PID), my understanding is that we can also use Smith normal form rather than syzygies. Perhaps AbstractAlgebra.jl might be a better tool?

Will the syzygy thing be good for band theory? We’re in the multivariate setting then, so Smith normal form no longer applies.

A Buchberger in Julia

Just as Gaussian elimination, which puts linear systems into LU form, solves most linear algebra problems one might care about, Buchberger’s algorithm, which finds a Grobner basis of a system of multivariate polynomial equations, solves most questions you might ask. Some fun applications:

  • Linkages
  • Geometrical Theorem proving. Circles are x^2 + y^2 - 1 = 0 and so on.
  • Optics
  • Constraint satisfaction problems. x^2 - 1 = 0 gives you a boolean variable. It’s a horrible method but it works if your computer doesn’t explode.
  • Energy and momentum conservation. “Classical Feynman Diagrams” p1 + p2 = p3 + p4 and so on.
  • Frequency domain circuits and linear dynamical systems 😉 more on this another day

To learn more about Grobner bases I highly recommend Cox, Little, and O’Shea’s Ideals, Varieties, and Algorithms.

To understand what a Grobner basis is, first know that univariate polynomial long division is a thing. It’s useful for determining if one polynomial is a multiple of another. If so, then you’ll find the remainder is zero.
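
For instance, it’s a one-liner with the Polynomials package used in the ray tracing post above:

using Polynomials

# (x^2 - 1) divided by (x + 1): quotient x - 1, remainder 0, so it is a multiple
divrem(Polynomial([-1, 0, 1]), Polynomial([1, 1]))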

One could want to lift the problem of determining whether a polynomial is a multiple of others to multivariate polynomials. Somewhat surprisingly, the definition of long division then has some choice in it. Sure, x^2 is a term that is ahead of x, but is x a larger term than y? Than y^2? These different choices are admissible. In addition, one now has systems of equations. Which equation do we divide by first? It turns out to matter and to change the result. That is, unless one has converted into a Grobner basis.
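
Here is the textbook illustration of that order dependence (the example is from Cox, Little, and O’Shea), sketched with the divrem from MultivariatePolynomials. The exact remainders could depend on the package’s monomial ordering and reduction strategy, but the point is that they come out different:

using DynamicPolynomials
using MultivariatePolynomials
@polyvar x y

f = x^2*y + x*y^2 + y^2
_, r1 = divrem(f, [x*y - 1, y^2 - 1]) # textbook remainder: x + y + 1
_, r2 = divrem(f, [y^2 - 1, x*y - 1]) # textbook remainder: 2x + 1
r1 == r2 # false: this divisor set is not a Grobner basis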

A Grobner basis is a set of polynomials such that the remainder under multivariate division becomes unique, regardless of the order in which division occurs.

How does one find such a basis? In essence, kind of by brute force: you consider every pair of polynomials in your set that could lead to two different division choices (their S-polynomial captures the discrepancy), reduce it, and add the remainder to the set if it doesn’t vanish. Repeat until nothing new appears.

Julia has packages for multivariate polynomials. https://github.com/JuliaAlgebra/MultivariatePolynomials.jl defines an abstract interface and generic functions. DynamicPolynomials gives a flexible representation for construction. TypedPolynomials gives a faster representation.

These already implement the bulk of what we need to get a basic Buchberger going: data structures, arithmetic, and division with remainder. With one caveat: there is an already picked monomial ordering. And it’s not lexicographic, which is the nice one for eliminating variables. This would not be too hard to change though?

Polynomial long division with respect to a set of polynomials is implemented here

https://github.com/JuliaAlgebra/MultivariatePolynomials.jl/blob/9a0f7bf531ba3346f0c2ccf319ae92bf4dc261af/src/division.jl#L60

Unfortunately (or fortunately? A good learning experience: I learned some stuff about data structures and types in Julia, so that’s nice), quite late I realized that a very similar Grobner basis algorithm to the one below is implemented inside of the SemiAlgebraic.jl package. Sigh.

using MultivariatePolynomials
using DataStructures


function spoly(p,q)
    pq = lcm(leadingmonomial(p),leadingmonomial(q))
    return div(  pq , leadingterm(p) ) * p - div(pq , leadingterm(q)) * q
end
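
For example, for the pair used in the test further below, the leading monomials x^2 and x^3 have least common multiple x^3, so the S-polynomial cancels it two ways and keeps the difference:

using DynamicPolynomials
@polyvar x y

f = x^2 - x*y
g = x^3 - y
spoly(f, g) # == x*f - g == -x^2*y + y under the package's default ordering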

function isgrobner(F::Array{T}) where {T <: AbstractPolynomialLike} # check buchberger criterion
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            s = spoly(f1,f2)
            _,s = divrem(s,F)
            if !iszero(s)
                return false
            end
        end
    end
    return true
end

function buchberger(F::Array{T}) where {T <: AbstractPolynomialLike}
    pairs = Queue{Tuple{T,T}}()
    # initialize with all pairs from F
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            enqueue!(pairs, (f1,f2))
        end
    end
    
    # consider all possible s-polynomials and reduce them
    while !isempty(pairs)
        (f1,f2) = dequeue!(pairs)
        s = spoly(f1,f2)
        _,s = divrem(s,F)
        if !iszero(s) #isapproxzero? Only add to our set if doesn't completely reduce
            for f in F
                enqueue!(pairs, (s,f))
            end
            push!(F,s)
        end
    end

    # reduce redundant entries in grobner basis.
    G = Array{T}(undef, 0)
    while !isempty(F)
        f = pop!(F)
        _,r = divrem(f, vcat(F,G))
        if !iszero(r)
            push!(G,r)
        end
    end
    
    return G
end

Some usage. You can see here that Gaussian elimination, implemented by the backslash operator, is a special case of taking the Grobner basis of a linear set of equations:


using DynamicPolynomials
@polyvar x y

buchberger( [ x + 1.0 + y   , 2.0x + 3y + 7  ] )
#= 
2-element Array{Polynomial{true,Float64},1}:
 -0.5y - 2.5
 x - 4.0
=#

[ 1 1 ; 2  3 ] \ [-1 ; -7]
#=
2-element Array{Float64,1}:
  4.0
 -5.0
=#


buchberger( [ x^3 - y , x^2 - x*y ])
#=
3-element Array{Polynomial{true,Int64},1}:
 -xy + y²
 y³ - y
 x² - y²
=#

Improvements

Many. This is not a good Buchberger implementation, but it is simple. See http://www.scholarpedia.org/article/Buchberger%27s_algorithm for some tips, which include criteria for avoiding unneeded S-polynomial pairs and smart orderings. Better Buchberger implementations use the F4 or F5 algorithm, which use sparse matrix facilities to perform many division steps in parallel. My vague impression of the F4 algorithm is that you prefill a sparse matrix (rows correspond to an S-polynomial or a monomial multiple of your current basis, columns correspond to monomials) with the monomial multiples of your current basis that you know you might need.

In my implementation, I’m tossing away the div part of divrem. It can be useful to retain the quotients so you know how to write your Grobner basis in terms of the original one.

You may want to look at Singular.jl, the Julia bindings to Singular.


Unification in Julia

Unification is a workhorse of symbolic computation. Comparing two terms (two syntax trees with named variable spots), we can figure out the most general substitution for the variables that makes them syntactically match.

It is a sister to pattern matching, but it has an intrinsic bidirectional flavor that makes it feel more powerful and declarative.

Unification can be implemented efficiently (not that I have done so yet) with some interesting variants of the disjoint set / union-find data type.

  • The magic of Prolog is basically built in unification + backtracking search.
  • The magic of polymorphic type inference in Haskell and OCaml comes from unification of type variables.
  • Part of magic of SMT solvers using the theory of uninterpreted functions is unification.
  • Automatic and Interactive Theorem provers have unification built in somewhere.

To describe terms I made simple data types for variables, modeled on those in SymbolicUtils (I probably should just use the definitions in SymbolicUtils, but I was trying to keep it simple).

#variables
struct Sym
    name::Symbol
end

struct Term
    f::Symbol
    arguments::Array{Any} # Array{Union{Term,Sym}} faster/better?
end

The implementation by Norvig and Russell for their AI book is an often copied, simple implementation of unification. It is small and kind of straightforward. You travel down the syntax trees, and when you hit variables you try to put them into your substitution dictionary. Although, like anything that touches substitution, it can be easy to get wrong. See the Norvig note linked below.

I used multiple dispatch as a kind of pattern matching on algebraic data types, dispatching on whether the arguments are terms or variables. It’s kind of nice, but unclear to me whether obscenely slow or not. This is not a high performance implementation of unification in any case.

# does x occur anywhere inside y, after chasing bindings in s? Returns a Bool.
occur_check(x::Sym,y::Term,s) = any(occur_check(x, a, s) for a in y.arguments)

function occur_check(x::Sym,y::Sym,s)
    if x == y
        return true
    elseif haskey(s,y)
        return occur_check(x, s[y], s) # chase the binding
    else
        return false
    end  
end


function unify(x::Sym, y::Union{Sym,Term}, s) 
   if x == y
        return s
   elseif haskey(s,x)
        return unify(s[x], y, s)
   elseif haskey(s,y) # This is the norvig twist
        return unify(x, s[y], s)
   elseif occur_check(x,y,s)
        return nothing
   else
        s[x] = y
        return s
   end
end

unify(x::Term, y::Sym, s) = unify(y,x,s)

function unify(x :: Term, y :: Term, s)
    if x.f == y.f && length(x.arguments) == length(y.arguments)
        for (x1, y1) in zip(x.arguments, y.arguments)
            if unify(x1,y1,s) == nothing
                return nothing
            end
        end
        return s
    else
        return nothing
    end
end

unify(x,y) = unify(x,y,Dict())

I also made a small macro for converting simple Julia expressions to my representation. It uses the Prolog convention that names starting with a capital letter are variables.

function string2term(x)
    if x isa Symbol
        name = String(x)
        if isuppercase(name[1])
           return Sym( x)
        else
           return Term( x, []  )
        end
    elseif x isa Expr
        @assert(x.head == :call)
        arguments = [string2term(y) for y in x.args[2:end] ]
        return Term( x.args[1], arguments )
    end
end
macro string2term(x)
    return :( $(string2term(x)) )
end

print(unify( @string2term(p(X,g(a), f(a, f(a)))) , @string2term(p(f(a), g(Y), f(Y, Z)))))
# Dict{Any,Any}(Sym(:X) => Term(:f, Any[Term(:a, Any[])]),Sym(:Y) => Term(:a, Any[]),Sym(:Z) => Term(:f, Any[Term(:a, Any[])]))

Links

Unification: Multidisciplinary Survey by Knight https://kevincrawfordknight.github.io/papers/unification-knight.pdf

https://github.com/roberthoenig/FirstOrderLogic.jl/tree/master/src A julia project for first order logic that also has a unification implementation, and other stuff

An interesting category theoretic perspective on unification http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3615 what is unification by Goguen

There is also a slightly hidden implementation in sympy (it does not appear in the docs?) http://matthewrocklin.com/blog/work/2012/11/01/Unification https://github.com/sympy/sympy/tree/master/sympy/unify

PyRes https://github.com/eprover/PyRes/blob/master/unification.py

Norvig unify
https://github.com/aimacode/aima-python/blob/9ea91c1d3a644fdb007e8dd0870202dcd9d078b6/logic4e.py#L1307

Norvig – a widespread unification bug
http://norvig.com/unify-bug.pdf

Efficient unification note
ftp://ftp.cs.indiana.edu/pub/techreports/TR242.pdf

blog post
https://eli.thegreenplace.net/2018/unification/

Efficient representations for triangular substitutions
https://users.soe.ucsc.edu/~lkuper/papers/walk.pdf

Conor McBride – first-order substitution, structurally recursive, with dependent types
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=880725E316FA5E3540EFAD83C0C2FD88?doi=10.1.1.25.1516&rep=rep1&type=pdf

z3 unifier – an example of an actually performant unifier
https://github.com/Z3Prover/z3/blob/520ce9a5ee6079651580b6d83bc2db0f342b8a20/src/ast/substitution/unifier.cpp

Warren Abstract Machine Tutorial Reconstruction http://wambook.sourceforge.net/wambook.pdf

Handbook of Automated Reasoning – has a chapter on unification

Higher Order Unification – LambdaProlog, Miller unification

Syntax trees with variables in them are a way to represent sets of terms (possibly infinite sets!). In that sense one can ask whether we can form the union or intersection of these sets. The intersection is the most general unifier. The union is not, in general, expressible via a single term with variables. We can only overapproximate it, much as the union of convex sets is not necessarily convex although its hull is. This is a join on a term lattice, and computing it is the process of anti-unification.

What about the complement of these sets? Not really; with the representation we’ve chosen, we can’t have an interesting negation. What about the difference of two sets?

I had an idea a while back about programming with relations, where I laid out some interesting combinators. I represented only finite relations, as those can be easily enumerated.
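
Returning to the join: here is a rough sketch of anti-unification over the Term/Sym types above. The helper is mine, not from any library, and it introduces a fresh variable at every disagreement, which overshoots a true least general generalization (that would reuse one variable per repeated disagreement pair).

# anti-unification sketch: compute a common generalization of two terms
function antiunify(u, v)
    if u isa Term && v isa Term && u.f == v.f && length(u.arguments) == length(v.arguments)
        return Term(u.f, [antiunify(a, b) for (a, b) in zip(u.arguments, v.arguments)])
    elseif u == v # identical variables (and other leaves)
        return u
    else
        return Sym(gensym("V")) # disagreement: generalize with a fresh variable
    end
end

antiunify(@string2term(f(a, b)), @string2term(f(b, b)))
# => a Term representing f(V, b) for some fresh variable V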

Walk on Spheres Method in Julia

I saw a cool tweet (and corresponding conference paper) by Keenan Crane

http://www.cs.cmu.edu/~kmcrane/Projects/MonteCarloGeometryProcessing/index.html

I was vaguely aware that one can use a Monte Carlo method to solve the boundary value Laplace equation \nabla^2 \phi = 0 , but I don’t think I had seen the walk on spheres variant of it before. I think Crane’s point is how similar all this is to stuff graphics people already do and do well. It’s a super cool paper. Check it out.

Conceptually, I think it is plausible that the Laplace equation and a Monte Carlo walk are related because the static diffusion equation \nabla^2 n = 0 from Fick’s law ultimately comes from the Brownian motion of little guys wobbling about, from a microscopic perspective.

Slightly more abstractly, both linear differential equations and random walks can be described by matrices: a finite difference matrix (for concreteness) K and a transition matrix of jump probabilities T. The differential equation is discretized to Kx=b, and the stationary probability distribution obeys Tp=b, where b holds the sources and sinks at the boundary.
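
Here is a minimal sketch of the discretized side of that correspondence; the grid size and boundary values are made up, chosen to match the 1-d example further below:

using LinearAlgebra

# 5 interior points on (0,1), with boundary values phi(0) = 0 and phi(1) = 10
n = 5
K = SymTridiagonal(fill(-2.0, n), fill(1.0, n - 1)) # finite difference second derivative
b = zeros(n)
b[end] = -10.0 # the boundary value enters as a source term on the last row
K \ b # ≈ [1.67, 3.33, 5.0, 6.67, 8.33], the straight line we expect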

The mean value property of the Laplace equation allows one to speed this process up. Instead of having a ton of little walks, you can just walk out randomly sampling on the surface of big spheres. en.wikipedia.org/wiki/Walk-on-spheres_method. Alternatively, you can think of it this way: every random walk eventually exits a sphere, and it exits at a uniformly random spot on the sphere’s surface.

So here’s the procedure. Pick a point you want the value of \phi at. Make the biggest sphere you can that stays in the domain. Pick a random point on that sphere. If that point is on the boundary, record the boundary value there; otherwise iterate. Do this many, many times; the average of the boundary values you recorded is the value of \phi.

This seems like a good example for Julia use. It would be somewhat difficult to code this up efficiently in python using vectorized numpy primitives. Maybe in the future we could try to parallelize or do this on the GPU? Monte Carlo methods like these are quite parallelizable.

The solution of the 1-d Laplace equation is absolutely trivial. If the second derivative is 0, then \phi = a + b x . This line is found by fitting it to the two endpoint values.

So we’re gonna get a line out

using LinearAlgebra
using Plots # needed for the plot/plot! calls below
avg = 0
phi0 = 0
phi1 = 10
x_0 = 0.75
function monte_run(x)
    while true
            l = rand(Bool) # go left?
            if (l && x <= 0.5) # finish at left edge 0
                return phi0
            elseif (!l && x >= 0.5) # finish at right edge 1
                return phi1
            else
                if x <= 0.5 # move away from 0
                    x += x
                else
                    x -= 1 - x # move away from 1
                end
            end
    end
end

monte_runs = [monte_run(x) for run_num =1:100, x=0:0.05:1 ]
import Statistics
avgs = vec(Statistics.mean( monte_runs , dims=1))
stddevs = vec(Statistics.std(monte_runs, dims=1)) ./ sqrt(size(monte_runs)[1]) # something like this right?

plot(0:0.05:1, avgs, yerror=stddevs)
plot!(0:0.05:1,  (0:0.05:1) * 10 )

And indeed we do.

You can do a very similar thing in 2d. Here I use the boundary values on a disc corresponding to x^2 - y^2 (which is a simple exact solution of the Laplace equation).



function monte_run_2d(phi_b, x)
    while true
            r = norm(x)
            if r > 0.95 # good enough
                return phi_b(x)
            else
                dr = 1.0 - r #assuming big radius of 1
                θ = 2 * pi * rand(Float64) # uniform random angle on the sphere (circle)
                x[1] += dr * cos(θ)
                x[2] += dr * sin(θ)
            end
    end
end


monte_run_2d( x -> x[1],  [0.0 0.0] )


monte_runs = [monte_run_2d(x -> x[1]^2 - x[2]^2 ,  [x 0.0] ) for run_num =1:1000, x=0:0.05:1 ]

import Statistics
avgs = vec(Statistics.mean( monte_runs , dims=1))
stddevs = vec(Statistics.std(monte_runs, dims=1)) ./ sqrt(size(monte_runs)[1]) # something like this right?
plot(0:0.05:1, avgs, yerror=stddevs)
plot!(0:0.05:1,  (0:0.05:1) .^2 )

There’s more notes and derivations in my notebook here https://github.com/philzook58/thoughtbooks/blob/master/monte_carlo_integrate.ipynb