Notes on Synthesis and Equation Proving for Catlab.jl

Catlab is a library and growing ecosystem (I guess the ecosystem is called AlgebraicJulia now) for computational or applied category theory, whatever that may end up meaning.

I have been interested to see if I could find some low-hanging fruit by applying off-the-shelf automated theorem proving tech to Catlab.jl.

There are a couple of problems where it seems like some headway might be made this way:

  • Inferring the type of expressions. Catlab category syntax is pretty heavily annotated by objects, so this is relatively easy. (For example, id is explicitly tagged with the object at which it is based.)
  • Synthesizing morphisms of a given type.
  • Proving equations

In particular, two promising candidates for these problems are eprover/Vampire-style automated theorem provers and prolog/kanren-style logic programming.

Generalized Algebraic Theories (GATs)

Catlab is built around something known as a Generalized Algebraic Theory. https://algebraicjulia.github.io/Catlab.jl/dev/#What-is-a-GAT? In order to use more conventional tooling, we need to understand GATs in a way that is acceptable to these tools. Basically, can we strip the GAT down to first order logic?

I found GATs rather off-putting at first glance. Who ordered that? The nlab article is 1/4 enlightening and 3/4 obscuring. https://ncatlab.org/nlab/show/generalized+algebraic+theory But, at the end of the day, I think it's not such a crazy thing.

Because of time invested and natural disposition, I understand things much better when they are put in programming terms. As seems to be not uncommon in Julia, one defines a theory in Catlab using some specialized macro mumbo jumbo.

@theory Category{Ob,Hom} begin
  @op begin
    (→) := Hom
    (⋅) := compose
  end

  Ob::TYPE
  Hom(dom::Ob, codom::Ob)::TYPE

  id(A::Ob)::(A → A)
  compose(f::(A → B), g::(B → C))::(A → C) ⊣ (A::Ob, B::Ob, C::Ob)

  (f ⋅ g) ⋅ h == f ⋅ (g ⋅ h) ⊣ (A::Ob, B::Ob, C::Ob, D::Ob,
                                f::(A → B), g::(B → C), h::(C → D))
  f ⋅ id(B) == f ⊣ (A::Ob, B::Ob, f::(A → B))
  id(A) ⋅ f == f ⊣ (A::Ob, B::Ob, f::(A → B))
end
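For concreteness, here is roughly how terms of this theory get built using Catlab's free category syntax (a sketch from memory; the module name has moved around between Catlab versions, so check the docs):

using Catlab, Catlab.Theories  # older versions call this Catlab.Doctrines

A, B, C = Ob(FreeCategory, :A, :B, :C)
f = Hom(:f, A, B)
g = Hom(:g, B, C)
id(A)          # note: tagged by the object A
compose(f, g)  # a morphism of type Hom(A, C)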

Ok, but this macro boils down to a data structure describing the syntax, typing relations, and axioms of the theory. This data structure is not necessarily meant to be used by end users, and may change in its specifics, but I find it clarifying to see it.

Just like my python survival toolkit involves calling dir on everything, my Julia survival toolkit involves hearty application of dump and @macroexpand on anything I can find.

We can see three slots: types, terms, and axioms. The types slot describes the signatures of the type constructors, how many parameters they have and of what sort. The terms slot describes the functions and constants of the theory. It's all kind of straightforward, I think. Try to come up with a data structure for this and you'll probably come up with something similar.
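Schematically, the shape is something like this (a sketch with field names taken from the dump below, not the literal Catlab source):

struct Theory
    types::Vector{TypeConstructor}    # type constructors: Ob, Hom
    terms::Vector{TermConstructor}    # term constructors: id, compose
    axioms::Vector{AxiomConstructor}  # equations: associativity, unit laws
    aliases::Dict{Symbol,Symbol}      # notation, e.g. ⋅ => compose
end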

I’ve cut some stuff out of the dump because it’s so huge. I’ve placed the full dump at the end of the blog post.

>>> dump(theory(Category))

Catlab.GAT.Theory
  types: Array{Catlab.GAT.TypeConstructor}((2,))
    1: Catlab.GAT.TypeConstructor
      name: Symbol Ob
      params: Array{Symbol}((0,))
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((0,))
        vals: Array{Union{Expr, Symbol}}((0,))
        ndel: Int64 0
        dirty: Bool false
      doc: String " Object in a category "
    2: ... More stuff
  terms: Array{Catlab.GAT.TermConstructor}((2,))
    1: Catlab.GAT.TermConstructor
      name: Symbol id
      params: Array{Symbol}((1,))
        1: Symbol A
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol A
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((1,))
          1: Symbol A
        vals: Array{Union{Expr, Symbol}}((1,))
          1: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: ... More stuff
  axioms: Array{Catlab.GAT.AxiomConstructor}((3,))
    1: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol f
              3: Symbol g
          3: Symbol h
      right: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol g
              3: Symbol h
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[5, 0, 0, 0, 1, 0, 4, 0, 2, 7, 0, 6, 0, 0, 0, 3]
        keys: Array{Symbol}((7,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol D
          5: Symbol f
          6: Symbol g
          7: Symbol h
        vals: Array{Union{Expr, Symbol}}((7,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Symbol Ob
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          6: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
          7: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol C
              3: Symbol D
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: ... More stuff
  aliases: ... Stuff

This infrastructure is not necessarily for category theory alone despite being in a package called Catlab. You can describe other algebraic theories, like groups, but you won’t need the full flexibility in typing relations that the “Generalized” of the GAT gets you. The big hangup of category theory that needs this extra power is that categorical composition is a partial function. It is only defined for morphisms whose types line up correctly, whereas any two group elements can be multiplied.

@theory Group{G} begin

  G::TYPE

  id()::G
  mul(f::G, g::G)::G
  inv(x::G)::G

  mul(mul(f, g), h) == mul(f, mul(g, h)) ⊣ (f::G, g::G, h::G)
   # and so on
end

Back to the first order logic translation. If you think about it, the turnstile ⊣ separating off the context in the Catlab theory definition is basically an implication. The definition id(A)::Hom(A,A) ⊣ (A::Ob) can be read like so: for all A, if A has type Ob then id(A) has type Hom(A,A). We can write this in first order logic using a predicate for the typing relation: \forall A. \; type(A,Ob) \implies type(id(A), Hom(A,A)).
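Doing the same for the compose declaration, each entry of the context becomes a hypothesis:

\forall A, B, C, f, g. \; (type(A,Ob) \land type(B,Ob) \land type(C,Ob) \land type(f,Hom(A,B)) \land type(g,Hom(B,C))) \implies type(compose(f,g), Hom(A,C))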

The story I tell about this is that when everything is well typed, compose behaves as it axiomatically should, but when something is not well typed, compose can return total garbage. This is one way to make a partial function total: just define it to return random trash on the undefined domain values, or rather, be unwilling to commit to what it does in that case.

Even though they are the same thing, I have great difficulty getting over the purely syntactical barrier of _::_ vs type(_,_). Infix punctuation never feels like a predicate to me. Maybe I’m crazy.

Turnstiles in general are usually interchangeable with or reflections of implication in some sense. So are the big horizontal lines of inference rules for that matter. I find this all very confusing.

Everything I’ve said above is a bold claim that could actually be proven by demonstrating a rigorous correspondence, but I don’t have enough interest to overcome the tremendous skill gap needed to do so. It could very easily be that I’m missing subtleties.

Automated Theorem Provers

While the term automated theorem prover could describe any theorem prover that is automated, it happens to connote a particular class of first order logic automated provers, of which the E prover and Vampire are canonical examples.

In a previous post, I tried axiomatizing category theory to these provers in a different way https://www.philipzucker.com/category-theory-in-the-e-automated-theorem-prover/ , with a focus on the universal properties of categorical constructions. Catlab has a different flavor and a different encoding seems desirable.

What is particularly appealing about this approach is that these systems are hard-wired to handle equality efficiently, so they can handle the equational specification of a Catlab theory. I don’t currently know how to interpret the proofs they output into something more human-comprehensible.

Also, I wasn’t originally aware of this, but eprover has a mode --conjectures-are-questions that will return the answers to existential queries. In this way, eprover can be used as a synthesizer for morphisms of a particular type. This flag gives eprover query capabilities similar to prolog.

eprover cartcat.tptp --conjectures-are-questions --answers=1 --silent

One small annoying hiccup is that TPTP syntax takes the prolog convention of capitalizing quantified variables. This is not the Catlab convention. A simple way to fix this is to prepend Var to quantified variables and const to constant function symbols.

All of the keys in the context dictionary are the quantified variables in a declaration. We can build a map to symbols where they are prefixed with Var.

varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(myterm.context )))

And then we can use this map to prefixify other expressions.

prefixify(x::Symbol, varmap) = haskey(varmap,x) ?  varmap[x] : Symbol( "const$x")
prefixify(x::Expr, varmap) = Expr(x.head, map(y -> prefixify(y, varmap),  x.args)... )
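For example (a hypothetical call):

prefixify(:(Hom(A, B)), Dict(:A => :VarA, :B => :VarB))
# returns :(constHom(VarA, VarB))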

Given these, it’s just some string interpolation hackery to port a Catlab typing definition into a TPTP-syntax axiom about a typing relation.

function build_typo(terms)
    map(myterm ->  begin
                varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(myterm.context )))
                prefix_context = Dict(map(kv -> kv[1] => prefixify(kv[2] , varmap) , collect(myterm.context )))
                context_terms = map( kv -> "typo($(varmap[kv[1]]), $(kv[2]))", collect(prefix_context))
                conc = "typo( const$(myterm.name)($(join(map(p -> prefixify(p,varmap) , myterm.params), ", "))) , $(prefixify(myterm.typ, varmap)) )"
                if length(myterm.context) > 0
                    "
                    ![$(join(values(varmap),","))]: 
                        ($conc <=
                            ($(join( context_terms , " &\n\t"))))"
                else # special case for empty context
                    "$conc"
                end
                    end
    , terms)
end

You can spit out the axioms for a theory like so

query = join(map(t -> "fof( axiom$(t[1]) , axiom, $(t[2]) ).", enumerate(build_typo(theory(CartesianCategory).terms))), "\n")
fof( axiom1 , axiom, 
![VarA]: 
    (typo( constid(VarA) , constHom(VarA, VarA) ) <=
        (typo(VarA, constOb))) ).
fof( axiom2 , axiom, 
![Varf,VarA,VarB,Varg,VarC]: 
    (typo( constcompose(Varf, Varg) , constHom(VarA, VarC) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarB, VarC)) &
	typo(VarC, constOb))) ).
fof( axiom3 , axiom, 
![VarA,VarB]: 
    (typo( constotimes(VarA, VarB) , constOb ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom4 , axiom, 
![Varf,VarA,VarD,VarB,Varg,VarC]: 
    (typo( constotimes(Varf, Varg) , constHom(constotimes(VarA, VarC), constotimes(VarB, VarD)) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarD, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarC, VarD)) &
	typo(VarC, constOb))) ).
fof( axiom5 , axiom, typo( constmunit() , constOb ) ).
fof( axiom6 , axiom, 
![VarA,VarB]: 
    (typo( constbraid(VarA, VarB) , constHom(constotimes(VarA, VarB), constotimes(VarB, VarA)) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom7 , axiom, 
![VarA]: 
    (typo( constmcopy(VarA) , constHom(VarA, constotimes(VarA, VarA)) ) <=
        (typo(VarA, constOb))) ).
fof( axiom8 , axiom, 
![VarA]: 
    (typo( constdelete(VarA) , constHom(VarA, constmunit()) ) <=
        (typo(VarA, constOb))) ).
fof( axiom9 , axiom, 
![Varf,VarA,VarB,Varg,VarC]: 
    (typo( constpair(Varf, Varg) , constHom(VarA, constotimes(VarB, VarC)) ) <=
        (typo(Varf, constHom(VarA, VarB)) &
	typo(VarA, constOb) &
	typo(VarB, constOb) &
	typo(Varg, constHom(VarA, VarC)) &
	typo(VarC, constOb))) ).
fof( axiom10 , axiom, 
![VarA,VarB]: 
    (typo( constproj1(VarA, VarB) , constHom(constotimes(VarA, VarB), VarA) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).
fof( axiom11 , axiom, 
![VarA,VarB]: 
    (typo( constproj2(VarA, VarB) , constHom(constotimes(VarA, VarB), VarB) ) <=
        (typo(VarA, constOb) &
	typo(VarB, constOb))) ).

% example synthesis queries
%fof(q , conjecture, ?[F]: (typo( F, constHom(a , a) )  <=  ( typo(a, constOb)  )   ) ).
%fof(q , conjecture, ?[F]: (typo( F, constHom( constotimes(a,b) , constotimes(b,a)) )  <=  ( typo(a, constOb) & typo(b,constOb) )   ) ).
%fof(q , conjecture, ?[F]: (typo( F, constHom( constotimes(a,constotimes(b,constotimes(c,d))) , d) )  <=  ( typo(a, constOb) & typo(b,constOb) & typo(c,constOb) & typo(d,constOb) )   ) ). % this one hurts already without some axiom pruning

For dealing with the equations of the theory, I believe we can just ignore the typing relations. Each equation axiom preserves well-typedness, and as long as our query is also well typed, I don’t think anything will go awry. Here it would be nice to have the proof output of the tool be more human readable, but I don’t know how to do that yet. Edit: It went awry. I currently think this is completely wrong.

function build_eqs(axioms)
        map(axiom -> begin
            @assert axiom.name == :(==)
            varmap = Dict(map(kv -> kv[1] => Symbol("Var$(kv[1])")  , collect(axiom.context )))
            l = prefixify(axiom.left, varmap)
            r = prefixify(axiom.right, varmap)
            "![$(join(values(varmap), ", "))]: $l = $r" 
            end,
        axioms)
end

t = join( map( t -> "fof( axiom$(t[1]), axiom, $(t[2]))."  , enumerate(build_eqs(theory(CartesianCategory).axioms))), "\n")
print(t)
fof( axiom1, axiom, ![Varf, VarA, VarD, VarB, Varh, Varg, VarC]: constcompose(constcompose(Varf, Varg), Varh) = constcompose(Varf, constcompose(Varg, Varh))).
fof( axiom2, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constid(VarB)) = Varf).
fof( axiom3, axiom, ![Varf, VarA, VarB]: constcompose(constid(VarA), Varf) = Varf).
fof( axiom4, axiom, ![Varf, VarA, VarB, Varg, VarC]: constpair(Varf, Varg) = constcompose(constmcopy(VarC), constotimes(Varf, Varg))).
fof( axiom5, axiom, ![VarA, VarB]: constproj1(VarA, VarB) = constotimes(constid(VarA), constdelete(VarB))).
fof( axiom6, axiom, ![VarA, VarB]: constproj2(VarA, VarB) = constotimes(constdelete(VarA), constid(VarB))).
fof( axiom7, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constmcopy(VarB)) = constcompose(constmcopy(VarA), constotimes(Varf, Varf))).
fof( axiom8, axiom, ![Varf, VarA, VarB]: constcompose(Varf, constdelete(VarB)) = constdelete(VarA)).

% silly example query
fof( q, conjecture, ![Varf, Varh, Varg, Varj ]: constcompose(constcompose(constcompose(Varf, Varg), Varh), Varj) = constcompose(Varf, constcompose(Varg, constcompose(Varh,Varj)) )).

It would be possible and perhaps desirable to fully automate the call to eprover as an external process and then parse the results back into Julia. Julia has some slick external process facilities https://docs.julialang.org/en/v1/manual/running-external-programs/
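Something like this minimal sketch, where the file name and the example conjecture are mine and the flags are the ones used above:

# write out the generated axioms plus a synthesis conjecture, then shell out to eprover
open("/tmp/cartcat.tptp", "w") do io
    println(io, query)  # the axiom string built earlier
    println(io, "fof(q , conjecture, ?[F]: (typo( F, constHom(a , a) )  <=  ( typo(a, constOb)  )   ) ).")
end
out = read(`eprover /tmp/cartcat.tptp --conjectures-are-questions --answers=1 --silent`, String)
println(out)  # the SZS answer lines would then need to be parsed back into Julia terms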

Prolog and Kanrens

It was an interesting revelation to me that the typing relations for morphisms as described in Catlab seem like they are already basically in a form amenable to prolog or a kanren. The variables are universally quantified and there is only one term to the left of the turnstile (which is basically prolog’s :-). This is a Horn clause.

In a recent post I showed how to implement something akin to a minikanren in Julia https://www.philipzucker.com/yet-another-microkanren-in-julia/ I built that with this application in mind.

Here’s an example I wrote by hand in minikanren:

(define (typo f t)
(conde
  [(fresh (a) (== f 'id) (== t `(hom ,a ,a))) ]
  [(== f 'f) (== t '(hom a c))]
  [(fresh (a b) (== f 'snd) (== t `(hom ( ,a ,b) ,b)))]
  [(fresh (a b) (== f 'fst) (== t `(hom ( ,a ,b) ,a)))]
  [(fresh (g h a b c) (== f `(comp ,g ,h))
                       (== t `(hom ,a ,c)) 
                       (typo g `(hom ,a ,b ))
                       (typo h `(hom ,b ,c)))]
  [ (fresh (g h a b c) (== f `(fan ,g ,h))
                       (== t `(hom ,a (,b ,c))) 
                       (typo g `(hom ,a ,b ))
                       (typo h `(hom ,a ,c)))  ]
  )
  )

;queries
; could lose the hom
;(run 3 (q) (typo  q '(hom (a b) a)))
;(run 3 (q) (typo  q '(hom ((a b) c) a)))
(run 3 (q) (typo  q '(hom (a b) (b a))))

And here is a similar thing written in my Julia minikanren. I had to depth limit it because I goofed up the fair interleaving in my implementation.

function typo(f, t, n)
    fresh2( (a,b) -> (f ≅ :fst) ∧ (t  ≅ :(Hom(tup($a,$b),$a)))) ∨
    fresh2( (a,b) -> (f ≅ :snd) ∧ (t  ≅ :(Hom(tup($a,$b),$b)))) ∨
    freshn( 6, (g,h,a,b,c,n2) -> (n ≅ :(succ($n2))) ∧ (f ≅ :(comp($g, $h)))  ∧ (t  ≅ :(Hom($a,$c))) ∧ @Zzz(typo(g, :(Hom($a,$b)), n2))  ∧ @Zzz(typo(h, :(Hom($b,$c)), n2))) ∨
    fresh(a -> (f ≅ :(id($a))) ∧ (t  ≅ :(Hom($a,$a))))
end


run(1, f ->  typo( f  , :(Hom(tup(a,tup(b,tup(c,d))),d)), nat(5)))

Bits and Bobbles

Discussion on the Catlab zulip. Some interesting discussion here such as an alternative encoding of GATs to FOL https://julialang.zulipchat.com/#narrow/stream/230248-catlab.2Ejl/topic/Automatic.20Theorem.20Proving/near/207919104

Of course, it’d be great if these solvers were bulletproof. But they aren’t. They are solving very hard questions more or less by brute force, so how far they can scale can only be resolved by experimentation. It may be that using these solvers is a dead end. These solvers do have a number of knobs to turn. The command line argument list to eprover is enormous.

These solvers are all facing some bad churn problems:

  • Morphism composition is known to be a thing that makes dumb search go totally off the rails.
  • The identity morphism can be composed an arbitrary number of times. This also makes solvers churn.
  • Some Catlab theories are overcomplete.
  • Some Catlab theories are capable of building up and breaking down the same thing over and over (complicated encodings of id like pair(fst,snd)).

Use SMT? https://github.com/ahumenberger/Z3.jl SMT is capable of encoding the equational problems if you use quantifiers (which, last I checked, these bindings do not yet export). Results may vary. SMT with quantifiers is not the place where they shine the most. Is there anything else that can be fruitfully encoded to SMT? SAT?

Custom heuristics for search. Purely declarative is too harsh a goal. Having a pure Julia solution is important here.

GAP.jl https://github.com/oscar-system/GAP.jl has facilities for Knuth-Bendix completion. This might be useful for finitely presented categories. It would be interesting to explore what pieces of computational group theory are applicable or analogous to computational category theory.

>>> dump(theory(Category))

Catlab.GAT.Theory
  types: Array{Catlab.GAT.TypeConstructor}((2,))
    1: Catlab.GAT.TypeConstructor
      name: Symbol Ob
      params: Array{Symbol}((0,))
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((0,))
        vals: Array{Union{Expr, Symbol}}((0,))
        ndel: Int64 0
        dirty: Bool false
      doc: String " Object in a category "
    2: Catlab.GAT.TypeConstructor
      name: Symbol Hom
      params: Array{Symbol}((2,))
        1: Symbol dom
        2: Symbol codom
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0]
        keys: Array{Symbol}((2,))
          1: Symbol dom
          2: Symbol codom
        vals: Array{Union{Expr, Symbol}}((2,))
          1: Symbol Ob
          2: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: String " Morphism in a category "
  terms: Array{Catlab.GAT.TermConstructor}((2,))
    1: Catlab.GAT.TermConstructor
      name: Symbol id
      params: Array{Symbol}((1,))
        1: Symbol A
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol A
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((1,))
          1: Symbol A
        vals: Array{Union{Expr, Symbol}}((1,))
          1: Symbol Ob
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: Catlab.GAT.TermConstructor
      name: Symbol compose
      params: Array{Symbol}((2,))
        1: Symbol f
        2: Symbol g
      typ: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol Hom
          2: Symbol A
          3: Symbol C
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[4, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 5, 0, 0, 0, 3]
        keys: Array{Symbol}((5,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol f
          5: Symbol g
        vals: Array{Union{Expr, Symbol}}((5,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
  axioms: Array{Catlab.GAT.AxiomConstructor}((3,))
    1: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol f
              3: Symbol g
          3: Symbol h
      right: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol compose
              2: Symbol g
              3: Symbol h
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[5, 0, 0, 0, 1, 0, 4, 0, 2, 7, 0, 6, 0, 0, 0, 3]
        keys: Array{Symbol}((7,))
          1: Symbol A
          2: Symbol B
          3: Symbol C
          4: Symbol D
          5: Symbol f
          6: Symbol g
          7: Symbol h
        vals: Array{Union{Expr, Symbol}}((7,))
          1: Symbol Ob
          2: Symbol Ob
          3: Symbol Ob
          4: Symbol Ob
          5: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
          6: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol B
              3: Symbol C
          7: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol C
              3: Symbol D
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    2: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Symbol f
          3: Expr
            head: Symbol call
            args: Array{Any}((2,))
              1: Symbol id
              2: Symbol B
      right: Symbol f
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((3,))
          1: Symbol A
          2: Symbol B
          3: Symbol f
        vals: Array{Union{Expr, Symbol}}((3,))
          1: Symbol Ob
          2: Symbol Ob
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
    3: Catlab.GAT.AxiomConstructor
      name: Symbol ==
      left: Expr
        head: Symbol call
        args: Array{Any}((3,))
          1: Symbol compose
          2: Expr
            head: Symbol call
            args: Array{Any}((2,))
              1: Symbol id
              2: Symbol A
          3: Symbol f
      right: Symbol f
      context: OrderedCollections.OrderedDict{Symbol,Union{Expr, Symbol}}
        slots: Array{Int32}((16,)) Int32[3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        keys: Array{Symbol}((3,))
          1: Symbol A
          2: Symbol B
          3: Symbol f
        vals: Array{Union{Expr, Symbol}}((3,))
          1: Symbol Ob
          2: Symbol Ob
          3: Expr
            head: Symbol call
            args: Array{Any}((3,))
              1: Symbol Hom
              2: Symbol A
              3: Symbol B
        ndel: Int64 0
        dirty: Bool true
      doc: Nothing nothing
  aliases: Dict{Symbol,Symbol}
    slots: Array{UInt8}((16,)) UInt8[0x01, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]
    keys: Array{Symbol}((16,))
      1: Symbol ⋅
      2: #undef
      3: #undef
      4: #undef
      5: #undef
      ...
      12: #undef
      13: #undef
      14: #undef
      15: #undef
      16: #undef
    vals: Array{Symbol}((16,))
      1: Symbol compose
      2: #undef
      3: #undef
      4: #undef
      5: #undef
      ...
      12: #undef
      13: #undef
      14: #undef
      15: #undef
      16: #undef
    ndel: Int64 0
    count: Int64 2
    age: UInt64 0x0000000000000002
    idxfloor: Int64 1
    maxprobe: Int64 0

Checkpoint: Implementing Linear Relations for Linear Time Invariant Systems

I’m feeling a little stuck on this one so I think maybe it is smart to just write up a quick checkpoint for myself and anyone who might have advice.

The idea is to reimplement the linear relations computations from here https://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ There is a lot more context written in that post, and probably necessary background for this one.

Linear relations algebra is a refreshing perspective for me on systems of linear equations. It has a notion of composition that seems, dare I say, almost as useful as matrix multiplication. Very high praise. This composition has a more bidirectional flavor than matrix multiplication, which makes it a good fit for describing physical systems, in which interconnection always influences both ways.

In the previous post, I used nullspace computations as my workhorse. The nullspace operation allows one to switch between a constraint (nullspace) and a generator (span) picture of a vector subspace. The generator view is useful for projection and linear union, and the constraint view is useful for partial-composition and intersection. The implementation of linear relation composition requires flipping between both views.
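In symbols, the same subspace V can be given in either view, and the nullspace operation flips a constraint matrix A into a generator matrix B:

V = \{ x : Ax = 0 \} \quad \text{(constraint view)} \qquad V = \{ Bv : v \text{ arbitrary} \} \quad \text{(generator view)}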

I’m reimplementing it in Julia for two reasons:

  • To use the Julia ecosystem’s implementation of module operations
  • To get a little of that Catlab.jl magic to shine on it.

It was a disappointment of the previous post that I could only treat resistor-like circuits. The new twist of using module packages allows treatment of inductor/capacitor circuits and signal flow diagrams.

When you transform into Fourier space, systems of linear differential equations become systems of polynomial equations, \frac{d}{dx} \rightarrow i \omega. From this perspective, modules seem like the appropriate abstraction rather than vector spaces. Modules are basically vector spaces where one doesn’t assume the operation of scalar division; in other words, the scalars form rings rather than fields. Polynomials are rings, not fields. In order to treat the new systems, I still need to be able to do linear algebraic-ish operations like nullspaces, except where the entries of the matrix are polynomials rather than floats.

Syzygies are basically the module analog of nullspaces. Syzygies are the combinations of generators that combine to zero. Considering the generators of a submodule as being column vectors, stacking them together makes a matrix. Taking linear combinations of the columns is what happens when you multiply a matrix by a vector. So the syzygies are the space of vectors for which this matrix multiplication gives 0, the “nullspace”.
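In symbols, for generators g_1, \ldots, g_n of a submodule of R^m over a ring R:

\mathrm{Syz}(g_1, \ldots, g_n) = \{ (c_1, \ldots, c_n) \in R^n : \sum_{i=1}^n c_i g_i = 0 \}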

Computer algebra packages offer syzygy computations. Julia has bindings to Singular, which does this. I have been having a significant and draining struggle to wrangle these libraries though. Am I going against the grain? Did the library authors go against the grain? Here’s what I’ve got, trying to match the Catlab naming conventions:

using Singular

import Nemo

using LinearAlgebra # : I

CC = Nemo.ComplexField(64)
P, (s,) = PolynomialRing(CC, ["s"])
i = Nemo.onei(CC) # P(i) ? The imaginary number

#helpers to deal with Singular.jl
eye(m) = P.(Matrix{Int64}(I, m, m)) # There is almost certainly a better way of doing this. Actually dispatching Matrix?
zayro(m,n) = P.(zeros(Int64,m,n)) #new zeros method?
mat1(m::Int64) = fill(P(m), (1,1) )
mat1(m::Float64) = fill(P(m), (1,1) )
mat1(m::spoly{Singular.n_unknown{Nemo.acb}}) = fill(m, (1,1))

# Objects are the dimensionality of the vector space
struct DynOb
    m::Int
end

# Linear relations, represented by the constraint input*x + output*y = 0 (my reading of the code below)
struct DynMorph
  input::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
  output::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
end

dom(x::DynMorph) = DynOb(size(x.input)[2])
codom(x::DynMorph) = DynOb(size(x.output)[2])
id(X::DynOb) = DynMorph(eye(X.m), -eye(X.m))

# add together inputs (mmerge in Catlab naming; used by dcounit below)
mmerge(X::DynOb) = DynMorph( [eye(X.m) eye(X.m)] , -eye(X.m) )


mcopy(X::DynOb) = DynMorph( [eye(X.m) ; eye(X.m)] , -eye(2*X.m) ) # copy input

delete(A::DynOb) = DynMorph( fill(P.(0),(0,A.m)) , fill(P.(0),(0,0)) )   
create(A::DynOb) = DynMorph( fill(P.(0),(0,0)) , fill(P.(0),(0,A.m)) )
dagger(x::DynMorph) = DynMorph(x.output, x.input)

# cup and cap operators
dunit(A::DynOb) = compose(create(A), mcopy(A))
dcounit(A::DynOb) = compose(mmerge(A), delete(A))


scale(M) = DynMorph( mat1(M),mat1(-1))
diff =  scale(i*s) # differentiation = multiplying by i omega
integ = dagger(diff)
#cupboy = DynMorph( [mat1(1) mat1(-1)] , fill(P.(0),(1,0)) )
#capboy = transpose(cupboy)

#terminal

# relational operations
# The meet
# Inclusion

# I think this is a nullspace calculation?
# almost all the code is trying to work around Singular's interface to one i can understand
function quasinullspace(A)
   rows, cols = size(A)
   vs = Array(gens(Singular.FreeModule(P, rows)))
   q = [sum(A[:,i] .* vs) for i in 1:cols]
   M = Singular.Module(P,q...)
   S = Singular.Matrix(syz(M)) # syz is the only meat of the computation
   return Base.transpose([S[i,j] for j=1:Singular.ncols(S), i=1:Singular.nrows(S) ])
end
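A hypothetical sanity check (the matrix and expected answer are mine): the columns [1, 1] and [s, s] satisfy s*col1 - col2 = 0, so the syzygy should be spanned by something proportional to [s, -1].

A = [P(1) s ;
     P(1) s ]
quasinullspace(A)  # expect a single column proportional to [s, -1]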

function compose(x::DynMorph,y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    A = [ x.input                x.output P.(zeros(Int64,nx,yo)) ;
          P.(zeros(Int64,ny,xi)) y.input  y.output    ]
    B = quasinullspace(A)
    projB = [B[1:xi       ,:] ;
             B[xi+yi+1:end,:] ]
    C = Base.transpose(quasinullspace(Base.transpose(projB)))
    return DynMorph( C[:, 1:xi] ,C[:,xi+1:end] )
end
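A hypothetical smoke test of composition (my example): relation-composing multiplication by s with multiplication by s should give multiplication by s^2.

f = compose(scale(s), scale(s))
# expect f.input ∝ [s^2] and f.output ∝ [-1], i.e. the relation s^2 * x - y = 0,
# up to Singular's normalization of the rows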

# basically the direct sum. The monoidal product of linear relations
function otimes( x::DynMorph, y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    return DynMorph( [ x.input                P.(zeros(Int64,nx,yi));
                       P.(zeros(Int64,ny,xi)) y.input               ],
                      [x.output                P.(zeros(Int64,nx,yo));
                       P.(zeros(Int64,ny,xo))  y.output               ])
    
end

I think this does basically work but it’s clunky.

Thoughts

I need to figure out Catlab’s diagram drawing abilities enough to show some circuits and some signal flow diagrams. Wouldn’t that be nice?

I should show concrete examples of composing passive filter circuits together.

There is a really fascinating paper by Jan Willems where he digs into a beautiful picture of this that I need to revisit https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2007.1.pdf

https://golem.ph.utexas.edu/category/2018/06/the_behavioral_approach_to_sys.html

Is all this module stuff stupid? Should I just use rational polynomials and be done with it? Sympy? \frac{d^2}{dx^2}y = 0 and \frac{d}{dx}y = 0 are different equations, describing different behaviors. Am I even capturing that though? Is my syzygy powered composition even right? It seemed to work on a couple small examples and I think it makes sense. I dunno. Open to comments.

Because univariate polynomials are a principal ideal domain (PID), we can also use Smith normal forms rather than syzygies, as I understand it. Perhaps AbstractAlgebra.jl might be a better tool?

Will the syzygy thing be good for band theory? We’re in the multivariate setting then, so Smith normal form no longer applies.

Category Theory in the E Automated Theorem Prover

At least in the circles I travel in, interactive theorem provers like Agda, Coq, Lean, Isabelle have more mindspace than automatic theorem provers. I haven’t seen much effort to explore category theory in the automatic provers so I thought I’d try it.

Interactive Systems

There are a number of projects formalizing category theory in these systems

All of these systems use some variant of higher order logic where you can quantify over propositions. This is very expressive, but also more difficult to automate (they do have significant automation in them, but this tends to be for filling in the relatively obvious intermediate details of a proof, not complete automation). Perhaps this has some relation to Gödel’s incompleteness theorem.

Automated Theorem Provers

There are other classes of theorem proving systems: automatic theorem provers and SMT solvers. Can we do category theory in them? Doesn’t Automatic sound real nice in principle?

ATP and SMT are similar in some respects but are architected differently and have slightly different use cases and strengths:

  • SMT solvers are based around SAT solvers and tractable subproblems (theories). They have subsystems that deeply understand linear equations, linear inequalities, systems of polynomials, the theory of uninterpreted functions, bit-blasting, and others. They combine these facilities via the Nelson-Oppen procedure. They are fairly weak at quantifier reasoning. They are good at problems that require lots of domain specific understanding. Examples of SMT solvers include Z3, CVC4, Alt-Ergo, Boolector. You can find more, and comparisons, at the SMT competition
  • While the term Automatic Theorem Prover (ATP) could mean anything, it has a tendency to denote a class of first order logic solvers based around resolution. Examples of such provers include Vampire, E, and Prover9. You can find more at the CADE competition. They are more oriented to abstract first order logic structures and quantifier reasoning.

A big downside of automatic methods is that once they start to fail, you’re more hosed than the interactive provers. Until then, it’s great though.

Categories in ATP

Category theory proofs have a feeling of being close to trivial (at least the ones I’ve seen, but I’ve mostly seen the trivial ones so…? ), amounting to laboriously expanding definitions and rewrite equations corresponding to commutation conditions. An automatic system to verify these seems useful.

What are the kinds of questions one wants to ask these provers?

  • Confirmation that a concrete mathematical structure (integers, reals, bools, abstract/concrete preorder, group, lattice) obeys the required categorical axioms/interface. The axioms of category structures are the conjectures here
  • Confirmation that abstract categorical constructions do what they’re supposed to. One presupposes categorical axioms and structures and asks conjecture conclusions. For example, given this square, does this other diagram commute? Is this “diagram chasing“?

These are yes/no questions. Although in the case of no, often a counterexample is emitted which can help show where you went awry. A third task that is more exciting to me, but harder and not an obvious stock capability of these provers is

  • Calculate/construct something categorical. For example, we might want to construct a condensed or efficient version of some program given by a categorical spec, or emit a categorical construction that has certain properties. There are clear analogies with program verification vs. program synthesis.

Now it appears that to some degree this is possible. I have noted that in the proof output, one can sometimes find terms that may correspond to the thing you desire, especially if you ask an existential conjecture, “does there exist a morphism with such and such a property”.

Formulating Category Theory in First Order Logic

TPTP is both a problem library and a specification language for first order problems for the purposes of computer provers. There is a nice overview video here. There is a nice web interface to explore different provers here. The TPTP library contains four different axiomatizations for categories, and a number of problems.

There is a good set of videos explaining how to formalize category axioms in a first order setting. https://www.youtube.com/watch?v=NjDZMWdDJKM&list=PL4FD0wu2mjWOtmhJsiVrCpzOAk42uhdz8&index=6&t=0s He has a couple of different formulations, actually. It’s interesting. Here’s a stack exchange question https://math.stackexchange.com/questions/2383503/category-theory-from-the-first-order-logic-point-of-view along with a small discussion of why it’s wrong-headed to even do such a thing. I’m not sure I agree with the second part.

Here is my encoding. I am not 100% confident anything I’ve done here is right. Note that composition is expressed as a ternary relation. This is one way of handling the fact that without stronger typing discipline, composition is a partial binary function. In order to compose, morphisms need to meet on an intermediate object. Categorical “typing” is expressed via logical constraint on the relation.

A trick one can use is to identify the identity arrows with the objects at which they are based. Since every object is required to have an identity arrow and every identity arrow points from and to a single object, they are in isomorphism. There is some conceptual muddiness that comes from this trick though. I’m not totally sold.

TPTP syntax is mostly straightforward but note that ! [X] is forall X, ? [X] is exists X, capital names are variables, lowercase names are constants. Quantifiers bind tighter than I personally expected, hence my parenthesis explosion.

% axioms of a category
% we would resupply this for every category involved?
% ! [X] is forall X, ? [X] is exists X. Capital names are variables
% lowercase names are constants.
fof( dom_cod, axiom, ![X] : dom(cod(X)) = cod(X)).
fof( cod_dom, axiom, ![X] : cod(dom(X)) = dom(X)).
fof( comp_is_unique, axiom, ![F, G, FG1, FG2] : ((comp(F,G,FG1) & comp(F,G,FG2)) => FG1 = FG2)   ).
fof( comp_objects_middle, axiom, ![F, G] : ((? [FG] : comp(F,G,FG)) <=> dom(F) = cod(G))).
fof( comp_dom, axiom, ![F, G, FG] : (comp(F,G,FG) => dom(G) = dom(FG))).
fof( comp_cod, axiom, ![F, G, FG] : (comp(F,G,FG) => cod(F) = cod(FG))).
fof( left_id, axiom, ![F] : comp(cod(F),F,F) ).
fof( right_id, axiom, ![F] : comp(F,dom(F),F) ).
% I've heard that composition axioms cause churn?
fof( comp_assoc, axiom, ![F, G, H, FG, GH, FGH1, FGH2] : ((comp(F,G,FG) & comp(FG,H,FGH1) & comp(F,GH,FGH2) & comp(G,H,GH)) => FGH1 = FGH2  )).

Here are some definitions. One could also just inline these definitions with a macro system. Uniqueness quantification occurs in universal properties. It’s a sort of subtle idea to encode into ordinary first order logic, which lacks uniqueness quantification. Are some encodings better than others? This is another place where macros to generate TPTP files would be useful. Uniqueness quantification is naturally expressible as a higher order predicate.
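The standard first order encoding of unique existence, which is the pattern the pullback definition below uses with its ?[U] : ![U1] : ... <=> U1 = U:

\exists! x. P(x) \quad \equiv \quad \exists x. \forall y. (P(y) \iff y = x)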

fof(monic_def, axiom,  
    ![M] : (monic(M) <=> (! [F,G] : (( ? [H] : (comp(M, F, H) & comp(M,G,H))) => F = G)))).

fof(commute_square_def, axiom, 
    ![F,G,H,K] : (commute_square(F,G,H,K) <=> (? [M] : (comp(F,G,M) & comp(H,K,M))))).

fof(pullback_def, axiom,
   ![F,G,P1,P2] : (pullback(F,G,P1,P2) <=>
        (commute_square(F,P1,G,P2) &
        (![Q1,Q2] : (commute_square(F,Q1,G,Q2) => 
            (?[U] : (! [U1] : ((comp(P1,U1,Q1) & comp(P2,U1,Q2)) <=> (U1 = U))))
        ))))).

Here are some pretty simple problems:

% should be a trivial statement, but isn't literally an axiom.
fof( codcod, conjecture, ![F] : cod(cod(F)) = cod(F) ).


% paste two commuting squares together gives another commuting square
fof( pasting_square,conjecture,   ![A,B,C,D, I,J,K, IB, JC] : ((commute_square(B,A,D,C) & commute_square(I,D,K,J) & comp(I,B, IB) & comp( J, C,JC)) 
                                                        =>  commute_square( IB,A, K,JC ) )).

One theorem that is not quite so trivial is the pullback of a monic is monic https://math.stackexchange.com/questions/2957202/proving-the-pullback-of-monics-is-monic. It’s ultimately not that complicated and yet difficult enough that it takes me a lot of head scratching. It crucially uses the uniqueness property of the pullback. Here’s an encoding of the conjecture.

include('cat.tptp').
include('constructions.tptp').

% warmup?
%fof(pullback_monic, conjecture, ![M, P1,P2] : ((monic(M) & pullback(cod(M),M,P1,P2)) => monic(P2))).

% pullback of monic is monic
fof(pullback_monic, conjecture, ![M, F, P1,P2] : ((monic(M) & pullback(F,M,P1,P2)) => monic(P1))).

Invoking the prover:

eprover --auto-schedule --cpu-limit=60 --proof-object monic_pullback.tptp

Vampire appears to do it faster. Again the easiest way to try it yourself or compare other solvers is the web interface System on TPTP http://www.tptp.org/cgi-bin/SystemOnTPTP. Caveat: given how many times I’ve screwed up writing this post, I’d give a 40% chance that that final theorem is actually expressing what I intended it to.

Bits and Bobbles

Encoding our questions into first order logic, which is powerful but not fully expressive, requires a lot of “macro”-like repetitiveness, when it is possible at all. I have found through experience that the extra macro capabilities given by Python for emitting Z3 problems are extremely powerful. For this reason, we should use a real programming language to emit these problems. I think the logical candidate is Julia and the Catlab library. One unknown question: will this repetitiveness choke the theorem prover?

Categorical Constructions that one might want to encode:

  • Monic
  • Epic
  • Commuting Squares
  • Pullbacks
  • Pushouts
  • products
  • coproducts
  • exponential objects
  • Subobject classifiers
  • Finite categories
  • Functors
  • Natural transformations
  • Adjunctions
  • Kan Extensions
  • PreSheaves

Theorems that seem possible. (Surely there are many more.)

Some observations on actually using these provers: Just because it says proved is not very convincing. It is very easy to have your axioms and/or conjecture stated incorrectly. Forall ! and exists ? bind tighter than I naively expect them to in the syntax. I ended up putting parentheses nearly everywhere. I had a lot of very difficult to debug problems due to bad binding assumptions. Typos are also a disaster. These things are hard to debug. It is helpful to alternatively ask for satisfiability (disproving the conjecture). One should also at least look at which axioms it’s using. If it is using fewer axioms than makes sense, something is up. These are all good reasons that it might be better to automatically generate these problem files. Ultimately I feel like that is the way to go, because encoding what you’re interested in into first order logic can require some repetitiveness.

Sanity checking my files with http://www.tptp.org/cgi-bin/SystemB4TPTP proved to be helpful. Also looking at the parenthesis structure in the output.

I think using the typed tff format could also help sanity significantly. It really sucks that a typo on one of your predicates or variables can fail silently.

Even ignoring syntax screwups, the scoping of quantifiers is tough to think about.

I suspect that these systems will be very good for proofs that amount to unrolling definitions.

What is the best formulation for category theory properties?

An interesting property is that these provers seem to want a time limit given to them, and they schedule themselves in such a way as to use the full time limit, even if they shouldn’t need it.

Categorical “type checking” as an external predicate. In order to compose, morphisms need to meet on an intermediate object.

The proof output is rather difficult to read. It is the equivalent of trying to read assembly code. The high level structure has been transformed into something more amenable to the machine, and many names have been mangled. This proof is short enough that I think a person could stare at it for a while and eventually kind of understand it.

Perhaps we want to directly input our constructions in cnf form with skolemization manually applied. This might make the output more readable?

In terms of automated category theory proving, I’m not aware of that much work, but there must be more that I don’t know how to find.

Example proof term: Is this even right? Hard to know.

# Proof found!
# SZS status Theorem
# SZS output start CNFRefutation
fof(pullback_monic, conjecture, ![X11, X13, X14]:((monic(X11)&pullback(cod(X11),X11,X13,X14))=>monic(X14)), file('properties.tptp', pullback_monic)).
fof(pullback_def, axiom, ![X2, X3, X13, X14]:(pullback(X2,X3,X13,X14)<=>(commute_square(X2,X13,X3,X14)&![X15, X16]:(commute_square(X2,X15,X3,X16)=>?[X17]:![X18]:((comp(X13,X18,X15)&comp(X14,X18,X16))<=>X18=X17)))), file('properties.tptp', pullback_def)).
fof(commute_square_def, axiom, ![X2, X3, X7, X12]:(commute_square(X2,X3,X7,X12)<=>?[X11]:(comp(X2,X3,X11)&comp(X7,X12,X11))), file('properties.tptp', commute_square_def)).
fof(comp_objects_middle, axiom, ![X2, X3]:(?[X6]:comp(X2,X3,X6)<=>dom(X2)=cod(X3)), file('cat.tptp', comp_objects_middle)).
fof(dom_cod, axiom, ![X1]:dom(cod(X1))=cod(X1), file('cat.tptp', dom_cod)).
fof(left_id, axiom, ![X2]:comp(cod(X2),X2,X2), file('cat.tptp', left_id)).
fof(comp_is_unique, axiom, ![X2, X3, X4, X5]:((comp(X2,X3,X4)&comp(X2,X3,X5))=>X4=X5), file('cat.tptp', comp_is_unique)).
fof(monic_def, axiom, ![X11]:(monic(X11)<=>![X2, X3]:(?[X7]:(comp(X11,X2,X7)&comp(X11,X3,X7))=>X2=X3)), file('properties.tptp', monic_def)).
fof(comp_cod, axiom, ![X2, X3, X6]:(comp(X2,X3,X6)=>cod(X2)=cod(X6)), file('cat.tptp', comp_cod)).
fof(comp_dom, axiom, ![X2, X3, X6]:(comp(X2,X3,X6)=>dom(X3)=dom(X6)), file('cat.tptp', comp_dom)).
fof(comp_assoc, axiom, ![X2, X3, X7, X6, X8, X9, X10]:((((comp(X2,X3,X6)&comp(X6,X7,X9))&comp(X2,X8,X10))&comp(X3,X7,X8))=>X9=X10), file('cat.tptp', comp_assoc)).
fof(c_0_11, negated_conjecture, ~(![X11, X13, X14]:((monic(X11)&pullback(cod(X11),X11,X13,X14))=>monic(X14))), inference(assume_negation,[status(cth)],[pullback_monic])).
fof(c_0_12, plain, ![X68, X69, X70, X71, X72, X73, X75, X76, X77, X78, X79, X80, X83]:(((commute_square(X68,X70,X69,X71)|~pullback(X68,X69,X70,X71))&((~comp(X70,X75,X72)|~comp(X71,X75,X73)|X75=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71))&((comp(X70,X76,X72)|X76!=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71))&(comp(X71,X76,X73)|X76!=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71)))))&((commute_square(X77,esk8_4(X77,X78,X79,X80),X78,esk9_4(X77,X78,X79,X80))|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&((~comp(X79,esk10_5(X77,X78,X79,X80,X83),esk8_4(X77,X78,X79,X80))|~comp(X80,esk10_5(X77,X78,X79,X80,X83),esk9_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)!=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&((comp(X79,esk10_5(X77,X78,X79,X80,X83),esk8_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&(comp(X80,esk10_5(X77,X78,X79,X80,X83),esk9_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80)))))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[pullback_def])])])])])])).
fof(c_0_13, negated_conjecture, ((monic(esk11_0)&pullback(cod(esk11_0),esk11_0,esk12_0,esk13_0))&~monic(esk13_0)), inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[c_0_11])])])).
fof(c_0_14, plain, ![X58, X59, X60, X61, X63, X64, X65, X66, X67]:(((comp(X58,X59,esk6_4(X58,X59,X60,X61))|~commute_square(X58,X59,X60,X61))&(comp(X60,X61,esk6_4(X58,X59,X60,X61))|~commute_square(X58,X59,X60,X61)))&(~comp(X63,X64,X67)|~comp(X65,X66,X67)|commute_square(X63,X64,X65,X66))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[commute_square_def])])])])])])).
cnf(c_0_15, plain, (commute_square(X1,X2,X3,X4)|~pullback(X1,X3,X2,X4)), inference(split_conjunct,[status(thm)],[c_0_12])).
cnf(c_0_16, negated_conjecture, (pullback(cod(esk11_0),esk11_0,esk12_0,esk13_0)), inference(split_conjunct,[status(thm)],[c_0_13])).
fof(c_0_17, plain, ![X25, X26, X27, X28, X29]:((~comp(X25,X26,X27)|dom(X25)=cod(X26))&(dom(X28)!=cod(X29)|comp(X28,X29,esk1_2(X28,X29)))), inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_objects_middle])])])])])).
cnf(c_0_18, plain, (comp(X1,X2,esk6_4(X1,X2,X3,X4))|~commute_square(X1,X2,X3,X4)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_19, negated_conjecture, (commute_square(cod(esk11_0),esk12_0,esk11_0,esk13_0)), inference(spm,[status(thm)],[c_0_15, c_0_16])).
fof(c_0_20, plain, ![X19]:dom(cod(X19))=cod(X19), inference(variable_rename,[status(thm)],[dom_cod])).
fof(c_0_21, plain, ![X37]:comp(cod(X37),X37,X37), inference(variable_rename,[status(thm)],[left_id])).
cnf(c_0_22, plain, (dom(X1)=cod(X2)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_17])).
cnf(c_0_23, negated_conjecture, (comp(cod(esk11_0),esk12_0,esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0))), inference(spm,[status(thm)],[c_0_18, c_0_19])).
cnf(c_0_24, plain, (dom(cod(X1))=cod(X1)), inference(split_conjunct,[status(thm)],[c_0_20])).
fof(c_0_25, plain, ![X21, X22, X23, X24]:(~comp(X21,X22,X23)|~comp(X21,X22,X24)|X23=X24), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_is_unique])])).
cnf(c_0_26, plain, (comp(cod(X1),X1,X1)), inference(split_conjunct,[status(thm)],[c_0_21])).
cnf(c_0_27, negated_conjecture, (cod(esk12_0)=cod(esk11_0)), inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_22, c_0_23]), c_0_24])).
fof(c_0_28, plain, ![X46, X47, X48, X49, X50]:((~monic(X46)|(~comp(X46,X47,X49)|~comp(X46,X48,X49)|X47=X48))&(((comp(X50,esk2_1(X50),esk4_1(X50))|monic(X50))&(comp(X50,esk3_1(X50),esk4_1(X50))|monic(X50)))&(esk2_1(X50)!=esk3_1(X50)|monic(X50)))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[monic_def])])])])])])).
cnf(c_0_29, plain, (X3=X4|~comp(X1,X2,X3)|~comp(X1,X2,X4)), inference(split_conjunct,[status(thm)],[c_0_25])).
cnf(c_0_30, negated_conjecture, (comp(cod(esk11_0),esk12_0,esk12_0)), inference(spm,[status(thm)],[c_0_26, c_0_27])).
fof(c_0_31, plain, ![X34, X35, X36]:(~comp(X34,X35,X36)|cod(X34)=cod(X36)), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_cod])])).
cnf(c_0_32, negated_conjecture, (~monic(esk13_0)), inference(split_conjunct,[status(thm)],[c_0_13])).
cnf(c_0_33, plain, (comp(X1,esk2_1(X1),esk4_1(X1))|monic(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_34, plain, (comp(X1,X2,esk6_4(X3,X4,X1,X2))|~commute_square(X3,X4,X1,X2)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_35, plain, (comp(X1,esk3_1(X1),esk4_1(X1))|monic(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_36, negated_conjecture, (X1=esk12_0|~comp(cod(esk11_0),esk12_0,X1)), inference(spm,[status(thm)],[c_0_29, c_0_30])).
cnf(c_0_37, plain, (cod(X1)=cod(X3)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_31])).
cnf(c_0_38, negated_conjecture, (comp(esk13_0,esk2_1(esk13_0),esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_32, c_0_33])).
cnf(c_0_39, negated_conjecture, (comp(esk11_0,esk13_0,esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0))), inference(spm,[status(thm)],[c_0_34, c_0_19])).
cnf(c_0_40, negated_conjecture, (comp(esk13_0,esk3_1(esk13_0),esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_32, c_0_35])).
fof(c_0_41, plain, ![X31, X32, X33]:(~comp(X31,X32,X33)|dom(X32)=dom(X33)), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_dom])])).
cnf(c_0_42, negated_conjecture, (esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0)=esk12_0), inference(spm,[status(thm)],[c_0_36, c_0_23])).
cnf(c_0_43, negated_conjecture, (cod(esk4_1(esk13_0))=cod(esk13_0)), inference(spm,[status(thm)],[c_0_37, c_0_38])).
cnf(c_0_44, negated_conjecture, (cod(esk13_0)=dom(esk11_0)), inference(spm,[status(thm)],[c_0_22, c_0_39])).
cnf(c_0_45, plain, (comp(X1,X2,esk1_2(X1,X2))|dom(X1)!=cod(X2)), inference(split_conjunct,[status(thm)],[c_0_17])).
cnf(c_0_46, negated_conjecture, (cod(esk3_1(esk13_0))=dom(esk13_0)), inference(spm,[status(thm)],[c_0_22, c_0_40])).
cnf(c_0_47, plain, (dom(X2)=dom(X3)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_41])).
cnf(c_0_48, negated_conjecture, (comp(esk11_0,esk13_0,esk12_0)), inference(rw,[status(thm)],[c_0_39, c_0_42])).
cnf(c_0_49, negated_conjecture, (cod(esk4_1(esk13_0))=dom(esk11_0)), inference(rw,[status(thm)],[c_0_43, c_0_44])).
fof(c_0_50, plain, ![X39, X40, X41, X42, X43, X44, X45]:(~comp(X39,X40,X42)|~comp(X42,X41,X44)|~comp(X39,X43,X45)|~comp(X40,X41,X43)|X44=X45), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_assoc])])).
cnf(c_0_51, negated_conjecture, (comp(X1,esk3_1(esk13_0),esk1_2(X1,esk3_1(esk13_0)))|dom(X1)!=dom(esk13_0)), inference(spm,[status(thm)],[c_0_45, c_0_46])).
cnf(c_0_52, negated_conjecture, (dom(esk12_0)=dom(esk13_0)), inference(spm,[status(thm)],[c_0_47, c_0_48])).
cnf(c_0_53, negated_conjecture, (comp(X1,esk4_1(esk13_0),esk1_2(X1,esk4_1(esk13_0)))|dom(X1)!=dom(esk11_0)), inference(spm,[status(thm)],[c_0_45, c_0_49])).
cnf(c_0_54, plain, (X5=X7|~comp(X1,X2,X3)|~comp(X3,X4,X5)|~comp(X1,X6,X7)|~comp(X2,X4,X6)), inference(split_conjunct,[status(thm)],[c_0_50])).
cnf(c_0_55, negated_conjecture, (comp(esk12_0,esk3_1(esk13_0),esk1_2(esk12_0,esk3_1(esk13_0)))), inference(spm,[status(thm)],[c_0_51, c_0_52])).
cnf(c_0_56, negated_conjecture, (cod(esk2_1(esk13_0))=dom(esk13_0)), inference(spm,[status(thm)],[c_0_22, c_0_38])).
cnf(c_0_57, plain, (commute_square(X1,X2,X4,X5)|~comp(X1,X2,X3)|~comp(X4,X5,X3)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_58, negated_conjecture, (comp(esk11_0,esk4_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(er,[status(thm)],[c_0_53])).
cnf(c_0_59, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=X1|~comp(X2,esk3_1(esk13_0),X3)|~comp(X4,X2,esk12_0)|~comp(X4,X3,X1)), inference(spm,[status(thm)],[c_0_54, c_0_55])).
cnf(c_0_60, negated_conjecture, (comp(X1,esk2_1(esk13_0),esk1_2(X1,esk2_1(esk13_0)))|dom(X1)!=dom(esk13_0)), inference(spm,[status(thm)],[c_0_45, c_0_56])).
cnf(c_0_61, plain, (X2=esk7_6(X6,X7,X1,X4,X3,X5)|~comp(X1,X2,X3)|~comp(X4,X2,X5)|~commute_square(X6,X3,X7,X5)|~pullback(X6,X7,X1,X4)), inference(split_conjunct,[status(thm)],[c_0_12])).
cnf(c_0_62, negated_conjecture, (commute_square(X1,X2,esk11_0,esk4_1(esk13_0))|~comp(X1,X2,esk1_2(esk11_0,esk4_1(esk13_0)))), inference(spm,[status(thm)],[c_0_57, c_0_58])).
cnf(c_0_63, negated_conjecture, (cod(esk1_2(esk11_0,esk4_1(esk13_0)))=cod(esk11_0)), inference(spm,[status(thm)],[c_0_37, c_0_58])).
cnf(c_0_64, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))|~comp(X1,esk3_1(esk13_0),esk4_1(esk13_0))|~comp(esk11_0,X1,esk12_0)), inference(spm,[status(thm)],[c_0_59, c_0_58])).
cnf(c_0_65, negated_conjecture, (comp(esk12_0,esk2_1(esk13_0),esk1_2(esk12_0,esk2_1(esk13_0)))), inference(spm,[status(thm)],[c_0_60, c_0_52])).
cnf(c_0_66, negated_conjecture, (X1=esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,X2,X3)|~commute_square(cod(esk11_0),X2,esk11_0,X3)|~comp(esk13_0,X1,X3)|~comp(esk12_0,X1,X2)), inference(spm,[status(thm)],[c_0_61, c_0_16])).
cnf(c_0_67, negated_conjecture, (commute_square(cod(esk11_0),esk1_2(esk11_0,esk4_1(esk13_0)),esk11_0,esk4_1(esk13_0))), inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_62, c_0_26]), c_0_63])).
cnf(c_0_68, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_64, c_0_48]), c_0_40])])).
cnf(c_0_69, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=X1|~comp(X2,esk2_1(esk13_0),X3)|~comp(X4,X2,esk12_0)|~comp(X4,X3,X1)), inference(spm,[status(thm)],[c_0_54, c_0_65])).
cnf(c_0_70, negated_conjecture, (X1=esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,esk1_2(esk11_0,esk4_1(esk13_0)),esk4_1(esk13_0))|~comp(esk12_0,X1,esk1_2(esk11_0,esk4_1(esk13_0)))|~comp(esk13_0,X1,esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_66, c_0_67])).
cnf(c_0_71, negated_conjecture, (comp(esk12_0,esk3_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(rw,[status(thm)],[c_0_55, c_0_68])).
cnf(c_0_72, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))|~comp(X1,esk2_1(esk13_0),esk4_1(esk13_0))|~comp(esk11_0,X1,esk12_0)), inference(spm,[status(thm)],[c_0_69, c_0_58])).
cnf(c_0_73, negated_conjecture, (esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,esk1_2(esk11_0,esk4_1(esk13_0)),esk4_1(esk13_0))=esk3_1(esk13_0)), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_70, c_0_40]), c_0_71])])).
cnf(c_0_74, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_72, c_0_48]), c_0_38])])).
cnf(c_0_75, negated_conjecture, (X1=esk3_1(esk13_0)|~comp(esk12_0,X1,esk1_2(esk11_0,esk4_1(esk13_0)))|~comp(esk13_0,X1,esk4_1(esk13_0))), inference(rw,[status(thm)],[c_0_70, c_0_73])).
cnf(c_0_76, negated_conjecture, (comp(esk12_0,esk2_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(rw,[status(thm)],[c_0_65, c_0_74])).
cnf(c_0_77, plain, (monic(X1)|esk2_1(X1)!=esk3_1(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_78, negated_conjecture, (esk3_1(esk13_0)=esk2_1(esk13_0)), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_75, c_0_38]), c_0_76])])).
cnf(c_0_79, negated_conjecture, ($false), inference(sr,[status(thm)],[inference(spm,[status(thm)],[c_0_77, c_0_78]), c_0_32]), ['proof']).
# SZS output end CNFRefutation
# Proof object total steps             : 80
# Proof object clause steps            : 57
# Proof object formula steps           : 23
# Proof object conjectures             : 44
# Proof object clause conjectures      : 41
# Proof object formula conjectures     : 3
# Proof object initial clauses used    : 18
# Proof object initial formulas used   : 11
# Proof object generating inferences   : 34
# Proof object simplifying inferences  : 16
# Training examples: 0 positive, 0 negative
# Parsed axioms                        : 14
# Removed by relevancy pruning/SinE    : 0
# Initial clauses                      : 31
# Removed in clause preprocessing      : 0
# Initial clauses in saturation        : 31
# Processed clauses                    : 4871
# ...of these trivial                  : 248
# ...subsumed                          : 2833
# ...remaining for further processing  : 1790
# Other redundant clauses eliminated   : 2
# Clauses deleted for lack of memory   : 0
# Backward-subsumed                    : 68
# Backward-rewritten                   : 917
# Generated clauses                    : 12894
# ...of the previous two non-trivial   : 12112
# Contextual simplify-reflections      : 0
# Paramodulations                      : 12870
# Factorizations                       : 0
# Equation resolutions                 : 24
# Propositional unsat checks           : 0
#    Propositional check models        : 0
#    Propositional check unsatisfiable : 0
#    Propositional clauses             : 0
#    Propositional clauses after purity: 0
#    Propositional unsat core size     : 0
#    Propositional preprocessing time  : 0.000
#    Propositional encoding time       : 0.000
#    Propositional solver time         : 0.000
#    Success case prop preproc time    : 0.000
#    Success case prop encoding time   : 0.000
#    Success case prop solver time     : 0.000
# Current number of processed clauses  : 772
#    Positive orientable unit clauses  : 155
#    Positive unorientable unit clauses: 0
#    Negative unit clauses             : 1
#    Non-unit-clauses                  : 616
# Current number of unprocessed clauses: 5468
# ...number of literals in the above   : 16715
# Current number of archived formulas  : 0
# Current number of archived clauses   : 1016
# Clause-clause subsumption calls (NU) : 303883
# Rec. Clause-clause subsumption calls : 253866
# Non-unit clause-clause subsumptions  : 2890
# Unit Clause-clause subsumption calls : 1530
# Rewrite failures with RHS unbound    : 0
# BW rewrite match attempts            : 1110
# BW rewrite match successes           : 80
# Condensation attempts                : 0
# Condensation successes               : 0
# Termbank termtop insertions          : 267453

Computational Category Theory in Python III: Monoids, Groups, and Preorders

Parts 1 and 2 are found here and here.

From one perspective, categories are just another algebraic structure, like groups, monoids and rings. They are these abstract things that have some abstract equational axioms and operations. They are the next stop on our magnificent category journey.

A monoid is a thing that has an associative operation with a unit. Addition and 0 make numbers a monoid. Multiplication and 1 are a separate monoid for numbers. Concatenation and empty lists make lists a monoid. Union and empty set make sets a monoid. We can encode this in python like so:

class PlusIntMonoid(int):
    def mplus(self, b):
        return self + b
    def mzero():
        return 0

class TimesIntMonoid(int):
    def mplus(self, b):
        return self * b
    def mzero():
        return 1

class ListMonoid(list):
    def mplus(self, b):
        return self + b
    def mzero():
        return []

class UnionMonoid(set):
    def mplus(self, b):
        return self.union(b)
    def mzero():
        return set()

ListMonoid([1,2]).mplus(ListMonoid([1,2])) # [1, 2, 1, 2]
UnionMonoid({1,2}).mplus(UnionMonoid({1,4})) # {1, 2, 4}
TimesIntMonoid(3).mplus(TimesIntMonoid.mzero()) # 3

What does this have to do with categories? Well, if some thing is a category, it obeys the axioms that define what it means to be a category. It has morphisms and objects. The morphisms compose if head meets tail on an object. There are always identity morphisms.

The morphisms in a category with 1 object automatically obey the monoid axioms. In this case, the category axioms imply the monoid axioms. Everything composes because there is only one object. It’s a kind of degenerate case where we are not using the partiality of the composition operator. There is automatically a unit for composition because the identity morphism is a unit. Composition is already required to be associative. Boom. The thing is a monoid.

Continuing with our representation from previous posts, we make a python class for each category. An instance of this class is a morphism in this category. If you ask for the domain or codomain of any morphism, you always get back () because it is a single object category. Compare these classes with the above classes.

class PlusIntCat(int):
    def compose(self, b):
        return self + b
    def idd():
        return 0
    def dom(self):
        return () # always return (), the only object
    def cod(self):
        return ()

class TimesIntCat(int):
    def compose(self, b):
        return self * b
    def idd():
        return 1
    def dom(self):
        return ()
    def cod(self):
        return ()

class ListCat(list):
    def compose(self, b):
        return self + b
    def idd():
        return []
    def dom(self):
        return ()
    def cod(self):
        return ()

class UnionSetCat(set):
    def compose(self, b):
        return self.union(b)
    def idd():
        return set()
    def dom(self):
        return ()
    def cod(self):
        return ()

PlusIntCat(3).compose(PlusIntCat.idd()) # 3

Some monoids are also groups if there is a natural inverse operation. The integers are a group under addition, where negation gives you the inverse. Some monoids aren't groups though: the natural numbers (0, 1, 2, …) aren't a group under addition.

Similarly, groups can be thought of as a category with one object, with the additional requirement that every morphism is invertible, that there is always an f^{-1} such that f \circ f^{-1} = id.

Sympy has groups in it. We can make a wrapper of that functionality that looks like a categorical interface. To match our pattern of using python classes to represent categories, it is convenient to do the slightly uncommon thing of making a class definition generator function fp_group_cat. Every time you call this function, it makes a different class and a different category. I have only here wrapped the finitely presented group functionality, but there are also free groups, permutation groups, and named groups available in sympy.

from sympy.combinatorics.free_groups import free_group, vfree_group, xfree_group
from sympy.combinatorics.fp_groups import FpGroup, CosetTable, coset_enumeration_r

def fp_group_cat(G, catname):
    # A Category generator that turns a finitely presented group into a categorical python class
    Cat = type(catname, (), vars(G))
    def cat_init(self, a):
        self.f = a
    Cat.__init__ = cat_init
    Cat.compose = lambda self, b: G.reduce(self.f * b.f)
    Cat.dom = lambda: ()
    Cat.cod = lambda: ()
    Cat.idd = lambda: Cat(G.identity)
    return Cat

F, a, b = free_group("a, b")
G = FpGroup(F, [a**2, b**3, (a*b)**4])
MyCat = fp_group_cat(G, "MyCat")
MyCat(a*a).compose(MyCat.idd())
MyCat.dom()

Many objects, at most one arrow per pair: Preorders

We can simplify the power of a category in a different direction. Instead of having only 1 object, we’ll have few arrows.

A category with many objects but at most a single morphism between a pair of them obeys the axioms of a preorder. In categorical terminology this is sometimes called a thin category. Any actual order like \le on numbers is also a preorder, but preorders have slightly weaker requirements. Here is a categorical representation of the ordering on integers (although really the same implementation will work for any python type that implements <= and ==).

class IntOrderCat():
    def __init__(self, dom, cod):
        assert(dom <= cod)
        self.cod = cod
        self.dom = dom
        self.f = ()
    def idd(n):
        return IntOrderCat(n, n)
    def compose(f, g):
        assert(f.dom == g.cod)
        return IntOrderCat(g.dom, f.cod)
    def __repr__(self):
        return f"[{self.dom} <= {self.cod}]"

# our convention for the order of composition feels counterintuitive here.
IntOrderCat(3,5).compose(IntOrderCat(2,3)) # [2 <= 5]
IntOrderCat.idd(3) # [3 <= 3]

An example of a partial order is the subset relationship, which we can represent using python sets. This is an important but perhaps confusing example. Haven't we already defined FinSet? Yes, but these are different categories. In FinSet, morphisms are functions. In SubSetCat a morphism is the subset relationship (of which there can either be one or not one). They just plain are not the same thing even though there are sets in the mix for both. The situation is made even more confusing by the fact that the subset relationship can be talked about indirectly inside FinSet using monic morphisms, which have as their image the subset of interest.

class SubSetCat():
    def __init__(self, dom, cod):
        assert(dom.issubset(cod))
        self.cod = cod
        self.dom = dom
    def compose(f, g):
        assert(f.dom == g.cod)
        return SubSetCat(g.dom, f.cod)
    def idd(s):
        return SubSetCat(s, s)
    def __repr__(self):
        return f"[{self.dom} <= {self.cod}]"

SubSetCat({1,2,3}, {1,2,3,7}).compose(SubSetCat({1,2}, {1,2,3})) # [{1, 2} <= {1, 2, 3, 7}]

Preorders are related to directed acyclic graphs (DAG), the directed graphs that have no loops. If you give me a DAG, there is a preorder that is generated by that DAG. Exercise for the reader (AKA I’m lazy): Can you turn a Networkx DAG into a category?
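Here's a hedged sketch of one way the exercise might go (dag_cat and DagCat are names I'm making up, not a networkx API): an arrow a -> b exists exactly when b is reachable from a, and reachability is reflexive and transitive, which is exactly what a preorder needs.

import networkx as nx

def dag_cat(dag):
    ''' The preorder (thin category) generated by a DAG: a morphism a -> b exists iff b is reachable from a. '''
    assert nx.is_directed_acyclic_graph(dag)
    reach = dict(nx.all_pairs_shortest_path_length(dag)) # reach[a] holds every node reachable from a (including a itself)
    class DagCat():
        def __init__(self, dom, cod):
            assert cod in reach[dom] # there is an arrow only if cod is reachable from dom
            self.dom, self.cod = dom, cod
        def idd(n):
            return DagCat(n, n)
        def compose(f, g):
            assert f.dom == g.cod
            return DagCat(g.dom, f.cod)
        def __repr__(self):
            return f"[{self.dom} -> {self.cod}]"
    return DagCat

C = dag_cat(nx.DiGraph([(1, 2), (2, 3)]))
C(1, 2).compose(C.idd(1)) # [1 -> 2]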

Thoughts

This is nice and all just to explain categories in terms of some perhaps more familiar concepts. It feels a little ho-hum to me. We are not really getting any benefit from the concept of a category in this post. However, the examples of monoids, groups, and preorders are always something you should think about when presented with a new categorical concept, because the concept probably reduces to something more familiar in these cases. In addition, mappings to/from these simple objects to more complicated categories can be very interesting.

The methods of computational group theory are intriguing. It seems like some of them should extend to category theory. See this book by RFC Walters for example: https://www.cambridge.org/core/books/categories-and-computer-science/203EBBEE29BEADB035C9DD80191E67B1. A very interesting book in other ways too. (Thanks to Evan Patterson for the tip)

Next time I think we’ll talk about finite categories and the finite Yoneda lemma.

Artwork courtesy of David

Edit: Hacker News discussion: https://news.ycombinator.com/item?id=23058551

Computational Category Theory in Python II: Numpy for FinVect

Linear algebra seems to be the place where any energy you put in to learning it seems to pay off massively in understanding other subjects and applications. It is the beating heart of numerical computing. I can’t find the words to overstate the importance of linear algebra.

Here’s some examples:

  • Least Squares Fitting – Goddamn is this one useful.
  • Quadratic optimization problems
  • Partial Differential Equations – Heat equations, electricity and magnetism, elasticity, fluid flow. Differential equations can be approximated as finite difference matrices acting on vectors representing the functions you’re solving for.
  • Linear Dynamical systems – Solving, frequency analysis, control, estimation, stability
  • Signals – Filtering, Fourier transforms
  • Quantum mechanics – Eigenvalues for energy, evolving in time, perturbation theory
  • Probability – Transition matrices, eigenvectors for steady state distributions.
  • Multidimensional Gaussian integrals – A canonical model in quantum mechanics and probability because they are solvable in closed form. Gaussian integrals are linear algebra in disguise. Their solution is describable in terms of the matrices and vectors in the exponent. More on this another day.

Where does category theory come in to this?

On one side, exploring what categorical constructions mean concretely and computationally in linear algebra land helps explain the category theory. I personally feel very comfortable with linear algebra. Matrices make me feel good and safe and warm and fuzzy. You may or may not feel the same way depending on your background.

In particular, understanding what the categorical notion of a pullback means in the context of matrices is the first time the concept clicked for me thanks to discussions with James Fairbanks and Evan Patterson.

But the other direction is important too. A categorical interface to numpy has the promise of making certain problems easier to express and solve. It gives new tools for thought and programming. The thing that seems the most enticing to me about the categorical approach to linear algebra is that it gives you a flexible language to discuss gluing together rectangular subpieces of a numerical linear algebra problem and it gives a high level algebra for manipulating this gluing. Down this road seems to be an actionable, applicable, computational, constructible example of open systems.

Given how important linear algebra is, given that I’ve been tinkering and solving problems (PDEs, fitting problems, control problems, boundary value problems, probabilistic dynamics, yada yada ) using numpy/scipy for 10 years now and given that I actually have a natural reluctance towards inscrutable mathematics for its own sake, I hope that lends some credence to when I say that there really is something here with this category theory business.

It frankly boggles my mind that these implementations aren’t available somewhere already! GAH!

Uh oh. I’m foaming. I need to take my pills now.

FinVect

The objects in the category FinVect are the vector spaces. We can represent a vector space by its dimensionality n (an integer). The morphisms are linear maps which are represented by numpy matrices. ndarray.shape basically tells you the domain and codomain of the morphism. We can get a lot of mileage by subclassing ndarray to make our FinVect morphisms. Composition is matrix multiplication (which is associative) and identity morphisms are identity matrices. We've checked our category theory boxes.

import numpy as np
import scipy
import scipy.linalg

# https://docs.scipy.org/doc/numpy/user/basics.subclassing.html
class FinVect(np.ndarray):
    def __new__(cls, input_array, info=None):
        # Input array is an already formed ndarray instance.
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        assert(len(obj.shape) == 2) # should be a matrix
        # Finally, we must return the newly created object:
        return obj
    @property
    def dom(self):
        ''' returns the domain of the morphism. This is the number of columns of the matrix '''
        return self.shape[1]
    @property
    def cod(self):
        ''' returns the codomain of the morphism. This is the number of rows of the matrix '''
        return self.shape[0]
    def idd(n):
        ''' The identity morphism is the identity matrix. Isn't that nice? '''
        return FinVect(np.eye(n))
    def compose(f, g):
        ''' Morphism composition is matrix multiplication. Isn't that nice? '''
        return f @ g

A part of the flavor of category theory comes from taking the focus away from the objects and putting focus on the morphisms.

One does not typically speak of the elements of a set, or subsets of a set in category theory. One takes the slight indirection of using the map whose image is that subset or the element in question when/if you need to talk about such things.

This actually makes a lot of sense from the perspective of numerical linear algebra. Matrices are concrete representations of linear maps. But also sometimes we use them as data structures for collections of vectors. When one wants to describe a vector subspace concretely, you can describe it either as the range of a matrix or the nullspace of a matrix. This is indeed describing a subset in terms of a mapping. In the case of the range, we are describing the subspace as all possible linear combinations of the columns \lambda_1 c_1 + \lambda_2 c_2 + ... . It is a matrix mapping from the space of parameters \lambda to the subspace (1 dimension for each generator vector / column). In the case of the nullspace it is a matrix mapping from the subspace to the space of constraints (1 dimension for each equation / row).

The injectivity or surjectivity of a matrix is easily detectable as a question about its rank. These kinds of morphisms are called monomorphisms and epimorphisms respectively. They are characterized by whether you can "divide" out by the morphism on the left or on the right. In linear algebra terms, whether there is a left or right inverse to these possibly rectangular, possibly ill-posed matrices. I personally can never remember which is which (surj/inj, left/right, mono/epi) without careful thought, but then again I'm an ape.

def monic(self):
    ''' Is mapping injective?
    In other words, does it map every incoming vector to a distinct outgoing vector?
    In other words, are the columns independent?
    In other words, does `self @ g == self @ f` imply `g == f` forall g, f?
    https://en.wikipedia.org/wiki/Monomorphism '''
    return np.linalg.matrix_rank(self) == self.dom

def epic(self):
    ''' Is mapping surjective? '''
    return np.linalg.matrix_rank(self) == self.cod

Some categorical constructions are very simple structural transformations that correspond to merely stacking matrices, shuffling elements, or taking transposes. The product and coproduct are examples of this. The product is an operation that takes in 2 objects and returns a new object, two projections \pi_1, \pi_2, and a function implementing the universal property that constructs f from f_1, f_2.

The diagram for a product

Here is the corresponding python program. The matrix e (called f in the diagram; sorry about the mixed conventions) is the unique morphism that makes those triangles commute, which is checked numerically in the assert statements.

def product(a, b):
    ''' The product takes in two objects (dimensions) a, b and outputs a new dimension d, two projection morphisms,
    and a universal morphism.
    The "product" dimension is the sum of the individual dimensions (funky, huh?)
    '''
    d = a + b # new object
    p1 = FinVect(np.hstack([np.eye(a), np.zeros((a, b))]))
    p2 = FinVect(np.hstack([np.zeros((b, a)), np.eye(b)]))
    def univ(f, g):
        assert(f.dom == g.dom) # look at the diagram. The domains of f and g must match
        e = FinVect(np.vstack([f, g])) # stack f on top of g
        # postconditions. True by construction.
        assert(np.allclose(p1 @ e, f)) # triangle condition 1
        assert(np.allclose(p2 @ e, g)) # triangle condition 2
        return e
    return d, p1, p2, univ

The coproduct proceeds very similarly. Give it a shot. The coproduct is more similar to the product in FinVect than it is in FinSet.
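If you want to check your answer against something, here's my guess at a sketch, mirroring the product above and reusing the FinVect class: the injections stack an identity on top of zeros, and the universal morphism hstacks f and g.

def coproduct(a, b):
    ''' A sketch of the coproduct, dual to the product: injections in, universal morphism out. '''
    d = a + b # the coproduct dimension is also the sum of the dimensions
    i1 = FinVect(np.vstack([np.eye(a), np.zeros((b, a))]))
    i2 = FinVect(np.vstack([np.zeros((a, b)), np.eye(b)]))
    def univ(f, g):
        assert(f.cod == g.cod) # the codomains of f and g must match
        e = FinVect(np.hstack([f, g]))
        assert(np.allclose(e @ i1, f)) # triangle condition 1
        assert(np.allclose(e @ i2, g)) # triangle condition 2
        return e
    return d, i1, i2, univ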

The initial and terminal objects are 0 dimensional vector spaces. Again, these are more similar to each other in FinVect than they are in FinSet. A matrix with one dimension as 0 really is unique. You have no choice for entries.

def terminal():
    ''' The terminal object has a unique morphism to it '''
    return 0, lambda x: FinVect(np.ones((0, x)))

def initial():
    ''' The initial object has a unique morphism from it.
    The initial and final object are the same in FinVect. '''
    return 0, lambda x: FinVect(np.ones((x, 0)))

Where the real meat and potatoes lives is in the pullback, pushout, equalizer, and co-equalizer. These are the categorical constructions that hold equation solving capabilities. There is a really nice explanation of the concept of a pullback and the other above constructions here.

Vector subspaces can be described as the range of a matrix or the nullspace of a matrix. These representations are dual to each other in some sense: the constraint matrix annihilates the generator matrix, NR = 0. Converting from one representation to the other is a nontrivial operation.

In addition to thinking of these constructions as solving equations, you can also think of a pullback/equalizer as converting a nullspace representation of a subspace into a range representation of a subspace and the coequalizer/pushout as converting the range representation into a nullspace representation.

The actual heart of the computation lies in the scipy routines null_space and orth. Under the hood these use an SVD decomposition, which seems like the most reasonable numerical approach to questions about nullspaces. (An aside: nullspaces are not a very numerical question. The dimensionality of a nullspace of a collection of vectors is pretty sensitive to perturbations. This may or may not be an issue. Not sure.)
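As a tiny concrete illustration (N and R are names I'm making up for this snippet), here is the same plane in R^3 in both representations:

import numpy as np
import scipy.linalg

N = np.array([[1., 1., 0.]])    # nullspace representation: all (x, y, z) with x + y = 0
R = scipy.linalg.null_space(N)  # range representation: columns spanning that same plane
assert np.allclose(N @ R, 0)    # the constraints annihilate the generators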

Let's talk about how to implement the pullback. The input is the two morphisms f and g. The output is an object P, two projections p1, p2, and a universal property function that, given q1, q2, constructs u. This all seems very similar to the product. The extra feature is that the squares are required to commute, which corresponds to the equation f p_1 = g p_2 and is checked in assert statements in the code. This is the equation that is being solved. Computationally this is done by embedding this equation into a nullspace calculation \begin{bmatrix} F & -G \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 0. The universal morphism is calculated by projecting q1 and q2 onto the calculated orthogonal basis for the nullspace. Because q1 and q2 are required to be in a commuting square with f and g by hypothesis, their columns live in the nullspace of the stacked [F -G] matrix. There is extra discussion with James and Evan and some nice derivations, as mentioned before, here.

def pullback(f, g):
    assert(f.cod == g.cod) # must have a common codomain
    # horizontally stack the two operations.
    null = scipy.linalg.null_space(np.hstack([f, -g]))
    d = null.shape[1] # dimension object of the corner of the pullback
    p1 = FinVect(null[:f.dom, :])
    p2 = FinVect(null[f.dom:, :])
    def univ(q1, q2):
        # preconditions
        assert(q1.dom == q2.dom)
        assert(np.allclose(f @ q1, g @ q2)) # given a new commuting square, the columns of q1, q2 inject into the nullspace
        u = null.T @ np.vstack([q1, q2]) # calculate the universal morphism by projecting onto the orthonormal nullspace basis
        # postconditions
        assert(np.allclose(p1 @ u, q1)) # verify triangle 1
        assert(np.allclose(p2 @ u, q2)) # verify triangle 2
        return u
    # postcondition
    assert(np.allclose(f @ p1, g @ p2)) # these projections form a commutative square.
    return d, p1, p2, univ

The equalizer, coequalizer, and pushout can all be calculated similarly. A nice exercise for the reader (AKA I’m lazy)!
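As a nudge, here is my hedged sketch of the equalizer (reusing FinVect and the imports above, not a definitive implementation): the solutions of f(x) = g(x) form the nullspace of f - g, and since null_space returns an orthonormal basis, the universal morphism is just a projection onto it.

def equalizer(f, g):
    ''' The equalizer object is the subspace {x : f x == g x} = null(f - g),
    described in range representation by the inclusion morphism inc. '''
    assert f.dom == g.dom and f.cod == g.cod
    inc = FinVect(scipy.linalg.null_space(f - g)) # orthonormal basis of the solution subspace
    e = inc.dom # dimension of the equalizer object
    def univ(q):
        assert np.allclose(f @ q, g @ q) # q must also equalize f and g
        u = inc.T @ q # columns of q live in the span of inc, which is orthonormal
        assert np.allclose(inc @ u, q) # triangle condition
        return u
    return e, inc, univ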

Thoughts

I think there are already some things here for you to chew on. Certainly a lot of polish and filling out of the combinators is required.

I am acutely aware that I haven’t shown any of this being used. So I’ve only shown the side where the construction helps teach us category theory and not entirely fulfilled the lofty promises I set in the intro. I only have finite endurance. I’m sure the other direction, where this helps us formulate problems, will show up on this blog at some point. For what I’m thinking, it will be something like this post http://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ but in a different pullback-y style. Mix together FinSet and FinVect. Something something decorated cospans? https://arxiv.org/abs/1609.05382

One important thing is we really need to extend this to affine maps rather than linear maps (affine maps allow an offset y = Ax + b). This is important for applications. The canonical linear algebra problem is Ax = b, which we haven't yet shown how to represent.

One common approach to embed the affine case in the linear case is to use homogeneous coordinates. https://en.wikipedia.org/wiki/Homogeneous_coordinates
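A minimal sketch of the trick (affine_to_linear is a made-up name for illustration): y = Ax + b becomes the linear map [[A, b], [0, 1]] acting on homogeneous vectors [x; 1].

import numpy as np

def affine_to_linear(A, b):
    ''' Embed the affine map y = A x + b as a linear map on homogeneous coordinates. '''
    bottom = np.hstack([np.zeros((1, A.shape[1])), np.ones((1, 1))])
    return np.vstack([np.hstack([A, b.reshape(-1, 1)]), bottom])

A = np.array([[2., 0.], [0., 3.]])
b = np.array([1., -1.])
M = affine_to_linear(A, b)
x = np.array([5., 7.])
assert np.allclose(M @ np.append(x, 1.0), np.append(A @ x + b, 1.0)) # [x; 1] maps to [A x + b; 1]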

Alternatively, we could make a new python class FinAff that just holds the b vector as a separate field. Which approach will be more elegant is not clear to me at the moment.

Another very enticing implementation on the horizon is a nice wrapper for compositionally calculating Gaussian integrals + linear delta functions. Gaussian integrals + delta functions are solved by what is basically a minimization problem over the exponent. I believe this can be formulated by describing the linear subspace we are in as a span over the input and output variables, associating a quadratic form with the vertex of the span. You'll see.

References

Blah Blah Blah

Whenever I write a post, I just let it flow, because I am entranced by the sound of my own keyboard clackin’. But it would deeply surprise me if you are as equally entranced, so I take sections out that are just musings and not really on the main point. So let’s toss em down here if you’re interested.

I like to draw little schematic matrices sometimes so I can visually see which dimensions match with which dimensions.

Making the objects just the dimension is a structural approach and you could make other choices. It may also make sense to not necessarily identify two vector spaces of the same dimensionality. It is nonsensical to consider a vector of dog's nose qualities to be interchangeable with a vector of rocket ship qualities just because they both have dimensionality 7.

High Level Linear Algebra

Linear algebra already has some powerful high level abstractions in common use.

Numpy indexing and broadcasting can sometimes be a little cryptic, but it is also very, very powerful. You gain both concision and speed.

Matrix notation is the most commonly used “pointfree” notation in the world. Indexful expressions can be very useful, but the calculus of matrices lets us use intuition built about algebraic manipulation of ordinary numbers to manipulate large systems of equations in a high level way. There are simple rules governing matrix inverse, transpose, addition, multiplication, identity.

Another powerful notion in linear algebra is that of block matrices. Block matrices are the standard high level notation to talk about subpieces of a numerical linear algebra problem. For example, you might be solving the heat equation on two hunks of metal attached at a joint. It is natural to consider this system in block form with the off diagonal blocks corresponding to the coupling of the two hunks. https://en.wikipedia.org/wiki/Domain_decomposition_methods

One does not typically speak of the elements of a set, or subsets of a set in category theory. One takes the slight indirection of using the map whose image is that subset or the element in question when/if you need to talk about such things. This pays off in a couple ways. There is a nice minimalism in that you don’t need a whole new entity (python class, data structure, what have you) when you already have morphisms lying around. More importantly though the algebraic properties of what it means to be an element or subset are more clearly stated and manipulated in this form. On the flipside, given that we often return to subset or element based thinking when we’re confused or explaining something to a beginner shows that I think it is a somewhat difficult game to play.

The analogy is that a beginner will often write for loops for a numpy calculation that an expert knows how to write more concisely and efficiently using broadcasting and vectorization. And sometimes the expert just can’t figure out how to vectorize some complicated construction and defeatedly writes the dirty feeling for loop.

What about in a language where the for loops are fast, like Julia? Then isn't the for loop version just plain better, since any beginner can read and write it and it runs fast too? Yes, I think learning some high level notation or interface is a harder sell here. Nevertheless, there is utility. High level formulations enable optimizing compilers to do fancier things. They open up opportunities for parallelism. They aid reasoning about code. See query optimization for databases. Succinctness is a surprising virtue in and of itself.

Aaron Hsu (who is an APL beast) said something that has stuck with me. APL has a reputation for being incredibly inscrutable. It uses characters you can't type, each of which is a complex operation on arrays. It is the epitome of concision. A single word in APL is an entire subroutine. A single sentence is a program. He says that being able to fit your entire huge program on a single screen puts you in a different domain of memory and mindspace. That it is worth the inscrutability, because once trained, you can hold everything in your extended mind at once. Sometimes I feel when I'm writing stuff down on paper that it is an extension of my mind, that it is part of my short term memory. So too the computer screen. I'm not on board for APL yet, but food for thought ya know?

Differences between the pure mathematical perspective of Linear Algebra, and the Applied/Numerical Linear Algebra

I think there are a couple of conceptual points of disconnect between the purely mathematical conception of vector spaces and the applied numerical perspective.

First off, the numerical world is by and large focused on full rank square matrices. The canonical problem is solving the matrix equation Ax=b for the unknown vector x. If the matrix isn’t full rank or square, we find some way to make it square and full rank.

The mathematical world is more fixated on the concept of a vector subspace, which is a set of vectors.

It is actually extremely remarkable and I invite you for a moment to contemplate that a vector subspace over the real numbers is a very very big set. Completely infinite. And yet it is tractable because it is generated by only a finite number of vectors, which we can concretely manipulate.

Ok. Here’s another thing. I am perfectly willing to pretend unless I’m being extra careful that machine floats are real numbers. This makes some mathematicians vomit blood. I’ve seen it. Cody gave me quite a look. Floats upon closer inspection are not at all the mathematical real numbers though. They’re countable first off.

From a mathematical perspective, many people are interested in precise vector arithmetic, which requires going to somewhat unusual fields. Finite fields are discrete mathematical objects that just so happen to actually have a division operation like the rationals or reals. Quite the miracle. In pure mathematics they more often do linear algebra over these things rather than the rationals or reals.

The finite basis theorem. This was brought up in conversation as a basic result in linear algebra. I’m not sure I’d ever even heard of it. It is so far from my conceptualization of these things.

Monoidal Products

The direct sum of matrices is represented by taking the block diagonal. It is a monoidal product on FinVect. Monoidal products are binary operations on morphisms in a category that play nice with it in certain ways. For example, the direct sum of two identity matrices is also an identity matrix.

The Kronecker product is another useful piece of FinVect. It is a second monoidal product on the category FinVect. It is useful for probability and quantum mechanics: when you take the pair of two pieces of state to make a combined state, you take the Kronecker product of their state vectors.

    def par(f, g):
        ''' One choice of monoidal product is the direct sum '''
        r, c = f.shape
        rg, cg = g.shape
        return FinVect(np.block([[f,                 np.zeros((r, cg))],
                                 [np.zeros((rg, c)), g                ]]))
    def par2(f, g):
        ''' another choice is the Kronecker product '''
        return np.kron(f, g)

We think about column vectors as being matrices where the number of columns is 1, and row vectors as being matrices where the number of rows is 1. These can be considered as mappings from/to the 1 dimensional vector space. These morphisms are points.
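For example (a tiny illustration of my own, reusing FinVect from above):

v = FinVect(np.array([[1.], [2.], [3.]])) # a column vector: a morphism 1 -> 3
v.dom # 1, the one dimensional vector space
v.cod # 3; v is a "point" of the 3 dimensional space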

The traditional focus of category theory in linear algebra has been on the kronecker product, string diagrams as quantum circuits/ penrose notation, and applications to quantum mechanics.

However, the direct sum structure and the limit/co-limit structures of FinVect are very interesting and more applicable to everyday engineering. I associate bringing more focus to this angle with John Baez’s group and his collaborators.

Anyway, that is enough blithering.

Computational Category Theory in Python I: Dictionaries for FinSet

Category theory is a mathematical theory with a reputation for being very abstract.

Category theory is an algebraic theory of functions. It has the flavor of connecting up little pipes and ports that is reminiscent of dataflow languages or circuits, but with some hearty mathematical underpinnings.

So is this really applicable to programming at all? Yes, I think so.

Here’s one argument. Libraries present an interface to their users. One of the measures of the goodness or badness of an interface is how often you are inclined to peek under the hood to get it to do the thing that you need. Designing these interfaces is hard. Category theory has taken off as a field because it has been found to be a useful and uniform interface to a surprising variety of very different mathematics. I submit that it is at least plausible that software interfaces designed with tasteful mimicry of category theory may achieve similar uniformity across disparate software domains. This is epitomized for me in Conal Elliott’s Compiling to Categories. http://conal.net/papers/compiling-to-categories/

I think it is easy to have the miscomprehension that a fancy language like Haskell or Agda is necessary to even begin writing software that encapsulates category theory based ideas, but this is simply not the case. I’ve been under this misapprehension before.

It just so happens that category theory is especially useful in those languages for explaining some programming patterns especially those concerning polymorphism. See Bartosz Milewski’s Category theory for Programmers.

But this is not the only way to use category theory.

There’s a really delightful book by Rydeheard and Burstall called Computational Category Theory. The first time I looked at it, I couldn’t make heads or tails of it, going on the double uphill battle of category theory and Standard ML. But looking at it now, it seems extremely straightforward and well presented. It’s a cookbook of how to build category theoretic interfaces for software.

So I think it is interesting to perform some translation of its concepts and style into python, the lingua franca of computing today.

In particular, there is a dual opportunity to both build a unified interface between some of the most commonly used powerful libraries in the python ecosystem and also use these implementations to help explain categorical concepts in concrete detail. I hope to have the attention span to do the following:

A very simple category is that of finite sets. The objects in the category can be represented by python sets. The morphisms can be represented by python dictionaries. Nothing abstract here. We can rip and tear these things apart any which way we please.

The manipulations are made even more pleasant by the python features of set and dictionary comprehension which will mimic the definitions you’ll find on the wikipedia page for these constructions quite nicely.

Composition is defined as making a new dictionary by feeding the output of the first dictionary into the second. The identity dictionary over a set is one that has the same values as keys. The definition of products and coproducts (disjoint union) are probably not too surprising.

One really interesting thing about the Rydeheard and Burstall presentation is noticing what are the inputs to these constructions and what are the outputs. Do you need to hand it objects? morphisms? How many? How can we represent the universal property? We do so by outputting functions that construct the required universal morphisms. They describe this as a kind of skolemization. The constructive programmatic presentation of the things is incredibly helpful to my understanding, and I hope it is to yours as well.

Here is a python class for FinSet. I’ve implemented a couple of interesting constructions, such as pullbacks and detecting monomorphisms and epimorphisms.

I'm launching you into the deep end here if you have never seen category theory before (although goddamn does it get deeper). Do not be surprised if this doesn't make that much sense. Try reading Rydeheard and Burstall chapters 3 and 4 first, or other resources.

from collections import Counter

class FinSet():
    def __init__(self, dom, cod, f):
        ''' In order to specify a morphism, we need to give a python set that is the domain, a python set
        that is the codomain, and a dictionary f that encodes a function between these two sets.
        We could by assumption just use f.keys() implicitly as the domain, however the codomain is not inferable from just f.
        In other categories the domain might not be inferable either, so we choose to require both symmetrically.
        '''
        assert(dom == set(f.keys())) # f has a value for everything in the domain
        assert(all([y in cod for y in f.values()])) # f has only values in the codomain
        self.cod = cod
        self.dom = dom
        self.f = f
    def __getitem__(self, i):
        # a convenient overloading.
        return self.f[i]
    def compose(f, g):
        ''' Composition is function composition. Dictionary comprehension syntax for the win! '''
        return FinSet(g.dom, f.cod, {x: f[g[x]] for x in g.dom})
    def __matmul__(f, g):
        ''' f @ g is shorthand for composition; the commuting square assertions below rely on it. '''
        return f.compose(g)
    def idd(dom):
        ''' The identity morphism on an object dom. A function mapping every x to itself. '''
        return FinSet(dom, dom, {x: x for x in dom})
    def __eq__(f, g):
        assert(f.dom == g.dom) # I choose to say the question of equality only makes sense if the arrows are parallel.
        assert(f.cod == g.cod) # i.e. they have the same object at head and tail
        return f.f == g.f
    def terminal(dom):
        ''' The terminal object is an object such that for any other object, there is a unique morphism
        to the terminal object.
        This function returns the object itself {()} and the universal morphism from dom to that object. '''
        return {()}, FinSet(dom, {()}, {x: () for x in dom})
    def initial(cod):
        ''' The initial object is an object such that for any other object, there is a unique morphism
        from the initial object to that object.
        It is the dual of the terminal object.
        In FinSet, the initial object is the empty set set({}). The mapping is then an empty dictionary dict({}). '''
        return set({}), FinSet(set({}), cod, dict({}))
    def monic(self):
        ''' Returns bool of whether mapping is injective.
        In other words, maps every incoming element to a unique outgoing element.
        In other words, does `self @ g == self @ f` imply `g == f` forall g, f?
        https://en.wikipedia.org/wiki/Monomorphism
        The Counter class counts occurrences. '''
        codomain_vals = self.f.values()
        counts = Counter(codomain_vals).values() # https://docs.python.org/3/library/collections.html#collections.Counter
        return all([count == 1 for count in counts]) # no two elements map to the same element
    def epic(self):
        ''' Is mapping surjective? In other words, does the image of the map f cover the entire codomain? '''
        codomain_vals = self.f.values()
        return set(codomain_vals) == self.cod # cover the codomain
    def product(a, b): # takes a specific product
        ab = {(x, y) for x in a for y in b}
        p1 = FinSet(ab, a, {(x, y): x for (x, y) in ab})
        p2 = FinSet(ab, b, {(x, y): y for (x, y) in ab})
        return ab, p1, p2, lambda f, g: FinSet(f.dom, ab, {x: (f[x], g[x]) for x in f.dom}) # assert f.dom == g.dom, f.cod == a, g.cod == b
    def coproduct(a, b):
        ab = {(0, x) for x in a}.union({(1, y) for y in b})
        i1 = FinSet(a, ab, {x: (0, x) for x in a})
        i2 = FinSet(b, ab, {y: (1, y) for y in b})
        def fanin(f, g):
            return FinSet(ab, f.cod, {(tag, x): (f[x] if tag == 0 else g[x]) for (tag, x) in ab}) # the morphism ab -> f.cod
        return ab, i1, i2, fanin
    def equalizer(f, g):
        ''' The equalizer is a construction that allows one to talk about the solution to an equation in a categorical manner.
        An equation is f(x) = g(x). It has two mappings f and g that we want to somehow be the same. The solution to this equation
        should be a subset of the shared domain of f and g. Subsets are described from within FinSet by maps that map into the
        subset.
        '''
        assert(f.dom == g.dom)
        assert(f.cod == g.cod)
        e = {x for x in f.dom if f[x] == g[x]}
        return e, FinSet(e, f.dom, {x: x for x in e})
    def pullback(f, g): # solutions to f(x) = g(y)
        assert(f.cod == g.cod)
        e = {(x, y) for x in f.dom for y in g.dom if f[x] == g[y]} # subset of (f.dom, g.dom) that solves the equation
        p1 = FinSet(e, f.dom, {(x, y): x for (x, y) in e}) # projection 1
        p2 = FinSet(e, g.dom, {(x, y): y for (x, y) in e}) # projection 2
        def univ(q1, q2):
            ''' Universal property: Given any other commuting square of f @ q1 == g @ q2, there is a unique morphism
            that injects into e such that certain triangles commute. It's best to look at the diagram. '''
            assert(q1.cod == p1.cod) # q1 points to the head of p1
            assert(q2.cod == p2.cod) # q2 points to the head of p2
            assert(q1.dom == q2.dom) # tails of q1 and q2 are the same
            assert(f @ q1 == g @ q2) # commuting square condition
            return FinSet(q1.dom, e, {z: (q1[z], q2[z]) for z in q1.dom})
        return e, p1, p2, univ

Here are some fun exercises (Ok. Truth time. It's because I got lazy). Try to implement exponential and pushout for this category. To get you started, a hedged sketch of the exponential is below.
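This sketch is my own guess at one encoding, not Rydeheard and Burstall's: since python dicts aren't hashable, represent each function a -> b as a frozenset of (input, output) pairs. It builds the object b^a together with the evaluation morphism out of the product b^a x a; currying, the other half of the universal property, is the real exercise.

from itertools import product as iproduct

def exponential(a, b):
    ''' The exponential object b^a in FinSet is the set of all functions a -> b,
    here encoded as frozensets of (input, output) pairs.
    Returns the object and the evaluation morphism eval : b^a x a -> b. '''
    a = list(a)
    ba = {frozenset(zip(a, outs)) for outs in iproduct(b, repeat=len(a))}
    dom = {(f, x) for f in ba for x in a} # the product object b^a x a, using the pair encoding from product
    return ba, FinSet(dom, b, {(f, x): dict(f)[x] for (f, x) in dom})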

Uniform Continuity is Kind of Like a Lens

A really interesting topic is exact real arithmetic. It turns out, there are systematic ways of calculating numerical results with arbitrarily fine accuracy.

In practice this is not used much as it is complicated and slow.

There are deep waters here.

The problem is made rather difficult by the fact that you can’t compute real numbers strictly, you have to in some sense compute better and better finite approximations.

One way of doing this is to compute a stream of arbitrarily good approximations. If someone needs a better approximation than you’ve already given, they pop the next one off.

Streams give you some inverted control flow. They allow the results to pull on the input, going against the grain of the ordinary direction of computation. If you are interested in a final result of a certain accuracy, they seem somewhat inefficient. You have to search for the right amount to pull the incoming streams, and the intermediate computations may not be helpful.

Haskell chews infinite lists up for breakfast, so it’s a convenient place for such things https://wiki.haskell.org/Exact_real_arithmetic https://hackage.haskell.org/package/exact-real

A related but slightly different set of methods comes in the form of interval arithmetic. Interval arithmetic also gives precise statements of accuracy, maintaining bounds on the accuracy as a number is carried along.

Interval arithmetic is very much like forward mode differentiation. In forward mode differentiation, you compute on dual numbers (x,dx) and carry along the derivatives as you go.

type ForwardMode x dx y dy = (x,dx) -> (y,dy)
type IntervalFun x delx y dely = (x,delx) -> (y, dely)

Conceptually, differentiation and these validated bounds are connected as well. They are both telling you something about how the function is behaving nearby. The derivative is mostly meaningful at exactly the point it is evaluated. It is extremely local. The verified bounds being carried along are sort of a very principled finite difference approximation.

But reverse mode differentiation is often where it is at. This is the algorithm that drives deep learning. Reverse mode differentiation can be modeled functionally as a kind of lens: http://www.philipzucker.com/reverse-mode-differentiation-is-kind-of-like-a-lens-ii/. The thing that makes reverse mode confusing is the backward pass. This is also inverted control flow, where the output pushes information to the input. The Lens structure does this too:

type Lens s t a b = s -> (a, b -> t)

It carries a function that goes in the reverse direction; these functions are composed in the opposite direction of ordinary control flow. These functions are the "setters" in the ordinary usage of the Lens, but they are the backproppers for differentiation.

By analogy one might try

type RealF x delta y epsilon = Lens x delta y epsilon = x -> (y, epsilon -> delta)

There is something pleasing here compared to interval arithmetic in that the output epsilon drives the input delta. The second function is kind of a Skolemized \delta(\epsilon) from the definition of continuity.

Although it kind of makes sense, there is something unsatisfying about this. How do you compute the x -> y? You already need to know the accuracy before you can make this function?

So it seems to me that actually a better definition is

type RealF x delta y epsilon = Lens epsilon y delta x  = epsilon -> (delta, x -> y)

This type surprised me and is rather nice in many respects. It lets you actually calculate x -> y, has that lazy pull-based feel without infinite streams, and has delta as a function of epsilon.

I have heard, although don’t understand, that uniform continuity is the more constructive definition (see constructive analysis by Bridger) https://en.wikipedia.org/wiki/Uniform_continuity This definition seems to match that.

In addition we are able to use approximations of the actual function if we know the accuracy it needs to be computed to. For example, given we know we need 0.01 accuracy of the output, we know we only need 0.009 accuracy in the input and we only need the x term of a Taylor series of sine (the total inaccuracy of the input and the inaccuracy of our approximation of sine combine to give total inaccuracy of output). If we know the needed accuracy allows it, we can work with fast floating point operations. If we need better we can switch over to mpfr, etc.
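To make that concrete, here is a rough Python rendition of the idea (sin_real is my own sketch, with the error budget split 50/50 between input error and truncation error, and the Taylor branch only valid on |x| <= 1):

import math

def sin_real(eps):
    ''' Given required output accuracy eps, return (delta, f): the input accuracy
    delta we need, and an approximation f of sin that is good enough. '''
    delta = eps / 2.0 # sin is 1-Lipschitz, so input error of delta costs at most eps/2 in the output
    if eps / 2.0 >= 1.0 / 120.0:
        # on |x| <= 1, |sin x - (x - x**3/6)| <= 1/120, so two Taylor
        # terms cover the remaining half of the budget for loose demands
        f = lambda x: x - x**3 / 6.0
    else:
        f = math.sin # demand is tight: fall back to the full floating point sine
    return delta, f

delta, approx_sin = sin_real(0.1) # inputs only need to be known to within 0.05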

This seems nice for MetaOcaml staging or other compile time macro techniques. If the epsilon required is known at compile time, it makes sense to me that one could use MetaOcaml to produce fast unrolled code. In addition, if you know the needed accuracy you can switch between methods and avoid the runtime overhead. The stream based approach seems to have a lot of context switching and perhaps unnecessary intermediate computations. It isn’t as bad as it seems, since these intermediate computations are usually necessary to compute anyhow, but still.

We can play the same monoidal category games with these lenses as ever. We can use dup, par, add, mul, sin, cos etc. and wire things up in diagrams and what have you.

This might be a nice type for use in a theorem prover. The Lens type combined with the appropriate properties that the intervals go to zero and stay consistent for arbitrary epsilon seems like enough? { Realf | something something something}

Relation to Backwards error analysis?

Does this have nice properties like backprop when on high dimensional inputs? That’s where backprop really shines, high to low dimensional functions

Categorical Combinators for Convex Optimization and Model Predictive Control using Cvxpy

We’re gonna pump this well until someone MAKES me stop.

This particular example is something that I’ve been trying to figure out for a long time, and I am pleasantly surprised at how simple it all seems to be. The key difference with my previous abortive attempts is that I’m not attempting the heavy computational lifting myself.

We can take pointful DSLs and convert them into a point-free, category theory inspired interface. In this case a very excellent pointful dsl for convex optimization: cvxpy. There are some similar and related posts converting dsls to categorical form.

A convex optimization problem optimizes a convex objective function with constraints that define a convex set like polytopes or balls. They are polynomial time tractable and shockingly useful. We can make a category out of convex optimization problems. We consider some variables to be “input” and some to be “output”. This choice is somewhat arbitrary as is the case for many “relation” feeling things that aren’t really so rigidly oriented.

Convex programming problems do have a natural notion of composition. Check out the last chapter of Rockafellar, where he talks about the convex algebra of bifunctions. Instead of summing over the inner composition variable like in Vect \sum_j A_{ij}B_{jk}, or existentializing like in Rel \{ (a,c) | \exists b. (a,b)\in A, (b,c) \in B \}, we instead minimize over the inner composition variable \min_y A(x,y) + B(y,z). These are similar operations in that they all produce bound variables.

The identity morphism is just the simple constraint that the input variables equal the output variables, with an objective function of 0. This is an affine constraint, hence convex.

In general, if we ignore the objective part entirely by just setting it to zero, we’re actually working in a very computationally useful subcategory of Rel, ConvexRel, the category of relations which are convex sets. Composition corresponds to an existential operation, which is quickly solvable by convex optimization techniques. In operations research terms, these are feasibility problems rather than optimization problems. Many of the combinators do nothing to the objective.

The monoidal product just stacks variables side by side and adds the objectives and combines the constraints. They really are still independent problems. Writing things in this way opens up a possibility for parallelism. More on that some other day.

We can code this all up in python with little combinators that return the input, output, objective, and constraint list. We need to hide these in inner lambdas for appropriate fresh generation of variables.

import cvxpy as cvx
import matplotlib.pyplot as plt

class ConvexCat():
    def __init__(self, res):
        self.res = res
    def idd(n):
        def res():
            x = cvx.Variable(n)
            return x, x, 0, []
        return ConvexCat(res)
    def par(f, g):
        def res():
            x, y, o1, c1 = f()
            z, w, o2, c2 = g()
            a = cvx.hstack((x, z))
            b = cvx.hstack((y, w))
            return a, b, o1 + o2, c1 + c2 + [a == a, b == b] # sigh. Don't ask. Alright. FINE. cvxpy needs hstacks registered to populate them with values
        return ConvexCat(res)
    def compose(g, f):
        def res():
            x, y, o1, c1 = f()
            y1, z, o2, c2 = g()
            return x, z, o1 + o2, c1 + c2 + [y == y1]
        return ConvexCat(res)
    def dup(n):
        def res():
            x = cvx.Variable(n)
            return x, cvx.vstack([x, x]), 0, []
        return ConvexCat(res)
    def __call__(self):
        return self.res()
    def run(self):
        x, y, o, c = self.res()
        prob = cvx.Problem(cvx.Minimize(o), c)
        sol = prob.solve()
        return sol, x, y, o
    def __matmul__(self, g):
        return self.compose(g)
    def __mul__(self, g):
        return self.par(g)
    def const(v):
        def res():
            x = cvx.Variable(1) # hmm. cvxpy doesn't like 0d variables. That's too bad
            return x, x, 0, [x == v]
        return ConvexCat(res)
    def converse(f):
        def res():
            a, b, o, c = f()
            return b, a, o, c
        return ConvexCat(res)
    def add(n):
        def res():
            x = cvx.Variable(n)
            return x, cvx.sum(x), 0, []
        return ConvexCat(res)
    def smul(s, n):
        def res():
            x = cvx.Variable(n)
            return x, s * x, 0, []
        return ConvexCat(res)
    def pos(n):
        def res():
            x = cvx.Variable(n, positive=True)
            return x, x, 0, []
        return ConvexCat(res)
    # And many many more!

Now for a somewhat more concrete example: Model Predictive control of an unstable (linearized) pendulum.

Model predictive control is where you solve an optimization problem of the finite time rollout of a control system online. In other words, you take measurement of the current state, update the constraint in an optimization problem, ask the solver to solve it, and then apply the force or controls that the solver says is the best.

This gives the advantage over the LQR controller in that you can set hard inequality bounds on total force available, or positions where you wish to allow the thing to go. You don’t want your system crashing into some wall or falling over some cliff for example. These really are useful constraints in practice. You can also include possibly time dependent aspects of the dynamics of your system, possibly helping you model nonlinear dynamics of your system.

I have more posts on model predictive control here http://www.philipzucker.com/model-predictive-control-of-cartpole-in-openai-gym-using-osqp/ http://www.philipzucker.com/flappy-bird-as-a-mixed-integer-program/

Here we model the unstable point of a pendulum and ask the controller to find forces to balance the pendulum.

def controller(Cat, pend_step, pos, vel):
    acc = Cat.idd(3)
    for i in range(10):
        acc = acc @ pend_step
    init = Cat.const(pos) * Cat.const(vel) * Cat.idd(1)
    return acc @ init

We can interpret the controller in GraphCat in order to produce a diagram of the 10 step lookahead controller. This is an advantage of the categorical style, a la compiling to categories.

from graphcat import *
GC = GraphCat
pend_step = GC.block("pend_step", ["x(t)", "v(t)", "f(t)"],["x(t+1)", "v(t+1)", "f(t+1)"])
pos = 0.5
vel = 0
prob = controller(GraphCat, pend_step, pos, vel)
prob.run()

We can also actually run it in a model predictive control configuration in simulation.

CC = ConvexCat
idd = CC.idd

def pend_step_res():
    dt = 0.1
    x = cvx.Variable(2)
    xnext = cvx.Variable(2)
    f = cvx.Variable(1)
    pos = x[0]
    vel = x[1]
    posnext = xnext[0]
    velnext = xnext[1]
    c = []
    c += [posnext == pos + vel * dt]  # position update
    c += [velnext == vel + (pos + f) * dt]  # velocity update (linearized inverted pendulum)
    c += [-1 <= f, f <= 1]  # force constraint
    c += [-1 <= posnext, posnext <= 1]  # safety constraint
    c += [-1 <= pos, pos <= 1]
    obj = pos**2 + posnext**2  # + 0.1 * f**2
    z = cvx.Variable(1)
    c += [z == 0]
    return cvx.hstack((x, f)), cvx.hstack((xnext, z)), obj, c

pend_step = ConvexCat(pend_step_res)

pos = 0.5
vel = 0
poss = []
vels = []
fs = []
dt = 0.1
'''
# we could get this to run faster if we used cvxpy Parameters instead of recompiling the controller every time.
# some other day
p_pos = cvx.Parameter(1)
p_vel = cvx.Parameter(1)
p_pos.value = pos
p_vel.value = vel
'''
for j in range(30):
    prob = controller(ConvexCat, pend_step, pos, vel)
    _, x, y, _ = prob.run()
    f = x.value[2]  # the third input wire is the first-step force
    # actually update the state
    pos = pos + vel * dt
    vel = vel + (pos + f) * dt
    poss.append(pos)
    vels.append(vel)
    fs.append(f)

plt.plot(poss, label="pos")
plt.plot(vels, label="vel")
plt.plot(fs, label="force")
plt.legend()
And some curves. How bout that.

Bits and Bobbles

LazySets https://github.com/JuliaReach/LazySets.jl

ADMM – It’s a “lens”. I’m pretty sure I know how to do it pointfree. Let’s do it next time.

The minimization can be bubbled out to the top if we are always minimizing. If we mix in maximization, then we can't, and we're working on a much harder problem. This is similar to what happens in Rel when you have relational division, which is a kind of universal quantification \forall. Mixed quantifier problems in general are very tough. These kinds of problems include games, synthesis, and robustness. More on this some other day.
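In quantifier-free terms: nested minimizations merge,

\min_x \min_y f(x,y) = \min_{x,y} f(x,y)

which is why a purely minimizing pipeline stays one big convex problem, while in general

\min_x \max_y f(x,y) \ne \max_y \min_x f(x,y)

and that failure to commute is where the hardness of mixed quantifier problems lives.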

Convex-Concave programming minimax https://web.stanford.edu/~boyd/papers/pdf/dccp_cdc.pdf https://web.stanford.edu/class/ee364b/lectures/cvxccv.pdf

The minimization operation can be related to the summation operation by the method of steepest descent in some cases. The method of steepest descent approximates a sum or integral by evaluating it at its most dominant position and expanding out from there, hence converting a linear algebra thing into an optimization problem. Examples include the connection between statistical mechanics and thermodynamics, and between classical mechanics and quantum mechanics.
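The formula underneath this is the Laplace principle:

\min_x f(x) = -\lim_{\beta \to \infty} \frac{1}{\beta} \log \int e^{-\beta f(x)}\, dx

so at large \beta (low temperature, small \hbar), integrals degenerate into minimizations.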

Legendre Transformation ~ Laplace Transformation via steepest descent https://en.wikipedia.org/wiki/Convex_conjugate yada yada, all kinds of good stuff.
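Concretely, the convex conjugate (Legendre-Fenchel transform) is

f^*(p) = \sup_x \left( p \cdot x - f(x) \right)

which is what the Laplace-style integral \int e^{\beta (p \cdot x - f(x))}\, dx turns into under steepest descent as \beta \to \infty.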

Intersection is easy. Join/union is harder. Does MIP help?

def meet(f, g):
    def res():
        x, y, o, c = f()
        x1, y1, o1, c1 = g()
        return x, y, o + o1, c + c1 + [x == x1, y == y1]
    return res
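As for join/union: the standard mixed-integer move is the big-M encoding. If A and B are described by constraints g_A(x) \le 0 and g_B(x) \le 0, membership in the union can be written with one boolean b and a large constant M:

g_A(x) \le M b, \qquad g_B(x) \le M (1 - b), \qquad b \in \{0, 1\}

so b = 0 forces x into A and b = 1 forces x into B. Whether this meshes nicely with the combinators above, I haven't checked.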

Quantifier elimination

MIP via adjunction

Rough Ideas on Categorical Combinators for Model Checking Petri Nets using Cvxpy

Petri nets are a framework for modeling dynamical systems that is very intuitive to some people. The vanilla version is that there are discrete tokens at nodes on a graph representing resources of some kind and tokens can be combined according to the firing of transition rules into new tokens in some other location.

This is a natural generalization of chemical reaction kinetics, where tokens are particular kinds of atoms that need to come together. It also is a useful notion for computer systems, where tokens represent some computational resource.

To me, this becomes rather similar to a flow problem or circuit problem. Tokens feel a bit like charge; transitions are a bit like current (although not necessarily conservative). In a circuit, one can have such a small current that the particulate nature of electric current in terms of electrons is important. This happens for shot noise or for Coulomb blockade, for example.

If the number of tokens is very large, it seems intuitively sensible to me that one can well approximate the behavior by relaxing to a continuum. Circuits have discrete electrons and yet are very well approximated by Ohm's law and the like. Populations are made of individuals, and yet in the thousands their dynamics are well described by differential equations.

It seems to me that mixed integer programming is a natural fit for this problem. Mixed integer programming has had its implementations and theory heavily refined for over 70 years, so very general purpose and performant off-the-shelf solvers are available. The way mixed integer programs are solved is by considering their quickly solved continuous relaxation (allowing fractional tokens and fractional transitions, more akin to continuous electrical circuit flow) and using this information to systematically inform a discrete search process. This seems to me a reasonable starting approximation. Another name for petri nets is Vector Addition Systems, which has more of the matrix-y flavor.

We can encode bounded model checking for reachability of a petri net into a mixed integer program fairly easily. We use 2-index variables, the first of which is the time step. We again turn to the general purpose functional way of encoding pointful DSLs into pointfree ones, as I have done here and here. The key point is that you need to be careful where you generate fresh variables. This is basically copycatting my posts here. http://www.philipzucker.com/categorical-combinators-for-graphviz-in-python/ http://www.philipzucker.com/a-sketch-of-categorical-relation-algebra-combinators-in-z3py/

I’m like 70% sure what I did here makes sense, but I’m pretty sure the general idea makes sense with some fiddling.

import cvxpy as cvx

T = 10  # total number of time steps as a global parameter

class PetriCat():
    def compose(f, g):
        def res():
            a, b, fcon = f()
            b1, c, gcon = g()
            return a, c, fcon + gcon + [b == b1]
        return res
    def idd():
        def res():
            x = cvx.Variable((T - 1, 1), integer=True)
            return x, x, [x >= 0]
        return res
    def par(f, g):
        def res():
            a, b, fcon = f()
            c, d, gcon = g()
            return cvx.vstack([a, c]), cvx.vstack([b, d]), fcon + gcon
        return res
    def weighted_block(wi, wo, wint):
        def res():
            (Na, Ni) = wi.shape  # number of actions, number of inputs
            (Na1, No) = wo.shape
            (Na2, Nint) = wint.shape
            assert Na == Na1
            assert Na == Na2
            action = cvx.Variable((T - 1, Na), integer=True)
            internal = cvx.Variable((T, Nint), integer=True)
            flowin = action @ wi
            flowout = action @ wo
            return flowin, flowout, [internal[1:, :] == internal[:-1, :] + action @ wint, action >= 0, internal >= 0]
        return res
    def run(f):
        a, b, fcon = f()
        prob = cvx.Problem(cvx.Minimize(1), fcon)
        prob.solve()
        return a, b
    # We need some way of specifying the initial and final states of things. Just more parameters to the constructor functions, I think.

The big piece is the weighted_block function. It lets you build a combinator with an internal state and input and output flow variables. You give matrices with entries for every possible transition. Whether a transition occurred between t and t+1 is indicated by integer variables. There is also possible accumulation of tokens at nodes, which also requires integer variables. Perhaps we'd want to expose the token state of the nodes to the outside too?
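To make the shapes concrete, here's a hypothetical usage sketch (the matrices and numbers are made up for illustration): one transition that consumes 2 tokens per firing on its single input wire, emits 1 token on its single output wire, and leaves its one internal place alone.

import numpy as np
# hypothetical example, just to pin down the shapes
wi = np.array([[2.0]])    # (Na=1, Ni=1): each firing draws 2 tokens of input flow
wo = np.array([[1.0]])    # (Na=1, No=1): each firing emits 1 token of output flow
wint = np.array([[0.0]])  # (Na=1, Nint=1): firings don't change the internal place
block = PetriCat.weighted_block(wi, wo, wint)
flowin, flowout, cons = block()  # (T-1, 1) flow expressions plus the constraints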

Weighted block schematically looks something like this

We can also get out a graphical representation of the net by reinterpreting our program into GraphCat. This is part of the power of the categorical interface. http://www.philipzucker.com/categorical-combinators-for-graphviz-in-python/

When I talked to Zach about this, he seemed skeptical that MIP solvers wouldn't die a horrible death if you threw a moderately large petri net at them. Hard to say without trying.

Thoughts

There is an interesting analogy to be found with quantum field theory in that if you lift up to considering distributions of tokens, it looks like an occupation number representation. See Baez. http://math.ucr.edu/home/baez/stoch_stable.pdf

If you relax the integer constraint and the positivity constraints, this really is a finite difference formulation for capacitor circuits. The internal states would then be the charge of the capacitor. Would the positivity constraint be useful for diodes?

I wonder how relevant the chunky nature of petri nets might be for considering superconducting circuits, which have quantization of quantities from quantum mechanical effects.

Cvxpy lets you describe convex regions. We can use this to implement the convex subcategory of Rel, of which you can ask reachability questions. Relational division probably won't work. I wonder if there is something fun there with respect to the integerizing operation and Galois connections.

Edit: I should have googled a bit first. https://www.sciencedirect.com/science/article/pii/S0377221705009240 mathematical programming techniques for petri net reachability. So it has been tried, and it sounds like the results weren't insanely bad.

Categorical Combinators for Graphviz in Python

Graphviz is a graph visualization tool https://www.graphviz.org/. In Conal Elliott's Compiling to categories http://conal.net/papers/compiling-to-categories/, compiling code to its corresponding graphviz representation was one very compelling example. These graphs are very similar to the corresponding string diagram of the monoidal category expression, but they also look like boolean circuit diagrams. Besides Conal Elliott's Haskell implementation, there is an implementation in the Julia Catlab.jl project https://epatters.github.io/Catlab.jl/stable/

I’ve basically implemented a toy version of a similar thing in python. It lets you do things like this

plus = GraphCat.block("+", ["a","b"], ["c"])
I = GraphCat.idd()
p1 = plus
p2 = p1 @ (p1 * p1)
p3 = p1 @ (p2 * p2)
p4 = p1 @ (p3 * p3)
d = p4.run()
d.format = "png"
d.render("adders")

Why python?

  • Python is the lingua franca of computing these days. Many people encounter it, even people whose main thing isn’t computers.
  • The python ecosystem is nutso good.
  • Julia is kind of an uphill battle for me. I’m fighting the battle ( https://github.com/philzook58/Rel.jl ) , but I already know python pretty well. I can rip this out and move on.

What I did was implement some wrappers around the graphviz python package, which exposes a stateful, not very feature-rich interface. It is pretty nice that it prints the graphs inline in jupyter notebooks though.

The code is written in a style very similar to (and hopefully overloadable with) that of the z3py relation algebra. http://www.philipzucker.com/a-sketch-of-categorical-relation-algebra-combinators-in-z3py/ . There is a fairly general cookbook method here for categorifying a DSL. Since graphviz does not directly expose fresh node creation as far as I can tell, I made my own using a random number generator. The actual combinators are graphviz object processors, so we build up a giant functional chain of processors and then actually execute it with run. The inputs and outputs are represented by lists of node names. There is some design space to consider here.

I also had to use class based wrappers. Based on the precedent set by the python 3 matrix multiplication operator @, I think it is a requirement that it also be used for category composition. id is a builtin in python, unfortunately, hence idd. For the monoidal product, I feel like overloading power ** looks nice even if it is a nonsensical analogy; * is also not too bad. I went with * for now.

The graphviz graphs aren’t quite string diagrams. They make no promise to preserve the ordering of your operations, but they seem to tend to.

import random
import graphviz

class GraphCat():
    def __init__(self, res):  # just hold the graphviz processor function
        self.res = res
    def fresh(n):  # makes random numbers for fresh node labels
        return [str(random.randint(0, 10**9)) for i in range(n)]
    def idd(n=1):  # identity morphism. n inputs, n outputs.
        def res(g):
            nodes = GC.fresh(n)
            for node in nodes:
                g.node(node, shape="point")
            return nodes, nodes
        return GraphCat(res)
    def compose(g, f):  # compose morphisms. Makes graphviz edges between node labels generated by g and f
        f = f.res
        g = g.res
        def res(G):
            a, b = f(G)
            b1, c = g(G)
            assert len(b) == len(b1)
            G.edges(zip(b, b1))
            return a, c
        return GraphCat(res)
    def par(f, g):  # monoidal product. Puts f and g in parallel
        f = f.res
        g = g.res
        def res(G):
            a, b = f(G)
            c, d = g(G)
            return a + c, b + d
        return GraphCat(res)
    def dump():  # dump deletes this edge with an empty circle
        def res(G):
            node = GC.fresh(1)[0]
            G.node(node, shape="point", fillcolor="white")
            return [node], []
        return GraphCat(res)
    def dup():  # duplicate this edge
        def res(G):
            node = GC.fresh(1)[0]
            G.node(node, shape="point", fillcolor="green")
            return [node], [node, node]
        return GraphCat(res)
    def converse(f):  # turn input into output of this combinator
        f = f.res
        def res(G):
            a, b = f(G)
            return b, a
        return GraphCat(res)
    def named_simple(name):  # a simple labeled 1 input 1 output object
        def res(G):
            node = GC.fresh(1)[0]
            G.node(node, name)
            return [node], [node]
        return GraphCat(res)
    def block(name, inputs, outputs):  # an object with labelled ports, drawn as a graphviz record
        inputs_label = [f"<{inp}> {inp}" for inp in inputs]
        outputs_label = [f"<{outp}> {outp}" for outp in outputs]
        def res(G):
            node = GC.fresh(1)[0]
            inputs1 = [f"{node}:{inp}" for inp in inputs]
            outputs1 = [f"{node}:{outp}" for outp in outputs]
            grphstring = "{ {" + " | ".join(inputs_label) + "} | " + name + " | {" + "|".join(outputs_label) + "} }"
            G.node(node, grphstring, shape="record")
            return inputs1, outputs1
        return GC(res)
    def fst():  # project out first part of tuple. semantically equal to (id * dump)
        return GraphCat.block("fst", ["a", "b"], ["a1"])
    def snd():  # project out second part. semantically equal to (dump * id)
        return GraphCat.block("snd", ["a", "b"], ["b1"])
    def run(self):  # runs the processor chain on a fresh graphviz object
        dot = graphviz.Digraph()
        self.res(dot)
        return dot
    def __matmul__(self, rhs):  # matrix multiplication is a natural analog of composition
        return GC.compose(self, rhs)
    def __mul__(self, rhs):  # monoidal product
        return GC.par(self, rhs)
    def __add__(self, rhs):  # hmm. What should addition be? Join?
        pass
    def const(label):  # inject a constant in. A labelled node with 1 output port and no input
        def res(G):
            node = GraphCat.fresh(1)[0]
            G.node(node, str(label))
            return [], [node]
        return GC(res)
    def cup():
        def res(G):
            nodes = GraphCat.fresh(2)
            for node in nodes:
                G.node(node, shape="point")
            G.edge(nodes[0], nodes[1])
            return [], nodes
        return GraphCat(res)
    def cap():
        return GraphCat.converse(GraphCat.cup())

GC = GraphCat  # abbreviation; bound after the class, since GraphCat doesn't exist yet inside its own body

Here’s some example usage

GC = GraphCat
plus = GC.block("+", ["a", "b"], ["c"])
I = GC.idd()
f = GC.named_simple("f")
g = GC.named_simple("g")
dump = GC.dump()
dup = GC.dup()
cup = GC.cup()
cap = GC.cap()

d = cap @ (I * I) @ cup  # (I * cap) @ (I * I * I) @ (cup * I)
d.run()

d = plus @ (GC.const(1) * GC.const(2))
d = d.run()

diagram = ((f * I) @ dup @ g @ (dump * I) @ (I * f) @ (f * f)) * g
diagram.run()

Class based overloading is the main paradigm of overloading in python. You overload a program into different categories by making the program take the appropriate category class as a parameter.

# by passing in different category classes, we can make polymorphic functions
# They have to have a uniform interface though, which is hard to constrain in python.
def polymorphic_prog(M):
    idd = M.idd()
    dump = M.dump()
    dup = M.dup()
    return (idd * dump) @ dup
polymorphic_prog(GraphCat).run()

For example, something like this ought to work. Then you can get the graph of some matrix computation to go along with its numpy implementation.

import numpy as np

class FinVect(np.ndarray):
    def compose(f, g):
        return f @ g
    def idd(n):
        return np.eye(n)
    def par(f, g):
        return np.kron(f, g)
    def __mul__(self, rhs):
        return np.kron(self, rhs)
# and so on.
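A quick sanity check of what the interface would mean, assuming the class above (these are just numpy arrays worn categorically; a sketch, not a polished design):

f = np.array([[0., 1.], [1., 0.]])  # swap matrix
FinVect.compose(f, FinVect.idd(2))  # composition = matrix product, gives back f
FinVect.par(f, f)                   # monoidal product = kron, a 4x4 matrix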

Maybe outputting tikz is promising? https://github.com/negrinho/sane_tikz