Checkpoint: Implementing Linear Relations for Linear Time Invariant Systems

I’m feeling a little stuck on this one, so I think it is smart to just write up a quick checkpoint for myself and anyone who might have advice.

The idea is to reimplement the linear relation computations from this post: https://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ There is a lot more context written there, and it is probably necessary background for this one.

Linear relation algebra is a refreshing perspective for me on systems of linear equations. It has a notion of composition that seems, dare I say, almost as useful as matrix multiplication. Very high praise. This composition has a more bidirectional flavor than matrix multiplication, which makes it a good fit for describing physical systems, in which interconnection always influences both sides.

In the previous post, I used nullspace computations as my workhorse. The nullspace operation allows one to switch between a constraint (nullspace) and a generator (span) picture of a vector subspace. The generator view is useful for projection and linear union, and the constraint view is useful for partial-composition and intersection. The implementation of linear relation composition requires flipping between both views.
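
As a reminder of the flavor, here is a minimal float-valued sketch of that flip using LinearAlgebra’s nullspace (ordinary numerical linear algebra, not the module machinery this post is after):

using LinearAlgebra
# Constraint view: the subspace is {v : A*v = 0}
A = [1.0 1.0 0.0 ;
     0.0 1.0 1.0]
# Generator view: the columns of B span that same subspace
B = nullspace(A)      # spans {(1,-1,1)} up to scale
# Flipping back: constraints for span(B) come from the nullspace of B'
A2 = nullspace(B')'   # the rows of A2 cut out the same subspace again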

I’m reimplementing it in Julia for two reasons:

  • To use the Julia ecosystem’s implementations of module operations
  • To get a little of that Catlab.jl magic to shine on it.

It was a disappointment of the previous post that I could only treat resistor-like circuits. The new twist of using module packages allows treatment of inductor/capacitor circuits and signal flow diagrams.

When you transform into Fourier space, systems of linear differential equations become systems of polynomial equations: \frac{d}{dx} \rightarrow i \omega. From this perspective, modules seem like the appropriate abstraction rather than vector spaces. Modules are basically vector spaces where one doesn’t assume the operation of scalar division; in other words, the scalars form a ring rather than a field. Polynomials are a ring, not a field. In order to treat the new systems, I still need to be able to do linear-algebra-ish operations like nullspaces, except where the entries of the matrix are polynomials rather than floats.
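
To make that concrete: an inductor’s constitutive law V = L \frac{d}{dt} I becomes the polynomial constraint V = i \omega L I, and a capacitor’s I = C \frac{d}{dt} V becomes I = i \omega C V.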

Syzygies are basically the module analog of nullspaces. Syzygies are the combinations of generators that combine to zero. Considering the generators of a submodule as being column vectors, stacking them together makes a matrix. Taking linear combinations of the columns is what happens when you multiply a matrix by a vector. So the syzygies are the space of vectors for which this matrix multiplication gives 0, the “nullspace”.
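
As a tiny example, take the 1 \times 2 matrix A = (x \; y) over the ring k[x, y]. The syzygy module of its columns is generated by (y, -x)^T, since x \cdot y + y \cdot (-x) = 0. Over a field we could rescale such a vector freely, but over the ring we can’t divide by x or y, which is exactly why module machinery is needed.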

Computer algebra packages offer syzygy computations. Julia has bindings to Singular, which does this. I have been having a significant and draining struggle wrangling these libraries, though. Am I going against the grain? Did the library authors go against the grain? Here’s what I’ve got, trying to match the Catlab naming conventions:

using Singular

import Nemo

using LinearAlgebra # for I, the identity

CC = Nemo.ComplexField(64)
P, (s,) = PolynomialRing(CC, ["s"])
i = Nemo.onei(CC) # P(i) ? The imaginary number

#helpers to deal with Singular.jl
eye(m) = P.(Matrix{Int64}(I, m, m)) # There is almost certainly a better way of doing this. Actually dispatching Matrix?
zayro(m,n) = P.(zeros(Int64,m,n)) #new zeros method?
mat1(m::Int64) = fill(P(m), (1,1) )
mat1(m::Float64) = fill(P(m), (1,1) )
mat1(m::spoly{Singular.n_unknown{Nemo.acb}}) = fill(m, (1,1))

# Objects are the dimensionality of the vector space
struct DynOb
    m::Int
end

# Linear relations represented as constraint matrices:
# the relation holds for exactly those (x, y) with input * x + output * y = 0
struct DynMorph
  input::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
  output::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
end

dom(x::DynMorph) = DynOb(size(x.input)[2])
codom(x::DynMorph) = DynOb(size(x.output)[2])
id(X::DynOb) = DynMorph(eye(X.m), -eye(X.m))

# add together inputs (Catlab calls this mmerge)
mmerge(X::DynOb) = DynMorph( [eye(X.m) eye(X.m)] , -eye(X.m) )


mcopy(X::DynOb) = DynMorph( [eye(X.m) ; eye(X.m)] , -eye(2*X.m) ) # copy input

delete(A::DynOb) = DynMorph( fill(P.(0),(0,A.m)) , fill(P.(0),(0,0)) )   
create(A::DynOb) = DynMorph( fill(P.(0),(0,0)) , fill(P.(0),(0,A.m)) )
dagger(x::DynMorph) = DynMorph(x.output, x.input)

# cup and cap operators
dunit(A::DynOb) = compose(create(A), mcopy(A))
dcounit(A::DynOb) = compose(mmerge(A), delete(A))


scale(M) = DynMorph( mat1(M),mat1(-1))
diff =  scale(i*s) # differentiation = multiplying by i omega
integ = dagger(diff)
#cupboy = DynMorph( [mat1(1) mat1(-1)] , fill(P.(0),(1,0)) )
#capboy = transpose(cupboy)

#terminal

# relational operations
# The meet
# Inclusion

# I think this is a nullspace calculation.
# Almost all of this code is working around Singular's interface to get one I can understand.
function quasinullspace(A)
   rows, cols = size(A)
   vs = Array(gens(Singular.FreeModule(P, rows)))
   q = [sum(A[:,i] .* vs) for i in 1:cols]
   M = Singular.Module(P,q...)
   S = Singular.Matrix(syz(M)) # syz is the only meat of the computation
   return Base.transpose([S[i,j] for j=1:Singular.ncols(S), i=1:Singular.nrows(S) ])
end

function compose(x::DynMorph, y::DynMorph)
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    # Stack both relations' constraints into one system. The middle block of
    # columns is shared: x's output variables are identified with y's input variables.
    A = [ x.input                x.output P.(zeros(Int64,nx,yo)) ;
          P.(zeros(Int64,ny,xi)) y.input  y.output    ]
    # Switch to the generator view of the combined solution space
    B = quasinullspace(A)
    # Project the generators onto the outer variables (x's input, y's output)
    projB = [B[1:xi       ,:] ;
             B[xi+yi+1:end,:] ]
    # Switch back to the constraint view
    C = Base.transpose(quasinullspace(Base.transpose(projB)))
    return DynMorph( C[:, 1:xi], C[:, xi+1:end] )
end

# basically the direct sum. The monoidal product of linear relations
function otimes( x::DynMorph, y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    return DynMorph( [ x.input                P.(zeros(Int64,nx,yi));
                       P.(zeros(Int64,ny,xi)) y.input               ],
                      [x.output                P.(zeros(Int64,nx,yo));
                       P.(zeros(Int64,ny,xo))  y.output               ])
    
end
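
As a quick sanity check, here is a hypothetical usage sketch (I haven’t pinned down the exact normalization Singular returns, so take the expected output with a grain of salt). Differentiation composed with its dagger should be equivalent to the identity relation:

c = compose(diff, integ)
# Expect the constraint x - y = 0 up to an overall unit,
# i.e. c.input proportional to [1] and c.output proportional to [-1]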

I think this does basically work, but it’s clunky.

Thoughts

I need to figure out Catlab’s diagram drawing abilities enough to show some circuits and some signal flow diagrams. Wouldn’t that be nice?

I should show concrete examples of composing passive filter circuits together.

There is a really fascinating paper by Jan Willems where he digs into a beautiful picture of this, which I need to revisit: https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2007.1.pdf

https://golem.ph.utexas.edu/category/2018/06/the_behavioral_approach_to_sys.html

Is all this module stuff stupid? Should I just use rational polynomials and be done with it? Sympy? \frac{d^2}{dx^2}y = 0 and \frac{d}{dx}y = 0 are different equations, describing different behaviors. Am I even capturing that, though? Is my syzygy-powered composition even right? It seemed to work on a couple of small examples, and I think it makes sense. I dunno. Open to comments.

Because univariate polynomials form a principal ideal domain (PID), we can also use Smith normal form rather than syzygies, as I understand it. Perhaps AbstractAlgebra.jl might be a better tool?

Will the syzygy approach be good for band theory? We’re in the multivariate setting there, so Smith normal form no longer applies.

A Buchberger in Julia

Just as Gaussian elimination, which puts systems of linear equations into LU form, solves most linear algebra problems one might care about, Buchberger’s algorithm for finding a Grobner basis of a system of multivariate polynomial equations answers most questions you might ask. Some fun applications:

  • Linkages
  • Geometric theorem proving. Circles are x^2 + y^2 - 1 = 0 and so on.
  • Optics
  • Constraint satisfaction problems. x^2 - 1 = 0 gives you a boolean variable. It’s a horrible method, but it works if your computer doesn’t explode.
  • Energy and momentum conservation. “Classical Feynman diagrams”: p1 + p2 = p3 + p4 and so on.
  • Frequency domain circuits and linear dynamical systems 😉 more on this another day

To learn more about Grobner bases I highly recommend Cox, Little, and O’Shea’s Ideals, Varieties, and Algorithms.

To understand what a Grobner basis is, first know that univariate polynomial long division is a thing. It’s useful for determining whether one polynomial is a multiple of another: if so, the remainder is zero.
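
As a minimal sketch of that, using the same divrem over a list of divisors that the Buchberger code below relies on:

using DynamicPolynomials
@polyvar x
_, r = divrem(x^2 - 1.0, [x - 1.0])
# r is zero: x^2 - 1 = (x + 1)(x - 1) is a multiple of x - 1
_, r2 = divrem(x^2 + 1.0, [x - 1.0])
# r2 == 2.0: x^2 + 1 is not a multiple of x - 1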

One would like to lift the problem of determining whether a polynomial is a multiple of others to multivariate polynomials. Somewhat surprisingly, the definition of long division involves some choice. Sure, x^2 is a term that comes ahead of x, but is x a larger term than y? Than y^2? Different choices are admissible. In addition, one now has systems of equations. Which equation do we divide by first? It turns out to matter and to change the result. That is, unless one has converted the system into a Grobner basis.
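
The standard example from Cox, Little, and O’Shea: dividing x^2 y + x y^2 + y^2 by \{xy - 1, y^2 - 1\} in that order (lex order with x > y) leaves the remainder x + y + 1, while dividing by the same two polynomials in the opposite order leaves 2x + 1.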

A Grobner basis is a set of polynomials such that the remainder under multivariate polynomial division becomes unique, regardless of the order in which division occurs.

How does one find such a basis? In essence, kind of by brute force: you consider all the combinations that could reduce in two different ways depending on your choices, and check that the ambiguity doesn’t matter.
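
Concretely, for each pair f, g one forms the S-polynomial S(f, g) = \frac{m}{LT(f)} f - \frac{m}{LT(g)} g where m = \mathrm{lcm}(LM(f), LM(g)), reduces it against the current basis, and adds any nonzero remainder as a new basis element. This is exactly what the spoly and buchberger functions below do.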

Julia has packages for multivariate polynomials. https://github.com/JuliaAlgebra/MultivariatePolynomials.jl defines an abstract interface and generic functions. DynamicPolynomials gives a flexible representation for construction, and TypedPolynomials gives a faster one.

These already implement the bulk of what we need to get a basic Buchberger going: data structures, arithmetic, and division with remainder. With one caveat: the monomial ordering is already picked, and it’s not lexicographic, which is the nice one for eliminating variables. This would not be too hard to change, though?

Polynomial long division with respect to a set of polynomials is implemented here:

https://github.com/JuliaAlgebra/MultivariatePolynomials.jl/blob/9a0f7bf531ba3346f0c2ccf319ae92bf4dc261af/src/division.jl#L60

Unfortunately (or fortunately? It was a good learning experience; I learned some stuff about data structures and types in Julia, so that’s nice), quite late I realized that a Grobner basis algorithm very similar to the one below is implemented inside the SemiAlgebraic.jl package. Sigh.

using MultivariatePolynomials
using DataStructures


function spoly(p,q)
    pq = lcm(leadingmonomial(p),leadingmonomial(q))
    return div(  pq , leadingterm(p) ) * p - div(pq , leadingterm(q)) * q
end

function isgrobner(F::Array{T}) where {T <: AbstractPolynomialLike} # check buchberger criterion
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            s = spoly(f1,f2)
            _,s = divrem(s,F)
            if !iszero(s)
                return false
            end
        end
    end
    return true
end

function buchberger(F::Array{T}) where {T <: AbstractPolynomialLike}
    pairs = Queue{Tuple{T,T}}()
    # initialize with all pairs from F
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            enqueue!(pairs, (f1,f2))
        end
    end
    
    # consider all possible s-polynomials and reduce them
    while !isempty(pairs)
        (f1,f2) = dequeue!(pairs)
        s = spoly(f1,f2)
        _,s = divrem(s,F)
        if !iszero(s) #isapproxzero? Only add to our set if doesn't completely reduce
            for f in F
                enqueue!(pairs, (s,f))
            end
            push!(F,s)
        end
    end

    # reduce redundant entries in grobner basis.
    G = Array{T}(undef, 0)
    while !isempty(F)
        f = pop!(F)
        _,r = divrem(f, vcat(F,G))
        if !iszero(r)
            push!(G,r)
        end
    end
    
    return G
end

Some usage. You can see here that Gaussian elimination, as implemented by the backslash operator, is a special case of taking the Grobner basis of a linear set of equations:


using DynamicPolynomials
@polyvar x y

buchberger( [ x + 1.0 + y   , 2.0x + 3y + 7  ] )
#= 
2-element Array{Polynomial{true,Float64},1}:
 -0.5y - 2.5
 x - 4.0
=#

[ 1 1 ; 2  3 ] \ [-1 ; -7]
#=
2-element Array{Float64,1}:
  4.0
 -5.0
=#


buchberger( [ x^3 - y , x^2 - x*y ])
#=
3-element Array{Polynomial{true,Int64},1}:
 -xy + y²
 y³ - y
 x² - y²
=#

Improvements

Many. This is not a good Buchberger implementation, but it is simple. See http://www.scholarpedia.org/article/Buchberger%27s_algorithm for some tips, which include criteria for avoiding unneeded S-polynomial pairs and smart orderings. Better implementations use the F4 or F5 algorithms, which use sparse matrix facilities to perform many division steps in parallel. My vague impression of F4 is that you prefill a sparse matrix (rows correspond to an S-polynomial or a monomial multiple of your current basis, columns correspond to monomials) with the monomial multiples of your current basis that you know you might need.

In my implementation, I’m tossing away the div part of divrem. It can be useful to retain those quotients so you know how to write your Grobner basis in terms of the original basis.

You may want to look at Singular.jl, the Julia bindings to Singular.


Unification in Julia

Unification is a workhorse of symbolic computation. Comparing two terms (two syntax trees with named variable slots), we can figure out the most general substitution for the variables that makes them syntactically match.
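
For example, unifying f(X, b) with f(a, Y) succeeds with the substitution {X ↦ a, Y ↦ b}, while f(X, X) and f(a, b) fail to unify.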

It is a sister to pattern matching, but it has an intrinsic bidirectional flavor that makes it feel more powerful and declarative.

Unification can be implemented efficiently (not that I have done so yet) with some interesting variants of the disjoint-set / union-find data structure.

  • The magic of Prolog is basically built-in unification + backtracking search.
  • The magic of polymorphic type inference in Haskell and OCaml comes from unification of type variables.
  • Part of the magic of SMT solvers using the theory of uninterpreted functions is unification.
  • Automatic and interactive theorem provers have unification built in somewhere.

To describe terms I made simple data types for variables and terms, modeled on those in SymbolicUtils (I probably should just use the definitions in SymbolicUtils, but I was trying to keep it simple).

#variables
struct Sym
    name::Symbol
end

struct Term
    f::Symbol
    arguments::Array{Any} # Array{Union{Term,Sym}} faster/better?
end
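
For instance, the term f(X, a) is built by hand like this (a hand-constructed example; the @string2term macro further below automates it):

fXa = Term(:f, [Sym(:X), Term(:a, [])])  # a is a zero-argument Term, X is a variable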

The implementation by Norvig and Russell for their AI book is an often-copied, simple implementation of unification. It is small and kind of straightforward: you travel down the syntax trees, and when you hit variables you try to put them into your substitution dictionary. Although, like anything that touches substitution, it can be easy to get wrong. See Norvig’s note on a widespread error, linked below.

I used multiple dispatch as a kind of pattern matching on algebraic data types, dispatching on whether the arguments are terms or variables. It’s kind of nice, but it is unclear to me whether it is obscenely slow or not. This is not a high-performance implementation of unification in any case.

# Does the variable x occur anywhere inside y, following the substitution s?
occur_check(x::Sym, y::Term, s) = any(occur_check(x, a, s) for a in y.arguments)

function occur_check(x::Sym, y::Sym, s)
    if x == y
        return true
    elseif haskey(s, y)
        return occur_check(x, s[y], s)
    else
        return false
    end
end


function unify(x::Sym, y::Union{Sym,Term}, s) 
   if x == y
        return s
   elseif haskey(s,x)
        return unify(s[x], y, s)
   elseif haskey(s,y) # This is the Norvig twist
        return unify(x, s[y], s)
   elseif occur_check(x,y,s)
        return nothing
   else
        s[x] = y
        return s
   end
end

unify(x::Term, y::Sym, s) = unify(y,x,s)

function unify(x :: Term, y :: Term, s)
    if x.f == y.f && length(x.arguments) == length(y.arguments)
        for (x1, y1) in zip(x.arguments, y.arguments)
            if unify(x1,y1,s) == nothing
                return nothing
            end
        end
        return s
    else
        return nothing
    end
end

unify(x,y) = unify(x,y,Dict())

I also made a small function and macro for converting simple Julia expressions into my representation. It uses the Prolog convention that names starting with a capital letter are variables.

function string2term(x)
    if x isa Symbol
        name = String(x)
        if isuppercase(name[1])
           return Sym( x)
        else
           return Term( x, []  )
        end
    elseif x isa Expr
        @assert(x.head == :call)
        arguments = [string2term(y) for y in x.args[2:end] ]
        return Term( x.args[1], arguments )
    end
end
macro string2term(x)
    return :( $(string2term(x)) )
end

print(unify( @string2term(p(X,g(a), f(a, f(a)))) , @string2term(p(f(a), g(Y), f(Y, Z)))))
# Dict{Any,Any}(Sym(:X) => Term(:f, Any[Term(:a, Any[])]),Sym(:Y) => Term(:a, Any[]),Sym(:Z) => Term(:f, Any[Term(:a, Any[])]))

Links

Unification: A Multidisciplinary Survey by Knight https://kevincrawfordknight.github.io/papers/unification-knight.pdf

https://github.com/roberthoenig/FirstOrderLogic.jl/tree/master/src A julia project for first order logic that also has a unification implementation, and other stuff

An interesting category-theoretic perspective on unification: What is Unification? by Goguen http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3615

There is also a slightly hidden implementation in sympy (it does not appear in the docs?) http://matthewrocklin.com/blog/work/2012/11/01/Unification https://github.com/sympy/sympy/tree/master/sympy/unify

PyRes https://github.com/eprover/PyRes/blob/master/unification.py

Norvig unify
https://github.com/aimacode/aima-python/blob/9ea91c1d3a644fdb007e8dd0870202dcd9d078b6/logic4e.py#L1307

Norvig – widespread error
http://norvig.com/unify-bug.pdf

Efficient unification note
ftp://ftp.cs.indiana.edu/pub/techreports/TR242.pdf

blog post
https://eli.thegreenplace.net/2018/unification/

Efficient representations for triangular substitutions
https://users.soe.ucsc.edu/~lkuper/papers/walk.pdf

Conor McBride – first-order substitution, structurally recursive dependent types
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=880725E316FA5E3540EFAD83C0C2FD88?doi=10.1.1.25.1516&rep=rep1&type=pdf

Z3 unifier – an example of an actually performant unifier
https://github.com/Z3Prover/z3/blob/520ce9a5ee6079651580b6d83bc2db0f342b8a20/src/ast/substitution/unifier.cpp

Warren Abstract Machine Tutorial Reconstruction http://wambook.sourceforge.net/wambook.pdf

Handbook of Automated Reasoning – has a chapter on unification

Higher Order Unification – LambdaProlog, Miller unification

Syntax trees with variables in them are a way to represent sets of terms (possibly infinite sets!). In that sense one can ask whether we can form the union or intersection of these sets. The intersection is given by the most general unifier. The union is not expressible as a single term with variables in general; we can only over-approximate it, like how the union of convex sets is not necessarily convex although its hull is. This join on the term lattice is what anti-unification computes.
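
For example, the join of f(a, a) and f(b, b) in this lattice is f(X, X), their least general generalization.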

What about the complement of these sets? Not really. With the representation we’ve chosen, we can’t have an interesting negation. What about the difference of two sets?

I had an idea a while back about programming with relations, where I laid out some interesting combinators. I represented only finite relations, as those can be easily enumerated.