Checkpoint: Implementing Linear Relations for Linear Time Invariant Systems

I’m feeling a little stuck on this one so I think maybe it is smart to just write up a quick checkpoint for myself and anyone who might have advice.

The idea is to reimplement the linear relation algebra of circuits from this earlier post: https://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ . There is a lot more context written in that post, and it is probably necessary background for this one.

Linear relation algebra is a refreshing perspective for me on systems of linear equations. It has a notion of composition that seems, dare I say, almost as useful as matrix multiplication. Very high praise. This composition has a more bidirectional flavor than matrix multiplication, making it a good fit for describing physical systems, in which interconnection always sends influence both ways.

In the previous post, I used nullspace computations as my workhorse. The nullspace operation allows one to switch between a constraint (nullspace) and a generator (span) picture of a vector subspace. The generator view is useful for projection and linear union, and the constraint view is useful for partial-composition and intersection. The implementation of linear relation composition requires flipping between both views.
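To spell out the flip (standard linear algebra, stated here for reference): if the columns of B span the nullspace of A, then

\ker A = \operatorname{im} B, \qquad B = \operatorname{null}(A), \qquad \operatorname{im} B = \ker\left(\operatorname{null}(B^T)^T\right)

so two nullspace computations (with transposes) round-trip between the constraint matrix A and the generator matrix B.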

I’m reimplementing it in Julia for two reasons:

  • To use the Julia ecosystem’s implementations of module operations
  • To get a little of that Catlab.jl magic to shine on it.

It was a disappointment of the previous post that I could only treat resistor-like circuits. The new twist of using module packages allows treatment of inductor/capacitor circuits and signal flow diagrams.

When you transform into Fourier space, systems of linear differential equations become systems of polynomial equations: \frac{d}{dt} \rightarrow i \omega. From this perspective, modules seem like the appropriate abstraction rather than vector spaces. Modules are basically vector spaces where one doesn't assume the operation of scalar division; in other words, the scalars form a ring rather than a field. Polynomials are a ring, not a field. In order to treat the new systems, I still need to be able to do linear-algebra-ish operations like nullspaces, except where the entries of the matrix are polynomials rather than floats.
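As a one-line illustration (a standard circuit fact, not specific to this post): the capacitor law becomes a polynomial relation under this substitution,

i = C \frac{dv}{dt} \quad \longrightarrow \quad i = C (i\omega) v

so the constraint matrices of a circuit relation now carry entries that are polynomials in the frequency variable.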

Syzygies are basically the module analog of nullspaces. Syzygies are the combinations of generators that combine to zero. Considering the generators of a submodule as being column vectors, stacking them together makes a matrix. Taking linear combinations of the columns is what happens when you multiply a matrix by a vector. So the syzygies are the space of vectors for which this matrix multiplication gives 0, the “nullspace”.
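A tiny concrete example (standard, not from the post): over k[x, y], the generators g_1 = x and g_2 = y stack into the matrix (x \; y), and

\begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} y \\ -x \end{pmatrix} = xy - yx = 0

so (y, -x) is a syzygy, a "nullspace" vector whose entries are polynomials.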

Computer algebra packages offer syzygy computations. Julia has bindings to Singular, which can do this. I have been having a significant and draining struggle wrangling these libraries though. Am I going against the grain? Did the library authors go against the grain? Here's what I've got, trying to match the Catlab naming conventions:

using Singular

import Nemo

using LinearAlgebra # for the identity operator I

CC = Nemo.ComplexField(64)
P, (s,) = PolynomialRing(CC, ["s"])
i = Nemo.onei(CC) # P(i) ? The imaginary number

#helpers to deal with Singular.jl
eye(m) = P.(Matrix{Int64}(I, m, m)) # There is almost certainly a better way of doing this. Actually dispatching Matrix?
zayro(m,n) = P.(zeros(Int64,m,n)) #new zeros method?
mat1(m::Int64) = fill(P(m), (1,1) )
mat1(m::Float64) = fill(P(m), (1,1) )
mat1(m::spoly{Singular.n_unknown{Nemo.acb}}) = fill(m, (1,1))

# Objects are the dimensionality of the vector space
struct DynOb
    m::Int
end

# Linear relations represented as constraint matrices: input * x + output * y = 0
struct DynMorph
  input::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
  output::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
end

dom(x::DynMorph) = DynOb(size(x.input)[2])
codom(x::DynMorph) = DynOb(size(x.output)[2])
id(X::DynOb) = DynMorph(eye(X.m), -eye(X.m))

# add together inputs
plus(X::DynOb) = DynMorph( [eye(X.m) eye(X.m)] , - eye(X.m) )


mcopy(X::DynOb) = DynMorph( [eye(X.m) ; eye(X.m)] , -eye(2*X.m) ) # copy input

delete(A::DynOb) = DynMorph( fill(P.(0),(0,A.m)) , fill(P.(0),(0,0)) )   
create(A::DynOb) = DynMorph( fill(P.(0),(0,0)) , fill(P.(0),(0,A.m)) )
dagger(x::DynMorph) = DynMorph(x.output, x.input)

# cup and cap operators
dunit(A::DynOb) = compose(create(A), mcopy(A))
dcounit(A::DynOb) = compose(plus(A), delete(A)) # plus plays the role of Catlab's mmerge here


scale(M) = DynMorph( mat1(M),mat1(-1))
diff =  scale(i*s) # differentiation = multiplying by i omega
integ = dagger(diff)
#cupboy = DynMorph( [mat1(1) mat1(-1)] , fill(P.(0),(1,0)) )
#capboy = transpose(cupboy)

#terminal

# relational operations
# The meet
# Inclusion

# I think this is a nullspace calculation?
# almost all the code is trying to work around Singular's interface into one I can understand
function quasinullspace(A)
   rows, cols = size(A)
   vs = Array(gens(Singular.FreeModule(P, rows)))
   q = [sum(A[:,i] .* vs) for i in 1:cols]
   M = Singular.Module(P,q...)
   S = Singular.Matrix(syz(M)) # syz is the only meat of the computation
   return Base.transpose([S[i,j] for j=1:Singular.ncols(S), i=1:Singular.nrows(S) ])
end

function compose(x::DynMorph,y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    # stack both relations into one constraint system over (x_in, shared, y_out)
    A = [ x.input                x.output P.(zeros(Int64,nx,yo)) ;
          P.(zeros(Int64,ny,xi)) y.input  y.output    ]
    B = quasinullspace(A)          # generators of the joint solution space
    projB = [B[1:xi       ,:] ;    # keep only the x_in rows...
             B[xi+yi+1:end,:] ]    # ...and the y_out rows, projecting out the shared middle
    C = Base.transpose(quasinullspace(Base.transpose(projB)))  # back to constraint form
    return DynMorph( C[:, 1:xi] ,C[:,xi+1:end] )
end

# basically the direct sum. The monoidal product of linear relations
function otimes( x::DynMorph, y::DynMorph) 
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    return DynMorph( [ x.input                P.(zeros(Int64,nx,yi));
                       P.(zeros(Int64,ny,xi)) y.input               ],
                      [x.output                P.(zeros(Int64,nx,yo));
                       P.(zeros(Int64,ny,xo))  y.output               ])
    
end
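A hedged usage sketch (assuming the definitions above loaded cleanly; diff, integ, compose, and otimes are all from this file): composing the differentiation relation with itself should give a relation equivalent to multiplying by (i s)^2, and otimes stacks relations side by side.

# sketch: second derivative as a composite relation, expect y = (i*s)^2 * x
d2 = compose(diff, diff)
@show d2.input d2.output

# direct sum of an integrator and a differentiator: a 2-wire relation
both = otimes(integ, diff)
@show dom(both) codom(both)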

I think this does basically work but it’s clunky.

Thoughts

I need to figure out Catlab’s diagram drawing abilities enough to show some circuits and some signal flow diagrams. Wouldn’t that be nice?

I should show concrete examples of composing passive filter circuits together.

There is a really fascinating paper by Jan Willems where he digs into a beautiful picture of this that I need to revisit https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2007.1.pdf

https://golem.ph.utexas.edu/category/2018/06/the_behavioral_approach_to_sys.html

Is all this module stuff stupid? Should I just use rational polynomials and be done with it? Sympy? \frac{d^2}{dx^2}y = 0 and \frac{d}{dx}y = 0 are different equations, describing different behaviors. Am I even capturing that though? Is my syzygy powered composition even right? It seemed to work on a couple small examples and I think it makes sense. I dunno. Open to comments.

Because univariate polynomials form a principal ideal domain (PID), we can also use Smith normal form rather than syzygies, as I understand it. Perhaps AbstractAlgebra.jl might be a better tool for that?

Will the syzygy approach be good for band theory? We're in the multivariate setting there, so Smith normal form no longer applies.

System Identification of a Pendulum with scikit-learn

System identification is the name of the problem of finding the dynamics of a physical system. It is an old topic. For some interesting material on system identification, check out Steve Brunton’s video here https://www.youtube.com/watch?v=6F2YVsT9dOs

We've been building a Raspberry Pi controlled pendulum, to be controlled from the internet, and the problem came up of trying to get a simulation to match the physical pendulum.

Look at ‘er go. Energy shaping for the win.

We weighed the pendulum and calculated the torque due to gravity, mg\frac{l}{2} \sin(\theta) (you can think of it as the full force of gravity acting on the lever arm of the pole's center, \frac{l}{2}\sin(\theta)), and the moment of inertia of a rod about its end, \frac{1}{3}m L^2.

However, it is difficult to estimate the torque supplied by the motor. Motors have surprisingly complicated behavior. It is also difficult to estimate damping or friction terms from first principles.

There are a couple of different experimental stratagems for a pendulum. One thing we tried was setting the pendulum on its side and setting the motor duty cycle to different values. From this you can fit a parabola to those curves and get an acceleration constant for each motor setting. Experimentally, acceleration seemed roughly linear in motor PWM duty cycle.

Another stratagem is to take resonance curves for the pendulum. Try exciting it with sinusoidal torques at a sweep of frequencies. From this curve you can recover a resonant frequency and damping coefficients.

These all make sense as kind of ersatz methods. We’re taking our intuitive understanding of the system and other results from simpler or related systems and combining them together.

An interesting alternative to the above is to drive the pendulum with a random torque and then fit a parameterized model of the equations of motion to the observed acceleration. The model should include at least a gravity term \beta_1 \sin(\theta), a motor torque term \beta_2 u, and a damping term \beta_3 \dot{\theta}. A simple start is \alpha = \beta_1 \sin(\theta) + \beta_2 u + \beta_3 \dot{\theta}. This model is linear in the coefficients \beta_i and can be solved by least squares.
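For reference, the standard least squares solution this sets up: stacking each timestep's regressors (\sin(\theta_t), u_t, \dot{\theta}_t) as a row of X and the measured accelerations into a vector \alpha,

\hat{\beta} = \operatorname{argmin}_{\beta} \| X \beta - \alpha \|_2^2 = (X^T X)^{-1} X^T \alpha

which is exactly what the LinearRegression fit below computes.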

I've come to appreciate scikit-learn for fitting. It doesn't have the hottest, most highfalutin fads, but it's got a lot of good algorithms that just work, are damn easy to use, and make it easy to swap different possibilities in and out. Even though I know how to set up a least squares system manually or solve a LASSO problem via cvxpy, scikit-learn makes it really easy and clean. I've started reaching for it for fast attacks on fitting problems.

We mocked out our interface to behave similarly to an OpenAI gym interface. Because of this, the observations already have the cosine and sine terms that might be of interest and the angular velocity value that would be used for a simple damping term \beta \dot{\theta}.



import gym
import time
import numpy as np
env = gym.make('pendulum-v0')
observation = env.reset()

action = 0
dt = 0.05
obs = []
rews = []
actions = []
for i in range(1000):

    # A random walk for actions.
    # we need the actions to be slow changing enough to see trends
    # but fast enough to see interesting behavior
    # tune this by hand 
    action += np.random.randn() * dt
    action = max( min(action, 2 ), -2) 
    observation, reward, done, info = env.step([action])
    obs.append(observation)
    actions.append(action)
    rews.append(reward)
    time.sleep(0.05)


obs = np.array(obs) # obs includes thetadot, cos(theta), sin(theta). A good start.
actions = np.array(actions) # the pwm value used

# data to predict alpha from. Each row is a data point from one time step.
X = np.hstack( (obs[:-1, :] , actions[:-1].reshape(-1,1)) ) 

alphas = (obs[1:,2] - obs[:-1,2]  ) / dt  #angular acceleration

# feel free to swap in LASSO or other regressors
from sklearn.linear_model import LinearRegression 


# fit the observed angular acceleration as a function of X
reg = LinearRegression().fit(X, alphas)
print(f"intercept : {reg.intercept_},  coeffs : {reg.coef_} ")

The number that came out for the gravity term matched the number calculated from first principles to within 10%. Not bad!

A nice thing about this approach is that one is able to add terms into the dynamics for which we don't have good intuitive models, like your good ole Physics I Coulombic friction term F \propto -\operatorname{sign}(v), or other nonlinearities.

A Buchberger in Julia

Similarly to how Gaussian elimination, which puts linear equations into LU form, solves most linear algebra problems one might care about, Buchberger's algorithm, which finds a Grobner basis of a system of multivariate polynomial equations, solves most questions you might ask. Some fun applications:

  • Linkages
  • Geometrical theorem proving. Circles are x^2 + y^2 - 1 = 0 and so on.
  • Optics
  • Constraint satisfaction problems. x^2 - 1 = 0 gives you a boolean variable. It's a horrible method, but it works if your computer doesn't explode.
  • Energy and momentum conservation. “Classical Feynman Diagrams” p1 + p2 = p3 + p4 and so on.
  • Frequency domain circuits and linear dynamical systems 😉 more on this another day

To learn more about Grobner bases I highly recommend Cox, Little & O'Shea's Ideals, Varieties, and Algorithms.

To understand what a Grobner basis is, first know that univariate polynomial long division is a thing. It’s useful for determining if one polynomial is a multiple of another. If so, then you’ll find the remainder is zero.

One would like to lift the problem of determining whether a polynomial is a multiple of others to multivariate polynomials. Somewhat surprisingly, the definition of long division then has some choice in it. Sure, x^2 is a term that is ahead of x, but is x a larger term than y? Than y^2? These different choices are all admissible. In addition, one now has systems of equations. Which equation do we divide by first? It turns out to matter and to change the result. That is, unless one has converted into a Grobner basis.

A Grobner basis is a set of polynomials such that the remainder under multivariate division is unique regardless of the order in which division occurs.

How does one find such a basis? In essence, kind of by brute force: you consider all pairs of polynomials whose leading terms could be divided in two different ways depending on your choice, and patch up each discrepancy.
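The object that materializes such a clash between two choices is the S-polynomial, which the spoly function below computes (standard definition):

S(f, g) = \frac{\operatorname{lcm}(\operatorname{LM} f, \operatorname{LM} g)}{\operatorname{LT} f} \, f - \frac{\operatorname{lcm}(\operatorname{LM} f, \operatorname{LM} g)}{\operatorname{LT} g} \, g

where LM is the leading monomial and LT the leading term. Buchberger's algorithm keeps reducing S-polynomials against the current basis and adding the nonzero remainders until every S-polynomial reduces to zero.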

Julia has packages for multivariate polynomials. https://github.com/JuliaAlgebra/MultivariatePolynomials.jl defines an abstract interface and generic functions. DynamicPolynomials gives flexible representation for construction. TypedPolynomials gives a faster representation.

These already implement the bulk of what we need to get a basic Buchberger going: data structures, arithmetic, and division with remainder. With one caveat: there is already a monomial ordering picked, and it's not lexicographic, which is the nice one for eliminating variables. This would not be too hard to change, though?

Polynomial long division with respect to a set of polynomials is implemented here

https://github.com/JuliaAlgebra/MultivariatePolynomials.jl/blob/9a0f7bf531ba3346f0c2ccf319ae92bf4dc261af/src/division.jl#L60

Unfortunately (or fortunately? A good learning experience; I learned some stuff about data structures and types in Julia, so that's nice), quite late I realized that a very similar Grobner basis algorithm to the one below is implemented inside the SemiAlgebraic.jl package. Sigh.

using MultivariatePolynomials
using DataStructures


function spoly(p,q)
    # S-polynomial: scale p and q so that their leading terms cancel
    pq = lcm(leadingmonomial(p),leadingmonomial(q))
    return div(  pq , leadingterm(p) ) * p - div(pq , leadingterm(q)) * q
end

function isgrobner(F::Array{T}) where {T <: AbstractPolynomialLike} # check buchberger criterion
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            s = spoly(f1,f2)
            _,s = divrem(s,F)
            if !iszero(s)
                return false
            end
        end
    end
    return true
end

function buchberger(F::Array{T}) where {T <: AbstractPolynomialLike}
    pairs = Queue{Tuple{T,T}}()
    # initialize with all pairs from F
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            enqueue!(pairs, (f1,f2))
        end
    end
    
    # consider all possible s-polynomials and reduce them
    while !isempty(pairs)
        (f1,f2) = dequeue!(pairs)
        s = spoly(f1,f2)
        _,s = divrem(s,F)
        if !iszero(s) #isapproxzero? Only add to our set if doesn't completely reduce
            for f in F
                enqueue!(pairs, (s,f))
            end
            push!(F,s)
        end
    end

    # reduce redundant entries in grobner basis.
    G = Array{T}(undef, 0)
    while !isempty(F)
        f = pop!(F)
        _,r = divrem(f, vcat(F,G))
        if !iszero(r)
            push!(G,r)
        end
    end
    
    return G
end

Some usage. You can see here that Gaussian elimination, as implemented by the backslash operator, is a special case of taking the Grobner basis of a linear set of equations:


using DynamicPolynomials
@polyvar x y

buchberger( [ x + 1.0 + y   , 2.0x + 3y + 7  ] )
#= 
2-element Array{Polynomial{true,Float64},1}:
 -0.5y - 2.5
 x - 4.0
=#

[ 1 1 ; 2  3 ] \ [-1 ; -7]
#=
2-element Array{Float64,1}:
  4.0
 -5.0
=#


buchberger( [ x^3 - y , x^2 - x*y ])
#=
3-element Array{Polynomial{true,Int64},1}:
 -xy + y²
 y³ - y
 x² - y²
=#

Improvements

Many. This is not a good Buchberger implementation, but it is simple. See http://www.scholarpedia.org/article/Buchberger%27s_algorithm for some tips, which include criteria for avoiding unneeded S-polynomial pairs and smart orderings. Better implementations use the F4 or F5 algorithms, which use sparse matrix facilities to perform many division steps in parallel. My vague impression of F4 is that you prefill a sparse matrix (rows correspond to an S-polynomial or a monomial multiple of your current basis, columns correspond to monomials) with the monomial multiples of your current basis that you know you might need.

In my implementation, I'm tossing away the div part of divrem. It can be useful to retain it so you know how to write your Grobner basis in terms of the original basis.

You may want to look at the Julia bindings to Singular in Singular.jl.


Has My Blog Been Hacked?

This morning I got a DM saying that a third party had seen some kind of plugin installation popup show up on my website, specifically on my about me page.

If you experience anything like this PLEASE PLEASE LET ME KNOW. (comments below, or DM me @sandmouth https://twitter.com/SandMouth )

I haven't been able to replicate this behavior in any way, nor can anyone else I've asked. Upon cursory examination of the code of my blog I can't find anything, everything is up to date, and scans aren't finding anything.

Maybe it’s time to finally transfer out of wordpress. I liked that it was easy, but the internet is too scary.

Also, if you’re a hacker, please pick on someone else. I really like my blog.

Dump of Nonlinear Algebra / Algebraic geometry Notes. Good Links Though

Old notes on nonlinear algebra. I dunno about the content, but there are some very good links and book suggestions in here, so I'm gonna dump this one out there. Maybe it'll help someone.

Systems of multivariate polynomial equations are more solvable than people realize. There are algebraic and numeric methods. Look at Macaulay2, Singular, and Sympy for algebraic methods; PHCpack, Bertini, and HomotopyContinuation.jl for numerical ones.

Algebraic methods are fixated on Groebner bases, which are a special equivalent form your set of equations can be manipulated into. You can disentangle the variables using repeated polynomial division (Buchberger's algorithm), turning your set of equations into an equivalent set that has one more variable per equation. This is like Gaussian elimination, which is in fact the extremely simple special case of Buchberger for linear equations.

The numerical methods use perturbation theory to take a system of equations you know how to solve and smoothly perturb them to a new system. Each small perturbation only moves the roots a little bit, which you can track with a differential equation solver. Then you can fix it up with some Newton steps. People who really care about this stuff make sure that there are no pathological cases and worry about roots merging or going off to infinity and other things.

You need to know how many roots to build and track in your solvable system. For that, two theorems are important:

Bézout theorem – for dense systems, the number of solutions is bounded by the product of the total degrees of the equations.

Bernstein bound – the Newton polytopes give a bound on the number of solutions of a polynomial system. Useful for sparse systems.

One could make an argument for the homotopy continuation methods being the analog of iterative solvers for linear equations, if Grobner bases are Gaussian elimination: take an equation you know how to solve (~ a preconditioner) and perform some iterative procedure on it.

Add enough random linear equations to cut the solution set down to points.

Then you have a membership algorithm by sweeping planes. Once you have points on the actual varieties, pairwise compare them.

The Cox, Little & O'Shea book is often recommended. It's really good.

https://www.springer.com/us/book/9781441922571

More advanced Cox et al book

https://www.springer.com/us/book/9780387207063

Bernd Sturmfels,  Mateusz Michalek (including video lectures)

https://personal-homepages.mis.mpg.de/michalek/ringvorlesung.html

https://personal-homepages.mis.mpg.de/michalek/NonLinearAlgebra.pdf

(Bernd is da man!)

https://math.berkeley.edu/~bernd/math275.html

Macaulay2 book

https://faculty.math.illinois.edu/Macaulay2/Book/

Singular books

https://www.singular.uni-kl.de/index.php/publications/singular-related-publications.html

https://www.springer.com/us/book/9783662049631

https://www.ima.umn.edu/2006-2007

Planning Algorithms, in particular chapter 6

http://planning.cs.uiuc.edu/

Gröbner bases in Haskell: Part I

Summer school on tensor methods

https://www.mis.mpg.de/calendar/conferences/2018/nc2018.html

Extensions of

https://ieeexplore.ieee.org/document/4399968

Numerical Polynomial Algebra by Hans Stetter

https://epubs.siam.org/doi/book/10.1137/1.9780898717976?mobileUi=0&

Introduction to Non-Linear Algebra V. Dolotin and A. Morozov. A high energy physics perspective

https://arxiv.org/abs/hep-th/0609022

Nonlinear algebra can also, surprisingly, be approached via linear algebra. Resultants. As soon as you see any nonlinearity, the linear part of your brain shuts down, but a good question is: linear in WHAT? Consider least squares fitting, which works via linear algebra. Even though you're fitting nonlinear functions, the expressions are linear in the parameters/coefficients, so you're all good.

Similarly, you can encode root finding as a linear algebra problem. A matrix has the same eigenvalues as its characteristic polynomial \det(A - \lambda I) has roots, so it is already plausible to go from linear algebra to a polynomial root finding problem. But you can also encode multiplying a polynomial by x as a linear operation on its coefficients. In this way we can go the other direction, from root finding to linear algebra.

[1, x, x^2, x^3, …] \cdot [a_0, a_1, a_2, a_3, …]^T = p(x)

Multiplying by x is the shift matrix. However, we are also assuming p(x)=0, which gives us the ability to truncate the matrix: x \cdot [1, x, x^2, x^3, …]^T = \text{Shift} \cdot \bar{x}. This is somewhat similar to how it feels to do finite difference equations: the finite difference matrix is rectangular, but then boundary conditions give you an extra row. Multiplication by x returns the same polynomial back only when p(x)=0 or x = 0. The eigenvalues of this x matrix will be the values of x at these positions (the roots). This is the companion matrix https://en.wikipedia.org/wiki/Companion_matrix

We can truncate the space by using the zero equation.

It’s a pretty funky construction, I’ll admit.
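For concreteness, here is the standard 3×3 instance of the construction (my example): for p(x) = x^3 + c_2 x^2 + c_1 x + c_0, multiplication by x on the basis \{1, x, x^2\}, reduced modulo p(x) = 0, is the companion matrix

C = \begin{pmatrix} 0 & 0 & -c_0 \\ 1 & 0 & -c_1 \\ 0 & 1 & -c_2 \end{pmatrix}, \qquad \det(x I - C) = p(x)

so the eigenvalues of C are exactly the roots of p.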

To take it up to the multivariate case, we bring in a larger space: [1, x, y, x^2, xy, y^2, …] = \bar{x} \otimes \bar{y}.

We now need two equations to reduce it to points. The X matrix is lifted to X kron I. and we can pad it with ?

Multiplying by an entire polynomial. Sylvester matrix for shared roots. Double root testing.

The Sylvester matrix is based on something similar to Bézout's identity: to find out whether two polynomials p, q have common factors, you can look for two polynomials r, s such that r p + s q = 0 (with \deg r < \deg q and \deg s < \deg p).

https://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor#B%C3%A9zout’s_identity_and_extended_GCD_algorithm
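For example (standard construction): for two quadratics p = p_2 x^2 + p_1 x + p_0 and q = q_2 x^2 + q_1 x + q_0, the Sylvester matrix is

\operatorname{Syl}(p, q) = \begin{pmatrix} p_2 & p_1 & p_0 & 0 \\ 0 & p_2 & p_1 & p_0 \\ q_2 & q_1 & q_0 & 0 \\ 0 & q_2 & q_1 & q_0 \end{pmatrix}

and \det \operatorname{Syl}(p, q) = 0 exactly when such low-degree r, s exist, i.e. when p and q share a common factor.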

Sum of Squares is somewhat related material on systems of polynomial inequalities which can be translated to semidefinite matrix constraints. If you want to include equalities, you can use groebner bases to presolve them out.

Parrilo course material on Sum of Squares.

https://learning-modules.mit.edu/materials/index.html?uuid=/course/6/sp16/6.256#materials

Paper on using Groebner bases and CAD (cylindrical algebraic decomposition) for optimization and control

 
Using Groebner bases for constraint satisfaction problems: x^n=1 gives the n-th roots of unity. There are n solutions. This gives a finite set to work with; then you can add more equations. This is related to the max-cut thing. I saw this on Cox's webpage.
You can require neighboring vertices to have different colors via 0=(x_i^k - x_j^k)/(x_i - x_j). You can encode many constraints using clever algebra.
an example using the same technique to solve sudoku
 
Sympy tutorial solving geometric theorems and map coloring
 
Explicitly mentions toric Groebner bases as integer programming.
 

Other interesting examples:

http://www.scholarpedia.org/article/Groebner_basis

Noncommutative Grobner bases have applications to solving differential equations? Differential operators are noncommutative. Not just silly quantum stuff; I mean, the simple examples of noncommutativity are the Schrödinger momentum operators.

Automatic loop invariant finding

Geometric theorem proving

robotic kinematics

Optics? Envelopes, exchange of coordinates. Legendre transformations. Thermodynamics?

Global optimization? Find all local minima.

Nonlinear finite step.

Dynamic Programming. Add an implicit V variable for the value function. Constrain via equations of motion. Perform extremization keeping x0, v0 fixed: dx0=0, dv0=0, and dV=0. Grobner with an ordering that removes x1, v1, V1. Iterate. Can keep dt as a variable. Power series in t? Other integration schemes.

Probably need some method to simplify the leftover relations so that they don't get too complex. Smoothing? Dropping terms? Minimization may require factoring to find the global minimum.

Differentiation. Add to every variable a dx. Collect up the first-order terms as a separate set of constraints. Add conditions df=0 and dy=0 for fixed variables to perform partial differentiation and extremization. A very similar feel to automatic differentiation. Functions tend to not be functions, just variables related by constraints.

Variable ordering

lex – good for elimination

deglex – total degree then a lex to tie break

grevlex – total degree + reverse lexicographic. The cheapest variable is so cheap that it goes last

block ordering – separate variables into blocks and pick orderings inside blocks

general matrix ordering – apply a matrix to the exponent vectors and lex-compare the results. The others are a special case.

Can't I have a don't-care / partial order? Would be nice for blockwise elimination, I feel like.
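A quick check of how these disagree (standard example, with x > y): lex sorts x^2 \succ x y^3 \succ x \succ y^5 by comparing the exponent of x first, while deglex sorts y^5 \succ x y^3 \succ x^2 \succ x by total degree first.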

Non-commutative

http://sheaves.github.io/Noncommutative-Sage/

Physicsy

https://arxiv.org/pdf/hep-th/0609022

CAD book

https://link.springer.com/book/10.1007%2F978-3-7091-9459-1

Rings have addition and multiplication, but not necessarily division. Polynomials and integers aren't guaranteed to have inverses that remain polynomials or integers.

ideal = a subset of a ring that absorbs multiplication. Also closed under addition

All polynomial consequences of a system of equations.

Hilbert Basis theorem – in a polynomial ring, all ideals are generated by a finite set

Ideal generated from a set – anything in the ring that can be built from the set via addition and multiplication by arbitrary ring elements. It is an ideal because if you multiply it by another element, it is still a sum of multiples.

Ideals are sometimes kind of a way of talking about factors without touching factors. Once something is a multiple of 5, no matter what you multiply it with, it is still a multiple of 5. If (x – 7) is a factor of a polynomial, then no matter what you multiply it with, (x-7) is still a factor. Zeros are preserved.

Principal ideal domain – every ideal is generated by a single object

Prime ideal – if a*b is in the ideal, then either a or b is in the ideal. Comes from the ideals of prime numbers (all numbers divisible by a prime p): if ab has a factor of p then either a or b has a factor of p. Whereas consider all multiples of 4: if a = b = 2 then ab is a multiple of 4, but neither a nor b is.

1d polynomials. Everything is easy.

Polynomial division is doable. You go power by power, and you may have a remainder left over. It's pretty weird.

You can compute the gcd of two polynomials using the Euclidean algorithm.

The ideal generated by a couple of them is generated by the multipolynomial gcd?

a = cx + dy + r

Multivariate division: we can do the analog of polynomial division in the multivariate case, but we need an ordering of terms, and the remainder is not unique.

But for certain sets of polynomials, remainder is unique.

Why the fixation on leading monomials?

The S-polynomial is the analog of one step of the Euclidean algorithm. It also has the flavor of a Wronskian or an anticommutator.

The bag Euclidean algorithm: grab two things (the biggest?), take the remainder between them, add the remainder back into the bag.

This is the shape of the buchberger algorithm.

Finding homology or cohomology of solutions. Good question. One can see how this could lead to categorical nonsense since Category theory was invented for topological questions.

The variety is where a set of polynomials is 0. Roots and zero surfaces

List gives set of polynomials.

[forall a. Field a => (a,a,a) -> a ]

Or explicit

union and intersection can be achieved via multiplication and combining the sets

Krull dimension – Definition of dimension of algebraic variety. Maximal length of inclusion chain of prime ideals.

Ideals and varieties have a relation that isn't quite trivial.

The ideal of a variety

Envelopes – a parametrized family of varieties f(x,t)=0 and \partial_t f(x,t)=0. Eliminate t, basically, to draw the thing. Or trace it out in t?

Wu’s method for geometric theorem proving. You don’t need the full power of a grobner basis.

Polynomial maps. Talk about in similar language to differential geometry.

Boxes are a simple way to talk about subsets. Or lines, planes. Or polytopes.

Also any function that gives a true false value. But this is very limited in what you can actually do.

Varieties give us a concrete way to talk about subsets. Grothendieck schemes give unified languages supposedly using categorical concepts. Sounds like a good fit for Haskell.

class Variety

Use powser (the Haskell power series trick). Functor composition makes multivariable polynomials. Tuples, or V3 with elementwise multiplication.

-- give variables names
newtype X a = X [a]

newtype Y a = Y [a]

-- from k^n -> k^m
type family PolyFun n m k where
   PolyFun n (S m) k = (PolyFun n 1 k, PolyFun n m k)
   PolyFun (S n) 1 k = [PolyFun n 1 k] 
   PolyFun 1 1 k = k
-- Gonna need to Make a typeclass to actually do this. Yikes
-- it's just not as simple as Cat a b type. You really need to do computation
-- and input a is not same as 
class PolyF f where
   pcompose :: PolyFun b c k -> PolyFun a b k -> PolyFun a c k
   pid :: Num k => PolyFun a a k


-- related to ideal of n generators on a space k^m
-- this functions will compose some incoming 
-- or is this better thought of as a variety?
type Idealish n m k = PolyFun n 1 k -> PolyFun m 1 k
makeidealish :: PolyFun m n k -> Idealish n m k
makeidealish f = flip pcompose f

-- apply turns polynomial into haskell function
apply :: PolyFun n m k -> V n k -> V m k

-- somehow should be able to replace points with varieties. It's like a whole thing
type VarietyFun = (PolyFun n 1 k) -> (PolyFun m 1 k)
(PolyFun n 1 k -> PolyFun m 1 k) -> (PolyFun m 1 k -> PolyFun l)

The polynomial as a type parameter for Agda. Regular functions are functions from one variety to another. They are the same as the polynomial ring quotiented by the ideal of the variety.

Ring Space and Geometric Space (affine space)

Maximal ideals can be thought of as points. (ideal of x-a, y-b, …).

Free Polynomials ~ Free Num. Sparse representation. Uses the Ordering of a. We should not assume that they are a power like in http://hackage.haskell.org/package/polynomial-0.7.3/docs/Math-Polynomial.html

Ord is monomial ordering. Think of a as [X,Y,X,X,X]

divmod :: (Integral a, Ord a) => Poly r a -> Poly r a -> (Poly r a, Poly r a)

newtype Monomial a = Monomial [a]

-- different monomial newtype orderings for lex, etc.

Monomial (Either X Y)

divmod as bs = remove bs from as. if can’t remainder = as, div = 0

Intuition pumps: algebraic geometry, differential geometry, category theory, Haskell/Agda.

In differential geometry, embedding sucks. We get around it by defining an atlas and differential maps.

There is a currying notion for polynomials. We can consider a polynomial as having coefficients which are themselves polynomials in the other variables, or treat all variables at once.

What can be solved linearly? The Nullstellensatz certificate can be found using linear equations.

Resultants. What are they? Linear sums of monomial powers times the original polynomials. Det = 0 implies that we can find a vanishing polynomial combination.

What is the deal with resultants

Toric Varieties. C with hole in it is C*. This is the torus because it is kind of like a circle. (Homologically?). There is some kind of integer lattice lurking and polytopes. Gives discrete combinatorial flavor to questions somehow. Apparently one of the more concrete/constructive arenas to work in.

Binomial ideals: the variety will be given by binomials.

Maps from one space to another which are monomial can be implicitized into a variety. The map is described by an integer matrix. Integer programming?

Similar “cones” have been discussed in the tropical setting; is this related?

Algebraic statistics. Factor graph models. Probabilistic graphical models. Maybe this is why a PGM lady co-taught that course with Parrilo.

Modules

Tropical geometry

http://www.cmap.polytechnique.fr/~gaubert/papers.html

Lots of really intriguing-sounding applications. Real-time verification.

gfan

How does the polynomial based optimization of the EDA course relate to this stuff? https://en.wikipedia.org/wiki/Logic_optimization

Mixed volume methods? Polytopes.

cdd and other polytopic stuff. Integration of polynomials over polytopes

Software of interest

Sage

Sympy

Singular – Plural non-commutative?

FGb – Faugère's implementation of Grobner basis algorithms

Macaulay

CoCoa

tensorlab – https://en.wikipedia.org/wiki/Tensor_software

sostools

PolyBori – polynomials over boolean rings http://polybori.sourceforge.net/doc/tutorial/tutorial.html#tutorialli1.html

LattE

4ti2

normaliz

polymake – https://polymake.org/doku.php/tutorial/start slick

http://hep.itp.tuwien.ac.at/~kreuzer/CY/CYpalp.html Calabi Yau Palp????

TOPCOM

frobby – can get Euler characteristics of monomial ideals? http://www.broune.com/frobby/index.html

gfan

https://www.swmath.org/browse/msc

Homotopy continuation:

Bertini

http://homepages.math.uic.edu/~jan/phcpy_doc_html/index.html

phcpy and phcpack

hom4ps

https://www.juliahomotopycontinuation.org/

certification:

http://www.math.tamu.edu/~sottile/research/stories/alphaCertified/

cadenza

Jan

http://homepages.math.uic.edu/~jan/mcs563s14/index.html

www.math.uic.edu/~jan/tutorial.pdf


Suggestion that “linear program” form helps auto differentiation?

Local rings. Thickening? Infinite power series modded out by a local relation. One maximal ideal.

Differential geometry on algebraic surfaces.

modules are like vector spaces.

Ring linear

Canonical example, a vector of polynomials.

1-d space of polynomials.

Module morphism – respects linearity with respect to scalar multiplication and addition. Can be specified componentwise, but has to be specified in such a way that it respects the relations among generators.

Basis – Linearly Independent set that spans the whole module. May not exist.

So we are kind of stuck always working in an overcomplete basis to keep the vector space analogy. The generators have nontrivial relations that equal zero. The coefficients of these relations form their own space; the space of combinations whose image is zero because of the relations is called the first syzygy module.

But then do we have a complete basis of all the relations? Or is it over complete?

If you ignore that the entries of the vectors are polynomials, it becomes a vector space. But because they are, they have secret relations.

Even a 1-dimensional "vector space" has some funky structure because of the polynomial nature of the ring.

Somehow fields save us?

Parametrized vector curves, surfaces.

Parametrized matrices.

Noncommutative polynomials. We could perhaps consider the process of normal ordering as something related to a Grobner basis calculation. Perhaps a multi-polynomial division process? Consider the ordering where dagger is greater than no-dagger. The canonical basis also has i<j (more important for fermions).

SOS gives you the exact minimum of a 1-d polynomial. You could also imagine encoding this as a semidefinite program: H - \lambda \geq 0, min \lambda, where H is the characteristic matrix.

We can diagonalize to the SOS form, and then set each individual term = 0 to solve for x*.

While integer programming does that funky toric variety stuff, with the objective vector describing the Grobner basis, binary programming is simple: x^2=x plus linear equations and constraints.

Haskell Grobner

1. Monomials. Exponent vectors. Logarithmic representation: multiplication of monomials is addition of exponent vectors; composition is elementwise multiplication. Type-level tag for the ordering.

newtype Mon3 ord = V3 Int

data Lex

data DegLex

Ordering of monomials is important. Map is perfect

Map (Mon3 ord) ring

Groebner bases can be used to describe many familiar operations: linear algebra, Gaussian elimination, using commutators, building power series assuming higher terms are relatively irrelevant.

Can I get a power series solution for x^2 + ax + 1=0 by using a negative ordering for a? I need another equation. x = \sum c_n * a^n. (x+dx)? How do I get both solutions?

Dual numbers for differential equations. dx is in a ring such that dx^n = 0.
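A minimal sketch of that ring in Julia (my illustration, not from the notes): pairs a + b*dx with dx^2 = 0 multiply like first-order Taylor expansions, which is forward-mode automatic differentiation.

# dual numbers: a + b*dx with dx^2 = 0
struct Dual
    a::Float64  # value
    b::Float64  # coefficient of dx, i.e. the derivative
end
Base.:+(x::Dual, y::Dual) = Dual(x.a + y.a, x.b + y.b)
Base.:*(x::Dual, y::Dual) = Dual(x.a * y.a, x.a * y.b + x.b * y.a)  # the b*b term dies since dx^2 = 0

f(x) = x * x * x + x                # f(x) = x^3 + x
@show f(Dual(2.0, 1.0))             # Dual(10.0, 13.0): f(2) = 10, f'(2) = 3*4 + 1 = 13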

Subset sum. Find a subset of numbers that adds up to 0.

Sum variables s_i.

Solutions obey

s_0 = 0

(s_i - s_{i-1})(s_i - s_{i-1} - a_{i-1}) = 0

s_N = 0

Factors give OR clauses. Separate polynomials give AND clauses. A pseudo-CNF form. Can't always write polys as factors though? This pattern also matches graph coloring.
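Checking the encoding on a tiny instance (my arithmetic, following the scheme above): with a_1 = 2 and a_2 = -2, the system is s_0 = 0, (s_1 - s_0)(s_1 - s_0 - 2) = 0, (s_2 - s_1)(s_2 - s_1 + 2) = 0, s_2 = 0. Each factor choice either skips a_i (s_i = s_{i-1}) or takes it (s_i = s_{i-1} + a_i); the solution s_1 = 2, s_2 = 0 takes both numbers and certifies 2 + (-2) = 0.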

More interesting books:

https://wstein.org/books/ant/

Some fun with algebraic numbers

Polynomial Factorization

https://mattpap.github.io/masters-thesis/html/src/algorithms.html

https://en.wikipedia.org/wiki/Factorization_of_polynomials

Numerical vs Symbolic

Numeric

https://en.wikipedia.org/wiki/Root-finding_algorithm

1. Pick a random point, then apply Newton's method. Do this over and over. If you find N unique roots, you've done it. A little unsatisfying, right? No guarantee you're going to find all the roots. (A minimal sketch appears after this list.)

2. Perturbation theory / homotopy continuation. Start with a polynomial with the same number of total roots that you know how to factor; x^N - 1 = 0 seems like an easy choice. Given f(x) + \lambda g(x) = 0, differentiating gives \partial f \, dx + \lambda \, \partial g \, dx + g(x) \, d\lambda = 0, so \frac{dx}{d\lambda} = \frac{-g(x)}{\partial f + \lambda \, \partial g}. You can use this ODE to track the roots. At every step, use Newton's method to clean up the result. Problems can still arise: do roots collapse? Do they smack into each other? Do they run off to infinity?

3. The companion matrix. You can convert finding the roots into an eigenvalue problem. The determinant of (A - \lambda I) is a polynomial with roots at the eigenvalues, so we need to construct a matrix whose characteristic polynomial equals the one we want. The companion matrix simulates multiplication by x; that is what the 1s above the diagonal do. Then the final row replaces the highest power x^N using the polynomial relation. In Wikipedia, this matrix is written as the transpose. https://en.wikipedia.org/wiki/Companion_matrix

4. Stetter's Numerical Polynomial Algebra. We can form representations of the quotient ring of an ideal: we can make matrices A(j) that implement multiplication by monomials x^j in F[x]/I, then take joint eigensolutions to diagonalize these multiplications. Something something Lagrange polynomials. Then if the solutions respect some kind of symmetry, it makes sense that we could use representation theory proper to possibly solve everything. This might be the technique of Galois theory mentioned in that Lie algebra book. This is not unconnected with the companion matrix technique above. These matrices are going to grow very high-dimensional.
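A minimal sketch of the random-restart Newton idea from item 1 (my code; toy target x^2 - 2 = 0, no deflation or multiplicity handling, which a serious implementation needs):

# random-restart Newton for p(x) = x^2 - 2
p(x)  = x^2 - 2
dp(x) = 2x
found = Float64[]
for _ in 1:50
    x = 4 * randn()                     # random starting point
    for _ in 1:30
        x -= p(x) / dp(x)               # Newton step
    end
    if abs(p(x)) < 1e-10 && all(abs(x - r) > 1e-6 for r in found)
        push!(found, x)                 # record only new roots
    end
end
@show found                             # should approach sqrt(2) and -sqrt(2), in some order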

Thought. Could you use homotopy continuation to get the roots, then interpolate those roots into a numerical Grobner basis? Are the Lagrange polynomials of the zero set a Grobner basis?

Symbolic

Part of what makes it seem so intimidating is that it isn’t obvious how to brute force the answer. But if we constrain ourselves to certain kinds of factors, they are brute forceable.

Given a suggested factor, we can determine whether it actually is a factor by polynomial division. If the remainder left over from polynomial division is 0, then it is a factor.

If we have an enumerable set of possibilities, even if large, then it doesn’t feel crazy to find them.

Any polynomial with rational coefficients can be converted to one with integer coefficients (and the same roots) by multiplying out all the denominators.

Let’s assume the polynomial has factors of integer coefficients.

Rational Root Test

Kronecker’s method

Finite fields. It is rather remarkable that there exist finite thingies that have the algebraic properties of the rationals, reals, and complex numbers. Typically, when discretizing continuum stuff you end up breaking some of the nice properties, like putting a PDE on a grid screws over rotational symmetry. Questions where it may be hard even to see how to begin become easy in finite fields in principle, because finite fields are amenable to brute force search. In addition, solutions in finite fields may simply extend to larger fields, giving you good methods for calculations over the integers or rationals or what have you.
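For instance (my toy example): brute force finds the roots of x^2 + 1 over the finite field F_5 immediately, a question with no direct analog over the reals.

# brute-force roots of x^2 + 1 over F_5
p = 5
roots = [x for x in 0:p-1 if mod(x^2 + 1, p) == 0]
@show roots   # [2, 3]: 2^2 = 4 ≡ -1 and 3^2 = 9 ≡ -1 (mod 5)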

Subresultants. A curious property: if two polynomials share roots/common factors, it is pretty easy to separate that out via the GCD of the polynomials.

Kind of the gold standard of root finding is getting a formula in terms of square roots. This is an old question. Galois Theory is supposedly the answer.

Fool’s Rules Regatta 2019

For four years running now we’ve done the fool’s rules regatta in Jamestown as Team Skydog in the unlimited category. http://www.jyc.org/FoolsRules/ThisYearsFoolsRules.htm

It’s a short boat race where you make a boat in 2 hours out of crap and then race ’em.

The first year we did trash bags filled with air with a platform. We were told we were very creative. We got incredibly dehydrated and sunburnt that year.

The next two years we did a catamaran with triangular plywood pontoons held together with zip ties. Here are some links from last year:

http://www.jyc.org/FoolsRules/images/2018/foolsrules20183.jpg

http://www.jyc.org/FoolsRules/images/2018/foolsrules20187.jpg

This year we did a frame of 2x4s and 2x3s with plastic sheeting stapled to the inside. It worked surprisingly well! Nearly unsinkable, even when we intentionally punctured huge holes in it. The wind was blowing out to sea and almost everyone was drifting out to Newport. We got first place this year! We celebrated with bbq and our ice cream bucket prize.

Thoughts for next year: Make a keel / centerboard so we can tack? Maybe that is crazy talk.

Declan’s post on the same event: https://www.declanoller.com/2019/08/11/skydog-2019-winners-again/

So majestic.
SKYDOG! SKYDOG! ARF ARF ARF!
Will is the ultimate cutie. Ladies, please apply inside.
Layin’ out the frame
Sky dog ascendent!
Our reward for first place was an unholy bucket of ice cream

Giving the Mostly Printed CNC a try (MPCNC)

Declan had the good idea to make a CNC machine. There is a popular plan available here

https://www.v1engineering.com/specifications/

A Doge

The really cute part of it is using electrical conduit as rails, which are shockingly inexpensive. Like a couple bucks for 4 feet! Holy shnikes!

We’ve been printing up a storm for the last couple weeks. A ton of parts!

We already had a lot of motors and stuff lying around. Declan bought a lot of stuff too just for this. Assorted bearings and bolts. The plans have a bill of materials.

Repetier host seemed to work pretty well for controlling the board

Used the RAMPS branch of the mpcnc marlin repo

We edited the header files as described in this post so that we could use both extruder outputs as extra X and Y motor drivers; it did not seem like driving two motors from the same driver board was acceptable. Our bearings are gripping the rails a little too tightly, so it is tough to move.

Some useful links on the thingiverse version of the mpccnc https://www.thingiverse.com/thing:724999

He suggests using this program: http://www.estlcam.com/ . Seems like Windows only? Ugh.

The mpcnc plans don’t contain actual tool mounts but here are some examples

A pen holder: https://www.thingiverse.com/thing:1612207/comments

A dewalt mount: https://www.thingiverse.com/thing:944952

This is an interesting web-based G-code maker. Ultimately a little too janky, though it works well enough to get started: http://jscut.org/jscut.html . It's not entirely clear what pocket vs. interior vs. whatever is; engrave sort of seemed like what I wanted. I went into Inkscape with a reasonable png, used Trace Bitmap, then Object to Path. It's also nice to just find an svg on the internet.

The following G-code was needed to zero Repetier and the RAMPS at the touch-off point. We added it as a macro. It is doing some confusing behavior though.

G92 X0 Y0 Z0
@isathome

pycam is the best I can find for 3d machining. Haven’t actually tried it yet

http://pycam.sourceforge.net/getting-started/

We should probably upgrade the thing to have limit switches. It pains me every time we slam it into the edge.

All in all, a very satisfying project. Hope we build something cool with it.

A horsie

Chile: Nice place

Just got back from Chile from a vacation visiting Will. Nice place.

We took a leisurely amount of time getting to Torres del Paine, which was the main feature of our trip. We travelled through Santiago, Punta Arenas and Puerto Natales. We spent a very tired day in the children’s science museum and rode the funicular. There wasn’t that much to do in the latter two cities, maybe we could have shaved some time from them. Our hostel in Punta Arenas was notably ramshackle. We spent 5 days backpacking in the park. Absolutely gorgeous. The wind on the second day was like nothing I’ve ever experienced. I was a little concerned about staying on my feet. Hiking poles for the win. Day 4 was cold and wet and miserable, but it ended up ok in the end. We were able to get a spot in a refugio when it just was too overwhelming to try to set up our tents in a flooded muddy campsite on that day. I think Beth in particular was at the end of her rope. I basically didn’t poop the entire first week I was there, but one glorious day on the mountain the heavens parted for me, and I was fine from then on. I didn’t quite pack food right. There ended up being camp stores at most of the places we stayed, but if I hadn’t been able to re up on cookies it would have been a lean couple days food wise. Ramen Bombs for the win. We drank right from the streams, which is unusual for us. Usually we filter.

All told we did ok on food. I really like the al pobre thing. What isn't to like about steak, onions, and eggs on fries? Chileans seem a little eager with the mayo. Nobody does breakfast right except the US. The street food was good. I love the fried tortilla thing that you can just slather salsa on; it was like 30 cents. The empanadas were also pretty great cheap food. Ceviche was also very tasty. They toss out avocado like it's nuthin' down there. Sandwiches were kind of shitty. Don't know if that is entirely fair, but that is how it felt. The highlight meal of the trip was at Cafe Artimana in Puerto Natales. Yeah, I got some al pobre. But also a basil lemonade thing.

After the hiking, we scheduled an early return to Santiago rather than busting our asses to a glacier viewpoint. In the airport at Punta Arenas, we got the southernmost dominos in the entire world. Ben Will and Declan went on a taxi quest to go get it. Wandered around Santiago, saw some churches and cathedrals, a fort, ate churros, etc. Declan was on a quest for a neck pillow. We did a prison themed Escape Room. People felt like we got a little cheated because some of the puzzles felt like bullshit? I think they really expect to break room records. I suck at escape rooms. We were able to spend a day in Valparaiso, which had a super awesome street art scene.

I spent the last day puking my guts out. So it goes. Not sure how exactly. The street sausage may have put me over the top. I guess I’m a sensitive fellow? I get pretty consistently unwell on trips.

Chile has tons of fluffy street dogs. They’re pretty friendly, although they do chase cars and motorcycles. Idiots.

Chile has a way lower English quotient than other trips I've made. I've been surprised how common at least some English has been in Europe and Asia, and was now equally surprised how little there was in Chile. It makes sense; most of the continent is Spanish speaking. It was really useful to have Will around, who has gotten shockingly good at Spanish, from an outsider's perspective.

SUCH MEMORIES

Declan’s post on the trip.

tough day

Wait, where are all my BOBBY PICS!?!

o there u r u cutie

overwhelmed

A strange place named Andre’s
boys
beth

Thoughts on Faking Some of GADTs in Rust

I’m a guy who is somewhat familiar with Haskell who is trying to learn Rust. So I thought I’d try to replicate some cool Haskell functionality in Rust. I would love to hear comments, because I’m trying to learn. I have no sense of Rust aesthetics yet and in particular I have no idea how this interacts with the borrow system. What follows is a pretty rough brain dump

GADTs (Generalized algebraic data types) are an extension in Haskell that allows you to write constrained type signatures for your data constructors. They also change how the type checking of pattern matching is processed.

GADTs are sometimes described/faked by being built by making data types that hold equality/unification constraints. Equality constraints in Haskell like a ~ Int are fairly magical and the Rust compiler does not support them in an obvious way. Maybe this is the next project. Figure out how to fake ’em if one can. I don’t think this is promising though, because faking them will be a little wonky, and then GADTs are a little wonky on top of that. See https://docs.rs/refl/0.1.2/refl/ So we’ll go another (related) road.

This is roughly what GADTs look like in Haskell.

data MyGadt a where
  TBool :: Bool -> MyGadt Bool
  TInt :: Int -> MyGadt Int

And here is one style of encoding using smart constructors and a typeclass for elimination (pattern matching is replicated as a function that takes callbacks for the data held in the different cases). Regular functions can have a more restricted type signature than the most general one their implementation implies. The reason to use a typeclass is so that we can write the eliminator as returning the same type that the GADT supplies. There isn't an explicit equality constraint; a kind of Leibniz equality

(forall f. f a -> f b)
is hiding in the eliminator. The Leibniz equality can be used in place of (~) constraints at some manual cost. http://code.slipthrough.net/2016/08/10/approximating-gadts-in-purescript/

Click to access leibniz-equality.pdf


data MyGadt2 a = TBool2 Bool | TInt2 Int
-- smart constructors. Hide the original constructors
tInt :: Int -> MyGadt2 Int
tInt = TInt2

tBool :: Bool -> MyGadt2 Bool
tBool = TBool2

-- gadt eliminator
class MyGadtElim a where
   elimMyGadt :: forall f. MyGadt2 a -> (Bool -> f Bool) -> (Int -> f Int) -> f a

instance MyGadtElim Int where
   elimMyGadt (TInt2 x) fb fi = fi x
   elimMyGadt _ _ _ = error "How did TBool2 get type MyGadt2 Int?"
instance MyGadtElim Bool where
   elimMyGadt (TBool2 x) fb fi = fb x
 

The forall f :: * -> * is a problem for Rust. Rust does not have higher kinded types, although they can be faked to some degree. https://gist.github.com/CMCDragonkai/a5638f50c87d49f815b8 There are murmurs of Associated Type Constructors / GATs, whatever those are, that help ease the pain, but I'm pretty sure they are not implemented anywhere yet.

I'm going to do something related, a defunctionalization of the higher kinded types. We make an application trait that will apply the given type function tag to the argument. What I'm doing is very similar to what happens in the singletons library, so we may be getting some things for free.

https://typesandkinds.wordpress.com/2013/04/01/defunctionalization-for-the-win/

trait App1<A> { type T; }

Then in order to define a new typelevel function rip out a quick tag type and an App impl.

struct F1 {}
impl<A> App1<A> for F1 {
    type T = Vec<Vec<A>>;
}

It might be possible to sugar this up with a macro. It may also be possible to write typelevel functions in a point free style without defining new function tag names. The combinators Id, Comp, Par, Fst, Snd, Dup, Const are all reasonably definable and fairly clear for small functions. Also the combinator S if you want to talk SKI combinatory calculus, which is unfit for humans. https://en.wikipedia.org/wiki/SKI_combinator_calculus For currying, I used a number for how many arguments are left to be applied (I’m not sure I’ve been entirely consistent with these numbers). You need to do currying quite manually. It may be better to work with tuplized arguments

struct Vec1 {} // number is how many applications are left
impl<A> App1<A> for Vec1 {
    type T = Vec<A>;
}

struct Id();

impl<A> App1<A> for Id {
    type T = A;
}

// partially applied const.
// struct Const()
// call this next one const1
struct Const1();
struct Const<B>(PhantomData<B>);
impl<A> App1<A> for Const1 { // partial application
    type T = Const<A>;
}

impl<A, B> App1<A> for Const<B> {
    type T = B;
}



struct Fst {}
impl<A, B> App1<(A,B)> for Fst {
    type T = A;
}

struct Snd {}
impl<A, B> App1<(A,B)> for Snd {
    type T = B;
}


struct Dup{}
impl<A> App1<A> for Dup {
    type T = (A,A);
}

struct Par2 {}
struct Par1<F>(PhantomData<F>);
struct Par<F, G>(PhantomData<F>, PhantomData<G>);
impl<F> App1<F> for Par2 {
    type T = Par1<F>;
}
impl<F, G> App1<G> for Par1<F> {
    type T = Par<F, G>;
}

impl<X, Y, F, G> App1<(X,Y)> for Par<F, G> where F : App1<X>, G : App1<Y> {
    type T = (F::T, G::T);
}

// In order to curry, I think I'd have to define a name for every curried form.


// combinator calculus: Const is K, Id is I, and here is the S combinator. Yikes.
type I = Id;
type K = Const1;

struct S3{}
struct S2<A>(PhantomData<A>);
struct S1<A, B>(PhantomData<A>, PhantomData<B>);
// struct S<A, B, X>(PhantomData<A>, PhantomData<B>, PhantomData<X>);
impl<X, A, B> App1<X> for S1<A, B> where A : App1<X>, B : App1<X>, <A as App1<X>>::T : App1<<B as App1<X>>::T> {
    type T = <<A as App1<X>>::T as App1<<B as App1<X>>::T>>::T;
}


struct Comp2();
struct Comp1<F>(PhantomData<F>);
struct Comp<F, G>(PhantomData<F>, PhantomData<G>);
impl<X, F, G> App1<X> for Comp<F, G> where G : App1<X>, F : App1<<G as App1<X>>::T> {
    type T = <F as App1<<G as App1<X>>::T>>::T;
}

Anyway, the following is a translation of the above Haskell (well, I didn’t wrap an actual i64 or bool in there but I could have I think). You need to hide the actual constructors labeled INTERNAL in a user inaccessible module.

enum Gadt<A> {
    TIntINTERNAL(PhantomData<A>),
    TBoolINTERNAL(PhantomData<A>)

}
// then build the specialized constructors
fn TBool() -> Gadt<bool> {
    Gadt::TBoolINTERNAL(PhantomData)
}

fn TInt() -> Gadt<i64> {
    Gadt::TIntINTERNAL(PhantomData)
}

The smart constructors put the right type in the parameter spot

Then pattern matching is a custom trait per gadtified type. Is it possible to unify the different elimination traits that will come up into a single Elim trait? I’m 50-50 about whether this is possible. What we’re doing is a kind of fancy map_or_else if that helps you.

https://doc.rust-lang.org/std/option/enum.Option.html#method.map_or_else

trait GadtElim {
    type Inner;
    fn gadtElim<F>(&self, case1 : <F as App1<bool>>::T, case2 : <F as App1<i64>>::T) -> <F as App1<Self::Inner>>::T where
         F : App1<bool>,
         F : App1<i64>,
         F : App1<Self::Inner>;


}


impl GadtElim for Gadt<i64> {
    type Inner = i64;
    fn gadtElim<F>(&self, case1 : <F as App1<bool>>::T, case2 : <F as App1<i64>>::T) -> <F as App1<Self::Inner>>::T where
         F : App1<bool>,
         F : App1<i64>,
         F : App1<Self::Inner> {
        match self {
            Gadt::TIntINTERNAL(PhantomData) => case2,
            Gadt::TBoolINTERNAL(PhantomData) => panic!("Somehow TBool has type Gadt<i64>") // Will never be reached though. god willing
        }
    }
}
impl GadtElim for Gadt<bool> {
    type Inner = bool;
    fn gadtElim<F>(&self, case1 : <F as App1<bool>>::T, case2 : <F as App1<i64>>::T) -> <F as App1<Self::Inner>>::T where
         F : App1<bool>,
         F : App1<i64>,
         F : App1<Self::Inner> {
        match self {
            Gadt::TIntINTERNAL(PhantomData) => panic!("Somehow TInt has type Gadt<bool>"),
            Gadt::TBoolINTERNAL(PhantomData) => case1 // Will never be reached though. god willing
        }
    }
}

Usage. You have to explicitly pass the return type function to the eliminator. No inference is done for you. It’s like Coq’s match but worse. BTW the dbg! macro is the greatest thing on earth. Well done, Rust.

    let z = TInt().gadtElim::<Id>(true , 34);   // Id here is a guess at the eaten type parameter
    let z2 = TBool().gadtElim::<Id>(true , 34);

    dbg!(z);
    dbg!(z2);
    struct F7 {} // You need to do this. Kind of sucks. macroify? App!(F = Vec)
    impl<A> App1<A> for F7 { type T = Vec<A>; }
    dbg!(TInt().gadtElim::<F7>(vec!(true,false) , vec!(34,45,4,3,46)));

You can make helpers that don’t require explicit types to be given

fn gadtRec<A>(x : impl GadtElim, case1 : A, case2 : A) -> A {
    x.gadtElim::<Const<A>>(case1 , case2)
}

One could also make an Eq a b type with Refl similarly. Then we need typelevel function tags that take two type parameters. Which, with currying or tupling, we may already have.

Questions:

Is this even good? Or is it a road of nightmares? Is this even emulating GADTs or am I just playing phantom type games?

We aren’t at full gadt. We don’t have existential types. Rust has some kind of existential story evolving (already there?), but the state of it is confusing to me. Something to play with. Higher rank functions would help?

Are overlapping typeclasses a problem in Rust?

Again, I have given nearly zero thought to borrowing and how it interacts with this. I'm a Rust n00b. I should think about it. Different eliminators based on whether you own or are borrowing?

How much of singleton style dependent types do we get from this? It feels like we have already paid the cost of defunctionalizing. http://hackage.haskell.org/package/singletons

My current playground for this is at https://github.com/philzook58/typo

Edit: Roughly speaking, Rust is an ML with affine types and a syntax facelift, so OCaml is a good place to pump for ideas. Oleg Kiselyov had a fascinating approach that sort of smuggles an Option type through an equality type using mutation. I wonder how well that would mesh with Rust. It seems obviously not thread safe unless you can make the mutation and matching atomic.

http://okmij.org/ftp/ML/GADT.txt

okmij.org/ftp/ML/first-class-modules/

https://blog.janestreet.com/more-expressive-gadt-encodings-via-first-class-modules/