# Introduction

Well, hey there buddo!

This book is intended to be an introduction to the programming language Datalog. There isn't an overwhelming number of resources out there about Datalog, and some never get past queries over movie or employee databases. Datalog really is a programming language.

Datalog is fascinating from multiple perspectives.

- Operational - Datalog rules have a simple model of execution
- Relational - Datalog is a declarative description language of relations
- Logical - Datalog rules can be read as logical axioms

There is a thrill that comes from finding out how to program a system that does not offer the typical programming constructs. Some other restrictive or unusual systems might include regular expressions, SQL, APL, Forth, vectorized arrays, SIMD operations, assembly, type-level programming, or weird machines. From these restrictions come great pains but also great gains. Datalog is highly parallelizable and has execution strategies that avoid large redundant calculations you might naively make.

This book is built around the syntax of the Souffle implementation of Datalog.

# Basics

To get started, let's demonstrate the core constructs of Datalog.

In Souffle Datalog, tables are declared alongside the types and names of their columns. The names of the columns are not used except for printing in Souffle.

Souffle Datalog has a number of basic built-in types:

- `symbol` - effectively a string datatype
- `number` - signed integers
- `unsigned` - unsigned integers
- `float` - floating point numbers

Here's an example declaration

```
.decl sausages(maker : symbol, difference : number, count : unsigned, inches : float)
```

## Facts

Facts are inserted into the database by "calling" the table with values for the columns, followed by a period.
This is the analog of an `INSERT INTO mytable VALUES ("42",-42,42,42.1);` statement in SQL.

Tables in Souffle datalog have set semantics, meaning duplicate rows are coalesced into a single row.

The user only sees relations marked as `.output`.

```
.decl mytable(x : symbol, y : number, z : unsigned, w : float)
mytable("42", -42, 42, 42.1).
mytable("42", -42, 42, 42.1). // Duplicate rows are coalesced
mytable("Hi", -47, 7331, 11.1).
.output mytable(IO=stdout)
.decl this_table_wont_print(x : symbol)
this_table_wont_print("hidden").
```

## Rules

Insertion of facts is nice, but datalog *rules* are where it shines.

Rules describe how new facts can be derived from the already known database of facts. Sometimes the initial facts are called the extensional database (EDB) and the derived facts are called the intensional database (IDB).

Rules have a right-hand side known as the body, which corresponds roughly to a `SELECT` query, and a left-hand side known as the head, which corresponds to an `INSERT`. Facts can be seen as rules with an empty, unconditional body.

An SQL style query can be encoded by making a rule whose body corresponds to the query and saves the result to a table in the head.

```
.decl foo(x : symbol, y : symbol)
foo("a","a").
foo("b","a").
.decl myquery(x : symbol)
// SELECT foo.x FROM foo WHERE foo.x = foo.y
myquery(x) :- foo(x,x).
.output myquery(IO=stdout)
```

## Recursion

Ok, rules are great, but what is *really* great are recursive rules.

There is something to be said about examples involving family. This is a concept we primally understand (hence the popularity of the Fast and Furious franchise).

We can use rules to extract grandparent, greatgrandparent, and greatgreatgrandparent tables from parent tables.

```
grandparent(a,c) :- parent(a,b), parent(b,c).
greatgrandparent(a,c) :- parent(a,b), grandparent(b,c).
greatgreatgrandparent(a,c) :- parent(a,b), greatgrandparent(b,c).
```

There is a theme here, a pattern to extract. Any person deep in our family tree is our ancestor. To describe this concept, we can use a recursive rule. Our parent is our ancestor, and the parent of our ancestor is also an ancestor.

```
ancestor(a,b) :- parent(a,b).
ancestor(a,c) :- parent(a,b), ancestor(b,c).
```

We can also count how deep a relationship is.

```
ancestor(1, a, b) :- parent(a,b).
ancestor(n+1, a,c) :- parent(a,b), ancestor(n,b,c).
```

Your ancestors should form a tree structure. If they form a DAG, that's kind of messed up.

If your ancestor relationship has *cycles* in it, that is deeply troubling as a human. But datalog will be happy as a clam!

## Exercises

## Resources

```
.decl edge(x : number, y : number)
edge(1,2).
.output edge(IO=stdout)
```


Suggestions: easy Sudoku-style matrix logic puzzles; first cousins.

# Negation

A fact not existing in the database doesn't necessarily mean it won't eventually be derived. This simple fact makes negation tricky.
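
Souffle resolves this with *stratified negation*: a relation may only be negated once it has been fully computed in an earlier stratum. A minimal sketch, with illustrative relation names:

```
.decl edge(x : number, y : number)
edge(1,2). edge(2,3).
.decl node(x : number)
node(x) :- edge(x,_).
node(y) :- edge(_,y).
node(4).
.decl reachable(x : number)
reachable(1).
reachable(y) :- reachable(x), edge(x,y).
// unreachable lives in a later stratum: reachable is fully computed first,
// so !reachable(x) is safe to evaluate
.decl unreachable(x : number)
unreachable(x) :- node(x), !reachable(x).
.output unreachable(IO=stdout)
```

Here only `unreachable(4)` is derived, since 1, 2, and 3 are all reachable.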

# Monotonicity

Datalog rules as described have the property that once a fact is derived, it is never removed from the database. The set of facts keeps growing: \( DB_{n} \subseteq DB_{n+1} \).

Why should we care?

Programs can be viewed at different levels of detail. Certainly we all know that renaming the variables in a program does not really change it. Sometimes reordering some assignments, lifting things out of loops, and removing redundant or dead computations don't change our programs in some sense. There is an intent behind most programs, and these low level changes don't really change the intent. A factorial function returns factorial regardless of whether it uses a loop or recursion or SIMD instructions or whatever. When we can make clear in our programs what is intent and what are details that don't really matter that much, the compiler or runtime environment is free to make choices to change the irrelevant details. This is a core idea behind declarative languages.

When I write a datalog program, I do have a vague operational model in my head of how it executes. This model may not at all reflect what actually happens.

# Forall

In classical logic, you can compile some of the logical operations into simpler forms. The universal quantifier $\forall x, P(x)$ is equivalent to $\neg \exists x, \neg P(x)$, and implication $a \rightarrow b$ is equivalent to $\neg a \lor b$.

Forall is tempting to reach for when modelling problems: "if such-and-such holds for every predecessor, then foo is true".

The natural forall in datalog is the one surrounding the entire clause: every variable in a rule is implicitly universally quantified.

The natural exists is the implicit existential quantification of variables that appear only in the body.

A bounded forall can be encoded through double negation. Prolog's `forall/2` is likewise implemented using negation.
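
As a sketch of the double-negation encoding, here is a hypothetical `all_good_succs` relation: the nodes all of whose successors avoid a `bad` predicate. The relation names are made up for illustration.

```
.decl edge(x : number, y : number)
edge(1,2). edge(1,3). edge(4,2).
.decl bad(x : number)
bad(3).
// forall y, edge(x,y) => !bad(y)
// encoded as: there exists no bad successor of x
.decl has_bad_succ(x : number)
has_bad_succ(x) :- edge(x,y), bad(y).
.decl all_good_succs(x : number)
all_good_succs(x) :- edge(x,_), !has_bad_succ(x).
.output all_good_succs(IO=stdout)
```

Node 1 is excluded because of its bad successor 3, leaving only node 4.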

# Algebraic Data Types

Algebraic data types are a programming language feature that are very useful for describing tree-like structures.

```
.type IntList = Nil {} | Cons {hd : number, tl : IntList}
.type BinTree = Empty {} | Leaf {n : number} | Node {left : BinTree, right : BinTree}
```

Traditional datalog does not include algebraic datatypes. Algebraic datatypes completely destroy the termination guarantees of datalog.

```
.type Nat = Succ {prev : Nat} | Zero {}
.decl nats(n : Nat)
nats($Zero()).
nats($Succ(n)) :- nats(n).
.limitsize nats(n=10)
.output nats(IO=stdout)
```

Despite that, they are profoundly useful.

## Subterm predicates

A useful predicate is one that collects up all subterms. You can use this for example to determine which lists to compute the length of. Without this blocking predicate, datalog would compute the length of all lists, which is not what you want.

```
.type IntList = Nil {} | Cons {hd : number, tl : IntList}
.type BinTree = Empty {} | Leaf {n : number} | Node {left : BinTree, right : BinTree}
.decl list(l : IntList)
list(tl) :- list($Cons(_hd,tl)).
.decl bintree(b : BinTree)
bintree(left), bintree(right) :- bintree($Node(left,right)).
.decl length(l : IntList, n : unsigned)
length($Nil(), 0).
length(l, n + 1) :- list(l), l = $Cons(_hd,tl), length(tl, n).
list($Cons(3,$Cons(2,$Cons(1,$Nil())))).
.output length(IO=stdout)
```

## Autoinc and Skolemization

It is very tempting and operationally simple to use the `autoinc()` feature, which maintains a mutable counter that increments by one every time you call it. This is really problematic with regards to the declarative semantics of datalog: you get different results depending on the order and the number of times you execute rules.

It is almost universally better to use ADTs instead of `autoinc()`. The reason is that you can replace the id with a term that holds the *reasons* you thought you needed a new id. Then, if the same reasons apply again, it will produce the same term. This same term will not make a new duplicate entry in the database, because datalog is manipulating sets, whereas `autoinc()` would produce a fresh number, and that *would* produce an essentially duplicate record. It is quite hard to get `autoinc()` to ever terminate, and even worse, it produces a lot of junk.
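
A small sketch of the contrast, with made-up relation names. The ADT id carries the reason for its own existence, so re-deriving it is harmless:

```
.type PairId = MkPair {x : symbol, y : symbol}
.decl saw(x : symbol, y : symbol)
saw("a","b").
saw("a","b"). // coalesced: still one row
.decl named(id : PairId)
// the id records *why* it exists; the same reason always yields the same id
named($MkPair(x,y)) :- saw(x,y).
// with autoinc() the head would be named(autoinc()), and every
// re-execution of the rule could mint a fresh, junk id
.output named(IO=stdout)
```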

# Egglog

Egglog is also intimately based around the concept of datatypes. There are a number of differences from Souffle's treatment:

- Datatypes are open. You can add new constructors. In fact, egglog's notion of datatype is not currently distinct from egglog's notion of function.
- Datatypes are searchable. Souffle does not store flattened ADTs as regular tables. Instead it uses a record storing hash table. The only way this hash table can be queried is by dereferencing a record id, or giving the table a record to produce an id. Partially filled in records as queries are not a thing. You can manually reflect into indexing tables if you like. This is a common technique.
- Egglog's "record ids" are not stable over time. They change as the equality relation discovers new relationships.

```
(datatype Expr (Add Expr Expr) (Lit i64))
```

# Semi-Naive Evaluation

# Program Analysis

## Pure Expressions

### Hutton's Razor

There is a very solid rule of thumb called Hutton's razor: you should try out programming language techniques on a simple language of only addition and constants first. You can learn a lot.

The first thing to do with such expressions is of course to evaluate them.

```
.type AExpr = Add {x : AExpr, y : AExpr} | Lit {n : number}
.decl aexpr(e : AExpr)
aexpr(x), aexpr(y) :- aexpr($Add(x,y)).
.decl eval(e : AExpr, v : number)
eval(e, vx + vy) :- aexpr(e), e = $Add(x,y), eval(x,vx), eval(y,vy).
eval(e, n) :- aexpr(e), e = $Lit(n).
aexpr($Add($Add($Lit(1), $Lit(2)), $Lit(3))).
.output eval(IO=stdout)
```

### Adding Variables

One direction to extend the language and make it more interesting is to add variables. Nothing needs to change, but now `eval` becomes a partial relation, only filled in for subterms that do not contain variables. `eval` now has the flavor of a constant propagation analysis. By using the results of the analysis, we can simplify the original expressions. Since `eval` does not depend on `simplify`, we can check whether or not an entry in `eval` exists for `e` using negation.

```
.type AExpr = Add {x : AExpr, y : AExpr} | Lit {n : number} | Var {v : symbol}
.decl aexpr(e : AExpr)
aexpr(x), aexpr(y) :- aexpr($Add(x,y)).
.decl eval(e : AExpr, v : number)
eval(e, vx + vy) :- aexpr(e), e = $Add(x,y), eval(x,vx), eval(y,vy).
eval(e, n) :- aexpr(e), e = $Lit(n).
.decl simplify(orig : AExpr, simp : AExpr)
simplify(e,$Lit(v)) :- eval(e, v).
simplify(e,e) :- aexpr(e), !eval(e, _), e = $Var(x).
simplify(e,$Add(x1,y1)) :- aexpr(e), !eval(e, _), e = $Add(x,y), simplify(x,x1), simplify(y,y1).
aexpr($Add($Add($Lit(1), $Lit(2)), $Var("x"))).
.output simplify(IO=stdout)
```

### Evaluation in Environments

We can also evaluate the language in an environment. I'm going to change my representation of terms to only allow 3 named variables. This avoids needing first class maps from symbols to numbers to represent the environment (which is doable, but complicated by accident, in Souffle datalog; other datalogs support this more easily). Instead we can use a fixed-size record to represent the environment.

```
.type AExpr = Add {x : AExpr, y : AExpr} | Lit {n : number} | X {} | Y {} | Z {}
.type Env = [x : number, y : number, z : number]
.decl aexpr(e : AExpr)
aexpr(x), aexpr(y) :- aexpr($Add(x,y)).
.decl env(e : Env)
.decl eval(e : AExpr, rho : Env, v : number)
eval(e, rho, vx + vy) :- aexpr(e), e = $Add(x,y), eval(x,rho,vx), eval(y,rho,vy).
eval(e, rho, n) :- env(rho), aexpr(e), e = $Lit(n).
eval($X(),rho,x) :- env(rho), rho = [x,_y,_z].
eval($Y(),rho,y) :- env(rho), rho = [_x,y,_z].
eval($Z(),rho,z) :- env(rho), rho = [_x,_y,z].
aexpr($Add($Add($Lit(1), $Lit(2)), $X())).
env([1,2,3]).
.output eval(IO=stdout)
```

Now let us contrast this with a different approach to describing states, which might have been your first inclination.

```
.type AExpr = Add {x : AExpr, y : AExpr} | Lit {n : number} | Var {x : symbol}
.decl aexpr(e : AExpr)
aexpr(x), aexpr(y) :- aexpr($Add(x,y)).
.decl env(x : symbol, v : number)
```

Another thing we can do in this simple language is a value-set analysis: for each subexpression, compute the set of values it might take.
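
As a sketch (assuming, purely for illustration, that the variable may take the values 0 through 2):

```
.type AExpr = Add {x : AExpr, y : AExpr} | Lit {n : number} | Var {v : symbol}
.decl aexpr(e : AExpr)
aexpr(x), aexpr(y) :- aexpr($Add(x,y)).
aexpr($Add($Var("x"), $Lit(1))).
// vals(e, v): v is a value e may possibly take
.decl vals(e : AExpr, v : number)
vals(e, n) :- aexpr(e), e = $Lit(n).
vals(e, v) :- aexpr(e), e = $Var(_), v = range(0, 3). // assumed input range 0..2
vals(e, vx + vy) :- aexpr(e), e = $Add(x,y), vals(x,vx), vals(y,vy).
.output vals(IO=stdout)
```

The top expression then gets the value set {1, 2, 3}.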

# Aggregates, Lattices, and Subsumption

# Aggregates

Aggregates include sum, count, min, max, mean, and product. They are summary data taken from multiple entries of a relation.

Traditionally, aggregates are determined after the fixed point is reached in the datalog rule iteration for the relations you are going to take the aggregate over. In this sense, aggregation is a relative of negation, which also requires the fixed point before it can determine that a fact will never be derived.

Negation, exists, and bounded universal quantification (`p = forall x, foo(x) => bar(x)` ~ `not_p :- foo(x), !bar(x).`) are all arguably also aggregates.

Reification into a first class set is also an aggregate.
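
In Souffle the aggregates look like this. Note that the aggregated relation must live in a lower stratum than the rule using the aggregate:

```
.decl item(name : symbol, cost : number)
item("a", 10).
item("b", 20).
item("c", 5).
.decl stats(total : number, cheapest : number, n : number)
stats(t, m, c) :-
    t = sum v : item(_, v),
    m = min v : item(_, v),
    c = count : item(_, _).
// stats(35, 5, 3)
.output stats(IO=stdout)
```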

# Lattices

A lattice is a very useful mathematical concept for computer science. The common applications involve a data structure or data type that have some notion of combination.

The algebraic laws of the lattice are

- idempotency - It's ok for the system to combine things twice
- commutativity and associativity - It's ok for the system to reorder operations

If a computer system can know it's ok to do these things and still get the same results, significant opportunities for optimization present themselves. One becomes free, for example, to perform operations in a distributed way and combine them back together in whatever order or duplication the results arrive in.

Lattices often have the feel of some bound getting tighter, information being gained, or a structure becoming more defined.

Lattices can be seen as an online form of aggregates. The lattice properties allow them to be computed at the same time as the relations themselves rather than in a subsequent stratum. Sometimes this is called recursive aggregation. Try figuring out how to do `sum` as a lattice though. It's tough.

Examples include:

- integers combined under maximum
- integers combined under minimum
- intervals
- Option
- sets under intersection or union

```
(function best (String) i64 :merge (max old new)) ; :default 0
(set (best "a") 10)
(set (best "a") 20)
(set (best "b") 5)
;(define bestc (best "c"))
(set (best "a") 42)
(print best)
```

When you observe a lattice value, you should not say that the lattice value *is* the value you see; rather, the value you see is a lower bound on its eventual fixed point value. A more general statement is that observations should correspond to filters, upwards closed sets of the lattice.

All things seem to come back to the path example. A good example of the use of lattices is shortest path. This is in essence running the Bellman-Ford algorithm.

```
(function path (String String) i64 :merge (min old new))
(function edge (String String) i64)
(set (edge "1" "2") 10)
(set (edge "2" "3") 11)
(set (edge "1" "3") 42)
(rule ((= cost (edge x y)))
((set (path x y) cost)))
(rule ((= c1 (edge x y)) (= c2 (path y z)))
((set (path x z) (+ c1 c2))))
(run 10)
(print path)
```

It's interesting to note that lattices in a certain sense do not add anything to the language. If instead of keeping just the join of the lattice value, we kept all of them, we could do the join as an aggregate at a later strata. However, if there is a loop in your graph, there will be an infinite number of possible costs, and the program will not terminate anymore. So lattices are a performance optimization and a termination improvement (improving termination is arguably increased descriptive power though).

# Subsumption

Souffle has a very powerful feature called subsumption, which is a form of conditional deletion.

#### Max and min

Max, min using subsumption.

```
mymax(z) :- reltomax(z).
mymax(z) <= mymax(z2) :- z <= z2.
```

Can I do sum and count? Maybe not. Not without tracking what we've already used. You could do this with bitsets, which requires reflection I guess. It is tempting to think you only need to track the count rather than the full set of already-used elements, but that doesn't work.

```
.decl mysum(count : unsigned, partial_sum : unsigned)
mysum(1, n) :- foo(n).
mysum(c + 1, s + n) :- mysum(c, s), foo(n). // nope: may add the same n twice
```

But if I cluster them into groups, I can take approximate sums. What if I track the current biggest element in the sum? This is tracking the state of a naive summation algorithm. We can always encode an imperative algorithm in datalog by tracking its state. This relies on summing positive numbers only.

```
mysum(n,n) :- foo(n).
mysum(n, partsum + n) :- mysum(top,partsum), foo(n), top < n.
mysum(n,sum) <= mysum(n,sum2) :- sum <= sum2.
```

count

```
mycount(n,1) :- foo(n).
mycount(n, partcount + 1) :- mycount(top,partcount), foo(n), top < n.
mycount(n,count) <= mycount(n,count2) :- count <= count2.
```

Could hyperloglog be relevant for approximate counts? https://en.wikipedia.org/wiki/HyperLogLog

A* search is interesting: if you have a bound on the remaining distance, you can subsume away bad candidate paths. Path search is a canonical datalog program. Subsumption allows shortest path search (which I showed above).

#### Lattices

See also bitset lattice above

Lattices are in some respects just an optimization. They allow the database to coalesce rows in a principled way. Without this, the database would fill up with the junk of lesser lattice values.

Lattices can sometimes be encoded into finite sets. Consider the 4-valued lattice of unknown, true, false, conflict. These are the power sets of a set of size 2, and can be represented as two tables which can be in different states of fullness. Because of stratified negation, it is not always possible to observe which state you are in, but you shouldn't really observe anything other than filter questions anyhow, and every value is above unknown.
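
A sketch of the two-table encoding, with illustrative names. Neither table holding a key means unknown; both holding it means conflict:

```
.decl istrue(x : symbol)
.decl isfalse(x : symbol)
istrue("a").
isfalse("a").
isfalse("b").
// "a" is in both tables: conflict. "b" is false. Anything else is unknown.
.decl conflict(x : symbol)
conflict(x) :- istrue(x), isfalse(x).
.output conflict(IO=stdout)
```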

You often need one rule to combine pieces with the join of the lattice and one rule for subsumption. This is probably not efficient, but it works. It would be nice to have a construct that did both at once efficiently; we should be immediately erasing `a` and `b` when we compute their join.

```
rel(join(a,b)) :- rel(a), rel(b).
rel(a) <= rel(b) :- lattice_order(a, b).
```

Perhaps you could make a generic lattice component parametrized over the join relation.

```
.comp Lattice<T> {
.decl join(x: T, y : T, z : T) overridable
.decl r(x : T)
r(z) :- r(x), r(y), join(x,y,z).
r(x) <= r(y) :- join(x,y,y).
}
```

But filling out `join` will be annoying. There isn't a great way to parametrize over functors so far as I can tell.

Todo:

- vsa

##### Min/max lattice

A labelled max lattice. I suppose in a sense this is the lattice of the function space `T -> int` with join defined pointwise, if that's your bag.

```
.comp Upper<T> {
.decl upper(label: T, x : unsigned)
upper(l,x) <= upper(l,x1) :- x <= x1.
}
.comp Lower<T> {
.decl lower(label: T, x : unsigned)
lower(l,x) <= lower(l,x1) :- x1 <= x.
}
.init symu = Upper<symbol>
symu.upper("x", 1).
symu.upper("x", 7).
symu.upper("y", 4).
.output symu.upper(IO=stdout)
```

If you have both, you have an interval, see below.

##### Maybe/Option lattice

```
.type optionsymbol = None {} | Some {val : symbol}
.decl A(v : optionsymbol)
A($Some("foo")).
A($Some("fiz")).
A($Some("foz")).
//A($None()).
A($None()) :- A(x), A(y), x != y.
A(x) <= A($None()) :- A($None()).
.output A(IO=stdout)
```

You can also make an N-max set lattice: keep the elements sorted and insert in order. It's kind of a pain if you get too many.

```
.type Nary = None {} | One {a : T} | Two {a : T, b : T} | Top {}
```

##### Intervals

```
.type Interval = [l : float, u : float]
.type Expr = Add {a : Expr, b : Expr} | Lit { a : float} | Var {x : symbol}
// all expressions
.decl expr(e : Expr)
expr(a), expr(b) :- expr($Add(a,b)).
expr($Add($Var("x"), $Lit(1))).
.decl iexpr(e : Expr, i : Interval)
iexpr($Lit(n), [n,n]):- expr($Lit(n)).
iexpr(e, [la + lb, ua + ub]) :- iexpr( a, [la,ua] ), iexpr(b, [lb, ub]), e = $Add(a,b), expr(e).
// also vice versa back down the tree
iexpr(b, [ l1 - ua, u1 - la]) :- iexpr($Add(a,b), [l1,u1]), iexpr(a, [la,ua]).
iexpr($Var("x"), [-1,1]).
iexpr($Var("x"), [0,1]).
iexpr($Add($Var("x"), $Var("x")), [0.5,0.75]).
// meet semantics
iexpr(e, [max(l1,l2), min(u1,u2)]) :- iexpr(e, [l1,u1]), iexpr(e, [l2,u2]).
iexpr(e, [l1,u1]) <= iexpr(e, [l2,u2]) :- l1 <= l2, u2 <= u1.
.output iexpr(IO=stdout)
```

It may not be worth packing into a record though. This almost certainly has a performance cost, and records never get deleted. So you should just unpack into relations.

```
// intervals getting bigger
/*
components?
We'd need to know at compile time how many possible instantiations
.comp Interval<T>{
.decl upper(x : T)
.decl lower(x : T)
}
*/
.decl upper(t : symbol, x : float)
upper(t,x) <= upper(t, y) :- x <= y.
.decl lower(t : symbol, x : float)
lower(t,x) <= lower(t, y) :- y <= x.
.output upper(IO=stdout)
.output lower(IO=stdout)
upper("foo", 7).
upper("foo", 8).
upper("fiz", 9).
lower("foo", 8).
lower("fiz", 9).
lower("foo", -3).
.comp Interval<T>{
.decl upper(x : T)
upper(x) <= upper(y) :- x <= y.
.decl lower(x : T)
lower(x) <= lower(y) :- y <= x.
}
.init i1 = Interval<float>
i1.upper(3).
i1.upper(14).
.output i1.upper(IO=stdout)
```

# BitSets

# First Class Sets

# Prolog

Read The Power of Prolog.

# Prolog vs Datalog

Compare with top down vs bottom up in dynamic programming.

Datalog is the "answer" half of prolog. It corresponds to "returns" in prolog

# What is Logic Programming

Computation = Proof Search.

`:-` is the horizontal bar of an inference rule.

## Pure vs Impure Prolog

For the purposes of this book, I am only referring to mostly pure constructs in prolog.

Because the control flow of prolog is relatively predictable compared to datalog or minikanren, there is an irresistible temptation to use imperative constructs.
This situation mirrors that of Haskell vs Scheme/OCaml. The lazy execution of Haskell makes reasoning about *true* side-effectful code so difficult that Haskell programmers are forced to use the monadic paradigm. There are benefits to this discipline.

# Magic Set

# Unification Variables

The meaning of unification variables is not just one thing.

# Minikanren

At the level of comparison of this book, minikanren is more or less the same as prolog. It has a very similar notion of unification variable and is top down. It differs from prolog in the method by which it chooses to perform search.

Will Byrd chips in on his comparison between prolog and minikanren.

# Tabling

# Modelling State

Imperative programs proceed via a manipulation of state.

In essence, a stateful program can be emulated in datalog by recursively transforming one state to the next.

```
// Initial state
state(0,0,0).
// Transition functions
state(x1,y1,z1) :- state(x,y,z), x1 = f(x,y,z), y1 = g(x,y,z), z1 = h(x,y,z).
// Transition Relations
state(x1,y1,z1) :- state(x,y,z), trans(x,y,z,x1,y1,z1).
```

This is analogous to the simpler problem.

```
reachable(0).
reachable(x1) :- reachable(x), edge(x,x1).
```

Which is the single-source form of the classic datalog `path` problem, but with more interesting structured vertices.

# Die Hard Puzzle

A fun puzzle is the Die Hard 3 puzzle. This is used in some TLA+ tutorials.

```
.pragma "provenance" "explain"
.decl reach(big : unsigned, small : unsigned)
reach(0,0).
// move water small to big
reach(big + small, 0) :- reach(big,small), big + small <= 5.
reach(5, small - (5 - big)) :- reach(big,small), big + small > 5.
// move water big to small
reach(0, big + small) :- reach(big,small), big + small <= 3.
reach(big - (3 - small), 3) :- reach(big,small), big + small > 3.
// empty big
reach(0, small) :- reach(_,small).
// empty small
reach(big,0) :- reach(big,_).
// fill big
reach(5, small) :- reach(_,small).
// fill small
reach(big,3) :- reach(big,_).
.decl big_is_4(small : unsigned)
big_is_4(small) :- reach(4,small).
.output big_is_4(IO=stdout)
```

# Imperative Programs

Here we directly model a simple imperative program. We look at the program to determine the state. There are two variables, `i` and `acc`.

```
/*
acc = 0;
for(i = 0; i < 10; i++){
acc += i;
}
*/
.decl sumn(i : number, acc : number)
sumn(0,0).
sumn(i+1, s + i) :- sumn(i,s), i < 10.
.output sumn(IO=stdout)
```

A more direct modelling of the program is to make the *line number* / program point part of your state. Your underlying CPU does the same thing: it tracks the address of the code it is running in a register called the program counter. You are in essence modelling that.

```
/*
// L1
acc = 0;
i = 0;
//L2
while(i < 10){
// L3
acc += i;
// L4
i++;
}
//L5
*/
.decl sumn(pc : number, i : number, acc : number)
sumn(1,0,0).
sumn(2,i,acc) :- sumn(1,i,acc).
sumn(3,i,acc) :- sumn(2,i,acc), i < 10.
sumn(4,i,acc+i) :- sumn(3,i,acc).
sumn(2,i+1,acc):- sumn(4,i,acc).
.output sumn(IO=stdout)
```

# Factoring State

A common overapproximation is to factor the state into separate relations. This will consider more executions possible than actually are, but it can still be used to prove that certain behavior does not happen. The state can only be factored and reconstituted exactly if the subpieces of the state do not interact at all.

We have made this analysis nonterminating. :(


```
.decl acc(pc : number, v : number)
.decl i(pc : number, v : number)
.decl sumn(pc : number, i : number, acc : number) inline
sumn(pc,i,acc) :- acc(pc,acc), i(pc,i).
acc(1,0). i(1,0).
acc(2,acc),i(2,i) :- sumn(1,i,acc).
acc(3,acc),i(3,i) :- sumn(2,i,acc), i < 10.
acc(4,acc+i),i(4,i) :- sumn(3,i,acc).
acc(2,acc),i(2,i+1) :- sumn(4,i,acc).
.limitsize acc(n=20)
.output acc(IO=stdout)
```

## Ball and Springs

Balls and springs are a discretized model of a partial differential equation. We can model the equations of motion, initial conditions, and boundary conditions using datalog. What fun!

```
#define T 10
.decl ball(t : number, n : number, x : float, v : float)
// initial conditions
ball(0, n, 0.0, 0.0) :- n = range(0,11), n != 5.
ball(0, 5, 10.0, 0.0).
// ball(0, n, x, v) :- n = range(1,10), x = to_float(n) * 0.1, v = 0.
// ball(0,0,0,0).
// ball(0,10,0,0).
// Boundary condition: first and last ball fixed
ball(t+1,0,0.0,0.0) :- t = range(0,T).
ball(t+1,10,0.0,0.0) :- t = range(0,T).
// dynamics
ball(t+1, n, newx, newv) :-
t < T,
ball(t, n-1, x0, _v0),
ball(t, n, x, v),
ball(t, n+1, x2, _v2),
newx = x + 0.1 * v,
newv = v + 0.1 * (x0 - x + x2 - x).
.output ball(IO=stdout)
```

# Mandelbrot

Translated from the SQLite docs on recursive CTEs.

```
WITH RECURSIVE
xaxis(x) AS (VALUES(-2.0) UNION ALL SELECT x+0.05 FROM xaxis WHERE x<1.2),
yaxis(y) AS (VALUES(-1.0) UNION ALL SELECT y+0.1 FROM yaxis WHERE y<1.0),
m(iter, cx, cy, x, y) AS (
SELECT 0, x, y, 0.0, 0.0 FROM xaxis, yaxis
UNION ALL
SELECT iter+1, cx, cy, x*x-y*y + cx, 2.0*x*y + cy FROM m
WHERE (x*x + y*y) < 4.0 AND iter<28
),
m2(iter, cx, cy) AS (
SELECT max(iter), cx, cy FROM m GROUP BY cx, cy
),
a(t) AS (
SELECT group_concat( substr(' .+*#', 1+min(iter/7,4), 1), '')
FROM m2 GROUP BY cy
)
SELECT group_concat(rtrim(t),x'0a') FROM a;
```

```
#define dx 0.05
#define dy 0.0625
.decl xaxis(x : float)
.decl yaxis(y : float)
.decl m(iter:unsigned, cx : float, cy : float, x : float, y : float)
.decl m2(iter:unsigned, cx : float, cy : float)
.decl collect(cx:float, cy:float, line:symbol)
.decl a(cy:float, line:symbol)
xaxis(-2).
yaxis(-1.00000001).
xaxis(x + dx) :- xaxis(x), x < 1.2.
yaxis(y + dy) :- yaxis(y), y < 1.
m(0,x,y, 0,0) :- xaxis(x), yaxis(y).
m(iter+1, cx, cy, x*x-y*y + cx, 2.0*x*y + cy ) :- m(iter, cx,cy,x,y),iter < 28, x*x + y*y < 4.0.
m2(iter, cx,cy) :- xaxis(cx), yaxis(cy), iter = max i : m(i, cx,cy, _,_).
collect(-2.00,cy,"") :- yaxis(cy).
collect(cx+dx,cy,line2) :- m2(iter,cx+dx,cy), collect(cx,cy,line), line2 = cat(line,c),
( iter < 16 , c = " " ; iter >= 16, c = "#" ).
a(1+cy,line) :- cx = max x : xaxis(x), collect(cx,cy,line).
.output a(IO=stdout)
```

# Other stuff

- Concurrent Execution

# Functional Programs

There are two issues to tackle when converting functional programs to datalog.

- Datalog deals mostly in relations not functions.
- Datalog is bottom up, whereas functional programs are demand driven (they only compute functions that are called).

Functional programs can be converted to logic programs by making the return value a parameter to the relation. A function is a subcase of relations in which the last column is determined by the initial columns.

For example, the function `factorial(n)` becomes the relation `factorial(n,res)`, and `append(x,y)` becomes `append(x,y,res)`.

```
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
```

```
.decl factorial(n : unsigned, res : unsigned)
factorial(0,1).
factorial(n+1, (n + 1) * m) :- factorial(n, m), n < 10.
.output factorial(IO=stdout)
```

A priori, I have chosen to stop the process at `n = 10`. This is fine if we can know at compile time what answers we need computed. What if we don't though? Then we need to write the function in a demand driven style.

## Demand

A simple mechanism to translate functional programs is to define auxiliary demand relations. These relations do not have the field corresponding to the result of the function.

Demand needs to be propagated downward. For every subcall in the body of a function's definition, you need to produce the demand for that subcall.

```
.decl _factorial(n : unsigned)
.decl factorial(n : unsigned, res : unsigned)
factorial(0,1).
_factorial(n-1) :- _factorial(n), n > 0. // demand propagation
factorial(n+1, (n + 1) * m) :- _factorial(n+1), factorial(n, m).
_factorial(10).
.output factorial(IO=stdout)
```

## Lists

We can show some common list functions to demonstrate this point further.

```
.comp List<A>{
.type t = [hd : A, tl : t]
.decl _length(l : t)
.decl length(l : t, res : unsigned)
_length(tl) :- _length([_hd,tl]).
length(nil,0).
length([hd,tl], n+1) :- _length([hd,tl]), length(tl,n).
.decl _nth(l : t, n : unsigned)
.decl nth(l : t, n : unsigned , res : A)
_nth(tl, n - 1) :- _nth([_hd,tl], n), n > 0.
nth([hd,tl],0,hd) :- _nth([hd,tl], 0).
nth([hd,tl], n, res) :- _nth([hd, tl], n), n > 0, nth(tl, n-1, res).
.decl _rev(l : t)
.decl __rev(l : t, rem : t, acc : t)
.decl rev(l : t, res : t)
__rev(l, l, nil) :- _rev(l).
__rev(l, tl, [hd,acc]) :- __rev(l, [hd, tl], acc).
rev(l, acc) :- __rev(l, nil, acc).
.decl _append(x : t, y : t)
.decl append(x : t, y : t, z : t)
_append(tl, y) :- _append([_hd, tl], y).
append(nil, y, y) :- _append(nil, y).
append([hd,tl], y, [hd, z]) :- _append([hd,tl], y), append(tl, y, z).
.decl _mem(x : A, l : t)
.decl mem(x : A, l : t, res : number)
mem(x, nil, 0) :- _mem(x, nil).
mem(x, [hd,tl], 1) :- _mem(x, [hd,tl]), x = hd.
_mem(x, tl) :- _mem(x, [hd,tl]), x != hd.
mem(x, [hd,tl], res) :- _mem(x, [hd,tl]), x != hd, mem(x, tl, res).
}
```

## The Stack

In the actual implementation of a functional program, there exists some data structure to keep track of who needs what results and what to do with them next. This is typically achieved via a stack, which records return addresses and records the needed state of the call site.

The stack is an implicit data structure that most languages maintain for you, but it is useful to note sometimes that it is there.

This same observation is useful in other programming contexts. It is sometimes relatively wasteful to use the general purpose function call mechanism. You can convert a recursive function call into a loop and maintain the analog of your own stack for efficiency purposes.

Datalog does not have an implicit stack. If you want such a thing, you must build it.

# Demand by Option Lattice

A refactoring of the above two relations is to use a single relation whose result field is an Option lattice: the value None represents an unknown value (a demand), and Some(x) represents a computed result.
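Souffle has no built-in lattices, so the following is only a sketch using an ADT under plain set semantics: the `$None()` demand row coexists with the `$Some(...)` answer row rather than being overwritten, whereas a true lattice implementation would coalesce them. The relation and constructor names are illustrative.

```
.type opt = None {} | Some {v : unsigned}
.decl fact(n : unsigned, res : opt)
// a $None() row is a demand for fact(n)
fact(n - 1, $None()) :- fact(n, $None()), n > 0.
// base case; $Some rows are answers
fact(0, $Some(1)).
// fill in answers where demand exists
fact(n, $Some(n * m)) :- fact(n, $None()), n > 0, fact(n - 1, $Some(m)).
// demand fact(10)
fact(10, $None()).
.output fact(IO=stdout)
```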

# Egglog

Egglog is a new style of Datalog that combines Datalog with equality saturation over e-graphs.

# Macros

Souffle datalog by default runs its input through the C preprocessor, so C preprocessor macros are available. These are useful for defining simple functions.
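For example, a squared-distance helper written as macros (a sketch; the macro and relation names are illustrative):

```
#define SQ(x) ((x) * (x))
#define DIST2(x1, y1, x2, y2) (SQ((x1) - (x2)) + SQ((y1) - (y2)))

.decl point(id : number, x : number, y : number)
point(0, 0, 0). point(1, 3, 4).
.decl dist2(a : number, b : number, d : number)
dist2(a, b, DIST2(x1, y1, x2, y2)) :- point(a, x1, y1), point(b, x2, y2).
.output dist2(IO=stdout)
```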

# Inline Relations

Souffle datalog offers inline relations. Calls to an inline relation are resolved top down at compile time, which must terminate, so inline relations cannot be recursively defined. This is mainly useful for complex but non-recursive definitions.
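A small sketch of the feature using Souffle's `inline` qualifier (the relation names are illustrative):

```
.decl parent(p : symbol, c : symbol)
parent("alice", "bob"). parent("alice", "carol").
// sibling is expanded into its callers at compile time,
// so it cannot be recursive and cannot be output directly
.decl sibling(x : symbol, y : symbol) inline
sibling(x, y) :- parent(p, x), parent(p, y), x != y.
.decl pair(x : symbol, y : symbol)
pair(x, y) :- sibling(x, y).
.output pair(IO=stdout)
```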

# Types and Logic

The clauses of prolog and datalog can be read as inference rules.

\( \frac{}{edge(1,2)} \)

\( \frac{edge(X,Y)}{path(X,Y)} \)

\( \frac{edge(X,Y) \quad path(Y,Z)}{path(X,Z)} \)
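Transcribed back into Souffle, these axioms are the canonical path program:

```
.decl edge(x : number, y : number)
.decl path(x : number, y : number)
edge(1, 2).
path(x, y) :- edge(x, y).
path(x, z) :- edge(x, y), path(y, z).
.output path(IO=stdout)
```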

In this analogy, facts are not quite associated with propositions, but actually with *judgements*.

Some other judgements you may see around in computer science literature:

(I am constantly thrown by infix notation for judgements and relations.)

\( \Gamma \vdash e : A \)
`.decl hastype(ctx, expr, type)`

\( \{pre\} \; cmd \; \{post\} \)
`.decl hoare(pre, cmd, post)`

\( t \; \mathsf{wf} \)
`.decl wellformed(term)`

\( e \downarrow v \)
`.decl evals_to(expr, value)`

\( A \equiv B \; \mathsf{type} \) and \( \Gamma \vdash A \equiv B \; \mathsf{type} \)

Frank Pfenning's lecture notes on judgements and deductive systems are a good reference here.

# Provenance

Provenance in datalog is tracking how a fact was derived. Conceptually, every fact could carry an entire proof tree storing every fact and rule application used to derive it from the initial database, but this is not necessary.

This is a similar situation to minimum path finding and other dynamic programming problems. One does not need to store the entire solution at every node because once you have built the table containing minimum costs, a simple search is sufficient to reconstruct the solution in a top down, demand driven manner.

Conceptually, the proof object behind the edge-path example is any path connecting the two nodes. This is an entirely down to earth object, representable as a list of vertices. If you are ever scared of the idea of a proof object, this is good to keep in mind.
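A sketch of such a top down reconstruction for the edge-path program (the `_why`/`why` relations are illustrative): once `path` is complete, we only chase witness edges for the queries we care about.

```
.decl edge(x : number, y : number)
.decl path(x : number, y : number)
edge(1, 2). edge(2, 3).
path(x, y) :- edge(x, y).
path(x, z) :- edge(x, y), path(y, z).

// demand a derivation of path(1, 3)
.decl _why(x : number, z : number)
.decl why(x : number, y : number, z : number) // step x -> y on a path to z
_why(1, 3).
why(x, z, z) :- _why(x, z), edge(x, z).
why(x, y, z) :- _why(x, z), edge(x, y), path(y, z), y != z.
_why(y, z)   :- why(x, y, z), y != z.
.output why(IO=stdout)
```

Note that `why` collects every witness edge for the demanded query, so if several paths exist it records all of them; picking a single path is a further extraction step.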

# Contexts

Many judgements are hypothetical. The infix syntax makes this seem mysterious, but it is just another field.

What is a pain is that the context is very often considered to be a set, and judgements using weaker contexts (fewer assumptions) should subsume uses of stronger contexts. Not all uses of contexts actually obey these properties; see substructural logics like linear logic.

# Type Checking

Related of course to the above. See the table on page 3 of this bidirectional typing tutorial. The program below is more or less a transcription of those rules to datalog. It is interesting how explicit the moding and forward / backward reasoning becomes. `has` becomes a separate entity from `infer` and `check`. The latter two are top down / backwards requests for proofs, whereas `has` is a bottom up / forward reasoning assertion of truth.

```
.type type = Unit {} | Arr {a : type, b : type}
.type term = TT {} | Lam {b : term} | App {f : term, x : term} | Var {n : number} | Annot {t : term, ty : type}
.type ctx = [ ty:type, g : ctx]
.decl has(g : ctx, t : term , ty : type) // bottom up
.decl infer(g : ctx, t : term) // mode -+ topdown
.decl check(g : ctx, t : term, ty : type) // topdown
// variable.
// base case
has([ty,ctx], $Var(0), ty) :- infer([ty,ctx], $Var(0)).
// recursive case
infer(ctx, $Var(n-1)) :- infer([_ty,ctx], $Var(n)), n > 0.
has([ty,ctx], $Var(n), ty2) :- infer([ty,ctx], $Var(n)), has(ctx, $Var(n-1), ty2).
// infer instead of check if possible
infer(ctx,t) :- check(ctx,t,_ty).
// subtyping or equality rule
// has(ctx, t, ty) :- check(ctx,t,ty), has(ctx,t,ty2), sub(ty,ty2).
// annotation rule
check(ctx, t, ty) :- infer(ctx, $Annot(t,ty)).
// unit rule
has(ctx,$TT(), $Unit()) :- check(ctx, $TT(), $Unit()).
// Lam case
check([a, ctx], body, b) :- check(ctx, $Lam(body), $Arr(a,b)).
has(ctx, $Lam(body), $Arr(a,b)) :- check(ctx, $Lam(body), $Arr(a,b)), has([a, ctx], body, b).
// App case.
infer(ctx, f) :- infer(ctx, $App(f,_x)).
check(ctx, x, a) :- infer(ctx, $App(f,x)), has(ctx, f, $Arr(a,_b)).
has(ctx, $App(f,x), b) :- infer(ctx, $App(f,x)), has(ctx, f, $Arr(a,b)), has(ctx, x, a).
check(nil, $Lam($Lam($App($Var(1),$Var(0)))),
      $Arr($Arr($Unit(), $Unit()), $Arr($Unit(), $Unit()))).
.output has(IO=stdout)
/*
.decl check_top(t : term, ty : type)
check(nil,t,ty):- check_top(t,ty).
.decl has_top(t : term, ty : type)
has_top(t,ty ):- has(nil, t, ty).
check_top($TT(), $Unit()).
check_top($Lam($Var(0)), $Arr($Unit(),$Unit())).
check_top($Lam($Lam($Var(0))), $Arr($Unit(),$Unit())).
check_top($Lam($Lam($Var(0))), $Arr($Unit(),$Arr($Unit(),$Unit()))).
check_top($Lam($Lam($Var(1))), $Arr($Unit(),$Arr($Unit(),$Unit()))).
check_top($Lam($Lam($Var(2))), $Arr($Unit(),$Arr($Unit(),$Unit()))).
check_top($App($Lam($Var(0)), $TT()), $Unit()).
check_top($Lam($Lam($App($Var(1),$Var(0)))),
          $Arr($Arr($Unit(), $Unit()), $Arr($Unit(), $Unit()))).
*/
//.output check(IO=stdout)
//.output infer(IO=stdout)
//.output has(IO=stdout)
//.output has_top(IO=stdout)
```

# Graphs

# Egglog

```
(datatype Expr (Lit i64) (Add Expr Expr))
(rewrite (Add (Lit i) (Lit j)) (Lit (+ i j)))
(relation expr (Expr))
(expr (Add (Lit 1) (Lit 2)))
(run 10)
(print expr)
```

# Egglog by Example

(Copied From Pearls Section of paper submission)

In this section, we will walk through a list of examples showing the wide applications of egglog.

## Functional Programming with Egglog

Egglog is capable of evaluating many functional programs very naturally. The standard evaluation of Datalog programs is traditionally done bottom up: starting from the known facts, it iteratively derives new facts, and when it terminates, all the derivable facts are contained in the database. In the evaluation of functional programs, however, we usually start with the goal of evaluating a big expression and break it down into smaller goals while traversing the abstract syntax tree from root to leaves, before collecting the results bottom up. In order to simulate the top-down style of evaluation of functional programs, Datalog programmers need to create manual "demand" relations that carefully tune the firing of rules to capture the evaluation order of functional programs. On the other hand, egglog expresses many functional programs very naturally, thanks to the unification mechanism.

For example, consider the task of computing the relation `tree_size`, which maps trees to their sizes. A full instantiation of `tree_size` finds the size of *all* trees and is therefore infinite, so bottom-up evaluation will not terminate in languages like Souffle. We need to manually *demand transform* the program to make sure we only instantiate `tree_size` for the trees asked for and their children. Demand transformation first populates a "demand" relation, and `tree_size` will compute sizes only for trees that reside in the demand relation. The program is shown below. To get the size of a specific tree, we have to first insert the tree object into the `tree_size_demand` relation to make a demand, before looking up `tree_size` for the actual tree size.

```
.type Tree = Leaf {} | Node {t1: Tree, t2: Tree}
.decl tree_size_demand(l: Tree)
.decl tree_size(t: Tree, res: number)
// populate demands from roots to leaves
tree_size_demand(t1) :- tree_size_demand($Node(t1, t2)).
tree_size_demand(t2) :- tree_size_demand($Node(t1, t2)).
// calculate bottom-up
tree_size($Node(t1, t2), s1 + s2) :-
    tree_size_demand($Node(t1, t2)),
    tree_size(t1, s1),
    tree_size(t2, s2).
tree_size($Leaf(), 1).
// compute size for a particular tree
tree_size_demand($Node($Leaf(), $Leaf())).
.output tree_size
```

Similar to Datalog, Egglog programs are evaluated bottom up. However, we do not need a separate demand relation in Egglog, because we can use ids to represent unknown or symbolic information. To query the size of a tree `t`, we simply put the atom `(tree_size t)` in the action. Egglog will create a fresh id as a placeholder for the value `tree_size` maps to on `t`, and the rest of the rules will figure out the actual size subsequently. The egglog program is shown below.

```
(datatype Tree (Leaf) (Node Tree Tree))
(datatype Expr (Add Expr Expr) (Num i64))
(function tree_size (Tree) Expr)
;; compute tree size symbolically
(rewrite (tree_size (Node t1 t2))
         (Add (tree_size t1) (tree_size t2)))
;; evaluate the symbolic expression
(rewrite (Add (Num n) (Num m))
         (Num (+ n m)))
(union (tree_size (Leaf)) (Num 1))
;; compute size for a particular tree
(define two (tree_size (Node (Leaf) (Leaf))))
(run 10)
(check (= two (Num 2)))
(clear)
```

Conceptually, we create a "hole" for the value `(tree_size t)` is mapped to. A series of rewrites will ultimately fill in this hole with concrete numbers. We use fresh ids here in a way that is similar to how logic programming languages use logic variables. In logic programming, logic variables represent unknown information that has yet to be resolved. We view this ability to represent the unknown as one of the key insights behind both Egglog and logic programming languages like Prolog and miniKanren. However, unlike Prolog and miniKanren, egglog does not allow backtracking, in favor of monotonicity and efficient evaluation.

## Simply Typed Lambda Calculus


Previous equality saturation applications use lambda calculus as their intermediate representation for program rewriting~\citep{koehler2021sketch,egg,storel}. To manipulate lambda expressions in e-graphs, a key challenge is performing capture-avoiding substitutions, which requires tracking the set of free variables. A traditional equality saturation framework will represent the set of free variables as an eclass analysis and use a user-defined applier to perform the capture-avoiding substitution, both written in the host language (e.g., Rust). As a result, users need to reason about both rewrite rules and custom Rust code to understand their applications.

We follow the `lambda` test suite of egg \citep{egg} and replicate the lambda calculus example in egglog.
Instead of writing custom Rust code for analyses,
we track the set of free variables using standard egglog rules.
Figure \ref{fig:free-var} defines a function that maps terms to set of variable names.
Since the set of free variables can shrink in equality saturation
(e.g., rewriting \(x-x\) to \(0\) can shrink the set of free variables from \(\{x\}\) to the empty set),
we define the merge expression as set intersections.
The first two rules say that values have no free variables and variables have themselves
as the only free variable.
The free variables of lambda abstractions, let-bindings, and function applications are
inductively defined by constructing the appropriate variable sets at the right hand side.
Finally,
the last three rewrite rules perform the capture-avoiding substitution
over the original terms depending on the set of free variables.
When the variable of lambda abstraction is contained in the set of free variables
of the substituting term,
a new fresh variable name is needed.
We skolemize the rewrite rule
so that the new variable name is generated deterministically.
Note that the last two rules depend both positively and negatively
on whether the set of free variables contains a certain variable,
so this program is not monotonic in general.


egglog can not only express eclass analyses, which are typically written in a host language like Rust, but also semantic analyses not easily expressible in eclass analyses. For example, consider an equality saturation application that optimizes matrix computation graphs and uses lambda calculus as the intermediate representation. Users may want to extract terms with the least cost as the outputs of optimizations, but a precise cost estimator may depend on the type and shape information of an expression (e.g., the dimensions of matrices being multiplied). Expressing type inference within the abstraction of eclass analyses is difficult: in eclass analyses, the analysis values are propagated bottom up, from children to parent eclasses. However, in simply typed lambda calculus, the same term may have different types depending on the typing context, so it is impossible to know the type of a term without first knowing the typing context. Because the typing contexts need to be propagated top down first, eclass analysis is not the right abstraction for type inference. In contrast, we can do type inference in egglog by simply encoding the typing rules for simply typed lambda calculus in a Datalog style: we first break down larger type inference goals into smaller ones, propagate demand together with the typing context top down, and assemble parent terms' types based on the children terms' types bottom up.

\autoref{fig:stlc} shows a subset of rules that perform type inference over simply typed lambda calculus. We determine the types of variables based on contexts. For lambda expressions, we rewrite the type of `(Lam x t1 e)` to be \(t_1\rightarrow t_2\), where \(t_2\) is the type of \(e\) in the new context where \(x\) has type \(t_1\) (i.e., `(typeof (Cons x t1 ctx) e)`). Finally, because we cannot directly rewrite the type of function applications in terms of the types of their subexpressions, we explicitly populate demands for subexpressions and derive the types of function applications from the types of subexpressions once they are computed.

```
(function free (Term) (Set Ident)
    :merge (set-intersect old new))
;; Computing the set of free variables
(rule ((= e (Val v)))
      ((set (free e) (empty))))
(rule ((= e (Var v)))
      ((set (free e) (set-singleton v))))
(rule ((= e (Lam var body))
       (= (free body) fv))
      ((set (free e) (set-remove fv var))))
(rule ((= e (Let var e1 e2))
       (= (free e1) fv1) (= (free e2) fv2))
      ((set (free e) (set-union fv2
                                (set-remove fv1 var)))))
(rule ((= e (App e1 e2))
       (= (free e1) fv1) (= (free e2) fv2))
      ((set (free e) (set-union fv1 fv2))))
;; [e2/v1]λv1.e1 rewrites to λv1.e1
(rewrite (subst v e2 (Lam v e1))
         (Lam v e1))
;; [e2/v2]λv1.e1 rewrites to λv1.[e2/v2]e1
;; if v1 is not in free(e2)
(rewrite (subst v2 e2 (Lam v1 e1))
         (Lam v1 (subst v2 e2 e1))
         :when ((!= v1 v2)
                (set-not-contains (free e2) v1)))
;; [e2/v2]λv1.e1 rewrites to λv3.[e2/v2][v3/v1]e1
;; for fresh v3 if v1 is in free(e2)
(rule ((= expr (subst v2 e2 (Lam v1 e1)))
       (!= v1 v2)
       (set-contains (free e2) v1))
      ((define v3 (Skolemize expr))
       (union expr (Lam v3 (subst v2 e2
                                  (subst v1 (Var v3) e1))))))
```

\caption{Free variable analysis and capture-avoiding substitution in egglog. We use the skolemization function `Skolemize` to deterministically generate fresh variables for capture-avoiding substitution.}

```
(function typeof (Ctx Expr) Type)
(function lookup (Ctx Ident) Type)
(rewrite (lookup (Cons x t ctx) x) t)
(rewrite (lookup (Cons y t ctx) x)
         (lookup ctx x)
         :when ((!= x y)))
;; Type of matrix constants
(rewrite (typeof ctx (fill (Num n) (Num m) val))
         (TMat n m))
;; Type of variables
(rewrite (typeof ctx (Var x))
         (lookup ctx x))
;; Type of lambda abstractions
(rewrite (typeof ctx (Lam x t1 e))
         (Arr t1 (typeof (Cons x t1 ctx) e)))
;; Populate type inference demand for
;; subexpressions of function application
(rule ((= (typeof ctx (App f e)) t2))
      ((typeof ctx f)
       (typeof ctx e)))
;; Type of function application:
;; if f : t1 -> t2 and e : t1, then (App f e) : t2
(rule ((= (typeof ctx (App f e)) t)
       (= (typeof ctx f) (Arr t1 t2))
       (= (typeof ctx e) t1))
      ((union t t2)))
```

\caption{Type inference for simply typed lambda calculus with matrices. Here we require that lambda abstractions are annotated with parameter types. Section~\ref{sec:hm} lifts this restriction.}

### Type Inference beyond Simply Typed Lambda Calculus

egglog is suitable for expressing a wide range of unification-based algorithms, including equality saturation (Section~\ref{sec:eqsat}) and Steensgaard analyses (Section~\ref{sec:points-to}).
In this section, we show an additional example on the expressive power of egglog:
type inference for Hindley-Milner type systems.
Unlike the simple type system presented in Section~\ref{sec:stlc},
a Hindley-Milner type system does not require type annotations for variables in lambda abstractions
and allows let-bound terms to have a \textit{scheme} of types.
For example, the term `let f = \x. x in (f 1, f True)` is not typeable in simply typed lambda calculus, since this requires `f` to have both type \(\textit{Int}\rightarrow\textit{Int}\) and type \(\textit{Bool}\rightarrow\textit{Bool}\). In contrast, a Hindley-Milner type system will accept this term, because both \(\textit{Int}\rightarrow\textit{Int}\) and \(\textit{Bool}\rightarrow\textit{Bool}\) are instantiations of the type scheme \(\forall \alpha.\,\alpha\rightarrow\alpha\).

Concretely, to infer a type for the above term, a type inference algorithm will first introduce a fresh type variable \(\alpha\) for `x`, the argument to function `f`, and infer that the type of `f` is \(\alpha\rightarrow\alpha\). Next, because `f` is bound in a let expression, the algorithm generalizes the type of `f` to a scheme by introducing forall-quantified variables, i.e., \(\forall \alpha.\,\alpha\rightarrow\alpha\). At each call site of `f`, the type scheme is instantiated by consistently substituting forall-quantified type variables with fresh ones, and the fresh type variables are later unified with concrete types. For example, `f` in the function application `f 1` may be instantiated with type \(\alpha_1\rightarrow\alpha_1\). Because the integer `1` has type \(\textit{Int}\), type variable \(\alpha_1\) is unified with \(\textit{Int}\), making this occurrence of `f` have type \(\textit{Int}\rightarrow\textit{Int}\). The final type of `f 1` is therefore \(\textit{Int}\).

The key enabler of Hindley-Milner inference is the ability to unify two types. To do this, an imperative implementation like Algorithm W \citep{hindley-milner} needs to track the alias graph among type variables and potentially mutate a type variable to link it to a concrete type, which requires careful handling. In contrast, egglog has the right abstractions for Hindley-Milner inference with the built-in power of unification. The unification mechanism can be expressed as a single injectivity rule:

```
(rule ((= (Arr fr1 to1) (Arr fr2 to2)))
      ((union fr1 fr2)
       (union to1 to2)))
```

This rule propagates unification down from inductively defined types to their children types. Injective unification like this is known in the literature as the dual of congruence~\citep{congr-duality}. At unification sites, it suffices to call `union` on the types being unified. For instance, calling `union` on `(Arr (TVar x) (Int))` and `(Arr (Bool) (TVar y))` will unify type variable \(x\) (resp.\ \(y\)) with `Int` (resp.\ `Bool`) by putting them into the same eclass.

```
(function generalize (Ctx Type) Scheme)
(function instantiate (Scheme) Type)
(function lookup (Ctx Ident) Scheme)
(function typeof (Ctx Expr) Type)
;; Injectivity of unification
(rule ((= (Arr fr1 to1) (Arr fr2 to2)))
      ((union fr1 fr2)
       (union to1 to2)))
;; Simple types
(rewrite (typeof ctx (Num x)) (Int))
(rewrite (typeof ctx (True)) (Bool))
(rewrite (typeof ctx (False)) (Bool))
(rewrite (typeof ctx (Var x))
         (instantiate (lookup ctx x)))
;; Inferring types for lambda abstractions
(rule ((= t (typeof ctx (Abs x e))))
      ((define fresh-tv (TVar (Fresh x)))
       (define scheme (Forall (empty) fresh-tv))
       (define new-ctx (Cons x scheme ctx))
       (define t1 (typeof new-ctx e))
       (union t (Arr fresh-tv t1))))
```

```
;; Inferring types for function applications
(rule ((= t (typeof ctx (App f e))))
      ((define t1 (typeof ctx f))
       (define t2 (typeof ctx e))
       (union t1 (Arr t2 t))))
;; Inferring types for let expressions
(rule ((= t (typeof ctx (Let x e1 e2))))
      ((define t1 (typeof ctx e1))
       (define scheme (generalize ctx t1))
       (define new-ctx (Cons x scheme ctx))
       (define t2 (typeof new-ctx e2))
       (union t t2)))
;; Occurs check
(relation occurs-check (Ident Type))
(relation errors (Ident))
(rule ((= (TVar x) (Arr fr to)))
      ((occurs-check x fr)
       (occurs-check x to)))
(rule ((occurs-check x (Arr fr to)))
      ((occurs-check x fr)
       (occurs-check x to)))
(rule ((occurs-check x (TVar x)))
      ((errors x)
       (panic "occurs check failed")))
```

\caption{Expressing Hindley-Milner inference in egglog. `Ident` is a datatype for identifiers that can be constructed by lifting a string or a counter (i.e., `i64`). In the actual implementation, we additionally track a counter in the `typeof` function to ensure the freshness of fresh variables, which we omit for brevity. Definitions of `instantiate`, `generalize`, and `lookup` are also omitted.}
\label{fig:hm}

\autoref{fig:hm} shows a snippet of Hindley-Milner inference in egglog. We translate the typing rules to rewrite rules in egglog straightforwardly. The egglog rule for lambda abstractions says: whenever we see a demand to check the type of `\x.e`, we create a fresh type variable `fresh-tv`, wrap it in a type scheme with no quantified variables, bind it to `x` in the context, and infer the type of the body as `t1`. Finally, we unify the type of `\x.e` with \(\textit{fresh-tv}\rightarrow \textit{t1}\). For function application `f e`, we can compact the two rules for function application in simply typed lambda calculus into one rule thanks to the injectivity rule: we simply equate the type `t1` of `f` with the arrow type `Arr t2 t` built from the type `t2` of `e` and the type `t` of `f e`, and injectivity will handle the rest of the unifications. Finally, the rule for type inferring `let x = e1 in e2` will first infer and generalize the type of `e1` in the current context, bind variable `x` to the resulting scheme, and infer the type of `e2` as `t2`. The type of `let x = e1 in e2` is then unified with `t2`. (Generalization, as well as the instantiation used when inferring types for variables, is a standard operation in the type inference literature; they convert between types and type schemes based on contexts. We omit their definitions for brevity. To implement them, we also track the free type variables of each type in our implementation.)

In Hindley-Milner type systems,
a type variable may be accidentally unified
with a type that contains it, which results in infinitary types
like \(\alpha\rightarrow\alpha\rightarrow\ldots\) and is usually
not what users intend to do.
A Hindley-Milner inference algorithm will also do an "occurs check" before unifying a type variable with a type.
In egglog, the occurs check can be done modularly,
completely independent of the unification mechanism
and the type inference algorithm.
In \autoref{fig:hm}, we define an `occurs-check` relation, match on cases where a type variable is unified with an inductive type like the arrow type, and mark types that need to be occurs checked by populating them in the `occurs-check` relation. The occurs check fails when an `occurs-check` demand is populated on an identifier and a type variable with the same identifier. Our actual implementation also contains rules that check whether two different base types are unified or a base type is unified with an arrow type, in which case it throws an error. These cases can happen when two incompatible types are unified due to ill-typed terms (e.g., when type inferring `True + 1`).

### Theory of Arrays

```
(datatype Addr (Num i64) (Var String))
(datatype Array (Const i64) (AVar String))
(function store (Array Addr Addr) Array)
(function select (Array Addr) Addr)
(function add (Addr Addr) Addr)
(relation neq (Addr Addr))
;; select gets from store
(rewrite (select (store mem i e) i) e)
;; select passes through wrong index
(rewrite (select (store mem i1 e) i2)
         (select mem i2)
         :when ((neq i1 i2)))
```

The theory of arrays is a simple first order axiomatization of functions that is useful in software verification. Some of the axioms are simple rewrite rules, but others are guarded on non-aliasing conditions of the addresses. Addresses may be concrete or abstract entities. Inferring that two addresses cannot alias may be done via constant propagation or injectivity reasoning and is a natural application of Datalog.
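For instance, one way to populate `neq` by constant propagation is a rule over numeric literals (a sketch, assuming the datatypes above):

```
(datatype Addr (Num i64) (Var String))
(relation neq (Addr Addr))
;; constant propagation: distinct numeric literals never alias
(rule ((= a1 (Num n)) (= a2 (Num m)) (!= n m))
      ((neq a1 a2)))
```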


## Other Egglog Pearls

In this subsection, we show more self-contained programs with interesting behaviors, further demonstrating the expressive power of egglog.

### Equation Solving

Many uses of eqsat, and hence egglog, fall into a guarded rewriting paradigm. A different mode of use is equation solving: rather than taking a left hand side and producing a right hand side, egglog can take an entire equation and produce a new equation. A common manipulation in algebraic reasoning is to apply the same operation to both sides of an equation. This is often used to isolate variables and use one variable to substitute for other variables in an equation. Substitutions in e-graphs and egglog are implicit (since the variable and its definition via other variables are in the same equivalence class), and we can encode variable isolation as rules.

\autoref{fig:equationsolve:egglog} shows a simplistic equational system with addition, multiplication, and negation. Besides the standard algebraic rules, we use two rules that manipulate equations to isolate variables. This allows us to solve simple multivariable systems like \[ \begin{cases} z+y=6 \\ 2z=y \end{cases} \]

Equation solving in egglog can be seen as similar to the "random walk" approach to variable elimination a student may take. For specific solvable systems this may be very inefficient compared to a dedicated algorithm. For example one can consider a symbolic representation of a linear algebraic system, for which Gaussian elimination will be vastly more efficient. However, equation solving in egglog is compositional and can easily handle the addition of new domain-specific rules like those for trigonometric functions.

```
(datatype Expr
    (Add Expr Expr)
    (Mul Expr Expr)
    (Neg Expr)
    (Num i64)
    (Var String))
;; Algebraic rules over expressions
(rewrite (Add x y) (Add y x))
(rewrite (Add (Add x y) z) (Add x (Add y z)))
(rewrite (Add (Mul y x) (Mul z x))
         (Mul (Add y z) x))
;; Make the implicit coefficient 1 explicit
(rewrite (Var x) (Mul (Num 1) (Var x)))
;; Constant folding
(rewrite (Add (Num x) (Num y)) (Num (+ x y)))
(rewrite (Neg (Num n)) (Num (- n)))
(rewrite (Add (Neg x) x) (Num 0))
;; Variable isolation by rewriting
;; the entire equation
(rule ((= (Add x y) z))
      ((union (Add z (Neg y)) x)))
(rule ((= (Mul (Num x) y) (Num z))
       (= (% z x) 0))
      ((union (Num (/ z x)) y)))
; system 1: x + 2 = 7
(set (Add (Var "x") (Num 2)) (Num 7))
; system 2: z + y = 6; 2z = y
(set (Add (Var "z") (Var "y")) (Num 6))
(set (Add (Var "z") (Var "z")) (Var "y"))
(run 5) ;; run 5 iterations
(extract (Var "x")) ;; (Num 5)
(extract (Var "y")) ;; (Num 4)
(extract (Var "z")) ;; (Num 2)
```

### Proof Datatypes

Datalog proofs can be internalized as syntax trees inside of egglog. This proof datatype has one constructor for every Datalog rule of the program and records any intermediate information that may be necessary. This can also be done in any Datalog system that supports terms. A unique capability of egglog, however, is the ability to consider proofs of the same fact to be equivalent, a form of proof irrelevance. This compresses the space used to store the proofs and lets programs terminate that otherwise would not. In addition, the standard extraction procedure can be used to extract a short proof.

```
;; Proofs of connectivity
(datatype Proof
(Trans i64 Proof)
(Edge i64 i64))
;; Path function points to a proof datatype
(function path (i64 i64) Proof)
(relation edge (i64 i64))
;; Base case
(rule ((edge x y))
      ((set (path x y) (Edge x y))))
;; Inductive case
(rule ((edge x y) (= p (path y z)))
      ((set (path x z) (Trans x p))))
;; Populate the graph and run
(edge 1 2)
(edge 2 3)
(edge 1 3)
(run)
;; returns the smallest proof of
;; the connectivity between 1 and 3
(extract (path 1 3))
```

### Binary Decision Diagrams

Binary decision diagrams are a compact canonical representation of boolean formulas or sets of bitvectors. They can be viewed in different lights. One way of looking at them is as a hash consed if-then-else tree https://www.lri.fr/~filliatr/ftp/publis/hash-consing2.pdf that examines each variable in a prescribed order, and where any branch whose then and else branches hold the same expression is collapsed. egglog is hash consed by its nature and is capable of encoding this collapse as a rewrite rule. Computing functions of BDDs is also expressible as rewrite rules, implemented by merging the two trees down in variable order.

This is particularly intriguing in light of the work of bddbddb https://bddbddb.sourceforge.net/, a Datalog built around binary decision diagrams as its core data structure. While the egglog encoding of the structure is unlikely to be as efficient as a bespoke implementation, egglog is capable of describing BDDs in its surface language as a first class construct. BDDs are one form of first class set.

```
(datatype BDD
    (ITE i64 BDD BDD) ;; variables labelled by number
    (True)
    (False))
; compress unneeded nodes
(rewrite (ITE n a a) a)
(function and (BDD BDD) BDD)
(rewrite (and (False) n) (False))
(rewrite (and n (False)) (False))
(rewrite (and (True) x) x)
(rewrite (and x (True)) x)
; We use an order where low variables are higher in the tree.
; Could go the other way.
(rewrite (and (ITE n a1 a2) (ITE m b1 b2))
         (ITE n (and a1 (ITE m b1 b2)) (and a2 (ITE m b1 b2)))
         :when ((< n m)))
(rewrite (and (ITE n a1 a2) (ITE m b1 b2))
         (ITE m (and (ITE n a1 a2) b1) (and (ITE n a1 a2) b2))
         :when ((> n m)))
(rewrite (and (ITE n a1 a2) (ITE n b1 b2))
         (ITE n (and a1 b1) (and a2 b2)))
```

### Reasoning about matrices

The algebra of matrices follows similar rules as the algebra of simple numbers, except that matrix multiplication is generally not commutative. With the addition of structural operations like the Kronecker product, direct sum, and stacking of matrices, a richer algebraic structure emerges.

A particularly simple and useful rewrite rule allows one to push matrix multiplication through a Kronecker product: \((A \otimes B) \cdot (C \otimes D) = (A \cdot C) \otimes (B \cdot D)\). Rewriting from left to right improves the asymptotic complexity of evaluating the expression. However, while this equation may be applied from right to left unconditionally, the left to right application requires that the dimensionalities of the matrices line up. In a large matrix expression with possibly abstract dimensionality, this is not easily doable in a classical equality saturation framework.

Although one may be tempted to express dimensionality with eclass analyses, the dimensionality is a symbolic term itself and needs to be reasoned about via algebraic rewriting, and the abstraction of eclass analyses does not allow rewriting over the analysis values. On the other hand, because the analysis is just another function, not unlike the constructors for matrix expressions, we can use standard egglog rules to reason about it just as we reason about matrix expressions. \autoref{fig:matrix:egglog} shows a simple theory of matrices with Kronecker product, and this example can be generalized to other (essentially) algebraic theories.

```
(datatype MExpr
    (MMul MExpr MExpr)
    (Kron MExpr MExpr)
    (Var String))
(datatype Dim
    (Times Dim Dim)
    (NamedDim String)
    (Lit i64))
(function nrows (MExpr) Dim)
(function ncols (MExpr) Dim)

;; Computing the dimensions of matrix expressions
(rewrite (nrows (Kron A B)) (Times (nrows A) (nrows B)))
(rewrite (ncols (Kron A B)) (Times (ncols A) (ncols B)))
(rewrite (nrows (MMul A B)) (nrows A))
(rewrite (ncols (MMul A B)) (ncols B))

;; Reasoning about dimensionality
(rewrite (Times a (Times b c)) (Times (Times a b) c))
(rewrite (Times (Lit i) (Lit j)) (Lit (* i j)))
(rewrite (Times a b) (Times b a))

;; Rewriting matrix multiplications and Kronecker products
(rewrite (MMul A (MMul B C)) (MMul (MMul A B) C))
(rewrite (MMul (MMul A B) C) (MMul A (MMul B C)))
(rewrite (Kron A (Kron B C)) (Kron (Kron A B) C))
(rewrite (Kron (Kron A B) C) (Kron A (Kron B C)))
(rewrite (Kron (MMul A C) (MMul B D)) (MMul (Kron A B) (Kron C D)))

;; Optimizing Kronecker product with guarded rules
(rewrite (MMul (Kron A B) (Kron C D))
    (Kron (MMul A C) (MMul B D))
    :when ((= (ncols A) (nrows C))
           (= (ncols B) (nrows D))))
```

*Figure: Equality saturation with matrices in egglog. The last rule is guarded by the equational precondition that the dimensionalities should align, which is made possible by rich semantic analyses à la Datalog.*

# Other Examples

- Unification
- Bitvectors
- Resolution theorem proving
- Herbie
- CClyzer
- Gappa
- Theorem Proving
- Homotopy Search

## Unification

Programmable Unification without backtracking

## Bitvectors

## Egraphs

Explain what an egraph is

# Reflection

# SQL

SQL at its core consists of expressions of the form `SELECT ... FROM ... WHERE ...`. This makes it more like a query language than a programming language; most programming languages have a more recursive structure.

Extensions to SQL can do all sorts of crazy things.

- SQL is multiset based, datalog is set based (typically)
- SQL gives variable names to rows, datalog gives variable names to row entries.
- Datalog allows variables to share names defining an implicit equality constraint
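To make the last point concrete, here is a small sketch using Python's built-in `sqlite3` (the `edge` table and its rows are made up for illustration). The datalog rule body `edge(x, y), edge(y, z)` simply repeats the variable `y`; the SQL version must name each row and spell out the equality between row entries explicitly.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("b", "d")])

# Datalog: path2(x, z) :- edge(x, y), edge(y, z).
# The shared variable y becomes an explicit equality on row entries.
rows = sorted(con.execute("""
    SELECT e1.src, e2.dst
    FROM edge AS e1, edge AS e2
    WHERE e1.dst = e2.src
""").fetchall())
print(rows)  # [('a', 'c'), ('a', 'd')]
```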

SQL is a useful language for performing macro operations on databases, and datalogs can be built upon a SQL interface. This isn't per se inefficient, since the overhead of processing the SQL text will often be small compared to the manipulation of the database itself.

See these blog posts:

- Duckegg: A Datalog / Egraph Implementation Built Around DuckDB
- Datalite: A Simple Datalog Built Around SQLite in Python
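A minimal sketch of the idea (the schema and rule are made up for illustration): translate each datalog rule into an `INSERT OR IGNORE ... SELECT` statement and run the batch repeatedly until the row count stops changing. This loop is exactly naive fixpoint evaluation, with the `UNIQUE` constraint supplying set semantics.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# UNIQUE gives the tables set semantics; INSERT OR IGNORE deduplicates.
con.execute("CREATE TABLE edge (src TEXT, dst TEXT, UNIQUE (src, dst))")
con.execute("CREATE TABLE path (src TEXT, dst TEXT, UNIQUE (src, dst))")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])

# path(x, y) :- edge(x, y).
# path(x, z) :- edge(x, y), path(y, z).
rules = [
    "INSERT OR IGNORE INTO path SELECT src, dst FROM edge",
    """INSERT OR IGNORE INTO path
       SELECT edge.src, path.dst FROM edge, path
       WHERE edge.dst = path.src""",
]

while True:
    before = con.execute("SELECT count(*) FROM path").fetchone()[0]
    for rule in rules:
        con.execute(rule)
    after = con.execute("SELECT count(*) FROM path").fetchone()[0]
    if before == after:  # fixpoint reached: no new facts were derived
        break

rows = sorted(con.execute("SELECT * FROM path").fetchall())
print(rows)
```

Running this derives the full transitive closure of the three edges, six `path` facts in total.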

## Recursive Common Table Expressions

Recursive Common Table Expressions (`WITH RECURSIVE`) enable the description of an extremely limited form of datalog.
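For example, transitive closure, the datalog rules `path(x, y) :- edge(x, y)` and `path(x, z) :- edge(x, y), path(y, z)`, can be written as a single recursive CTE. A sketch via Python's built-in `sqlite3` (table and data made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])

# WITH RECURSIVE plays the role of the recursive datalog predicate `path`.
closure = sorted(con.execute("""
    WITH RECURSIVE path(src, dst) AS (
        SELECT src, dst FROM edge
        UNION  -- UNION (not UNION ALL) gives set semantics
        SELECT edge.src, path.dst FROM edge, path
        WHERE edge.dst = path.src
    )
    SELECT * FROM path
""").fetchall())
print(closure)
```

Note the use of `UNION` rather than `UNION ALL`: the set semantics is what guarantees the iteration terminates on cyclic graphs.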

Triggers might also be usable for certain datalog-like tasks.
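One speculative sketch (table names and rules made up for illustration): an `AFTER INSERT` trigger can fire a rule whenever a new fact arrives, giving a crude incremental evaluation. Because `INSERT OR IGNORE` does not fire the trigger for duplicate rows, the recursion bottoms out at the fixpoint. SQLite needs `PRAGMA recursive_triggers = ON` for a trigger's inserts to fire further triggers.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA recursive_triggers = ON")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT, UNIQUE (src, dst))")
con.execute("CREATE TABLE path (src TEXT, dst TEXT, UNIQUE (src, dst))")

# path(x, y) :- edge(x, y).
con.execute("""
    CREATE TRIGGER edge_to_path AFTER INSERT ON edge BEGIN
        INSERT OR IGNORE INTO path VALUES (NEW.src, NEW.dst);
    END""")

# Extend each newly derived path fact by one edge on either side.
# Duplicates are ignored and do not re-fire the trigger, so this terminates.
con.execute("""
    CREATE TRIGGER extend_path AFTER INSERT ON path BEGIN
        INSERT OR IGNORE INTO path
            SELECT NEW.src, edge.dst FROM edge WHERE edge.src = NEW.dst;
        INSERT OR IGNORE INTO path
            SELECT edge.src, NEW.dst FROM edge WHERE edge.dst = NEW.src;
    END""")

con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])
rows = sorted(con.execute("SELECT * FROM path").fetchall())
print(rows)
```

After the three edge insertions, `path` holds the full six-fact transitive closure without any explicit fixpoint loop.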

# Implementations

- Flix
- Rel