See also:

  • Datalog
  • concurrency

Key Value Store

log structured storage: a log is an append-only store. LSM (log-structured merge tree): an in-memory table (memtable) absorbs writes and is flushed to disk as a read-only SSTable. Multiple read-only SSTables accumulate on disk and are coalesced (compacted) in the background. Tombstone records for deletes.
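A toy sketch of the LSM shape described above (all names and the compaction policy here are invented for illustration, not any real engine's API): writes land in a memtable, flushes produce immutable sorted segments, deletes are tombstones, and reads check newest data first.

```python
# Toy LSM sketch: memtable -> immutable sorted segments -> compaction.
TOMBSTONE = object()

class LSM:
    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.segments = []  # sorted (key, value) lists, newest first
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            # flush: the segment written to disk would be read-only
            self.segments.insert(0, sorted(self.memtable.items()))
            self.memtable = {}

    def delete(self, key):
        self.put(key, TOMBSTONE)  # tombstone record, dropped at compaction

    def get(self, key):
        if key in self.memtable:
            v = self.memtable[key]
            return None if v is TOMBSTONE else v
        for seg in self.segments:  # newest segment wins
            for k, v in seg:
                if k == key:
                    return None if v is TOMBSTONE else v
        return None

    def compact(self):
        # coalesce all segments, dropping shadowed values and tombstones
        merged = {}
        for seg in reversed(self.segments):  # oldest first, newer overwrite
            merged.update(seg)
        self.segments = [sorted((k, v) for k, v in merged.items()
                                if v is not TOMBSTONE)]

db = LSM()
db.put("a", 1)
db.put("b", 2)   # second put triggers a flush
db.put("a", 3)   # newer value shadows the flushed one
db.delete("b")
db.put("c", 4)
db.compact()
```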

What’s the big deal about key-value databases like FoundationDB and RocksDB? lobster comments

wide-column store key/value store

Embedded key value store. Backing engines. MySql has support for many backing engines



OLTP: online transaction processing. OLAP: online analytical processing. Probabilistic sketches: hyperloglog, bloom filters, cuckoo filters.
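A bloom filter is small enough to sketch directly. This is a toy version (the hashing scheme is arbitrary, chosen just to get k independent-ish hash functions): k hashes set k bits on insert; lookup can give false positives but never false negatives.

```python
# Toy Bloom filter: k hash functions set k bits in an m-bit vector.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # use a big int as the bit vector

    def _positions(self, item):
        # derive k positions by salting a cryptographic hash (arbitrary choice)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # True may be a false positive; False is definitive
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("hello")
bf.add("world")
present = bf.might_contain("hello")
absent = bf.might_contain("absent-key-xyz")
```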


Topics in Database Theory - Dan Suciu. Principles of Databases book.

Conjunctive Queries

Query containment

  • See finite model theory

descriptive complexity. NC^0: bounded fan-in circuits. AC^0: unbounded fan-in circuits. Constant depth.

Foundations of database

Conjunctive Query. Fun: queries to solve NP problems. Another angle on the BDD = GJ thing.

hypergraph. vertices are variables. hyperedges are tables

hypertree width

CSP is finding homomorphisms to structures (e.g. graph coloring): the "from" is the instance. Query evaluation is finding homomorphisms from structures (the query pattern): the "to" is the database.
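The duality above can be made concrete with a brute-force homomorphism finder (a sketch: a structure is just a set of edge tuples). 3-coloring a graph is exactly finding a homomorphism into the triangle K3.

```python
# Brute-force homomorphism search: CSP = hom into a template structure.
from itertools import product

def homomorphisms(src, dst, src_elems, dst_elems):
    """All maps h: src_elems -> dst_elems with h(edges of src) inside dst."""
    hs = []
    for values in product(dst_elems, repeat=len(src_elems)):
        h = dict(zip(src_elems, values))
        if all((h[a], h[b]) in dst for (a, b) in src):
            hs.append(h)
    return hs

# K3 as a symmetric edge relation: homs into it are proper 3-colorings
k3 = {(i, j) for i in range(3) for j in range(3) if i != j}
# a 4-cycle with symmetric edges
c4 = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)}
colorings = homomorphisms(c4, k3, [0, 1, 2, 3], [0, 1, 2])
```

The chromatic polynomial of a cycle C_n is (k-1)^n + (-1)^n (k-1), so C4 has 2^4 + 2 = 18 proper 3-colorings.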

quantified boolean formula. Interesting. Model checking a QBF… use a database? Seem wacky. Hmm. Use an ROBDD. Makes sense then the connection between GJ and ROBDD. ROBDD and elimination ordering?

Constraint Satisfaction Complexity and Logic Phokion G Kolaitis

Schaefer's dichotomy, existential pebble games, CSP(B), finite variable logics and bounded treewidth

conjunctive query containment, equivalence, evaluation

A ⊨ Q is evaluation

canonical conjunctive query - turn a structure into a query: elements become variables, facts become conjuncts. canonical structure - turn a query into a database/structure: variables become elements, conjuncts become facts.

Chandra and Merlin. See also graph theory: logic of graphs


A schema is a finite set of relation symbol names. An instance is a set of concrete relations with those symbol names. Sometimes also called a structure.

Functional Dependencies

Armstrong axioms
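The workhorse algorithm justified by Armstrong's axioms (reflexivity, augmentation, transitivity) is attribute closure: which attributes are determined by a given set. A small sketch, with FDs represented as (lhs, rhs) pairs of attribute sets:

```python
# Attribute closure under functional dependencies.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the whole left side is determined, so is the right side
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# A -> B, B -> C, CD -> E
fds = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"C", "D"}, {"E"})]
```

Closure is also how you test whether X is a key: check closure(X, fds) covers all attributes.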

Normal Forms

Tuple Generating dependencies

Query Optimization

Cascades framework volcano

Selinger method: "needs" and "provides" properties in the query compiler; optimize subparts of the query. Fully left-deep joins break the associativity symmetry, so we just need a sequence of subsets of all joins.
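The Selinger-style dynamic program over subsets can be sketched in a few lines. This is a toy (the cost model with a fixed 1/10 "selectivity" is completely made up): for each subset of relations, keep the cheapest left-deep plan that extends a one-smaller subset.

```python
# Sketch of Selinger dynamic programming for left-deep join orders.
from itertools import combinations

def best_left_deep_order(sizes):
    """sizes: {relation_name: cardinality}. Returns (cost, join order)."""
    rels = list(sizes)
    # frozenset of joined relations -> (cost so far, order, result cardinality)
    best = {frozenset([r]): (0, [r], sizes[r]) for r in rels}
    for k in range(2, len(rels) + 1):
        for subset in combinations(rels, k):
            s = frozenset(subset)
            for r in subset:  # extend the best plan for s - {r} by joining r
                cost, order, card = best[s - {r}]
                new_card = card * sizes[r] // 10  # fake selectivity of 1/10
                cand = (cost + new_card, order + [r], new_card)
                if s not in best or cand[0] < best[s][0]:
                    best[s] = cand
    cost, order, _ = best[frozenset(rels)]
    return cost, order

cost, order = best_left_deep_order({"A": 100, "B": 10, "C": 1000})
```

With these numbers the big relation C gets joined last, which is the point of the exercise.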

Zetasql calcite

WeTune: Automatic Discovery and Verification of Query Rewrite Rules - a superoptimizer for query rewrite rules.

Cosette: An Automated SQL Solver. HoTTSQL. Inside the SQL Server Query Optimizer.

Building Query Compilers (2023, under construction) - nice notes. Convert SQL to relational algebra, push down selects, convert cross products to joins, pick from different methods according to what is available. Query trees vs query graphs.

SQLite query optimization

Relational Algebra. Relational algebra and relational calculus have the same expressive power.
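Relational algebra operators are tiny when written over lists of dicts. A sketch showing that selection, projection, and natural join are just comprehensions:

```python
# Relational algebra over lists of dicts (rows).
def select(rel, pred):
    return [t for t in rel if pred(t)]

def project(rel, cols):
    return [{c: t[c] for c in cols} for t in rel]

def natural_join(r, s):
    # join on the shared column names; cross product if none are shared
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**t, **u} for t in r for u in s
            if all(t[c] == u[c] for c in shared)]

emp = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2}]
dept = [{"dept": 1, "dname": "eng"}, {"dept": 2, "dname": "ops"}]
joined = natural_join(emp, dept)
```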

The Chase

Equality Generating Dependencies The Chase Procedure and its Applications in Data Exchange

Yisu: query optimization, data integration, querying incomplete databases, benchmarking the chase (chasebench)

Chasefun, DEMOo, Graal, llunatic, pdg, pegasus, dlv, e, rdfox

Strategies: restricted, unrestricted, parallel, skolem, fresh-null.

Chase Strategies vs SIPS

The power of the terminating chase

Is the chase meant to be applied to actual databases, symbolic databases / schemas, or other dependencies? Is it fair to say that the restricted chase for full dependencies is datalog?
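One way to see the "restricted chase for full dependencies = datalog" intuition: for full TGDs (no existentials in the head), chasing is just applying rules bottom-up until fixpoint, i.e. naive datalog evaluation. A sketch (the atom/rule encoding here is invented for illustration):

```python
# Naive bottom-up fixpoint for full TGDs (datalog rules).
def chase_full(facts, rules):
    """facts: set of (rel, tuple). rules: (body, head), each a list of
    (rel, vars) atoms with variables as strings."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for body, head in rules:
            for env in matches(body, facts):
                for rel, vs in head:
                    fact = (rel, tuple(env[v] for v in vs))
                    if fact not in facts:
                        new.add(fact)
        if new:
            facts |= new
            changed = True
    return facts

def matches(body, facts, env=None):
    # enumerate substitutions making every body atom a known fact
    env = env or {}
    if not body:
        yield env
        return
    (rel, vs), rest = body[0], body[1:]
    for r, tup in facts:
        if r != rel or len(tup) != len(vs):
            continue
        e = dict(env)
        if all(e.setdefault(v, c) == c for v, c in zip(vs, tup)):
            yield from matches(rest, facts, e)

# path(x,y) :- edge(x,y).  path(x,z) :- edge(x,y), path(y,z).
rules = [([("edge", ["x", "y"])], [("path", ["x", "y"])]),
         ([("edge", ["x", "y"]), ("path", ["y", "z"])], [("path", ["x", "z"])])]
result = chase_full({("edge", (1, 2)), ("edge", (2, 3))}, rules)
```

With existentials in the head you'd also need fresh nulls and the restricted-chase check that the head isn't already satisfied, which is where the strategies above start to differ.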

Alice book chapter 8-11

Graal - defeasible reasoning. Something about extra negation power? Defeasible rules can be defeated if something contradicts them. Pure is part of Graal.

llunatic -

RDfox -

dlgp - datalog plus format. Allows variables in the head (existentials) and variables in facts. Notion of constraint (! :-) and notion of query. Hmm.

Direct modelling of union-find in z3? Homomorphism is union-find.


The core SQL stuff is just a query of the form

SELECT columns and expressions
FROM a AS alias1, couple AS alias2, tables AS alias3
WHERE alias2.col1 = 7 AND alias3.col7 = alias1.col2

It really almost isn't a programming language. It just so happens that there are enough slightly off-the-beaten-path features that you can do some neat stuff with it. This can even be useful, because serializing results over the network is probably very bad performance-wise.

Sometimes you want to INSERT INTO or DELETE FROM these results rather than just returning them

Some other weird stuff:

You can use it as a calculator to just evaluate expressions.

SELECT 40 + 2;

Creating tables and adding concrete values.

CREATE TABLE T (a int PRIMARY KEY, -- implies not null
 b bool, c text, d int);

-- CREATE TYPE mytype AS (a bool, b text);

INSERT INTO T VALUES
(1, true, 'hi', 3),
(2, true, 'hi', 3);


SELECT myrow.* -- 2 returns row variable
FROM T AS myrow;-- 1 binds myrow

SELECT myrow.* -- 2 returns row variable
FROM T AS myrow WHERE myrow.a = myrow.a;



-- can label columns
SELECT 40 + 2 AS firstcol, 'dog' || 'store' AS secondcol;

VALUES (10), (20); -- values may be used anywhere sql expects a table

SELECT * FROM (VALUES (10,20), (0,10)) AS myrow(x,y); 

Scalar subqueries - subqueries that return a single row (and column) may be treated as scalar values

FROM binds below, even though it's kind of a for loop: [row for row in table]. I guess SQL also reverses the comprehension order.

ORDER BY expressions. So we could have many more ordering constraints than columns, for example.

Select distinct on. Returns first row in each group.

aggregates: bool_and, bool_or (forall and exists)

GROUP BY - Wadler-ish: changing the type of a row entry to bag(row entry).
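That type-change reading of GROUP BY is easy to make literal: the grouped column stays scalar and the rest of the row becomes a bag (here, a list) that aggregates then consume. A sketch:

```python
# GROUP BY as a type change: row -> (key, bag of remaining columns).
from collections import defaultdict

def group_by(rows, key):
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append({k: v for k, v in row.items() if k != key})
    return dict(groups)

rows = [{"dept": 1, "sal": 10}, {"dept": 1, "sal": 20}, {"dept": 2, "sal": 5}]
grouped = group_by(rows, "dept")
```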

ALL is bag semantics; no ALL is set semantics (e.g. UNION ALL vs UNION)

WITH RECURSIVE
  series(i) AS (
    VALUES (0)
    UNION ALL
    SELECT t.i + 1 FROM series AS t WHERE t.i < 10
  )
SELECT * FROM series;

WITH
  root(i,j) AS (
    SELECT foo.i, max(foo.j)
    FROM (VALUES (1,1), (2,1), (3,3)) AS foo(i,j)
    GROUP BY foo.i
    --UNION (SELECT i, k FROM root AS (i,j), root as (j1,k) where j = j1)
  )
SELECT * FROM root;

SELECT (SELECT 42) * 2; -- this works. There is broadcasting of sorts

sql injection everything is foreign keys? Interning

Recursive tables let you do datalog like stuff.

INSERT INTO edge(a,b)
SELECT a,b FROM edge;

--SELECT * FROM edge;

-- path(x,z) :- edge(x,y), path(y,z).
WITH RECURSIVE
  path0(x,y) AS
    -- SELECT 1,2
    (SELECT a,b FROM edge UNION SELECT edge.a, path0.y FROM edge, path0 WHERE path0.x = edge.b)
INSERT INTO path SELECT x,y FROM path0;
SELECT a,b FROM path;
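The same datalog-style transitive closure runs end-to-end in sqlite3 from Python (table and column names here are just illustrative):

```python
# Transitive closure over an edge table with WITH RECURSIVE in SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE edge(a, b)")
cur.executemany("INSERT INTO edge VALUES (?, ?)", [(1, 2), (2, 3), (3, 4)])
cur.execute("""
  WITH RECURSIVE path(x, y) AS (
    SELECT a, b FROM edge
    UNION
    SELECT edge.a, path.y FROM edge, path WHERE edge.b = path.x
  )
  SELECT x, y FROM path ORDER BY x, y""")
paths = cur.fetchall()
```

UNION rather than UNION ALL is what makes this terminate: it deduplicates, so saturation is reached like a datalog fixpoint.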


  parent(x,y) AS
  SELECT a, min(b) (SELECT (a,b) FROM eq UNION eq, parent)

python sqlite3 in stdlib

import sqlite3
con = sqlite3.connect(':memory:')
cur = con.cursor()
# Create table
cur.execute('''CREATE TABLE stocks
               (date text, trans text, symbol text, qty real, price real)''')

# Insert a row of data
cur.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")

# cur.executemany("insert into characters(c) values (?)", theIter)
for row in cur.execute('SELECT * FROM stocks ORDER BY price'):
    print(row)

adapters to python types

sqlite loadable extensions

create table foo(a);
insert into foo values (1),(2),(3);

-- ok not allowed in sqlite
-- select * from foo as f, (select f.a) as b;

-- ok this won't work either. returning in subqueries is not supported
--create view rule1(a) as select 0 from (insert into foo select a + 1 from foo returning 0);
--select * from rule1;
import psycopg2
conn = psycopg2.connect()
cur = conn.cursor()
cur.execute("create temp table foo(a integer)")
cur.execute("insert into foo values (1), (3)")
cur.execute("""
    create or replace procedure rule1() language sql
    as $$
    insert into foo select a + 1 from foo;
    $$""")
cur.execute("CALL rule1()")
cur.execute("select * from foo")

Functional Programming (ocaml):

  • Sound and Efficient Language-Integrated Query - Maintaining the ORDER
  • A SQL to C compiler in 500 lines of code
  • Finally, safely-extensible and efficient language-integrated query
  • A practical theory of language-integrated query
  • The Script-Writer’s Dream: How to Write Great SQL in Your Own Language, and Be Sure It Will Succeed - Cooper

Term Rewriting

Table flattening. Maybe stored procedures could do better?

counter = 0
def freshrow():
  global counter
  counter += 1
  return "row" + str(counter)

def foo(a):
  def res():
    (rid, froms, wheres) = a()
    row = freshrow()
    return (f"{row}.rowid",  [f"foo as {row}"] + froms, [f"{rid} = {row}.a"]+ wheres)
  return res

def x():
  row = freshrow()
  return (row + ".rowid" ,[f"x AS {row}"], [])

def func(f):
  def res(*args0):
    def res1():
      args = [arg() for arg in args0]
      rids, froms, wheres = zip(*args)
      froms = sum(froms,[])
      wheres = sum(wheres, [])
      row = freshrow()
      return (f"{row}.rowid",
          [f"{f} as {row}"] + froms, 
          [f"{rid} = {row}.arg{n}" for n,rid in enumerate(rids)] + wheres)
    return res1
  return res


import sqlite3
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table cons(a,b)")
cur.execute("create table nil(a)")
cur.execute("create table hd(a)")

cons = func("cons")
nil = lambda: ("'nil'", [], [])
hd = func("hd")
def lit(n):
  def res():
    return str(n), [], []
  return res

print(cons(lit(8), cons(lit(4),lit(6)))())


CREATE TABLE left(a);
CREATE TABLE right(a);
CREATE TABLE backward(a);
CREATE TABLE forward(a);


select *, right.rowid from right;
create table env(left,right, UNIQUE(left), UNIQUE (right)); -- Use CTE? But then multiple statements?
-- INSERT INTO env SELECT left.rowid, right.rowid FROM left,right LIMIT 1; -- hmm we're overselecting. LIMIT BY 1? Yuck
INSERT OR IGNORE INTO env SELECT left.rowid, right.rowid FROM left,right;
-- only update timestamp if env is empty.
-- we could push env into the metalayer. Slow? Also the uniqueness filtering.
select * from env;

-- do rhs here if there was one. 
-- INSERT INTO rhs FROM env, left, right WHERE left.rowid = env.left AND right.rowid = env.right
DELETE FROM left where left.rowid IN (select env.left from env);
DELETE FROM right where right.rowid IN (select env.right from env);
select *, rowid from right;
select *, rowid from left;

Graph Matching

Graph Rewriting

Graph matching is part of graph rewriting


See blog posts Recstep

Model Checking

First order model checking

automata minimization

create table trans(s1 state, a action, s2 state);
-- primary key (s1,a) for Deterministic
create table accept(s1 state, flag bool);

-- insert into  trans
from trans as t1, trans as t2, accept where t1.
create table trans(s state unique, fin bool, sa state, sb state);
create table observe(fin, pa partition, pb partition, unique (fin,pa,pb)); -- observations
create table eqclass(s state unique, ob); -- mapping from state to eqclass id
-- initialize assuming everything in same partition
insert into eqclass select s, 0 from trans;

-- dfa_map
insert or ignore into observe select trans.fin, sobs1.ob, sobs2.ob from eqclass as sobs1, eqclass as sobs2, trans
  where trans.sa = sobs1.s and trans.sb = sobs2.s;

insert into eqclass select trans.s, o.rowid from trans, observe as o,
  eqclass as sobs1, eqclass as sobs2
  where o.fin = trans.fin and o.pa = sobs1.ob and o.pb = sobs2.ob
    and trans.sa = sobs1.s and trans.sb = sobs2.s;

insert into sobs select trans.s, observe.rowid from observe, eqclass as sobs1, eqclass as sobs2, trans
  where trans.sa = sobs1.s and trans.sb = sobs2.s and
    observe.fin = trans.fin and observe.pa = sobs1.ob and
    observe.pb = sobs2.ob;
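The SQL above is circling around Moore-style partition refinement. A plain-Python sketch of the same loop (assuming a DFA over alphabet {a, b}, with made-up names): start with states partitioned by accepting flag, then repeatedly split classes by where transitions land, until the number of classes stops growing.

```python
# Moore partition refinement for DFA minimization (sketch).
def minimize_partitions(states, accept, trans):
    """trans: {(state, symbol): state}. Returns state -> class id."""
    cls = {s: int(s in accept) for s in states}
    while True:
        # observation = (current class, classes of the a/b successors)
        obs = {s: (cls[s], tuple(cls[trans[(s, c)]] for c in "ab"))
               for s in states}
        ids = {o: i for i, o in enumerate(sorted(set(obs.values())))}
        new_cls = {s: ids[obs[s]] for s in states}
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls  # no further refinement: stable partition
        cls = new_cls

# states 0 and 1 behave identically, so they should merge
states = [0, 1, 2, 3]
trans = {(0, "a"): 2, (0, "b"): 0, (1, "a"): 2, (1, "b"): 0,
         (2, "a"): 3, (2, "b"): 0, (3, "a"): 3, (3, "b"): 3}
cls = minimize_partitions(states, {3}, trans)
```

Since the current class is part of each observation, classes only ever split, so the loop terminates.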


I mean this is the brute-force loop search, but it's neat that sqlite pushes the checks high up in the loop

create view digits(x) as select * from generate_series(0,9);
select * from digits as s, digits as e, digits as n, digits as d, digits as m, digits as o, digits as r, digits as y where
  (1000*s.x + 100*e.x + 10*n.x + d.x) + (1000*m.x + 100*o.x + 10*r.x + e.x) 
   = (10000*m.x + 1000*o.x + 100*n.x + 10*e.x + y.x) -- send + more = money 
   and s.x > 0 and m.x > 0 and -- non zero first digits
   -- all different digits
   s.x != e.x and s.x != n.x and s.x != d.x and s.x != m.x and s.x != o.x and s.x != r.x and s.x != y.x and
                  e.x != n.x and e.x != d.x and e.x != m.x and e.x != o.x and e.x != r.x and e.x != y.x and
                                 n.x != d.x and n.x != m.x and n.x != o.x and n.x != r.x and n.x != y.x and
                                                d.x != m.x and d.x != o.x and d.x != r.x and d.x != y.x and
                                                               m.x != o.x and m.x != r.x and m.x != y.x and
                                                                              o.x != r.x and o.x != y.x and
                                                                                             r.x != y.x

   limit 1;
-- sqlite time: 0.92s could be worse
for s in range(10):
  if s > 0:
    for e in range(10):
      if s != e:
        for n in range(10):
          if n != s and n != e:
            for d in range(10):
              for m in range(10):
                if m > 0:
                  pass # ... and so on for o, r, y

Building good indices can be important for good query performance.


Saved queries that act as virtual tables


This is interesting

Aggregate functions

Window Functions

Ontology Formats

graph database OWL RDF sparql sparql slides shacl -

semantic web

Knowledge representation handbook. Course very similar to BAP knowledge base.

Optimal Joins

worst case optimal joins:

  • leapfrog triejoin
  • Dovetail join - Relational AI, unpublished. Julia-specific-ish? Uses the sparsity of all relations to narrow down the search
  • Worst-case optimal join algorithms - Ngo et al, PODS 2012
  • Leapfrog Triejoin: a simple worst-case optimal join algorithm - ICDT 2014
  • worst-case optimal joins for SPARQL
  • worst-case optimal graph joins in almost no space
  • Correlated subqueries: unnesting arbitrary queries
  • How Materialize and other databases optimize SQL subqueries

gentle-ish intro to worst case optimal joins

Adopting Worst-Case Optimal Joins in Relational Database Systems tries The Adaptive Radix Tree: ARTful Indexing for Main-Memory Databases Persistent Storage of Adaptive Radix Trees in DuckDB

oltp indices 2

umbra spiritual successor to hyper. Hybridizes an in memory system to also work off ssd.

Free Join: Unifying Worst-Case Optimal and Traditional Joins

Vectorized Execution

CMU advanced course lecture. Rethinking SIMD Vectorization for In-Memory Databases.

masked/selective load masked/selective store scatter gather

selection: branched vs branchless. Branched checks the condition to see if it should copy the row out; branchless always writes the row but only increments the output index if the condition is met. I mean, there is a "branch" in this. But I see the point: the write is unconditional, so there's no data-dependent control flow to mispredict.
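The branchless trick translated into plain Python (only as a sketch of the control flow; in a real engine this is SIMD over column vectors): write the candidate unconditionally into a preallocated selection vector and advance the write index by the 0/1 value of the predicate.

```python
# "Branchless" selection: unconditional write, conditional index advance.
def select_branchless(col, threshold):
    out = [0] * len(col)          # preallocated selection vector
    idx = 0
    for i, v in enumerate(col):
        out[idx] = i              # always write the candidate index
        idx += v < threshold      # bool is 0/1: keep it only on a match
    return out[:idx]

selected = select_branchless([5, 1, 9, 2, 7], 4)
```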

EmptyHeaded: A Relational Engine for Graph Processing “generalized hypertree decomposition” ?

LevelHeaded - linear algebra stuff?

Multi Version Concurrency Control


SQLite is an embedded in-process database. Has a WASM version. It's a single drop-in C file with no dependencies, which means it's available pretty much everywhere. It isn't good for concurrent writers.

Performance tips: WAL mode

sqlite commands that are interesting

  • .help
  • .dump
  • .tables
  • .schema
  • .indexes
  • .expert suggests indices?
create table edge(a,b);
insert into edge values (1,2), (2,3);
create view path(a,b) as
 select * from edge
 union
 select edge.a, path.b from edge, path where edge.b = path.a;

select * from path; -- error, circularly defined.

Strong Consistency with Raft and SQLite. rqlite - the lightweight, easy-to-use, distributed relational database built on SQLite. Fork of sqlite to add features? merging sqlite databases plugin

NULL behavior

-- NULLs don't collide in unique constraints. NULL is not = to NULL
create table foo(a,b, unique (b));
insert into foo values (1,NULL), (2,NULL);
select * from foo;
select 1,NULL = NULL; -- returns null
select 1,NULL != NULL; -- returns null
select 1,2=2; -- returns 1, which is true
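The same NULL behavior checked from Python with sqlite3: NULL = NULL is NULL (which Python sees as None), IS treats NULLs as equal, and two NULLs coexist in a UNIQUE column.

```python
# NULL three-valued logic and UNIQUE behavior in SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE foo(a, b, UNIQUE(b))")
cur.execute("INSERT INTO foo VALUES (1, NULL), (2, NULL)")  # no violation
count = cur.execute("SELECT count(*) FROM foo").fetchone()[0]
eq = cur.execute("SELECT NULL = NULL").fetchone()[0]       # NULL, not true
is_null = cur.execute("SELECT NULL IS NULL").fetchone()[0] # 1, true
```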

DuckDB - sqlite for OLAP; columnar.

import duckdb
con = duckdb.connect(database=':memory:')
import pandas as pd
test_df = pd.DataFrame.from_dict({"i":[1, 2, 3, 4], "j":["one", "two", "three", "four"]})
con.execute('SELECT * FROM test_df')

add_df = pd.DataFrame(columns=["x","y","z"])
counter = 0
def add(x, y):
  # interning: look up (x, y); if absent, assign it a fresh id
  global counter, add_df
  cond = (add_df["x"] == x) & (add_df["y"] == y)
  df = add_df[cond]
  if not df.empty:
    return df["z"].iloc[0]
  else:
    z = counter
    counter += 1
    add_df.loc[len(add_df)] = [x, y, z]
    return z


import duckdb
con = duckdb.connect(database=':memory:')
con.execute("CREATE TABLE root (x INTEGER, y INTEGER);")
# "don't use execute many"
con.executemany("INSERT INTO root VALUES (?, ?)", [(1,1),(2,2),(3,3),(1,2),(2,3)])
con.execute("""SELECT x, max(y)
    FROM root
    GROUP BY x;""")

#UPDATE root a
#  INNER JOIN root b 
#  ON a.y = b.x
#  SET a.y = b.y""")

#UPDATE root c
#  SET y = max(b.y)
#    FROM root a
#    INNER JOIN root b ON a.x = c.x AND a.y = b.x
#    """)

con.execute("""
WITH root2(x1,y1) AS (
  SELECT a.x, max(b.y)
    FROM root a, root b
    WHERE a.y = b.x
    GROUP BY a.x
)
SELECT * FROM root2;""")

# broken UPDATE attempt:
#  SET y = max(b.y)
#  FROM root a
#  INNER JOIN root b
#  ON a.y = b.x
#  GROUP BY a.x;

con.execute("""SELECT a.x, max(b.y)
    FROM root a, root b
    WHERE a.y = b.x
    GROUP BY a.x;""")

catalog, multiversion concurrency control, compressed execution, binder


Full Text Search

postgres as a graph database. The manual. sudo -u postgres psql - very often you need to be the postgres user on the default install.

help: \h for SQL commands, \? for psql commands

\c connect \dt look at tables

create user philip;

Had to make user mapping to postgres

pip install psycopg2-binary

createdb toydb

import psycopg2
conn = psycopg2.connect("dbname=toydb user=philip")
# Open a cursor to perform database operations
cur = conn.cursor()

# Execute a query
cur.execute("SELECT * FROM pg_tables")
# Retrieve query results
records = cur.fetchall()

print(cur.execute("Create table foo(x integer)"))
cur.execute("insert into foo values (1), (2)")

cur.execute("insert into foo SELECT * FROM generate_series(7,10)") #
cur.execute("SELECT * FROM foo")


Postgres features embedded SQL: like a preprocessor that makes it easy to write SQL statements in C.

  • schema - like a bunch of tables?
  • partition table declarations
  • RETURNING clauses enable returning deleted or updated rows. ok sqlite has this too
  • table functions
  • lateral
  • distinct on
  • WITH clauses are basically let expressions
  • enum types
  • domain types are a type + a check
  • create sequence
  • subquery expressions: ANY, ALL, IN
  • set-returning functions: generate_series
  • indexes - can create unique indexes on expressions, partial indexes. that’s a weird one
  • non-durable settings. ANALYZE. Interesting constraint system: foreign keys, check constraints allow dynamic checks and can involve multiple columns. Old way: make triggers and constraints to partition a table into pieces

Locks -

Truncate Table is faster than delete if you are removing everything?

Relational AI

snowflake databricks bigquery dbt fivetran

data apps - dapps

lookml sigma legend

Responsive compilers - matsakis. salsa.jl. umbra/leanstore

incremental: Convergence of datalog over presemirings. Differential dataflow CIDR 2013. Reconciling differences 2011 - Green. F-IVM: incremental view maintenance with triple lock factorization benefits.

SystemML became Apache SystemDS

Semantic optimization. FAQ: Questions Asked Frequently - Ngo, Rudra PODS 2016. What do Shannon-type inequalities, submodular width, and disjunctive datalog have to do with one another? PODS 2017. Precise complexity analysis for efficient datalog queries PPDP 2010. Functional aggregate queries with additive inequalities. Convergence of datalog over pre-semirings.

Relational machine learning: Layered aggregate engine for analytics workloads - Schleich, Olteanu, Khamis. Learning models over relational data using sparse tensors. The relational data borg is learning - Olteanu VLDB keynote. Structure-aware machine learning over multi-relational databases. Relational knowledge graphs as the foundation for artificial intelligence. Rk-means: fast clustering for relational data. Learning Models over Relational Data: A Brief Tutorial.

duckdb for sql support calcite postgresql parser

Fortress library traits. Optimization and parallelism. Triangle view maintenance.


streaming 101 unbounded data

lambda architecture - low latency but inaccurate results first, then batch provides accurate ones

event time vs processing time


Flink Apache Beam millwheel spark streaming


Raft paxos consensus

Data Structures

B Tree

Bw-tree The B-Tree, LSM-Tree, and the Bw-Tree in Between open bw-tree 2018

Radix Trie

Meta Techniques

There are certain good ideas that I don’t even know how to classify really

Timestamps logical timestamps
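Lamport clocks are the simplest version of logical timestamps: tick on local events, and on receive take max(local, received) + 1, so the clock order respects causality. A sketch:

```python
# Lamport logical clock: a counter merged on message receipt.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # local event (including sends)
        self.time += 1
        return self.time

    def recv(self, msg_time):
        # merge a received timestamp: jump past anything we've seen
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.tick()    # a does something and sends
t2 = b.recv(t1)  # b receives a's message
t3 = b.tick()    # b does something afterwards
```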


Rather than deleting immediately, have a table that marks things as deleted, or a deleted column, perhaps with the deletion time.

This goes some way towards making a persistent data structure. Maybe you can keep some data read-only.
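The soft-delete pattern in sqlite form (table and column names here are just illustrative): rows get a deletion timestamp instead of being removed, and a view presents only the live data.

```python
# Soft deletes: mark rows deleted and filter through a view.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE items(id, name, deleted_at)")
cur.execute("""CREATE VIEW live_items AS
               SELECT id, name FROM items WHERE deleted_at IS NULL""")
cur.execute("INSERT INTO items VALUES (1, 'a', NULL), (2, 'b', NULL)")
# "delete" item 2 by stamping it, keeping the row for history
cur.execute("UPDATE items SET deleted_at = '2024-01-01' WHERE id = 2")
live = cur.execute("SELECT id FROM live_items").fetchall()
total = cur.execute("SELECT count(*) FROM items").fetchone()[0]
```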


Conflict Free replicated datatypes martin Kleppmann

CRDT of a string - consider fractional positions with tie-breaking. Bad interleaving problem. Unique identifiers.

  • LSeq
  • RGA
  • TreeSeq crdt rich text
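Fractional positions can be sketched with exact fractions (a toy, not any of the algorithms listed above): each character gets a position strictly between its neighbors, and a site id breaks ties between concurrent inserts at the same position.

```python
# Fractional positions for a sequence CRDT, with site-id tie-breaking.
from fractions import Fraction

def between(lo, hi):
    # always room for a new position: the rationals are dense
    return (lo + hi) / 2

def insert(doc, pos, site, char):
    """doc: list of ((position, site), char), kept sorted by key."""
    doc.append(((pos, site), char))
    doc.sort()
    return doc

doc = []
insert(doc, Fraction(1, 2), "A", "h")
insert(doc, Fraction(3, 4), "A", "i")
# concurrent insert at the same position from site B: tie-break on site id
insert(doc, Fraction(3, 4), "B", "!")
text = "".join(c for _, c in doc)
```

The interleaving problem shows up when two sites each insert runs of characters into the same gap: their positions can alternate, shuffling the runs together.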

automerge: library of data structures for collaborative applications in javascript. Local-first: use local persistent storage; git for your app's data. Rust implementation?

isabelle crdt I was wrong. CRDTs are the future

“Conflict-free Replicated Data Types”. “A comprehensive study of Convergent and Commutative Replicated Data Types”.

Operational Transformation - sequences of insert and delete. Moves possibly.

delta-based vs state-based


json crdt for vibes patches?

Tree move op. Create delete subtrees.

Synthesizing CRDTs from Sequential Data Types with Verified Lifting

Big Data

SLOG: Serializable, Low-latency, Geo-replicated Transactions spanner and calvin

Spark Hadoop MapReduce Dask Flink Storm

Mahout Vowpal Wabbit



Spark Databricks - company bigdatalog MLlib spark streaming graphx

Message brokers

RabbitMQ Kafka


BigQuery Snowflake Azure AWS

Graph systems

It isn’t that relational systems can’t express graph problems, but maybe graph systems are more optimized for the problem: neo4j, Giraph, PowerGraph, graphrex, GraphX, Myria, GraphChi, X-Stream, GridGraph, GraphLab


  • create table
  • create index
  • explain query plan - I saw explain analyze elsewhere
  • select
  • vacuum - defrag and garbage collect the db
  • begin transaction

Writing a Python SQL engine from scratch




build your own database book

  • Database Design and Implementation by Edward Sciore, 2020

  • Architecture of a Database System, Hellerstein and Stonebraker (2007)

SQL/DB learning resources

use the index, luke. sqlbolt - interactive sql tutorial

the art of postgresql a book. select star sql

schemaverse a space battle game written in sql

SQLite: Past, Present, and Future

Databases, types, and the relational model: The third manifesto

how query engines work andy grove

database internals book

database design and implementation

duckdb embedded like sqlite?

Conjunctive-query containment and constraint satisfaction

Designing Data intensive systems martin kleppmann

scalability but at what cost? big systems vs laptops.

Data integration the relational logic approach

postgres indexes for newbies. postgres tutorial. raytracer in sql. advent of code in sql. sqlancer - detecting logic bugs in dbms

  • Differential Datalog
  • CRDTs
  • Differential Dataflow
  • Nyberg Accumulators
  • Verkle Trees
  • Cryptrees
  • Byzantine Eventual Consistency
  • Self-renewable hash chains
  • Binary pebbling

Ezra Cooper. The Script-Writer’s Dream: How to Write Great SQL in Your Own Language, and Be Sure It Will Succeed. 2009. Full text

James Cheney et al. A practical theory of language-integrated query. 2013. Full text

Suzuki et al. Finally, safely-extensible and efficient language-integrated query. 2016. Full text

Oleg Kiselyov et al. Sound and Efficient Language-Integrated Query – Maintaining the ORDER. 2017. Full text

DBSP: Automatic Incremental View Maintenance for Rich Query Languages - mcsherry et al

pavlo advanced databases

awesome database learning

database architects blogs

database internals

Ask HN: What could a modern database do that PostgreSQL and MySQL can’t

postgres internals book

SQLite virtual tables. osquery - querying the OS like a database (C++). advanced sql course

roaring bitmaps - switches out the storage method at different scales and densities.

nocodb - it's like a spreadsheet that attaches to dbs. Open source airtable?

Does sql need help


