Notes on Getting Started in OCaml

I know a bit of Haskell. It’s the functional programming language I have the strongest background in. OCaml is very similar to Haskell, which is why I hadn’t felt a strong need to learn it. Having had to delve into it out of necessity for work, I think that was an oversight. The biggest thing for me is being able to more easily read a new set of literature and see familiar things in a new light, which is very refreshing.

Getting OCaml

https://ocaml.org/docs/install.html

opam is the package manager. Follow the instructions to install it and get your environment variables set up. It’ll tell you some extra commands you have to run to do so. You use it to install packages via opam install packagename. You can also use it to switch between different OCaml compiler versions via a command like opam switch 4.08.1.

Dune is a build tool. You can place a small config file called dune in your folder and it can figure out how to appropriately call the compiler. Dune is in flux, so check out the documentation. What I write here may be wrong.

https://dune.readthedocs.io/en/stable/

Here’s an example execution. Note that even though the file is called main.ml in this example, you call build with main.exe. And exec requires the ./ for some reason. Weird.
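Something like this, going by the prose above (a sketch; it assumes a project with a single main.ml):

    dune build main.exe
    dune exec ./main.exe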

Here’s a dune file with some junk in it. You make executables with blocks. You include a list of the files without the .ml suffix required by the executable in the modules line. You list libraries needed in the libraries line.
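I don’t have the original file inline anymore, so here’s a minimal sketch of the shape (the module and library names are made up for illustration):

    (executable
     (name main)
     (modules main util)   ; your .ml files, minus the suffix
     (libraries core))     ; opam libraries the executable needs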

You want to also install merlin: opam install merlin. Merlin is a very slick IDE tool with autocomplete and type information. dune will set up a .merlin file for you.

The ReasonML plugin is good for vscode. Search for it on the marketplace. It is the one for OCaml also. ReasonML is a syntactic facelift of OCaml intended for the web, by the way. I don’t particularly recommend it to start. There are also emacs and vim modes if that is what you’re into.

The enhanced repl is called utop. Install it with opam install utop. Basic repl usage: every line has to end with ;;. That’s how you get stuff to be run. Commands start with #. For example #quit;; is how you end the session. #use "myfile.ml";; will load a file you’ve made. Sometimes when you start you need to run #use "topfind";; which loads a package finder. You can load libraries via the require command, like #require "core";;
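A session might look like this (assuming you’ve opam-installed the core library):

    #use "topfind";;
    #require "core";;
    let x = 1 + 1;;
    #quit;;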

#help;; for more.

A Quick Look at the Language

With any new language I like to check out Learn X in Y Minutes if one is available.

https://learnxinyminutes.com/docs/ocaml/

Here are some shortish cheat sheets with a quick overview of syntax

https://github.com/alhassy/OCamlCheatSheet

https://ocaml.org/docs/cheat_sheets.html

More In Depth Looks

This is a phenomenal online book built for a Cornell course: https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/

Real World OCaml is also quite good but denser. Very useful as a reference for usage of Core and other important libraries.

The reference manual is also surprisingly readable https://caml.inria.fr/pub/docs/manual-ocaml/ . The first 100 or so pages are a straightforward introduction to the language.

https://github.com/janestreet/learn-ocaml-workshop Pretty basic workshop. Could be useful getting you up and running though.

Useful libraries

Core – a standard library replacement. Becoming increasingly common https://github.com/janestreet/core It is quite a bit more confusing for a newcomer than the standard library IMO. And the way they have formatted their docs is awful.

Owl – a numerical library. Similar to Numpy in many ways. https://ocaml.xyz/ These tutorials are clutch https://github.com/owlbarn/owl/wiki

Bap – Binary Analysis Platform. Neato stuff

Lwt – https://github.com/ocsigen/lwt asynchronous programming

Graphics – gives you some toy and not-so-toy stuff. Lets you draw lines and circles and get keyboard events in a simple way.

OCamlGraph – a graph library

Jupyter Notebook – Kind of neat. I’ve got one working on binder. Check it out here. https://github.com/philzook58/ocaml_binder

Menhir and ocamllex. Useful lexer and parser generators. Check out the OCaml book for more

fmt – for pretty printing

Interesting Other Stuff (A Descent into Lazy Writing)

https://discuss.ocaml.org/ – The discourse. Friendly people. They don’t bite. Ask questions.

https://github.com/ocaml-community/awesome-ocaml Awesome-Ocaml list. A huge dump of interesting libraries and resources.

An excerpt of cool stuff:

  • Coq – Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.
  • Why3 – Why3 is a platform for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions.
  • Alt-Ergo – Alt-Ergo is an open-source SMT solver dedicated to the proof of mathematical formulas generated in the context of program verification.

http://ocamlverse.github.io/ – A pretty good set of beginner advice and articles. Seems like I have a lot of accidental overlap. Would’ve been nice to find it earlier.

https://www.cl.cam.ac.uk/teaching/1617/L28/materials.html – advanced functional programming course. Interesting material.

TAPL – https://www.cis.upenn.edu/~bcpierce/tapl/ Has implementations in OCaml of different lambda calculi. Good book too.

It is not uncommon to use a preprocessor in OCaml for some useful features. There is monad syntax, list comprehensions, deriving and more available as ppx extensions.

https://whitequark.org/blog/2014/04/16/a-guide-to-extension-points-in-ocaml/ ppx preprocessor. camlp4 and camlp5 are both preprocessors too

https://tarides.com/blog/2019-05-09-an-introduction-to-ocaml-ppx-ecosystem.html

https://blog.janestreet.com/archive/ The jane street blog. They are very prominent users of ocaml.

https://opensource.janestreet.com/standards/ Jane Street style guide

Oleg Kiselyov half works in Haskell, half in OCaml, so that’s cool.

https://arxiv.org/pdf/1905.06544.pdf Oleg’s effects without monads paper

Oleg’s MetaOCaml book. MetaOCaml is super cool. http://okmij.org/ftp/ML/MetaOCaml.html And the switch functionality of opam makes it easy to install!

Oleg tagless final http://okmij.org/ftp/tagless-final/index.html

https://github.com/ocamllabs/higher

Cohttp, LWT and Async

https://github.com/backtracking/ocamlgraph ocaml graphs

https://mirage.io/ Mirage os. What the hell is this?

https://github.com/ocamllabs/fomega

https://github.com/janestreet/hardcaml

ppx_let – monadic let bindings

Some of the awesome deriving capabilities are given by ppx_jane. Sexp seems to be a really good one. It’s where generic printing is?

dune build lambda.bc.js will make a JavaScript file. That’s pretty cool. It uses js_of_ocaml. The js_of_ocaml docs are intimidating https://ocsigen.org/js_of_ocaml/dev/manual/overview

http://ocsigen.org/js_of_ocaml/dev/api/

Note you need to install both the js_of_ocaml-compiler AND the library js_of_ocaml and also the js_of_ocaml-ppx.

Go digging through your _build folder and you can find a completely mangled, incomprehensible file jsboy.bc.js. But you can indeed import and use it like any other JavaScript file.

dune build --profile release lambda.bc.js – putting it in the release profile makes an insane difference in build size: 10mb -> 100kb.

There is also BuckleScript for compiling to JavaScript. Outputs readable JavaScript. Old compiler fork?

Check out J.T. Paach’s snippets. Helpful

Dune:

https://gist.github.com/jtpaasch/ce364f62e283d654f8316922ceeb96db

Z3 OCaml:

https://gist.github.com/jtpaasch/3a93a9e1bcf9cae86e9e7f7d3484734b

OCaml’s new monadic let syntax:

https://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny-syntax.html

#require "ppx_jane";; in utop in order to import a thing using ppx.

An argument could be made for working from a Docker container.

Weird DSLs that generate parsers and lexers. Also oddly stateful.

Took a bit of fiddling to figure out how to get dune to do it.

Otherwise a pretty straightforward encoding.

Experience report: using F-omega as a teaching language.

Because they aren’t hidden behind a monadic interface (for better or for worse), OCaml has a lot more of an imperative feel. You could program in a subset of the language and have it not feel all that different from Java or Python or something. There are for loops and while loops, objects and classes, and mutable variables if you so choose. I feel like the community is trying to shy away from these features for most purposes however, sticking to the functional core.

However, it does let you do for loops, and it has an interesting community and different areas of strength.

Maybe more importantly, it lets you access a new set of literature and books. Slightly different but familiar ideas.

I think Core is bewildering for a newcomer.

Rando Trash Poor Style OCaml Snippets

lex_lisp.mll : simplistic usage of ocamllex and menhir

parse_lisp.mly

Doinking around with some graphics

A couple Advent of code 2018

A little Owl usage

The Classical Coulomb Gas as a Mixed Integer Quadratic Program

The Coulomb gas is a model of electrostatics where you take the discreteness of charge into account. That is what makes it hard compared to the continuous charge problem. Previously, I’ve used mixed integer programming to find lowest energy states of the Ising model. This is an even more obvious application of mixed integer programming to a physics model.

We ordinarily consider electric charge to be a continuum, but it isn’t. It comes in chunks of the electron charge. Historically, people didn’t even know that for quite a while. It is usually a reasonable approximation for most purposes to consider electric charge to be continuous.

If you consider a network of capacitors cooled to the level that there is not enough thermal energy to borrow to get an electron to jump, the charges on the capacitors will be observably discretized. With a sufficiently slow cooling to this state, the charges should arrange themselves into the lowest energy state.

The Coulomb gas model also is of interest due to its connections to the XY model, which I’ve taken a stab at with mixed integer programming before. The Coulomb gas models the energy of vortices in that model. I think the connection between the models actually requires a statistical or quantum mechanical context though, whereas we’ve been looking at the classical energy minimization.

We can formulate the classical Coulomb gas problem very straightforwardly as a mixed integer quadratic program. We can easily include an externally applied field and a charge conservation constraint if we so desire within the framework.
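In the notation I’d use (my sketch of the formulation: integer charges q_i sitting at fixed sites r_i, applied field E, total charge Q):

    \min_{q \in \mathbb{Z}^n} \; \frac{1}{2}\sum_{i \neq j} \frac{q_i q_j}{|r_i - r_j|} + \sum_i (E \cdot r_i)\, q_i \quad \text{s.t.} \quad \sum_i q_i = Q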

We write this down in Python using the cvxpy library, which has a built-in free MIQP solver, albeit not a very good one. Commercial solvers are probably quite a bit better.
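The original code isn’t reproduced here; the following is my sketch of the same idea. I soften the kernel to 1/sqrt(d^2 + 1) so the interaction matrix is positive definite (that also adds a self-energy per charge), and factor it so cvxpy sees a convex quadratic:

    import numpy as np
    import cvxpy as cp

    n = 5                                         # a small grid of candidate sites
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    r = np.stack([xs.ravel(), ys.ravel()], 1).astype(float)
    d2 = ((r[:, None, :] - r[None, :, :]) ** 2).sum(-1)
    J = 1.0 / np.sqrt(d2 + 1.0)                   # softened Coulomb interaction
    L = np.linalg.cholesky(J)                     # J is positive definite

    q = cp.Variable(len(r), integer=True)         # integer charges, one per site
    E = np.array([1.0, 0.0])                      # constant applied field
    energy = 0.5 * cp.sum_squares(L.T @ q) + (r @ E) @ q
    prob = cp.Problem(cp.Minimize(energy),
                      [q >= -1, q <= 1, cp.sum(q) == 0])  # charge conservation
    prob.solve(solver=cp.ECOS_BB)                 # the free MIQP path; not great
    print(q.value.reshape(n, n))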

A plot of charge in a constant external electric field.

The results seem reasonable. It makes sense for charge to go in the direction of the electric field. Going to the corners makes sense because then like charges are far apart. So this might be working. Who knows.

Interesting places to go with this:

Prof Vanderbei shows how you can embed an FFT to enable making statements about both the time and frequency domain while keeping the problem sparse. The low time/memory N log(N) complexity of the FFT is reflected in the sparsity of the resulting linear program.

https://vanderbei.princeton.edu/tex/ffOpt/ffOptMPCrev4.pdf

Here’s a sketch about what this might look like. Curiously, looking at the actual number of nonzeros in the problem matrices, there are way too many. I am not sure what is going on. Something is not performing as I expect in the following code.
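The measured-too-many-nonzeros code isn’t inline anymore; what I have in mind is the decimation-in-time butterfly written as cvxpy equality constraints (my reconstruction):

    import numpy as np
    import cvxpy as cp

    def fft_constraints(x, X):
        """Constrain X == DFT(x) with O(N log N) butterfly equations.
        x, X: complex cvxpy expressions of length N, N a power of two."""
        N = x.shape[0]
        if N == 1:
            return [X == x]
        E = cp.Variable(N // 2, complex=True)   # DFT of even-index samples
        O = cp.Variable(N // 2, complex=True)   # DFT of odd-index samples
        cons = fft_constraints(x[0::2], E) + fft_constraints(x[1::2], O)
        w = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors
        cons += [X[:N // 2] == E + cp.multiply(w, O),
                 X[N // 2:] == E - cp.multiply(w, O)]
        return cons

    x = cp.Variable(8, complex=True)
    X = cp.Variable(8, complex=True)
    constraints = fft_constraints(x, X)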

The equivalent dense DFT:
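Reusing x and X from the sketch above, the dense version is a single matrix equality with O(N^2) nonzeros:

    F = np.fft.fft(np.eye(8))          # dense DFT matrix
    dense_constraints = [X == F @ x]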

It would be possible to use a frequency domain solution of the interparticle energy rather than the explicit coulomb law form. Hypothetically this might increase the sparsity of the problem.

It seems very possible to me in a similar manner to embed a fast multipole method or Barnes-Hut approximation within a MIQP. Introducing explicit charge summary variables for blocks would create a sparse version of the interaction matrix. So that’s fun.

Doing Basic Ass Shit in Haskell

Haskell has lots of fancy weird corners, but you want to get rippin’ and runnin’

The Haskell phrase book is a new useful thingy. Nice and terse.

https://typeclasses.com/phrasebook

This one is also quite good https://lotz84.github.io/haskellbyexample/

I also like what FP complete is up to. Solid set of useful stuff, although a bit more emphasis towards their solutions than is common https://haskell.fpcomplete.com/learn

I was fiddling with making some examples for my friends a while ago, but I think the above do a similar job better.

https://github.com/philzook58/basic-ass-shit

Highlights include:

Makin’ a JSON request

Showing a plot of a sine function

Doing a least squares fit of some randomly created data

Not too complicated stuff to get you excited about Haskell:

I love Power Serious. https://www.cs.dartmouth.edu/~doug/powser.html Infinite power series using the power of laziness in something like 20 lines

https://blog.plover.com/prog/haskell/monad-search.html Using the list monad to solve SEND+MORE=MONEY puzzle.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.8903&rep=rep1&type=pdf Jerzy Karczmarczuk doing automatic differentiation in Haskell before it was cool. Check out Conal Elliott’s stuff after.

Very simple symbolic differentiation example. When I saw this in SICP for the first time, I crapped my pants.
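The snippet itself isn’t shown here, but the idea fits in a few lines; here’s a Python rendition (my tuple-based encoding, not the original code):

    # A minimal SICP-style symbolic differentiator.
    # Expressions are numbers, variable names, or nested tuples like ('+', e1, e2).
    def diff(e, x):
        if isinstance(e, (int, float)):
            return 0
        if isinstance(e, str):                       # a variable
            return 1 if e == x else 0
        op, a, b = e
        if op == '+':
            return ('+', diff(a, x), diff(b, x))
        if op == '*':                                # product rule
            return ('+', ('*', diff(a, x), b), ('*', a, diff(b, x)))
        raise ValueError(f"unknown operator {op}")

    print(diff(('*', 'x', ('+', 'x', 3)), 'x'))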

https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf Why functional Programming Matters by John Hughes

https://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf John Backus emphasizing escaping the imperative mindset in his 1978 Turing Award speech. A call to arms of functional programming

https://www.cs.tufts.edu/~nr/cs257/archive/richard-bird/sudoku.pdf Richard Bird defining sudoku solutions and then using equation reasoning to build a more efficient solver

https://wiki.haskell.org/Research_papers/Functional_pearls – Functional Pearls

Here’s how I find useful Haskell packages:

I google. I go to hackage (if I’m in a subpage, click on “contents” in the upper right hand corner). Click on a category that seems reasonable (like “web” or something) and then sort by Downloads (DL). This at least tells me what is popular-ish. I look for tutorials if I can find them. Sometimes there is a very useful getting started snippet in the main subfile itself. Some packages are overwhelming, others aren’t.

The Real World Haskell book is kind of intimidating although a lovely resource.

The wiki has a pretty rockin set of tutorials. Has some kind soul been improving it?

https://wiki.haskell.org/Category:Tutorials

I forgot Learn You a Haskell has a chapter on basic IO

http://learnyouahaskell.com/input-and-output

Learn more

When you’re ready to sit down with Haskell more, the best intro is currently the Haskell Book

You may also be interested in https://www.edx.org/course/introduction-functional-programming-delftx-fp101x-0 this MOOC

https://github.com/data61/fp-course or this Data61 course

Then there is a fun infinitude of things to learn after that.

______

More ideas for simple examples?

This post is intentionally terse.

IO is total infective poison.

standard output io

mutation & loops. You probably don’t want these. They are not idiomatic Haskell, and you may be losing out on some of the best lessons Haskell has to offer.

file IO

web requests

http://www.serpentine.com/wreq/tutorial.html

web serving – scotty

image processing

basic data structures

command line arguments

plotting

Parallelism and Concurrency

https://nokomprendo.frama.io/tuto_fonctionnel/posts/tuto_fonctionnel_25/2018-08-25-en-README.html

CAV 2019 Notes: Probably Nothin Interestin’ for You. A bit of noodling with Liquid Haskell

I went to the opening workshops of CAV 2019 in New York this year (on my own dime mind you!) after getting back from joining Ben on the long trail for a bit. The tutorials on Rosette and Liquid Haskell were really fun. Very interesting technology. Also got some good ramen and mochi pudding, so it’s all good. Big Gay Ice Cream was dece.

Day 1 workshops

Calin Belta http://sites.bu.edu/hyness/calin/. Has a new book: Control of Temporal Logic Systems. Automata. Optimized. Partition space into abstraction. Bisimulation. https://www.springer.com/gp/book/9783319507620

Control Lyapunov Function (CLF) – guarantees you are going where you want to go

Control Barrier Function – Somehow controls regions you don’t want to go to.

Lyapunov function based trajectory optimization. You somehow have (Ames 2014) http://ames.gatech.edu/CLF_QP_ACC_final.pdf Is this it?

Differential flatness , input output linearization

Sadraddini worked there.

Temporal logic with

Rise of Temporal Logic

Linear Temporal Logic vs CTL

Fixpoint logic,

Buchi automata – visit accepting state infinite times

equivalency to first order logic

monadic logic, propositions only take 1 argument. Decidable. Löwenheim. Quantifier elimination. Bounded model property

Languages: ForSpec, SVA, LDL, PSL, Sugar

Monadic second order logic (MSO).

method of tableau

Sadraddini

Polytopic regions. Can push forward the dynamics around a trajectory and the polytope that you lie in. RRT/LQR polytopic tree. Pick random point. Run.

Evaluating branching heuristics

branch and prune ICP. dReal.

branch and prune. Take set. Propagate constraints until none fire.

branching heuristics on variables

largest first, smearing, lookahead. Try different options, see which has the most pruning. Not clear that helped that much

QF_NRA. dReal benchmarks: Flyspeck, control, robotics, SMT-LIB

http://capd.sourceforge.net/capdDynSys/docs/html/index.html

Rosette

commands: solver aided programming

verify – find an input on which the assertions fail. exists x. not safe

debug – Minimal unsat core if you give an unsat query. x=42 /\ safe(x, P(x)); we know this is unsat because of the previous step

solve – exists v s.t. safe(v)

synthesis – exists e forall x safe(x,P(x))

define-symbolic, assert, verify, debug, solve, synthesize

Rosette. Alloy is also connected to her. Z Method. Is related to relational logic?

https://homes.cs.washington.edu/~emina/media/cav19-tutorial/index.html

http://emina.github.io/rosette/

Building solver aided programming tool.

symbolic compiler. reduce all possible paths of a program to a constraint

Klee – symbolic execution engine for LLVM

implement interpreter in rosette

Symbolic virtual machine

layering of languages. DSL. library (shallow) embedding. interpreter (deep) embedding.

deep embedding for synthesis.

I can extract coq to rosette?

how does it work?

reverse and filter keeping only positive queries.

symbolic execution vs bounded model checking

symbolic checks every possible branch of the program. Cost is exponential

CBMC.

type driven state merging. Merge instances of primitive types (like BMC), value types structurally

instance Merge Int, Bool, Real — collect up SMT context

vs. Traversable f => Merge (f c) – do using Traversable

symbolic union: a set of guarded values with disjoint guards.

merging union. at most one of any shape. bounded by number of possible shapes.

puts some branching in rosette and some branch (on primitives) in SMT.

symbolic profiling. Repair the encoding.

tools people have built.

verify radiation therapy machines

strategy generation. That’s interesting. builds good rewrite rules.

serval.

CertiKOS, Komodo, Keystone. Finite programs

Is Rosette going to be useful for my work? coooooould be

Liquid Haskell

https://ranjitjhala.github.io/

Liquid Haskell – What is?

another thing we could do is galois connections between refinements. Pos, Zero, Neg <-> Int

Liquid Haskell uses SMT solvers to resolve its type checking requirements.

Agda et al also work very much via unification. Unification is a broad term but it’s true.

It also has a Horn clause solver for inference. Every language needs some kind of inference or you’d go insane. Also it is piggybacking on Haskell.

It’s not as magical as I thought? Like seeing the magician’s trick. It really does understand Haskell code. Like it isn’t interpreting it. When it knows facts about how (+) works, that is because the refined type was put in by hand in the prelude connecting it to SMT facts. What is imported by Liquid Haskell?

The typing environment is clutch. You need to realize what variables are in scope and what their types are, because that is all the SMT can use to push through type checking requirements.

Installing the stack build worked for me. It takes a while. I couldn’t get cabal install to work, because I am not l33t.

Uninterpreted functions. Unmatchability?

It wouldn’t be Haskell without a bunch of compiler directives. It is somewhat difficult to find in a single cohesive place what all the syntax and directives are for Liquid Haskell. Poking around is best.

  • ple
  • reflection
  • no-termination
  • higherorder – what is this?

https://github.com/ucsd-progsys/230-wi19-web course notes

https://github.com/ucsd-progsys/liquid-sf some of software foundations

https://nikivazou.github.io/publications.html Niki Vazou’s pubs. Check out refinement reflection

https://nikivazou.github.io/static/Haskell17/law-abiding-instances.pdf draft work? Shows stuff about typeclasses. This is a haskell 2017 paper though

https://arxiv.org/pdf/1701.03320 intro to liquid haskell. Interesting to a see a different author’s take

http://goto.ucsd.edu/~nvazou/presentations/ presentations. They are fairly similar to one another.

Liquid haskell gives us the ability to put types on stuff that wasn’t possible before.

Linearity :: f :: {a -> b | f (s ^* a) == s ^* (f a) }

Pullback. {(a,b) | f a == g b}

Equalizer

Many things in category theory rely on the exists-unique. Do we have functional extensionality in Liquid Haskell?

product : {(a,b) | f q = x, g q = y, => }

Pushing the boundaries on what liquid haskell can do sounds fun.

Equalizer. The equalizer seems prominent in sheaves. Pre-sheaves are basically functors. Sheaves require extra conditions. Restriction maps have to work? Open covers seem important

type Equalizer f g a b = {(e :: a , eq :: a -> b) | f (eq e) = g (eq e) }

I think both the type a and eq are special. e is like an explicit parametrization.

type Eq f g a = {e :: a | f e = g e} I think this is more in the spirit. Use f and g both as measures.

presheaf is functor. But then sheaf is a functor that additionally satisfies a gluing condition.

(a, Eq (F a) (G a)). typelevel equalizer? All types a that F and G agree on.

https://ncatlab.org/nlab/show/equalizer

https://blog.functorial.com/posts/2012-02-19-What-If-Haskell-Had-Equalizers.html

Records are sheaves – Jon Sterling. Records have subtyping. This gives you a topology-feeling thing.

https://www.slideshare.net/jonsterling/galois-tech-talk-vinyl-records-in-haskell-and-type-theory

What about purescript records?

{foo | a} {bar | a} -> intersection = {foo bar | b} can inhabit either

union is – or do you want closed records? Union is union of fields, intersection is intersection of fields.

In this case a cover would be a set of records with possibly overlapping fields whose combined labels cover the whole space we want to talk about. The consistency condition of the sheaf/equalizer is that overlapping record fields have to match. I guess { q.foo = r.foo }? There is a way to combine all the stuff up. This is exactly what Ghrist was getting at with tables. Tables with shared columns.

data R1 = R1 {foo :: Int, bar :: Int}

{ (r1 :: R1, r2 :: R2) | (foo r1) = (foo r2) } -- we maintain duplicates across records.


if you have a “cover” {foo bar |} {bar fred} {gary larry} whose in

https://www.sciencedirect.com/science/article/pii/S1571066108005264

Sheaves. As a model of concurrency? Goguen paper.

sheaves as constraint satisfaction? sheafification. Constraint solving as a way of fusing the local constraints to be globally consistent.

sheaves as records

sheaves as data fusion

http://www.cs.bham.ac.uk/~mhe/papers/barbados.pdf

Escardó. Compact data types are the finitely searchable ones

Continuous functions are ~computable? Productive?

http://www.paultaylor.eu/

http://www.paultaylor.eu/ASD/foufct/

http://www.paultaylor.eu/~pt/prafm/

typed recursion theory, topology

typed computability theory

Topological notions in computation. Dictionary of related terms: decidable, searchable, semi-decidable

cs.ioc.ee/ewscs/2012/escardo/slides.pdf

https://en.wikipedia.org/wiki/Computable_topology

Through NDArray overloading, a significant fragment of numpy code is probably verifiable.

Start with functional arithmetic programs.

Need to inspect function annotations to know how to build input type.

@verify() tag

Use (Writer a) style monad.

If statements are branching. We are again approaching inspecting functions via probing. But what if we lazily probe? At every __bool__ point, we run a Z3 query to determine if there is an available bool still possible (we don’t need to inspect dead code regions. Also would be cool to mention it is a dead region). Curious. We’re reflecting via Z3.
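A sketch of that lazy probing idea with z3py (PathProbe and relu are my hypothetical names; a real version would hide branch() inside __bool__ of an overloaded class):

    import z3

    class PathProbe:
        """Decide each branch by asking Z3 whether it is reachable."""
        def __init__(self):
            self.constraints = []
        def branch(self, cond):
            s = z3.Solver()
            s.add(*self.constraints)
            s.add(cond)
            if s.check() == z3.sat:                 # True branch reachable: commit
                self.constraints.append(cond)
                return True
            self.constraints.append(z3.Not(cond))  # dead branch: take the other side
            return False

    def relu(x, probe):
        if probe.branch(x > 0):
            return x
        return 0

    probe = PathProbe()
    x = z3.Real('x')
    print(relu(x, probe), probe.constraints)  # x, [x > 0]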

Loops present a problem. Fixed loops are fine. but what about loops that depend on the execution? for i in range(n). I guess again we can hack it…? Maybe. range only takes an integer. we don’t have overload access.

Maybe we need to go into a full eval loop, utterly deconstructing the function and evaluating it statement by statement.

(compare :: a -> a -> Comparison). We could select a choice based on if there is a new one available. Requires access to an external store. We lose the thread. How can we know a choice was made? How can we know what the choice was? Did it ask var1 or var2? We can probably do it in Python via access to a global store. But in Haskell?

while loops take invariant annotations.

It would be cool to have a program that takes

preconditions, postconditions, but then also a Parameter keyword to declare const variables as derivable. exists parameter. forall x. precondition(x) => postcondition(x).

Parameter could be of a type to take a DSL of reasonable computations. Perhaps with complexity predicates. And then interpreting the parameter defines the computation.

Or simpler case is parameter is an integer. a magic number.

Dump of Nonlinear Algebra / Algebraic geometry Notes. Good Links Though

Old notes on nonlinear algebra. I dunno about the content but some very good links and book suggestions in here, so I’m gonna dump this one out there. Maybe it’ll help someone.

Systems of multivariable polynomial equations are more solvable than people realize. There are algebraic and numeric methods. Look at Macaulay, Singular, Sympy for algebraic methods; PHCpack, Bertini, and HomotopyContinuation.jl for numerical ones.

Algebraic methods are fixated on Groebner bases, which are a special equivalent form your set of equations can be manipulated to. You can disentangle the variables using repeated polynomial division (Buchberger’s algorithm), turning your set of equations into an equivalent set that has one more variable per equation. This is like Gaussian elimination, which is actually the extremely simple version of Buchberger for linear equations.
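A quick taste with sympy (my toy example): a lex-order basis untangles the variables much like triangularizing a linear system.

    from sympy import groebner, symbols, Rational

    x, y = symbols('x y')
    # a circle intersected with a hyperbola
    G = groebner([x**2 + y**2 - 1, x*y - Rational(1, 4)], x, y, order='lex')
    print(G)  # the last basis element is univariate in y; back-substitute for x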

The numerical methods use perturbation theory to take a system of equations you know how to solve and smoothly perturb them to a new system. Each small perturbation only moves the roots a little bit, which you can track with a differential equation solver. Then you can fix it up with some Newton steps. People who really care about this stuff make sure that there are no pathological cases and worry about roots merging or going off to infinity and other things.

You need to know how many roots to build and track in your solvable system. For that, two theorems are important:

Bezout theorem – for dense systems, the number of solutions is bounded by the product of the total degrees of the equations.

Bernstein bound – the Newton polytope gives a bound on the number of solutions of a polynomial system. Useful for sparse systems.

One could make an argument for the homotopy continuation methods being the analog of iterative solutions for linear equations, if Groebner bases are Gaussian elimination. Take an equation we know how to solve (~preconditioner) and perform some iterative thing on it.

add enough random linear equations to make system full (points).

Then you have a membership algorithm due to sweeping of planes. Once you have points on actual varieties, pairwise compare them.

The Cox, Little, O’Shea book is often recommended. It’s really good.

https://www.springer.com/us/book/9781441922571

More advanced Cox et al book

https://www.springer.com/us/book/9780387207063

Bernd Sturmfels,  Mateusz Michalek (including video lectures)

https://personal-homepages.mis.mpg.de/michalek/ringvorlesung.html

https://personal-homepages.mis.mpg.de/michalek/NonLinearAlgebra.pdf

(Bernd is da man!)

https://math.berkeley.edu/~bernd/math275.html

Macaulay2 book

https://faculty.math.illinois.edu/Macaulay2/Book/

Singular books

https://www.singular.uni-kl.de/index.php/publications/singular-related-publications.html

https://www.springer.com/us/book/9783662049631

https://www.ima.umn.edu/2006-2007

Planning Algorithms, in particular chapter 6

http://planning.cs.uiuc.edu/

Gröbner bases in Haskell: Part I

Summer school on tensor methods

https://www.mis.mpg.de/calendar/conferences/2018/nc2018.html

Extensions of

https://ieeexplore.ieee.org/document/4399968

Numerical Polynomial Algebra by Hans Stetter

https://epubs.siam.org/doi/book/10.1137/1.9780898717976?mobileUi=0&

Introduction to Non-Linear Algebra V. Dolotin and A. Morozov. A high energy physics perspective

https://arxiv.org/pdf/hep-th/0609022.pdf

Nonlinear algebra can also be approached via linear algebra, surprisingly. Resultants. As soon as you see any nonlinearity, the linear part of your brain shuts down, but a good question is: linear in WHAT? Consider least squares fitting, which works via linear algebra. Even though you’re fitting nonlinear functions, the expressions are linear in the parameters/coefficients so you’re all good.

Similarly you can encode root finding into a linear algebra problem. A matrix has the same eigenvalues as its characteristic polynomial det(A - \lambda I) has roots, so that already shows that it is plausible to go from linear algebra to a polynomial root finding problem. But also, multiplying a polynomial by x is a linear operation on the coefficients. In this way we can turn root finding into an eigenvalue problem.

[1, x, x^2, x^3, \dots] \cdot [a_0, a_1, a_2, a_3, \dots] = p(x)

Multiplying by x is the shift matrix. However, we also are assuming p(x)=0, which gives us the ability to truncate the matrix. x * [1 x x^2 x^3 …] = Shift @ xbar. This is somewhat similar to how it feels to do finite difference equations. The finite difference matrix is rectangular, but then boundary conditions give you an extra row. Multiplication by x returns the same polynomial back only when p(x)=0 or x = 0. The eigenvalues of this x matrix will be the values of x at these positions (the roots). This is the companion matrix https://en.wikipedia.org/wiki/Companion_matrix
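Here’s that construction concretely in numpy (my example polynomial, not from the post):

    import numpy as np

    # p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
    c = np.array([-6.0, 11.0, -6.0])   # coefficients of x^0, x^1, x^2
    C = np.zeros((3, 3))
    C[1:, :-1] = np.eye(2)             # subdiagonal 1s: multiplication by x
    C[:, -1] = -c                      # reduce x^3 using p(x) = 0
    print(np.linalg.eigvals(C))        # -> the roots 1, 2, 3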

We can truncate the space by using the zero equation.

It’s a pretty funky construction, I’ll admit.

To take it up to multivariable, we bring in a larger space [1 x y x^2 xy y^2 …] = xbar kron ybar

We now need two equations to reduce it to points. The X matrix is lifted to X kron I. and we can pad it with ?

Multiplying by an entire polynomial. Sylvester matrix for shared roots. Double root testing.

The Sylvester matrix is based on something similar to Bezout’s identity. To find out if two things p, q have common factors, you can find two things r, s (with deg r < deg q and deg s < deg p) such that r*p + s*q = 0

https://en.wikipedia.org/wiki/Polynomial_greatest_common_divisor#B%C3%A9zout’s_identity_and_extended_GCD_algorithm

Sum of Squares is somewhat related material on systems of polynomial inequalities which can be translated to semidefinite matrix constraints. If you want to include equalities, you can use groebner bases to presolve them out.

Parrilo course material on Sum of Squares.

https://learning-modules.mit.edu/materials/index.html?uuid=/course/6/sp16/6.256#materials

Paper on using Groebner bases and CAD (cylindrical algebraic decomposition) for optimization and control

 
Using Groebner bases for constraint satisfaction problems: x^n = 1 gives a root of unity. There are n solutions. This gives a finite set to work with. Then you can add more equations. This is related to the max-cut thing. I saw this on Cox’s webpage.
You can require neighboring vertices to have different colors by 0 = (x_i^k - x_j^k)/(x_i - x_j). You can encode many constraints using clever algebra.
an example using the same technique to solve sudoku

Sympy tutorial solving geometric theorems and map coloring

explicitly mentions toric Groebner as integer programming

other interesting examples

http://www.scholarpedia.org/article/Groebner_basis
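The coloring encoding above is easy to try in sympy (my toy instance; the basis collapsing to [1] certifies that no 3-coloring exists):

    from sympy import groebner, symbols

    # 3-coloring: colors are cube roots of unity, x_i^3 = 1, and each edge
    # gets x_i^2 + x_i*x_j + x_j^2 = 0, i.e. (x_i^3 - x_j^3)/(x_i - x_j).
    x = symbols('x0:4')

    def color_ideal(edges):
        polys = [xi**3 - 1 for xi in x]
        polys += [x[i]**2 + x[i]*x[j] + x[j]**2 for i, j in edges]
        return groebner(polys, *x, order='lex')

    K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    path = [(0, 1), (1, 2), (2, 3)]
    print(color_ideal(K4))    # GroebnerBasis([1], ...): K4 is not 3-colorable
    print(color_ideal(path))  # a nontrivial basis: colorings exist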

Noncommutative Groebner bases have application to solving differential equations? The differential operators are noncommutative. Not just silly quantum stuff. I mean, the simple examples of noncommutativity are the Schrödinger momentum operators.

Automatic loop invariant finding

Geometric theorem proving

robotic kinematics

Optics? Envelopes, exchange of coordinates. Legendre transformations. Thermodynamics?

Global optimization? Find all local minima.

Nonlinear finite step.

Dynamic programming. Add an implicit V variable for the value function. Constrain via equations of motion. Perform extremization keeping x0, v0 fixed: dx0=0, dv0=0 and dV=0. Groebner with an ordering that removes x1, v1, V1. Iterate. Can keep dt as a variable. Power series in t? Other integration schemes.

Probably need some method to simplify the leftover relations so that they don’t get too complex. Smoothing? Dropping terms? Minimization may require factoring to find the global minimum.

Differentiation. Add to every variable a dx. Collect up first order as a separate set of constraints. Add conditions df=0 and dy=0 for fixed variables to perform partial differentiation and extremization. A very similar feel to automatic differentiation. Functions tend to not be functions, just other variables related by constraints

Variable ordering

lex – good for elimination

deglex – total degree then a lex to tie break

grevlex – total degree + reverse lexicographic. The cheapest variable is so cheap that it goes last

block ordering, separate variables into blocks and pick orderings inside blocks

general matrix ordering. Apply a matrix to the exponent vectors and lex compare the results. The others are a subset.

Can’t I have a don’t-care / partial order? Would be nice for blockwise elimination, I feel like.

Non-commutative

http://sheaves.github.io/Noncommutative-Sage/

Physicsy

https://arxiv.org/pdf/hep-th/0609022

CAD book

https://link.springer.com/book/10.1007%2F978-3-7091-9459-1

Rings have addition and multiplication but not necessarily division. Polynomials and integers aren’t guaranteed to have inverses that remain polynomials or integers.

ideal = a subset of a ring that absorbs multiplication. Also closed under addition

All polynomial consequences of a system of equations

Hilbert basis theorem – all ideals are generated by a finite set

ideal generated from a set – any element of the ring that can be generated via addition and multiplication by arbitrary elements. Is an ideal because if you multiply it by another object, it is still a sum of multiples.

Ideals are sometimes kind of a way of talking about factors without touching factors. Once something is a multiple of 5, no matter what you multiply it with, it is still a multiple of 5. If (x – 7) is a factor of a polynomial, then no matter what you multiply it with, (x-7) is still a factor. Zeros are preserved.

Principal ideal domain – every ideal is generated by a single object

Prime ideal. If a*b is in the ideal then either a or b is in the ideal. Comes from the prime numbers ideal (all numbers divisible by a prime number). If ab has a factor of p then either a or b had a factor of p. Whereas consider all multiples of 4: if a = b = 2 then ab is a multiple of 4, but neither a nor b is a multiple of 4.

1d polynomials. Everything is easy.

Polynomial division is doable. You go power by power. Then you may have a remainder left over. It’s pretty weird.

You can perform the gcd of two polynomials using the Euclidean algorithm.

The ideal generated by a couple of them is generated by the multipolynomial gcd?

a = cx + dy + r

multivariate division: we can do the analog of polynomial division in the multivariate case. But we need an ordering of terms. The remainder is not unique.

But for certain sets of polynomials, remainder is unique.

Why the fixation on leading monomials?

The S-polynomial is the analog of one step of the Euclidean algorithm. It also has the flavor of a Wronskian or an anticommutator.

The bag Euclidean algorithm. Grab two things (biggest?). Take the remainder between them, add the remainder back into the bag.

This is the shape of the Buchberger algorithm.
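Here is roughly that bag algorithm sketched in Python with sympy (sympy’s groebner does this for real; s_poly and buchberger here are my toy versions):

    from sympy import symbols, LT, lcm, expand, reduced

    def s_poly(f, g, gens, order='lex'):
        """S-polynomial: cross-cancel the leading terms of f and g."""
        ltf, ltg = LT(f, *gens, order=order), LT(g, *gens, order=order)
        l = lcm(ltf, ltg)
        return expand(l / ltf * f - l / ltg * g)

    def buchberger(F, gens, order='lex'):
        """Take remainders of S-polynomials; toss nonzero ones back in the bag."""
        G = list(F)
        pairs = [(i, j) for i in range(len(G)) for j in range(i)]
        while pairs:
            i, j = pairs.pop()
            sp = s_poly(G[i], G[j], gens, order)
            if sp == 0:
                continue
            _, r = reduced(sp, G, *gens, order=order)
            if r != 0:
                pairs += [(len(G), k) for k in range(len(G))]
                G.append(r)
        return G

    x, y = symbols('x y')
    print(buchberger([x**2 + y**2 - 1, x*y - 1], (x, y)))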

Finding homology or cohomology of solutions. Good question. One can see how this could lead to categorical nonsense since Category theory was invented for topological questions.

The variety is where a set of polynomials is 0. Roots and zero surfaces

List gives set of polynomials.

[forall a. Field a => (a,a,a) -> a ]

Or explicit

union and intersection can be achieved via multiplication and combining the sets

Krull dimension – Definition of dimension of algebraic variety. Maximal length of inclusion chain of prime ideals.

Ideals and varieties have a relation that isn’t quite trivial.

The ideal of a variety

Envelopes – parametrized set of varieties f(x,t)=0 and partial_t f(x,t)=0. Eliminate t basically to draw the thing. Or trace out t?

Wu’s method for geometric theorem proving. You don’t need the full power of a grobner basis.

Polynomial maps. Talk about in similar language to differential geometry.

Boxes are a simple way to talk about subsets. Or lines, planes. Or polytopes.

Also any function that gives a true false value. But this is very limited in what you can actually do.

Varieties give us a concrete way to talk about subsets. Grothendieck schemes give unified languages supposedly using categorical concepts. Sounds like a good fit for Haskell.

class Variety

use powser. Functor composition makes multivariable polynomials. Tuples or V3 with elementwise multiplication

The polynomial as a type parameter for agda. Regular Functions are functions from one variety to another. They are the same as the polynomial ring quotiented out by the ideal of the variety.

Ring Space and Geometric Space (affine space)

Maximal ideals can be thought of as points. (ideal of x-a, y-b, …).

Free Polynomials ~ Free Num. Sparse representation. Uses Ordering of a. We should not assume that they are a power like in http://hackage.haskell.org/package/polynomial-0.7.3/docs/Math-Polynomial.html

Ord is monomial ordering. Think of a as [X,Y,X,X,X]

divmod :: (Integral a, Ord a) => Poly r a -> Poly r a -> Poly r a

newtype Monomial a = Monomial [a]

-- different monomial newtype orderings for lex, etc.

Monomial (Either X Y)

divmod as bs = remove bs from as. if can’t remainder = as, div = 0

Intuition pumps: algebraic geometry, differential geometry, category theory, Haskell, Agda.

In differential geometry, embedding sucks. We get around it by defining an atlas and differential maps.

There is a currying notion for polynomials. We can consider a polynomial as having coefficients which themselves are polynomials in other variables, or all at once.

What can be solved linearly? The Nullstellensatz certificate can be solved using linear equations

Resultants. What are they? Linear sums of monomial powers * the original polynomials. Det = 0 implies that we can find a polynomial combination

What is the deal with resultants

Toric Varieties. C with hole in it is C*. This is the torus because it is kind of like a circle. (Homologically?). There is some kind of integer lattice lurking and polytopes. Gives discrete combinatorial flavor to questions somehow. Apparently one of the more concrete/constructive arenas to work in.

binomaial ideals. the variety will be given by binomials

maps from one space to another which are monomial. can be implicitized into a variety. map is described by integer matrix. Integer programming?

Similar “cones” have been discussed in the tropical setting; is this related?

Algebraic statistics. Factor graph models. Probabilistic graphical models. Maybe this is why a PGM lady co-taught that course with Parrilo

Modules

Tropical geometry

http://www.cmap.polytechnique.fr/~gaubert/papers.html

Lots of really intriguing-sounding applications. Real time verification

gfan

How does the polynomial based optimization of the EDA course relate to this stuff? https://en.wikipedia.org/wiki/Logic_optimization

Mixed volume methods? Polytopes.

cdd and other polytopic stuff. Integration of polynomials over polytopes

Software of interest

Sage

Sympy

Singular – Plural non-commutative?

FGb – Faugère’s implementation of Groebner basis algorithms

Macaulay

CoCoa

tensorlab – https://en.wikipedia.org/wiki/Tensor_software

sostools

PolyBori – polynomials over boolean rings http://polybori.sourceforge.net/doc/tutorial/tutorial.html#tutorialli1.html

LattE

4ti2

normaliz

polymake – https://polymake.org/doku.php/tutorial/start slick

http://hep.itp.tuwien.ac.at/~kreuzer/CY/CYpalp.html Calabi Yau Palp????

TOPCOM

frobby – can get Euler characteristics of monomial ideals? http://www.broune.com/frobby/index.html

gfan

https://www.swmath.org/browse/msc

Homotopy continuation:

Bertini

http://homepages.math.uic.edu/~jan/phcpy_doc_html/index.html

phcpy and phcpack

hom4ps

https://www.juliahomotopycontinuation.org/

certification:

http://www.math.tamu.edu/~sottile/research/stories/alphaCertified/

cadenza

Jan

http://homepages.math.uic.edu/~jan/mcs563s14/index.html

www.math.uic.edu/~jan/tutorial.pdf


Suggestion that “linear program” form helps auto differentiation?

local rings. thickening? Infinite power series modded out by local relation. One maximal ideal.

differential geometry on algebraic surfaces.

modules are like vector spaces.

Ring linear

Canonical example, a vector of polynomials.

1-d space of polynomials.

Module morphism – respects linearity with respect to scalar multiplication and addition. Can be specified componentwise. But has to be specified in such a way that respects the relations.

Basis – Linearly Independent set that spans the whole module. May not exist.

So we are kind of stuck always working in an overcomplete basis to make the vector space analogy. The generators have nontrivial relations that equal zero. These coefficients form their own vector space. The space whose image is zero because of the relations is called the first syzygy module.

But then do we have a complete basis of all the relations? Or is it over complete?

If you ignore that the entries of the vectors are polynomials, it becomes a vector space. But because they are, they have secret relations.

Even a 1-dimensional vector space has some funky structure because of the polynomial nature of the ring.

Somehow fields save us?

Parametrized vector curves, surfaces.

Parametrized matrices.

Noncommutative polynomials. We could perhaps consider the process of normal ordering something related to a Groebner basis calculation. Perhaps a multi-polynomial division process? Consider the ordering where dagger is greater than no dagger. Canonical basis also has i<j (more important for fermions).

SOS gives you the exact minimum of a 1-d polynomial. You could also imagine encoding this as a semidefinite program. H - lam >= 0. Min lam. Where H is the characteristic matrix.

We can diagonalize to the sos form, and then take each individual term = 0 to solve for x*.
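The 1-d case as an explicit SDP in cvxpy (my sketch, my example polynomial): maximize lambda such that p(x) - lambda is a sum of squares, by matching coefficients of x^k against a PSD Gram matrix.

    import cvxpy as cp

    # p(x) = x^4 - 3x^2 + 1; its true global minimum is 1 - 9/4 = -1.25
    coeffs = {0: 1.0, 1: 0.0, 2: -3.0, 3: 0.0, 4: 1.0}
    Q = cp.Variable((3, 3), PSD=True)   # Gram matrix over z = [1, x, x^2]
    lam = cp.Variable()
    cons = []
    for k in range(5):
        # coefficient of x^k in z^T Q z is the sum over Q[i, j] with i + j = k
        terms = [Q[i, k - i] for i in range(3) if 0 <= k - i < 3]
        cons.append(sum(terms) == coeffs[k] - (lam if k == 0 else 0))
    cp.Problem(cp.Maximize(lam), cons).solve()
    print(lam.value)  # ~ -1.25, the exact minimum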

While integer programming does that funky toric variety stuff with the objective vector describing the Groebner basis, binary programming is simple. x^2=x + linear equations and constraints

Haskell Groebner

1. Monomials. Exponent vectors. Logarithmic representation. Multiplication is addition. Composition is elementwise multiplication. Type level tag for ordering.

newtype Mon3 ord = V3 Int

data Lex

data DegLex

Ordering of monomials is important. Map is perfect

Map (Mon3 ord) ring

Groebner bases can be used to describe many familiar operations. Linear algebra, Gaussian elimination. Using commutators. Building power series assuming terms are relatively irrelevant.

Can I get a power series solution for x^2 + ax + 1=0 by using a negative ordering for a? I need another equation. x = \sum c_n * a^n. (x+dx)? How do I get both solutions?

Dual numbers for differential equations. dx is in a ring such that dx^n = 0.

Subset sum. Find some subset of numbers that adds up to 0.

sum variables s_i

Solutions obey

s_0 = 0

(s_i - s_{i-1})(s_i - s_{i-1} - a_{i-1}) = 0

s_N = 0

Factors give OR clauses. Separate polynomials give AND clauses. Pseudo-CNF form. Can’t always write polys as factors though? This pattern also matches the graph coloring.
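Checking that encoding with sympy (my instance {3, -2, -1}; note the all-zero run of partial sums is the trivial “empty subset” solution):

    from sympy import solve, symbols

    a = [3, -2, -1]
    s = symbols('s0:4')
    polys = [s[0], s[3]] + [(s[i+1] - s[i]) * (s[i+1] - s[i] - a[i])
                            for i in range(3)]
    for sol in solve(polys, s, dict=True):
        print(sol)  # includes {s0: 0, s1: 3, s2: 1, s3: 0}: the subset {3, -2, -1}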

More interesting books:

https://wstein.org/books/ant/

Some fun with algebraic numbers

Polynomial Factorization

https://mattpap.github.io/masters-thesis/html/src/algorithms.html

https://en.wikipedia.org/wiki/Factorization_of_polynomials

Numerical vs Symbolic

Numeric

https://en.wikipedia.org/wiki/Root-finding_algorithm

1. Pick a random point. Then apply Newton’s method. Do this over and over. If you find N unique factors, you’ve done it. A little unsatisfying, right? No guarantee you’re going to find the roots.

2. Perturbation theory / homotopy continuation. Start with a polynomial with the same number of total roots that you know how to factor. x^N - 1 = 0 seems like an easy choice. Given f(x) + \lambda g(x) = 0, taking the differential gives (\lambda \partial g + \partial f) dx + g(x) d\lambda = 0, so \frac{dx}{d\lambda} = \frac{-g(x)}{\lambda \partial g + \partial f}. You can use this ODE to track the roots. At every step use Newton’s method to clean up the result. Problems can still arise. Do roots collapse? Do they smack into each other? Do they run off to infinity?

3. The Companion matrix. You can convert finding the roots into an eigenvalue problem. The determinant of (A - \lambda I) is a polynomial with roots at the eigenvalues. So we need to construct a matrix whose determinant equals the one we want. The companion matrix simulates multiplication by x. That is what the 1s above the diagonal do. Then the final row replaces x^N using the polynomial. In Wikipedia, this matrix is written as the transpose. https://en.wikipedia.org/wiki/Companion_matrix

4. Stetter’s Numerical Polynomial Algebra. We can form representations, basically, of the quotient ring of an ideal. We can make matrices A(j) that implement multiplication by monomials x^j in F[x]/I. Then we can take joint eigensolutions to diagonalize these multiplications. Something something Lagrange polynomials. Then if the solutions respect some kind of symmetry, it makes sense that we can use representation theory proper to possibly solve everything. This might be the technique of Galois theory mentioned in that Lie algebra book. This is not unconnected with the companion matrix technique above. These matrices are going to grow very high dimensional.

Thought. Could you use homotopy continuation to get roots, then interpolate those roots into a numerical Groebner basis? Are the Lagrange polynomials of the zero set a Groebner basis?

Symbolic

Part of what makes it seem so intimidating is that it isn’t obvious how to brute force the answer. But if we constrain ourselves to certain kinds of factors, they are brute forceable.

Given a suggested factor, we can determine whether it actually is a factor by polynomial division. If the remainder left over from polynomial division is 0, then it is a factor.

If we have an enumerable set of possibilities, even if large, then it doesn’t feel crazy to find them.

Any polynomial with rational coefficients can be converted to one with integer coefficients by multiplying out all the denominators.

Let’s assume the polynomial has factors of integer coefficients.

Rational Root Test

Kronecker’s method

Finite Fields. It is rather remarkable that there exist finite thingies that have the algebraic properties of the rationals, reals, and complex numbers. Typically when discretizing continuum stuff, you end up breaking some of the nice properties, like putting a PDE on a grid screws over rotational symmetry. Questions that may be hard to even see how to go about become easy in finite fields in principle, because finite fields are amenable to brute force search. In addition, solutions in finite fields may simply extend to larger fields, giving you good methods for calculations over integers or rationals or what have you.

SubResultant. A curious property: if two polynomials share roots/common factors, it is pretty easy to separate that out. The GCD of the polynomials.

Kind of the gold standard of root finding is getting a formula in terms of square roots. This is an old question. Galois Theory is supposedly the answer.

Fool’s Rules Regatta 2019

For four years running now we’ve done the fool’s rules regatta in Jamestown as Team Skydog in the unlimited category. http://www.jyc.org/FoolsRules/ThisYearsFoolsRules.htm

It’s a short boat race where you make a boat in 2 hours out of crap and then race ’em.

The first year we did trash bags filled with air with a platform. We were told we were very creative. We got incredibly dehydrated and sunburnt that year.

The next two years we did a catamaran with triangular plywood pontoons held together with zip ties. Here are some links from last year:

http://www.jyc.org/FoolsRules/images/2018/foolsrules20183.jpg

http://www.jyc.org/FoolsRules/images/2018/foolsrules20187.jpg

This year we did a 2×4 and 2×3 frame with plastic sheeting stapled to the inside. It worked surprisingly well! Nearly unsinkable, even when we intentionally punctured huge holes. The wind was blowing out to sea and almost everyone was drifting out to Newport. We got first place this year! We celebrated with bbq and our ice cream bucket prize.

Thoughts for next year: Make a keel / centerboard so we can tack? Maybe that is crazy talk.

Declan’s post on the same event: https://www.declanoller.com/2019/08/11/skydog-2019-winners-again/

So majestic.
SKYDOG! SKYDOG! ARF ARF ARF!
Will is the ultimate cutie. Ladies, please apply inside.
Layin’ out the frame
Sky dog ascendent!
Our reward for first place was an unholy bucket of ice cream

Annihilating My Friend Will with a Python Fluid Simulation, Like the Cur He Is

Will, SUNDER!
A color version

As part of my random walk through topics, I was playing with shaders. I switched over to Python because I didn’t feel like hacking out a linear solver.

There are a number of different methods for simulating fluids. I had seen Dan Piponi’s talk on youtube where he describes Jos Stam’s stable fluids and thought it made it all seem pretty straightforward. Absolutely PHENOMENAL talk. Check it out! We need to

  • 1. apply forces. I applied a gravitational force proportional to the total white of the image at that point
  • 2. project velocity to be divergence free. This makes it an incompressible fluid. We also may want to project the velocity to be zero on boundaries. I’ve done a sketchy job of that. This requires solving a Laplace equation. A sketch:
    • v_{orig} = v_{incomp} + \nabla w
    • \nabla \cdot v_{incomp}=0
    • \nabla ^2 w = \nabla \cdot v_{orig}. Solve for w
    • v_{incomp}=v_{orig} - \nabla w
  • 3. Advect using interpolation. Advect backwards in time. Use v(x,t+dt) \approx v(x-v(x)*dt,t) rather than v(x,t+dt) \approx v(x,t)+dv(x,t)*dt. This is intuitively related to the fact that backward Euler is more stable than forward Euler. Numpy had a very convenient function for this step (a sketch of it is just below the list) https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.map_coordinates.html#scipy.ndimage.map_coordinates
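A sketch of that advection step (my code, assuming the velocity components are sampled at the same grid points as the advected field):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def advect(field, vx, vy, dt):
        """Semi-Lagrangian advection: sample the field at x - v(x)*dt."""
        ny, nx = field.shape
        y, x = np.mgrid[0:ny, 0:nx].astype(float)
        coords = np.stack([y - dt * vy, x - dt * vx])
        return map_coordinates(field, coords, order=1, mode='nearest')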

Given those basic ideas, I was flying very much by the seat of my pants. I wasn’t really following any other codes. I made this to look cool. It is not a scientific calculation. I have no idea what the error is like. With a critical eye, I can definitely spot weird oscillatory artifacts. Maybe a small diffusion term would help?

When you solve for the corrections necessary to the velocity to make it incompressible \nabla \cdot v = 0 , add the correction ONLY to the original field. As part of the incompressible solving step, you smooth out the original velocity field some. You probably don’t want that. By adding only the correction to the original field, you maintain the details in the original.
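Here is a self-contained sketch of the projection on a periodic grid using an FFT Poisson solve (simpler than the boundary-aware, staggered version described below; it follows the identities in the list above, v_incomp = v_orig - \nabla w with \nabla^2 w = \nabla \cdot v_orig):

    import numpy as np

    def project_periodic(vx, vy):
        """Remove the divergence of (vx, vy) on a periodic grid via FFT."""
        ny, nx = vx.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx)
        ky = 2 * np.pi * np.fft.fftfreq(ny)
        KX, KY = np.meshgrid(kx, ky)
        div_hat = 1j * KX * np.fft.fft2(vx) + 1j * KY * np.fft.fft2(vy)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                 # the mean mode has no gradient anyway
        w_hat = -div_hat / k2          # solve nabla^2 w = div v in Fourier space
        vx = vx - np.real(np.fft.ifft2(1j * KX * w_hat))
        vy = vy - np.real(np.fft.ifft2(1j * KY * w_hat))
        return vx, vy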

When you discretize a domain, there are vertices, edges, and faces in your discretization. It is useful to think about upon which of these you should place your field values (velocity, pressure, electric field, etc). I take it as a rule of thumb that if you do the discretization naturally, you are more likely to get a good numerical method. For example, I discretized my velocity field in two ways. A very natural way is on the edges of the graph. This is because velocity is really a stand-in for material flux. The x component of the velocity belongs on the x-oriented edges of the graph and the y component of velocity on the y-oriented edges. If you count edges, this means that they actually live in arrays with different dimensions. There is one less edge than there are vertices along each direction.

This grid is 6×4 of vertices, but the vx edges are 6×3, and the vy edges are 5×4. The boxes are a grid 5×3.

For each box, we want to constrain that the sum of velocities coming out = 0. This is the discretization of the \nabla \cdot v = 0 constraint. I’m basing this on my vague recollections of discrete differential geometry and some other things I’ve see. That fields sometimes live on the edges of the discretization is very important for gauge fields, if that means anything to you. I did not try it another way, so maybe it is an unnecessary complication.

Since I needed velocities at the vertices of the grid, I do have a simple interpolation step from the vertices to the edges. So I have velocities being computed at both places. The one that is maintained between iterations lives on the vertices.

At small resolutions the code runs at real time. For the videos I made, it is probably running ~10x slower than real time. I guarantee you can make it better. Good enough for me at the moment. An FFT based Laplace solver would be fast. Could also go into GPU land? Multigrid? Me dunno.

I tried using cvxpy for the incompressibility solve, which gives a pleasant interface and great power of adding constraints, but wasn’t getting good results. I may have had a bug.

This is some code just to perform the velocity projection step and plot the field. I performed the projection to 0 on the boundaries using an alternating projection method (as discussed in Piponi’s talk), which is very simple and flexible but inefficient. It probably is a lot more appropriate when you have strange changing boundaries. I could have built the K matrix system to do that too.

The input velocity field is spiraling outwards (not divergence free, there is a fluid source in the center)
We project out the divergence free part of that velocity field, and project it such that the velocity does not point out at the boundary. Lookin good.

Presolving the laplacian matrix vastly sped up each iteration. Makes sense.

Why in god’s name does sparse.kronsum have the argument ordering it does? I had a LOT of trouble with x vs y ordering. np.meshgrid wasn’t working like I thought it should. Images might have a weird convention? What a nightmare. I think it’s ok now? Looks good enough anyway.

And here is the code to make the video. I converted the image sequence to an mp4 using ffmpeg.
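The invocation was something along these lines (a sketch from memory; the frame name pattern is whatever you wrote out):

    ffmpeg -framerate 30 -i frame_%04d.png -pix_fmt yuv420p out.mp4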

Code to produce the velocity graphs above.

I should give a particle-in-cell code a try.

Links

Proving some Inductive Facts about Lists using Z3 python

Z3 is an SMT solver which has a good rep. Here are some excellent tutorials.

https://rise4fun.com/z3/tutorial

https://theory.stanford.edu/~nikolaj/programmingz3.html

http://homepage.cs.uiowa.edu/~ajreynol/VTSA2017/

SMT stands for satisfiability modulo theories. The exact nature and power of these kinds of solvers has been and still is hazy to me. I have known for a long time that they can slam sudoku or picross or other puzzles, but what about more infinite or logic-looking things? I think I may always be hazy, as one can learn and discover more and more encoding tricks to get problems and proofs that you previously thought weren’t solvable into the system. It’s very similar to learning how to encode problems for linear programming solvers in that respect.

SMT solvers combine a general smart search procedure with a ton of specialized solvers for particular domains, like linear algebra, polynomials, linear inequalities and more.

The search procedure goes by the name DPLL(T). It is an adjustment of the procedure of SAT solvers, which are very powerful and fast. SAT solvers find an assignment of boolean variables in order to make a complicated boolean expression true, or to eventually find that it can never be made true. SAT solvers work on the principle of guessing and deducing. If a OR b needs to be true and we already know a is false, we can deduce b must be true. When the deduction process stalls, we just guess and then backtrack if it doesn’t work out. This is the same process you use manually in solving Sudoku.

The modern era of SAT solvers came when a couple of new tricks vastly extended their power, in particular Conflict Driven Clause Learning (CDCL): when the solver finds itself in a dead end, it figures out which choices put it there and adds a clause to its formula so that it will never make those choices again.

https://sahandsaba.com/understanding-sat-by-implementing-a-simple-sat-solver-in-python.html

SMT works by having the boolean variables of the SAT solver carry inner structure, like the boolean p actually representing the fact x + y < 5. During the search process, the solver can take the pile of booleans that have been set to true and ask a theory solver (maybe a linear programming solver in this case) whether those facts can all be made true simultaneously in the underlying theory. This is extra spice on top of the SAT search.

Something that wasn’t apparent to me at first is how important the theory of uninterpreted functions is to SMT solvers. It really is their bread and butter. Reasoning in this theory is closely related to unification, which is the fairly intuitive process of finding assignments to variables that make a set of equations true. If I ask how to make fred(x,4) = fred(7,y), obviously the answer is x=7, y=4. That is unification. Unification is a completely syntax-driven way to deduce facts. It starts to give you something quite similar to first-order logic.

https://eli.thegreenplace.net/2018/unification/

https://cs.mtsu.edu/~rbutler/courses/sdd/automated_deduction/unification.pdf
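
Here is a tiny z3py illustration of this flavor of reasoning over uninterpreted functions (my own example; under the hood z3 decides this theory with congruence closure, a close relative of unification):

    from z3 import *

    fred = Function('fred', IntSort(), IntSort(), IntSort())
    x, y = Ints('x y')
    # equal arguments force equal results, purely syntactically
    prove(Implies(And(x == 7, y == 4), fred(x, 4) == fred(7, y)))  # prints "proved"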

I was also under the impression that quantifiers \forall, \exists were available but heavily frowned upon. I don’t think this is quite true; they are now rather part of the entire point of an SMT solver, although yes, they are rather flaky. There are a number of methods for dealing with quantifiers, but a common one is to look for a syntactic pattern among the subformulas and, every time a ground term matches the pattern, instantiate the quantified variables accordingly and add the resulting instance of the theorem. This is called E-matching.
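
For example, in z3py a quantified axiom gets instantiated when its pattern matches a ground term (a toy example of mine):

    from z3 import *

    f = Function('f', IntSort(), IntSort())
    x = Int('x')
    s = Solver()
    s.add(ForAll([x], f(x) == x + 1))  # E-matching instantiates this at x = 3
    s.add(f(3) != 4)
    print(s.check())                   # unsat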

Here are a couple tutorials on proving inductive facts in SMT solvers. You kind of have to hold their hand a bit.

http://lara.epfl.ch/~reynolds/vmcai15.pdf

http://homepage.divms.uiowa.edu/~ajreynol/pres-ind15.pdf

SMT solver queries usually have the flavor of finding something, in this case a counterexample. The idea is to ask for the first counterexample to the induction: assuming the proposition P was true for n-1, find an n such that P(n) is not true. If the solver can’t find one, the induction step has gone through.
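
Here is a small arithmetic version of that trick (mine, not the post’s): P(n) says 2·sum(n) = n(n+1). Assert the defining recurrence at n, the hypothesis P(n-1), and the negation of P(n); unsat means there is no first counterexample:

    from z3 import *

    sum_ = Function('sum', IntSort(), IntSort())
    n = Int('n')
    s = Solver()
    s.add(n >= 1)
    s.add(sum_(n) == sum_(n - 1) + n)      # recursive definition, unrolled at n
    s.add(2 * sum_(n - 1) == (n - 1) * n)  # induction hypothesis P(n-1)
    s.add(2 * sum_(n) != n * (n + 1))      # negated goal P(n)
    print(s.check())                       # unsat: the induction step holds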

And here is some code where I believe I’m showing that some simple list functions like reverse, append, and length have some simple properties like \forall t. rev (rev(t)) == t .
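
The original embedded code isn’t reproduced here, but the encoding looks roughly like the following sketch of mine. I show the easier induction step for length and append; the rev(rev(t)) == t proof needs an auxiliary lemma about rev and append and more hand-holding:

    from z3 import *

    List = Datatype('List')
    List.declare('nil')
    List.declare('cons', ('head', IntSort()), ('tail', List))
    List = List.create()

    app = Function('app', List, List, List)
    length = Function('length', List, IntSort())
    x, y = Consts('x y', List)
    h = Int('h')

    defs = [ForAll([y], app(List.nil, y) == y),
            ForAll([h, x, y], app(List.cons(h, x), y) == List.cons(h, app(x, y))),
            length(List.nil) == 0,
            ForAll([h, x], length(List.cons(h, x)) == 1 + length(x))]

    t, u = Consts('t u', List)
    h0 = Int('h0')
    s = Solver()
    s.add(defs)
    s.add(length(app(t, u)) == length(t) + length(u))   # induction hypothesis on t
    s.add(length(app(List.cons(h0, t), u)) !=
          length(List.cons(h0, t)) + length(u))         # negated step goal
    print(s.check())                                    # unsat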

A Short Skinny on Relations & the Algebra of Programming

I’ve been reading about the Algebra of Programming lately and lovin’ it. See J.N. Oliveira’s draft text in particular and the links in the references. I’ve started exploring the stuff from this post and more over here: https://github.com/philzook58/rel

Why and What?

Relations can expand the power of functional programming for the purpose of specification.

The point of a specification is to be able to write down, in a very compact and clear way, your intent for a program, more clearly and compactly than a full implementation could be written. It therefore makes sense to add to your specification language constructs that are not necessarily executable or efficient, for the sake of compactness and clarity. When one needs executability or efficiency, one writes an implementation whose behavior can be connected to the spec via a formal or informal process.

Functional programming, with its focus on the abstraction of the mathematical function, is a good trade-off between executability, efficiency, and expressibility. It lies in a reasonable location between the ideas amenable to reasoning by a human mind and the command-driven requirements of the machines.

Functions are a specialization of relations. Relations extend the mathematical notion of functions with constructs like nondeterministic choice, failure and converse. These constructs are not always obviously executable or efficient. However, they greatly extend the abilities of reasoning and the clarity of expression of a specification.

The point-free style of reasoning about functions extends to a point-free style reasoning about relations, which is known as relation algebra. There are rich analogies with databases, category theory, linear algebra, and other topics.

Plus, I think it is very neato for some reason. If anyone ever thinks something is really neato, isn’t it worth giving it a listen?

A Simple Representation of Relations in Haskell

The simplest description of relations is as a set of tuples. So first let’s talk a bit about the options for sets in Haskell.

Sets in Haskell

There are a couple different reasonable ways to represent sets in Haskell.

  • [a] or Vector a
  • a -> Bool
  • Set a — a tree based Set from the containers package.

These have different performance characteristics and different expressive power. The list [a] is very simple and has pleasant specialized syntax available. The indicator function a -> Bool gives you no ability to produce values of type a, but can easily denote very sophisticated spaces. Set a is a good general-purpose data structure with fast lookup. You might also choose to mix and match combinations of these. Interconversion is often possible, but expensive. This is not a complete list of possibilities for sets; for example, you may want a representation with stronger support for search.

Relations in Haskell

We can directly use the definition of relations as a set of tuples with the above
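
In code, that is just the type the post settles on below:

    type Rel a b = [(a, b)]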

But we also have the option to “curry” our relation representations, sort of mixing and matching properties of these representations.
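
For example (the names here are my own; the point is the shapes):

    import Data.Set (Set)

    type SetRel a b  = Set (a, b)       -- set of pairs
    type FunRel a b  = a -> [b]         -- nondeterministic-function flavor
    type PredRel a b = a -> b -> Bool   -- indicator-function flavor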

You might also choose to package up multiples of these representations, choosing the most appropriate as the situation requires, see for example the relation package, whose type holds both Map a (Set b) and Map b (Set a).

Despite fiendishly poor performance, for simplicity and list comprehension syntax we are going to be using type Rel a b = [(a,b)] for the remainder of the post.

I’m also taking the restriction that we’re working in bounded enumerable spaces for ease. I assume such a requirement can be lifted for many purposes, but finite spaces like these are especially well tamed. The following typeclass and definition are very useful in this case.
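
Something like the following (a sketch using ConstraintKinds):

    {-# LANGUAGE ConstraintKinds #-}

    type BEnum a = (Enum a, Bounded a)

    enumAll :: BEnum a => [a]
    enumAll = [minBound .. maxBound]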

Functions and Relations

Functions can be thought of as relations with the special property that for each left part of the tuple there is exactly one right side, and every possible left side appears. The relation corresponding to a function f looks like F = \{(x,y) | x \in X, y \in Y, y = f (x)\}.

There is a natural and slightly clever lifting of function composition to relations. We now check whether there exists a value that is in the right side of one and the left side of the other.
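
A sketch of both in terms of the list representation (my reconstruction, building on Rel and enumAll above):

    -- the relation corresponding to a function
    tabulate :: BEnum a => (a -> b) -> Rel a b
    tabulate f = [(x, f x) | x <- enumAll]

    -- relational composition: x relates to z when some shared y exists
    rcompose :: Eq b => Rel b c -> Rel a b -> Rel a c
    rcompose yz xy = [(x, z) | (x, y) <- xy, (y', z) <- yz, y == y']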

Because of these two operations (and the associativity and identity laws they satisfy), FinRel is a category. We do however need the Eq b constraint to make Rel an instance of the category typeclass, so it does not quite fit the definition of Category in base. It is a constrained category.

We can lift the common arrow/categorical combinators up to relations for example.
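
For instance (a sketch; the signatures are my guesses at the obvious liftings):

    rid :: BEnum a => Rel a a
    rid = [(x, x) | x <- enumAll]

    -- fan-out: relate x to a pair of results from two relations
    rfan :: Eq a => Rel a b -> Rel a c -> Rel a (b, c)
    rfan f g = [(x, (y, z)) | (x, y) <- f, (x', z) <- g, x == x']

    -- parallel composition on pairs
    rpar :: Rel a b -> Rel c d -> Rel (a, c) (b, d)
    rpar f g = [((x, z), (y, w)) | (x, y) <- f, (z, w) <- g]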

With these combinators, you have access to many functions on basic non-recursive algebraic data types. By combining them in a point free style, you can build some other useful combinators.

An Aside: Relations, Linear Algebra, Databases

The composition operation described above is not so unfamiliar as it may first appear.

Relation algebra has a great similarity to linear algebra. This connection can be made clearer by considering sparsity patterns of matrices and tensors. Sparsity patterns are a useful abstraction of linear algebraic operations. Instead of matrices of numbers, consider matrices whose entries are “zero” and “possibly nonzero”, or, if you prefer, matrices of boolean values answering those same questions.

The ordinary row times column matrix multiplication corresponds to relation composition. Replace * with AND and + with OR. If any of the numbers is zero, then multiplying them will result in zero. In summing two numbers, if either is possibly nonzero, then the result is possibly nonzero.

Another interesting way of looking at it is that we are replacing the summation binding form \sum_i with the logical quantifier \exists_i. Both introduce a scoped “dummy variable” i and have a lot of syntactic similarity. Other related forms include \lambda i, \forall i, \int di, \max_i .

There is also an analog of the point free relation algebra in linear algebra. Linear algebra has the most widely used point free notation in the world, matrix notation. Consider the expressions Ax=b and X = ABC as compared to \sum_j A_{ij} x_j = b_i and X_{il} = \sum_{jk} A_{ij} B_{jk} C_{kl} . Matrix notation is SO much better for certain calculations. Other pieces of the matrix notation include transpose, inverse, Kronecker product, the Khatri-Rao product, and Hadamard product. Their properties are more clear in the index free form in my opinion. I believe even massive tensor expressions can be written index free using these operators. There are also analogies to be drawn between the graphical notations in these different fields.

Databases can be thought of very similarly to sparse matrices. In principle, you could enumerate all the possible values for a column of a database. So you could think of a database as a giant matrix with a 1 if the item is in the database and 0 if not. Databases are very, very sparse from this perspective, and you would never store them this way. The database join operation is a relative of relational composition; however, join usually matches on column names, whereas our composition is position based.

Query optimization in databases has interesting analogs in sparse linear algebra. For example, the Taco compiler http://tensor-compiler.org/ is doing something very akin to a query optimizer.

Inverting Relations

Unlike functions, relations are always “invertible”. We call this the converse of a relation. When a function is invertible, its inverse corresponds to the converse of its relation. In terms of the tuples underlying our representation, converse just swaps them. Relations also possess operations trans and untrans that may be thought of as a kind of currying, or as a partial inverse on a single parameter.
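
In the list representation these are one-liners (the trans/untrans signatures are my guess at the intent):

    converse :: Rel a b -> Rel b a
    converse r = [(y, x) | (x, y) <- r]

    trans :: Rel (a, b) c -> Rel a (b, c)
    trans r = [(a, (b, c)) | ((a, b), c) <- r]

    untrans :: Rel a (b, c) -> Rel (a, b) c
    untrans r = [((a, b), c) | (a, (b, c)) <- r]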

Orderings can also be lifted to relations: (\leq) = \{(a,b) | a \leq b\}. Composition of these relations respects composition of the underlying orderings.

Nondeterministic choice is sometimes represented in Haskell using list-returning functions a -> [b]. You may recall this from the context of the List monad. In fact, in this case we have an isomorphism, as evidenced by tabulateSearch and searchRel.

Similarly, partial functions can be reflected into relations.

A useful trick is to lift sets/subsets to relations as a diagonal relation \{(a,a) | a \in S \}. Projection onto the set can be achieved by composing with this relation. You get the identity relation back when S is the entire set.
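
Concretely:

    -- lift a set to the diagonal ("coreflexive") relation on it
    diagRel :: [a] -> Rel a a
    diagRel s = [(x, x) | x <- s]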

Comparing Relations

We can compare sets by asking if one is a subset of the other A\subseteq B . Relations can also be compared by this operation, which we call relational inclusion.

A subservient notion to this is relational equality: two relations are equal exactly when each includes the other.
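
Both are easy to write against the list representation (my sketch):

    rSub :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
    rSub xy xy' = all (`elem` xy') xy          -- relational inclusion

    rEq :: (Eq a, Eq b) => Rel a b -> Rel a b -> Bool
    rEq xy xy' = rSub xy xy' && rSub xy' xy    -- mutual inclusion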

Relational algebra is chock-full of inequality-style reasoning, which is richer and slightly more complicated than equality-style reasoning. This is one of the benefits of moving from functional descriptions to a relational description.

Relations also form a lattice with respect to these comparisons. What the hell are lattices? In the context of finite relations, lattices may be overpowered mathematical machinery, but they really are useful down the line. They give you binary operators that play nicely with some kind of ordering, in our case relational inclusion. These two operations are the meet and the join, which find the greatest lower bound and least upper bound of the operands respectively. For our relations, these correspond to the more familiar notions of set intersection and union. The intersection of two sets is the biggest set contained in both of them; the union is the smallest set of which both sets are subsets.

Using meet/join vs intersection/union becomes more interesting when the domain is fancier than relations over finite domains. Issues of infinity can make this interesting, as can representations that can’t explicitly express arbitrary unions or intersections and instead have to approximate them. My favorite example is polyhedra. Polyhedra are not closed under unions, so in this case the join and union do not coincide; you need to take the convex hull of the union instead, which is the best polyhedral approximation. Concretely, a polyhedron can be represented as the list of vertices that generate it. There is no way to express a union in this representation; concatenating the lists represents taking the convex hull of the union.

An additional property that a lattice may possess is a largest and a smallest element, called top (\top ) and bottom (\bot). Our finite-domain relations do have these.
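
For finite-domain list relations, meet, join, top, and bottom look like this (again my sketch):

    import Data.List (intersect, nub)

    meetR, joinR :: (Eq a, Eq b) => Rel a b -> Rel a b -> Rel a b
    meetR = intersect               -- greatest lower bound: intersection
    joinR xy xy' = nub (xy ++ xy')  -- least upper bound: union

    topR :: (BEnum a, BEnum b) => Rel a b
    topR = [(x, y) | x <- enumAll, y <- enumAll]

    botR :: Rel a b
    botR = []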

Relational Division

And now finally we get to one of the most interesting, powerful, and confusing operations: relational division. Relational division is a kind of pseudoinverse to relational composition. In linear algebra, the pseudoinverse is a matrix that does the best job it can to invert another matrix in a least squares sense. If the matrix is actually invertible, it equals the inverse. Relational division does the best job it can to invert a relational composition. Instead of taking least squares as a criterion, it ensures that the result doesn’t over-approximate. If you have the inequality X \cdot Y \subseteq Z and you want to solve for X, relational division is the thing that does that. The right division Q = Z/Y is the largest relation such that Q \cdot Y \subseteq Z.

A helpful example is the similar operation of division in database tables.

And here is an implementation that I think is correct. I’ve goofed it up a couple times, it is a rather confusing construct.
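
Since the original snippet isn’t reproduced here, this is my own attempt, directly transcribing “the largest Q with Q \cdot Y \subseteq Z”:

    -- (a, b) is in (z `rdiv` y) iff every c that b maps to under y
    -- is also something a maps to under z
    rdiv :: (BEnum a, BEnum b, Eq a, Eq b, Eq c) => Rel a c -> Rel b c -> Rel a b
    rdiv z y = [ (a, b) | a <- enumAll, b <- enumAll
               , and [ (a, c) `elem` z | (b', c) <- y, b' == b ] ]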

There also exists the very similar operation of left division (ldiv), which solves for the other factor of the composition.

Relational division encapsulates many notions of searching or optimizing. I invite you to read more about it in J.N. Oliveira’s text or the Bird & de Moor text.

Properties and QuickCheck

Oh. Mah. Glob. You guys. So many properties. (Artwork courtesy of David)

Relation algebra is so chock-full of properties. This is a perfect opportunity for some QuickCheck, a randomized property-testing framework. There are so many more to test; I need to dig through and collect up all the identities.
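
For example (my sketch; QuickCheck can generate list relations over small types for free):

    import Test.QuickCheck

    prop_assoc :: Rel Bool Bool -> Rel Bool Bool -> Rel Bool Bool -> Bool
    prop_assoc x y z = rEq (rcompose z (rcompose y x))
                           (rcompose (rcompose z y) x)

    -- converse is contravariant over composition
    prop_conv :: Rel Bool Ordering -> Rel Ordering Bool -> Bool
    prop_conv f g = rEq (converse (rcompose g f))
                        (rcompose (converse f) (converse g))

    -- ghci> quickCheck prop_assoc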

Bits and Bobbles

  • Relations over continuous spaces. Vector subspaces (Linear Relations), Polyhedra (Linear inequality relations).
  • Non Bool-valued Relations. Replace \exists_x with \max_x. The weighted edge list of a graph is a natural relation. By using composition we can ask about paths. We still have a comparison operator \subseteq, which now respects the ordering of weights.
  • Galois connections are cool.
  • Relations combined with recursion schemes. Recursion schemes are the point free way of describing recursion.
  • Moving into infinite spaces. How do we cope?
  • Faster search. Some relations are best specified by functions, Maps, others, mixes and matching.
  • If you “singletonize” relations a la the Agda project https://github.com/scmu/aopa, you get very interesting interconnections with profunctors, which people say are a categorical generalization of relations.
  • Point-free DSLs are interesting and pleasant. Many worries about alpha renaming are gone, at the expense of point-free pain. A DSL like this may be necessary to choose good relational query plans

Edit: A follow up post on that type level angle here http://www.philipzucker.com/relational-algebra-with-fancy-types/

References

Edit : A math exchange question about a -> [b] relational type. https://math.stackexchange.com/questions/3360026/can-division-be-expressed-intensionally-in-relation-algebra/3361351#3361351

Edit: An interesting comment and related library from /u/stevana

Some Notes on Drake: A Robotic Control ToolBox

Dumping these notes out there as is. Some material is out of date. Hope they’re useful.

What is Drake?

From the Drake Website (https://drake.mit.edu/):

“Drake (“dragon” in Middle English) is a C++ toolbox started by the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab (CSAIL). The development team has now grown significantly, with core development led by the Toyota Research Institute. It is a collection of tools for analyzing the dynamics of our robots and building control systems for them, with a heavy emphasis on optimization-based design/analysis.

While there are an increasing number of simulation tools available for robotics, most of them function like a black box: commands go in, sensors come out. Drake aims to simulate even very complex dynamics of robots (e.g. including friction, contact, aerodynamics, …), but always with an emphasis on exposing the structure in the governing equations (sparsity, analytical gradients, polynomial structure, uncertainty quantification, …) and making this information available for advanced planning, control, and analysis algorithms. Drake provides interfaces to high-level languages (MATLAB, Python, …) to enable rapid-prototyping of new algorithms, and also aims to provide solid open-source implementations for many state-of-the-art algorithms. Finally, we hope Drake provides many compelling examples that can help people get started and provide much needed benchmarks. We are excited to accept user contributions to improve the coverage.”

Drake is a powerful toolkit for the control of dynamical systems, and I hope I lower some of the barrier to entry with this post. Be forewarned, Drake changes quickly with time, and some of the following may be out of date (especially the rigid body trees) or ill advised. Use your judgement.

Getting Drake

Drake may be installed from binaries or source. Both may be gotten here:

https://drake.mit.edu/installation.html

Using Bazel

Drake uses Bazel as a build tool. Bazel is an open-source build and test tool similar to Make, Maven, and Gradle.

There are three commands that you need to know:

  • bazel build
  • bazel run
  • bazel query

Query is very useful for investigating available binaries within the examples folder and elsewhere.

The notation // refers to the build’s main directory, which corresponds to the drake folder. The notation ... refers to everything under a directory.

Examples:

  • bazel build //... will build everything in the project.
  • To query all the binaries available for tools run bazel query //tools/...

Using Pydrake

Drake can be used natively in C++, or via its MATLAB, Python, or Julia bindings. This manual will focus on using Drake from Python. By default pydrake will not be on your path. You can put pydrake on the path after building by running the following line, or by adding it to your .bashrc:
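
The exact line depends on the install method and Drake version; for a binary install of that era it was something like (path illustrative, check the installation docs for your version):

    export PYTHONPATH=/opt/drake/lib/python2.7/site-packages:${PYTHONPATH}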

The following line will import everything in Drake into the python namespace.
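
    from pydrake.all import *   # pydrake.all re-exports most of the API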

Drake is only compatible with Python 2.7. If your default system install is Python 3, you may need to explicitly run the command python2; a telltale error message is thrown if you inadvertently use Python 3.

Pydrake itself has generated documentation available here:

http://drake.mit.edu/pydrake/index.html

This is an extremely useful resource.

A very useful tool for exploring and confirming the Drake functionality and syntax is the python REPL. From the python REPL or within your code, it is useful to inspect an object using either help(obj) or print(dir(obj)) which will print a list of all the properties available on your object.

Drake resources

Doxygen

Besides the source code itself, the most accurate and up to date information is available in searchable form on the Doxygen page. This is a useful reference, but can be overwhelming.

https://drake.mit.edu/doxygen_cxx/index.html

Underactuated Robotics textbook and examples

Drake has a textbook being built alongside it by Russ Tedrake.

http://underactuated.mit.edu/underactuated.html

In particular the python source used to generate some of the examples in the book is worth examining.

https://github.com/RussTedrake/underactuated/tree/master/src

The full course is available online both on Edx and more recent versions on Youtube.

http://underactuated.csail.mit.edu/Spring2018/

Examples directory.

A useful source of use cases for drake can be found in the examples directory within the Drake project. As of October 2018 the contents include:

  • Cartpole – The cartpole is a classic control system consisting of a pendulum attached to a linearly actuated cart. In this directory you can find examples of simulating and controlling it.
  • Pendulum
  • Double Pendulum
  • Acrobot – The Acrobot is a double pendulum system actuated only at the elbow
  • Kinova Jaco Arm – A commercially available robotic arm
  • Kuka Iiwa Arm – A commercially available robotic arm
  • Particles
  • Bouncing Ball
  • Contact Model – Contains bowling pins, a gripper, and sliding bricks demonstrating Drake’s ability to simulate contact dynamics
  • Rimless Wheel – A very simple model of a walking robot
  • Compass Gait – A slightly less simple model of a walking robot
  • Atlas – Examples concerning the Atlas humanoid robot
  • zmp
  • quadcopter
  • and others

Additional usage examples for pydrake can be found in the drake/bindings/pydrake/test folder.

Periscope Tutorials

There is a set of Jupyter notebook based tutorials for some basic Drake functionality in python.

https://github.com/gizatt/drake_periscope_tutorial

Drake Concepts

Simulation

For dynamic simulation, Drake exposes a Lego block interface for building complex systems out of smaller pieces, a paradigm similar to Simulink and other modeling software.
Objects possess input and output ports. One wires up input ports to output ports to build composite systems.

To build a simple forward simulation, construct a builder object, then add all subsystems to the builder object and explicitly connect input and output ports together. One possible cause of crashes is leaving ports unconnected.

Once the entire system has been built, a Simulator object can be constructed from it. You may select an integration scheme and set initial conditions by getting a context from the Simulator object. The context holds state information.

The following excerpt from https://github.com/RussTedrake/underactuated/tree/master/src/simple shows how to define a simple system and simulate it.
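
The original embed is missing, so below is a paraphrase of that example from memory. The pydrake API has been renamed several times since (the leading-underscore methods and StepTo are 2018-era names), so treat this strictly as a sketch:

    from pydrake.systems.framework import BasicVector, DiagramBuilder, LeafSystem
    from pydrake.systems.analysis import Simulator
    from pydrake.systems.primitives import SignalLogger

    # xdot = -x + x^3, a simple continuous-time system
    class SimpleContinuousTimeSystem(LeafSystem):
        def __init__(self):
            LeafSystem.__init__(self)
            self._DeclareContinuousState(1)
            self._DeclareVectorOutputPort(BasicVector(1), self._CopyStateOut)

        def _DoCalcTimeDerivatives(self, context, derivatives):
            x = context.get_continuous_state_vector().GetAtIndex(0)
            derivatives.get_mutable_vector().SetAtIndex(0, -x + x ** 3)

        def _CopyStateOut(self, context, output):
            output.SetFromVector(context.get_continuous_state_vector().CopyToVector())

    builder = DiagramBuilder()
    system = builder.AddSystem(SimpleContinuousTimeSystem())
    logger = builder.AddSystem(SignalLogger(1))
    builder.Connect(system.get_output_port(0), logger.get_input_port(0))
    diagram = builder.Build()

    simulator = Simulator(diagram)
    context = simulator.get_mutable_context()
    context.get_mutable_continuous_state_vector().SetFromVector([0.9])
    simulator.StepTo(10)  # run for 10 seconds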



Refs:

http://underactuated.mit.edu/underactuated.html?chapter=systems

Rigid Body Trees

Edit: I think the Rigid Body Tree interface is deprecated. Use the Multibody Tree interface instead.

There are two complementary perspectives on the degrees of freedom of a robot: intrinsic coordinates and extrinsic coordinates. The intrinsic coordinates have one variable per degree of freedom of the robot; a common example is a set of joint angles. The dynamics are simply expressed in the intrinsic coordinates and can be derived using Lagrangian mechanics. The extrinsic coordinates specify the spatial locations and orientations of frames attached to the robot. These spatial coordinates may be constrained in a relationship by a rigid mounting or gearing, so there may be more extrinsic frames available than intrinsic coordinates. Extrinsic coordinates are particularly pertinent for discussions of geometry, contact, and external forces.

Drake uses the URDF (Unified Robot Description Format). This is a common robot format originating in the ROS community, for which you can find models of many commercial robots online. It is an XML-based format with visualization, inertial, and collision tags.

This is an example URDF for a pendulum cart system.


Example: Pendulum URDF
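
The original file isn’t reproduced here; for flavor, a minimal URDF has this shape (illustrative values, a bare pendulum rather than the full pendulum-cart):

    <?xml version="1.0"?>
    <robot name="pendulum">
      <link name="base"/>
      <link name="arm">
        <inertial>
          <origin xyz="0 0 -0.5"/>
          <mass value="1.0"/>
          <inertia ixx="0.08" iyy="0.08" izz="0.001" ixy="0" ixz="0" iyz="0"/>
        </inertial>
        <visual>
          <origin xyz="0 0 -0.5"/>
          <geometry><cylinder radius="0.01" length="1.0"/></geometry>
        </visual>
      </link>
      <joint name="shoulder" type="continuous">
        <parent link="base"/>
        <child link="arm"/>
        <axis xyz="0 1 0"/>
      </joint>
    </robot>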