Using Z3 to solve a simple logic puzzle

I was checking out Smullyan's The Lady or the Tiger?. It starts with some very simple logic puzzles. Let's make Z3 do one for us!

There are 100 politicians

at least one is honest

for any two at least one is crooked

How many honest politicians are there?

And here is one way of encoding this into z3:

from z3 import *

politicians = [Bool('x%s' % i) for i in range(100)]
# True is an honest politician, False is a crooked one
atleastonehonest = Or(politicians)

# for any two distinct politicians, at least one is crooked
forall2atleastonecrooked = And([Or(Not(i), Not(j))
                                for i in politicians
                                for j in politicians if i is not j])

s = Solver()
s.add(atleastonehonest, forall2atleastonecrooked)
s.check()
solution = s.model()
solbool = [solution[pol] for pol in politicians]
print("honest men:", len([x for x in solbool if is_true(x)]))

Z3 is kind of a pain in that it won't immediately cast to bools when I want them. A mistake I made at first was leaving out the "i is not j" guard in the list comprehension. Also, an unfortunate feature is that this doesn't make it clear that the answer is unique: permutations of the variables give many different models. But if we just asked for the number of honest politicians, it's not clear to me how to encode the forall2 constraint. Very curiously, z3 picks 'x18' to be the one honest man. I wonder why?
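One way to convince yourself the answer really is unique (up to which politician is honest) is a plain-Python brute force on a smaller instance, no z3 involved:

```python
from itertools import product

# Exhaustive check of the two constraints on a smaller instance
# (10 politicians, so all 2^10 assignments fit in a loop).
def satisfies(assignment):
    at_least_one_honest = any(assignment)
    # for any two distinct politicians, at least one is crooked
    no_two_honest = all(not (assignment[i] and assignment[j])
                        for i in range(len(assignment))
                        for j in range(len(assignment)) if i != j)
    return at_least_one_honest and no_two_honest

models = [a for a in product([False, True], repeat=10) if satisfies(a)]
print(len(models))               # 10 models: one per choice of honest man
print({sum(a) for a in models})  # {1}: every model has exactly one honest
```

So the models differ only in which variable is the honest one; the count is always 1.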


When I first heard about Prolog, and that its "functions" are reversible (they are really more like relations), my first thought on how I would build such a thing was wrong. Let's say you wanted plus(x,y,z). I thought maybe you'd write three functions, one for each of the possible outputs you'd like: x+y -> z, z-x -> y, and z-y -> x. Then it's not so hard to imagine wiring up the graph of connections between relations and figuring out an ordering of computation that gives you the query you want. You could make such a thing non-deterministic by returning a list of possible answers rather than a single answer when there is more than one, as with square(x,y): square root has a plus and a minus answer, y -> [+sqrt(y), -sqrt(y)].
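A toy sketch of that first idea in Python (the names plus and square and the None-marks-the-unknown convention are just for illustration):

```python
import math

# plus(x, y, z) as a relation: pass None for the unknown slot and get
# back a list of consistent values for it. A list, so that multi-valued
# relations like square fit the same mold.
def plus(x=None, y=None, z=None):
    if z is None:
        return [x + y]   # x + y -> z
    if y is None:
        return [z - x]   # z - x -> y
    if x is None:
        return [z - y]   # z - y -> x
    return [z] if x + y == z else []   # everything known: just check

def square(x=None, y=None):
    if y is None:
        return [x * x]
    return [math.sqrt(y), -math.sqrt(y)]   # y -> [+sqrt(y), -sqrt(y)]

print(plus(x=2, y=3))   # [5]
print(plus(x=2, z=5))   # [3]
print(square(y=4.0))    # [2.0, -2.0]
```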

Prolog actually uses a backtracking search mechanism though and doesn’t require you to write a function 3 times (although maybe under the hood primitive addition relations might have something like this for efficiency?) which makes the thing appear all the more magical at the programmer level.

But I think that first idea is somewhat related to something called a propagator.


Ed Kmett is a god. He is also horrendously opaque unless maybe you have some serious mathematical chops. Propagators are a unifying paradigm for some very disparate-sounding things. The Introduction to Lattices and Order book by Davey and Priestley he mentions is actually a fairly pleasant read. The connection between orders and programming languages that comes to mind is domain theory: the laziness of Haskell makes a richer set of possibilities than just whether the result of a function terminates or not. If you don't look inside a data type, it may not matter that that piece involved a non-terminating computation. So it turns out you can define an ordering relation between data that have non-termination in different slots. The Haskell function fix, which is a sort of nasty (or useful) recursive higher-order function, is called that because it is related to taking the fixed point of a function from this order-theory perspective.
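The flavor of fix can be shown even in a strict language. Here is a sketch in Python using the Z combinator (the strict-language cousin of the Y combinator): recursion without the function ever naming itself.

```python
# fix f = the fixed point of f. Because Python is strict, we eta-expand
# with "lambda v: ..." (the Z combinator) so the self-application is
# only unrolled on demand.
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial written as the fixed point of a non-recursive functional
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
```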

It is very interesting that this analogy may allow you to apply the tricks of SAT solvers to speed up parallelism and the like. As an example, I think the idea is that a function requires the previous values that are its inputs to be computed before it can fire. So you have this mess of functions waiting to fire, and every time a new answer gets completed, maybe you have to search through the mess for the next one that is ready. The naive approach keeps, per intermediate result, a hash or list of all functions that might now be ready to fire, and checks each of them once per variable.

The two-watched-literal technique from SAT solvers is a nicer way of doing this. It doesn't keep every possibly-firing function in every list: each function is placed in only two watch lists. When one of those intermediate results comes in, you pop the function out of that list and put it into the list of something else it needs that hasn't come in yet. If you can't find another list to put it in, it's ready to fire. Depending on the order intermediates arrive, each function may end up being inspected far fewer times than its number of inputs, rather than once per input as in the naive approach.

This is the blog of Lindsey Kuper, who did the PhD work on LVars he mentions.
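Here is a sketch of that watch-list scheduling idea. For simplicity each waiting function watches just one of its missing inputs (clauses in a real SAT solver watch two); the Scheduler class and its method names are my own invention for illustration:

```python
from collections import defaultdict

class Scheduler:
    def __init__(self):
        self.watch = defaultdict(list)  # input name -> tasks watching it
        self.done = {}                  # completed intermediate results

    def add(self, name, inputs, fn):
        self._watch_or_fire({"name": name, "inputs": inputs, "fn": fn})

    def _watch_or_fire(self, task):
        for i in task["inputs"]:
            if i not in self.done:          # a still-missing input: watch it
                self.watch[i].append(task)
                return
        args = [self.done[i] for i in task["inputs"]]
        self.complete(task["name"], task["fn"](*args))  # all inputs ready

    def complete(self, name, value):
        self.done[name] = value
        # only the watchers of this one result get re-inspected,
        # not every waiting function in the blob
        for task in self.watch.pop(name, []):
            self._watch_or_fire(task)

s = Scheduler()
s.add("c", ["a", "b"], lambda a, b: a + b)
s.add("d", ["c", "a"], lambda c, a: c * a)
s.complete("a", 2)
s.complete("b", 3)
print(s.done["d"])   # (2 + 3) * 2 = 10
```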

Bartosz Milewski himself talking about MVars (last 2 videos in the playlist)

MVars block on write if full and on read if empty. So they are useful for synchronization.

IVars can only be written to once and block on read until full. They are promises.
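An IVar is small enough to sketch in Python with a threading.Event (a minimal sketch, not Haskell's actual implementation):

```python
import threading

# Write-once cell: reads block until the value arrives, a second write
# is an error. This is the promise-like behavior described above.
class IVar:
    def __init__(self):
        self._filled = threading.Event()
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):
        with self._lock:
            if self._filled.is_set():
                raise ValueError("IVar can only be written once")
            self._value = value
            self._filled.set()

    def get(self):
        self._filled.wait()   # block on read until full
        return self._value

iv = IVar()
threading.Thread(target=lambda: iv.put(42)).start()
print(iv.get())   # 42: the read waits for the writer thread
```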


Propagators are mentioned here


Caffe: Getting Started

To Use the Movidius neural network stick we have to use caffe.

Caffe is not super friendly or well documented in my opinion.

I’m having lots of installation problems on my mac.

Segmentation Fault 11

Trying the last thing here. Be sure to change the version numbers in the path

I eventually got it to import with

PYTHON_INCLUDE := /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/include/python2.7 \

PYTHON_LIB := /usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib

in Makefile.config

I run the command with python2 which supposedly is the homebrew python.

python2 -m pip install protobuf scikit-image


Ok. Now we can start.


It is configuration file based. You make this protobuf file with the network and then load it up.

In standard configuration, you pull the data off of a database.

Data layers have the training data

top parameter is output of a layer

bottom is input

include with phase TRAIN or TEST is a way to include different layers at different stages

Any loss layer contributes to the eventual loss. You can weight them with the loss_weight parameter. The solver optimizes the sum of all loss layers by default; there is no parameter to single one out.

Some beginner files for syntax and exploration.

Sets input blobs in python (probably not the preferred methodology, but it is fast to get up and running; should probably at least dump into an hdf5).

Performs an InnerProduct, which is a matrix product, and computes a euclidean loss.
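To spell out what those two layers compute, here is the same arithmetic in numpy, assuming the shapes (1,1,2,5) and (1,1,1,1) used in this example's prototxt (caffe's InnerProduct flattens all axes after the batch axis):

```python
import numpy as np

np.random.seed(0)
x = np.arange(10, dtype=float).reshape(1, 1, 2, 5)  # the "thedata2" blob
W = np.random.randn(1, 10) * 0.01   # gaussian weight_filler, std 0.01
b = np.zeros(1)                     # constant bias_filler, value 0

# InnerProduct with num_output=1: flatten to (batch, 10), then y = x W^T + b
ip = x.reshape(1, -1) @ W.T + b     # shape (1, 1), like the "ip" blob

target = np.zeros((1, 1))           # the "trueoutput" blob
# EuclideanLoss is 1/(2N) * sum of squared differences (N = batch size)
loss = np.sum((ip - target) ** 2) / (2 * ip.shape[0])
print(ip.shape, float(loss))
```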

Check out the mnist folder in caffe/examples. It has the clearest stuff.

name: "SimpleMult"
#input: "thedata"
#input_dim: 2
#input_dim: 3
#input_dim: 100
#input_dim: 101
layer {
  name: "inarray"
  type: "Input"
  top: "thedata2"
  input_param { shape: { dim: 1 dim: 1 dim: 2 dim: 5 } }
}
layer {
  name: "inarray2"
  type: "Input"
  top: "trueoutput"
  input_param { shape: { dim: 1 dim: 1 dim: 1 dim: 1 } }
}
#layer {
#  name: "scalelayer"
#  type: "Scale"
#  bottom: "thedata2"
#  top: "scaleddata"
#}
layer {
  name: "innerlayer"
  type: "InnerProduct"
  bottom: "thedata2"
  top: "ip"
  inner_product_param {
    num_output: 1 # the thing produces 1 output, the inner product. num_output however goes into the channel dim
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "trueoutput" # needs two bottoms to compare
  bottom: "ip"
  top: "loss"
  loss_weight: 1
}


import sys
import numpy as np
import matplotlib.pyplot as plt

import caffe

#load numpy array into caffe
net = caffe.Net('simplemult.prototxt', caffe.TEST)
print("inputs " , net.inputs)
print("blobs ", net.blobs)
print("params ", net.params)
print(net.blobs['thedata2'].height) # num, width,

myarray = np.arange(10).reshape((1,1,2,5))
output = np.arange(1).reshape((1,1,1,1))
net.blobs['thedata2'].data[...] = myarray
net.blobs['trueoutput'].data[...] = output

print("start forward")
out = net.forward()

#net.blobs['data'].data[...] = transformer.preprocess('data', im)

#Using a Solver.
#solver = caffe.get_solver(solver_prototxt_filename)
# if intending to import network for solving
# import via get_solver, then set input data using[]

#use caffe to multiply

# receive numpy array
# The train/test net protocol buffer definition
net: "examples.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: CPU





The caffe.io module has some routines for conversion to and from arrays, and for preprocessing in Transformer.



Deep Learning With Caffe In Python – Part I: Defining A Layer

You can set certain layers to not train by setting their learning rate (lr_mult) to 0

What a great talk! NP solvers in Clojure for puzzles

I was listening to this guy on the podcast Programming Throwdown and really liked the cut of his jib, so I sought out the talks he mentioned.

Links to stuff mentioned

Rolling Stones – bindings to the Java SAT solver Sat4j

Tarantella – Dancing Links

Loco – bindings to Choco, a CSP solver in Java



I guess the difference is the community and perspective: CSP tends to come from AI, while SMT comes from formalizing and proving the correctness of programs.

Simple GnuRadio Sonar Rangefinding



A simple graph where you can see a peak move as you move the microphone closer and farther from the speaker.

Uses the FFT method to compute the cross-correlation between the source signal and the microphone signal.

The source signal is a random binary signal. The repetition of 2 is helpful, I think, because of the window applied in the Fourier Transform elements. Since a binary signal at the sampling rate has a lot of high frequency components, I hypothesize that even a very sharp low pass filter might hurt. Repetition ought to push the signal somewhere in the middle of the spectrum.
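The cross-correlation step itself is a few lines of numpy (a standalone sketch of the FFT method, not the actual gnuradio graph; the delay and signal lengths are made-up example numbers):

```python
import numpy as np

def fft_xcorr(src, mic):
    """Cross-correlate via the frequency domain: multiply the mic
    spectrum by the conjugate source spectrum and transform back.
    The argmax of the result is the lag of src within mic."""
    n = len(src) + len(mic)           # pad to avoid circular wrap-around
    S = np.fft.rfft(src, n)
    M = np.fft.rfft(mic, n)
    return np.fft.irfft(M * np.conj(S), n)

rng = np.random.RandomState(0)
bits = rng.randint(0, 2, 128) * 2 - 1
src = np.repeat(bits, 2).astype(float)   # each sample repeated twice

delay = 37   # pretend the echo arrives 37 samples later
mic = np.concatenate([np.zeros(delay), src, np.zeros(20)])

lag = int(np.argmax(fft_xcorr(src, mic)))
print(lag)   # 37: the peak sits at the true delay
```

In the real setup the peak position moves as the microphone moves, which is exactly the behavior in the graph above.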


Could use averaging to increase signal level.


The plan is to master the sonic case, where the electronics is much simpler, and then move into using my LimeSDR attached to IR diodes for a DIY Lidar unit. We’ll see where we get.


An interesting package that I should have investigated first


Elm, Eikonal, and Sol LeWitt

We saw this cool Sol LeWitt wall at MASS MoCA. It did not escape our attention that it was basically an eikonal equation and that the weird junctures were caustic lines.

It was drawn with alternating colored marker lines, each appearing a cm away from the previous line. This is basically Huygens' principle.

So I hacked together a demo in elm. Elm is a Haskell-ish language for the web.




So I made a quick rip and run elm program to do this. This is the output, which I could make more dynamic.

The algorithm is to turn a list of points into their connecting lines, move each line perpendicular to itself, then recompute the new intersection points. It's somewhat reminiscent of Verlet integration: line coordinates are momentum-like and points are position-like, and we alternate between them. This is a finite difference version of the geometric Huygens' principle.
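The same alternation works neatly in homogeneous coordinates, where the line through two points is their cross product and the intersection of two lines is also a cross product. A numpy sketch of one step, on a three-point front of my own choosing:

```python
import numpy as np

def shift(line, delta):
    # move line (a, b, c) perpendicular to itself by delta;
    # sqrt(a^2 + b^2) normalizes so delta is a true distance
    a, b, c = line
    return np.array([a, b, c + delta * np.hypot(a, b)])

p1 = np.array([0.0, 0.0, 1.0])   # wavefront corners (0,0), (2,2), (4,0)
p2 = np.array([2.0, 2.0, 1.0])
p3 = np.array([4.0, 0.0, 1.0])

l12 = np.cross(p1, p2)           # connecting lines (cross products)
l23 = np.cross(p2, p3)
pt = np.cross(shift(l12, 1.0), shift(l23, 1.0))  # re-intersect after shifting
pt = pt / pt[2]                  # dehomogenize
print(pt[:2])                    # the corner (2,2) moves to (2, 2 - sqrt(2))
```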

Alternative methods which might work better include the Fast Marching Method or just using the wave equation and then plotting iso surfaces.

I also had to resample the function to get only the maximum y value for each x value in order to duplicate the LeWitt effect.


These are the helper functions with lots of junk in there

module LineHelpers exposing (..)
-- Maybe should just be doubles or nums

import Debug

fromJust : Maybe a -> a
fromJust x = case x of
    Just y -> y
    Nothing -> Debug.crash "error: fromJust Nothing"

toPointString : List (number, number) -> String
toPointString xs =
  case xs of
    (x,y) :: ys -> (toString x) ++ "," ++ (toString y) ++ " " ++ (toPointString ys)
    _ -> ""

crossProd : (number,number,number) -> (number,number,number) -> (number,number,number)
crossProd (a,b,c) (d,e,f) = (b * f - c * e, c * d - a * f, a * e - b * d)

type alias PointListH number = List (number,number,number)
type alias LineListH number = List (number,number,number)

-- gives the mapping function the list and the list shifted by 1
neighbormap f a = let a_ = fromJust (List.tail a) in List.map2 f a a_

crossNeighbor = neighbormap crossProd

norm a b = sqrt (a * a + b * b)
shiftLine delta (a,b,c)  = (a,b, (norm a b) * delta + c)

connectingLines = crossNeighbor
shiftLines delta = List.map (shiftLine delta)
intersections = crossNeighbor

-- nearly straight lines will find their intersection at infinity.
-- maybe filter out a lower threshold on c
-- keep first and last point
last xs = let l = List.length xs in fromJust (List.head (List.drop (l - 1) xs))

timestep : Float -> List (Float,Float, Float) -> List (Float,Float, Float)
timestep delta points = let
        firstpoint = fromJust (List.head points)
        lastpoint = last points
        connectlines = connectingLines points
        newlines = shiftLines delta connectlines
        newpoints = intersections newlines
        filterednewpoints = List.filter (\(a,b,c) -> (abs c) > 0.01) newpoints
        normpoints = List.map normalize filterednewpoints
        result = firstpoint :: (normpoints ++ [lastpoint])
        resample = List.map (maxfunc (List.map dehomogenize result)) initx
        --result2 = removeoutoforder (-100000, 0, 0) result
          in List.map homogenize (zip initx resample)

homogenize (a,b) = (a,b,1)
dehomogenize (a,b,c) = (a / c, b / c)
normalize = dehomogenize >> homogenize

zip = List.map2 (,)

initx = List.map (toFloat >> ((*) 4.5)) (List.range -200 400)
--inity = List.map (\x -> x * x / 50) initx
--inity = List.map (\x -> 300 + x * x / -50) initx
--inity = List.map (\x -> 25 * sin (x / 20) + 250) initx
inity = List.map (\x -> 25 * sin (x / 20) + 250 + 15 * sin (x / 13)) initx
initxy = zip initx inity
initxyh = List.map homogenize initxy

iterate n f x = if n == 0 then [] else (f x) :: iterate (n - 1) f (f x)

paths = List.map (List.map dehomogenize) (iterate 60 (timestep 5.0) initxyh)

colors = List.concat (List.repeat (List.length paths) ["red", "blue", "yellow"] )

removeoutoforder prev xs = case xs of
          y :: ys -> if prev < y then (y :: removeoutoforder y ys) else removeoutoforder prev ys
          _ -> []

neighborzip a = let a_ = fromJust (List.tail a) in zip a a_
linearinterp x ((x1,y1), (x2,y2)) = (y1 * (x2 - x) + y2 * (x - x1)) / (x2 - x1)
maxfunc : List (Float, Float) -> Float -> Float
maxfunc points x = let
        pairs = neighborzip points

        filterfunc ((x1,y1), (x2,y2)) = (xor (x < x1) (x < x2))
        candidates = List.filter filterfunc pairs
        yvals = List.map (linearinterp x) candidates in Maybe.withDefault 100 (List.maximum yvals)

And this is the svg main program.

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)
import Svg exposing (..)
import Svg.Attributes exposing (..)
import LineHelpers exposing (..)

roundRect : Html.Html msg
roundRect =
      svg [ width "1000", height "1000", viewBox "-100 0 350 350" ]
      (List.reverse ([ --rect [ x "10", y "10", width "100", height "100", rx "15", ry "15" ] [],
      -- polyline [ fill "none", stroke "red", points "20,100 40,60 70,80 100,20" ] [],
       polyline [ fill "none", stroke "black", strokeWidth "5.0", points (LineHelpers.toPointString LineHelpers.initxy) ] []] ++
         (List.map2 (\path color -> polyline [ fill "none", stroke color, strokeWidth "3.0", points (LineHelpers.toPointString path)] []) LineHelpers.paths LineHelpers.colors)))

main =
  Html.beginnerProgram { model = model, view = view, update = update }


type alias Model = Int

model : Model
model =
  0


type Msg = Increment | Decrement

update : Msg -> Model -> Model
update msg model =
  case msg of
    Increment ->
      model + 1

    Decrement ->
      model - 1


view : Model -> Html Msg
view model =
  div []
    [ button [ onClick Decrement ] [ Html.text "-" ]
    , div [] [ Html.text (toString model) ]
    , button [ onClick Increment ] [ Html.text "+" ]
    ]



notes on elm

elm is installed with npm


you import packages (including your own) with

import ThisPackage

and in the repl you check types by just writing an expression and hitting enter, rather than using :t

elm-live is a very handy thing. A live reloading server that watches for changes in your files.

elm-make myfile.elm

will generate the javascript and html

This is a good tutorial and a good snippet to get you going


Differences from Haskell:

elm isn’t lazy which is probably good.

The composition operator (.) is now <<

elm doesn’t have the multiline pattern matching of haskell; you need to use case expressions. I miss them.

typeclass facilities are not emphasized.

The list type is List a rather than [a]


A couple of interesting deep learning topics

Image Segmentation

How to find objects as subimages in an image:

Basically, use classifying networks on suggested subboxes. Then there are some tricks layered on top of that idea, like using a network to suggest possible subboxes. There exist implementations of these things in tensorflow, caffe, and others.

One shot learning

Differentiate whether two pictures are of the same object, using only one example image of it.

One-Shot Imitation learning


gstreamer is tinker toys for putting together media applications. Very reminiscent of gnuradio, although it doesn’t have a nice gui editor. You smash together a bunch of blocks.

It keeps coming up so I am looking into it more.

sudo apt install libgstreamer1.0-dev

copy example

gcc hello_gstream.c `pkg-config --cflags --libs gstreamer-1.0`

v4l2src is the webcam source of the /dev/video0 device

apt-get install gstreamer0.10-ffmpeg

gst-launch-1.0 -v \
v4l2src \
! qtdemux \
! h264parse \
! ffdec_h264 \
! ffmpegcolorspace \
! x264enc \
! rtph264pay \
! udpsink host= port=5000

helpful idiom

gst-inspect | grep "h264"

This let me view my webcam

gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=1280,height=720 ! xvimagesink

The video/x-raw is a “cap”, a capability, kind of defining the type of video flowing through. It isn’t a conversion step as I understand it. It is telling the graph which of the possible types of video available you’ve picked (your webcam can just be told to give you different stuff).

Ugh. The gstreamer elements are super useful, but where is an organized list of them? The manual just has a big dump. Most of these are probably not useful.

videoconvert sounds like a good one

There are some fun opencv and opengl ones. Like face detection or wacky effects. Handdetect is a curious one

fakesrc for testing

special sinks for os x –  osxvideosink

playbin for playing from a uri

x264enc – encodes into h264

uvch264 – gets a h264 stream right from the webcam

Using the Logitech C920 webcam with Gstreamer 1.2

Or you can just change the parameter to v4l2src to output h264. Ok this is not working on my webcam. I get

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.


gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc tune=zerolatency !  h264parse ! avdec_h264 ! xvimagesink

encodes h264 and then decodes it. May want to change that zerolatency to another setting option. Maybe?


okay, continuing ahead with the streaming. I can’t get h264 to stream. It gives ERROR: from element /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data flow error. when combined with the stock example code


gst-launch-1.0 rtpbin name=rtpbin \
v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=640,height=480 ! queue ! x264enc tune=zerolatency ! rtph264pay ! rtpbin.recv_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000                            \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 sync=false async=false    \
udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0

However, using h263 it does work. I needed to change ffenc to avenc in their example, and ffdec to avdec.


gst-launch-1.0 rtpbin name=rtpbin \
        v4l2src ! videoconvert ! avenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
                  rtpbin.send_rtp_src_0 ! udpsink port=5000                            \
                  rtpbin.send_rtcp_src_0 ! udpsink port=5001 sync=false async=false    \
                  udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0


gst-launch-1.0 -v rtpbin name=rtpbin \
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998" \
port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtph263pdepay ! avdec_h263 ! xvimagesink \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false

for receiving on my macbook

gst-launch-1.0 -v rtpbin name=rtpbin \

udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998" \

port=5000 ! rtpbin.recv_rtp_sink_0 \

rtpbin. ! rtph263pdepay ! avdec_h263 ! videoconvert ! osxvideosink \

udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \

rtpbin.send_rtcp_src_0 ! udpsink host= port=5005 sync=false async=false

you need to specify a host for the udpsinks to get the video on another computer.

I would estimate the latency at 1/4 second maybe. Much better than other things I’ve tried.

okay default latency on rtpbin is 200ms.

on receiving side set latency=0 as an option to rtpbin (not totally sure if transmitting side should have it too.)

I wonder how bad that will fail in the event of packet loss? It’s probably not a good setting for some circumstances, but for a non-critical application on a LAN it seems pretty good.

I think the latency might have crept up a bit over a minute. Not too bad though.

Update 3/12/19:

Mark has some interesting notes on using gstreamer to streaming. He reported a better latency.

Making a Podcast

I followed a couple tutorial websites

I’m using Zencastr for the moment. It is a browser based skype recording thing. Our first episode came out really out of sync between the two tracks

I’m using audacity to mix the two tracks together into a single file.

Hosting is on a blogspot. I think this was a bad choice. Should have used wordpress. Anyway, simple enough. Make a post per episode.

Using Feedburner to collect up the RSS feed from the blogspot and give it more metadata. This service feels so crusty and ancient. I wonder if it is still best practice.

Domain names on Google Domains.

Google Drive with shared links was used for hosting. This works OK, but is not good enough for itunes: it is missing byte range requests and maybe nice urls with filenames in them. Google Drive has some abilities that stopped in 2015 that would’ve been helpful. If you modify the usual shared link to look like

it works better, replacing that junk at the end with your junk

Using Amazon S3 for storage. I already had an AWS account and bucket, so no biggy. The cost should be cheap according to what I’m seeing: $0.04/GB/month for storage and a couple of cents per 1000 requests, supposedly. We’ll see. I’ve been burned and confused by AWS pricing before.
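Back-of-envelope, using only the rates quoted above (the episode size and download count are made-up illustration numbers, and bandwidth is billed separately):

```python
# $0.04/GB/month for storage, ~$0.02 per 1000 GET requests
episodes_gb = 10 * 0.100            # ten 100 MB episodes
storage = episodes_gb * 0.04        # dollars per month
requests = (2000 / 1000) * 0.02     # 2000 downloads that month
total = storage + requests
print(total)                        # 0.08: pocket change, ignoring bandwidth
```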

Submit the podcast for itunes



Nerve: Elixir OS packager for Raspberry Pi

I found out about Elixir Nerves on the Functional Geekery podcast. Seems right up my alley.

It builds a minimal linux image with erlang running. Runs on the raspberry pis and beaglebone.

Erlang and elixir do intrigue me a lot, but I haven’t gotten over the hump yet.

Summary of experience so far: Kind of painful. Docs aren’t great. Being an elixir newbie hurts. Strong suspicion that going outside the prebuilt stuff is gonna be tough.


Getting Started

mix nerves.new hello_nerves

need to export the target variable. Why is this not part of the config file? There probably is a reason.

export MIX_TARGET=rpi0

Building the firmware

mix firmware

writing to sd card

mix firmware.burn

says I need to install fwup

makes sense. Not in apt-get. Downloaded the deb file and installed


Booted up. Shows things on hdmi. Cool

went to

run the following before building to set wifi stuff

export NERVES_WIFI_SSID=my_accesspoint_name
export NERVES_WIFI_PSK=secret

mix deps.get

mix firmware

mix firmware.burn


Hmm. Can’t get it to work. I have Erlang 20 and it wants 19. Upon further inspection, this example is completely deprecated. Sigh.



mix wifi_test

add in the nerves_network dependency into mix.exs

# Specify target specific dependencies
def deps("host"), do: []
def deps(target) do
  [
    system(target),
    {:bootloader, "~> 0.1"},
    {:nerves_runtime, "~> 0.4"},
    {:nerves_network, "~> 0.3"}
  ]
end


config :nerves_network,
  regulatory_domain: "US"

at the end of the config file


Alright. I admit temporary defeat. The pi zero is an awful thing.



If you plug the usb port of the pi zero into your computer it shows up as a serial device

in my case /dev/ttyACM0

you can open that up with screen or the arduino serial monitor

baud 115200

And you have access to an elixir console.



I was able to get linklocal ethernet connection working. You have to setup nerves_network usb0 to use method :linklocal.

I used nerves_init_gadget

In addition, on Ubuntu you have to go into the network settings and change the ipv4 dropdown in the options to link-local only. Then the pi is available at nerves.local


The edimax wifi dongle does not work by default

hmm, buildroot

This is intriguing. It is a build tool for getting linux onto embedded systems