Haskell – Where am I at?

A long list of scattered thoughts. I need more examples.

Functors:

fmap generalizes map on lists. If you can lift a function so that it applies inside a container (without changing the container's shape), that container is a functor.

Examples: trees. Simple wrappers. Records.

Alternatively, Functors are category-theory functors: the combination of the type constructor (which maps the type a to Constructor a) and fmap (which maps the morphisms).
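A minimal sketch of the tree case (my own toy type, not from any library):

data Tree a = Leaf a | Branch (Tree a) (Tree a)

instance Functor Tree where
  fmap f (Leaf a)     = Leaf (f a)
  fmap f (Branch l r) = Branch (fmap f l) (fmap f r)

-- fmap (+1) (Branch (Leaf 1) (Leaf 2))  ==>  Branch (Leaf 2) (Leaf 3)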

Foldable:

foldMap is like concatMap, except the intermediate results can be any monoid, not just lists.

I am confused.
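An attempt to unconfuse myself: foldMap maps every element into some monoid and then smashes the results together; concatMap is just the case where the monoid is lists. A tiny sketch:

import Data.Monoid (Sum(..))

pairUp :: [Int] -> [Int]
pairUp = foldMap (\x -> [x, x])   -- list monoid: pairUp [1,2] ==> [1,1,2,2]

total :: [Int] -> Int
total = getSum . foldMap Sum      -- Sum monoid: total [1,2,3] ==> 6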

Traversable:

traverse is like fmap except “effectful”?

traverse is like mapM for applicatives? Like a mapA. mapM is like map except it uses bind to sequence the effects. So if you had a list of Ints you could mapM a function Int -> Maybe b over them and get back a Maybe [b]. traverse is the same idea but it only needs <*> instead of bind, I guess? If you had a…

sequence is like? It flips the outside and inside wrappers: a list of Maybes becomes a Maybe of a list. Like transposing a matrix? Hmm. The matrix picture is a bit confusing because there is both a traversable and an applicative at play.

I am confused.
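Trying to make it concrete with Maybe (a sketch; halve is just a made-up example function):

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- traverse halve [2,4,6]      ==>  Just [1,2,3]
-- traverse halve [2,3,6]      ==>  Nothing
-- sequence [Just 1, Just 2]   ==>  Just [1,2]
-- sequence [Just 1, Nothing]  ==>  Nothing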

Applicative:

Slightly less powerful than monads. Equivalent to do notation where you don't use the bound values until the final return (so later effects can't depend on earlier results).

Applying a list of functions to a list of arguments. Either zipwise or nondeterministic/cartesian product wise.

If you fmap (<$>) a multi-argument function over a list you get a list of partially applied functions, via currying.

(+) <$> [1,2] <*> [7,8]

will return a list of all possible sum combos, [8,9,9,10], via the default applicative list instance.

I am pretty darn confused about applicative. I don’t have examples of applicatives that aren’t also monads although I’ve seen the claim there are many.
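The one example I keep running into is ZipList: it applies functions pairwise instead of cartesian-product-wise, and the usual claim is that there is no lawful Monad instance compatible with it. Roughly:

import Control.Applicative (ZipList(..))

-- default list instance, cartesian product:
--   (+) <$> [1,2] <*> [7,8]                              ==> [8,9,9,10]
-- ZipList instance, pairwise:
--   getZipList ((+) <$> ZipList [1,2] <*> ZipList [7,8]) ==> [8,10]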

Lens:

Look at Simon Peyton Jones's talk. It helped a lot. Ed Kmett's talk is very puzzling from the get-go; it assumes comfort with Traversable and Foldable.

A Lens a b is a kind of functiony "thing" that has a big object a and a focus b within that object.

The simplest lens would be a pair of hand-written getter and setter functions.

something like the _1 lens would look like

_1 = Lens { get = \(x,y) -> x, set = \a (_,y) -> (a,y) }

Composable: you sort of compose the getters and setters when one object sits inside another. I think there is some pipework needed for getting set to work; see the sketch below.
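Spelling that out as a sketch (my own toy Lens record, not the real lens library): the getters chain by plain composition, and the set pipework threads the outer get through the inner set.

data Lens s a = Lens { get :: s -> a, set :: a -> s -> s }

_1 :: Lens (a, b) a
_1 = Lens { get = \(x, _) -> x, set = \x (_, y) -> (x, y) }

-- compose when the small focus b lives inside a, which lives inside s
compose :: Lens s a -> Lens a b -> Lens s b
compose outer inner = Lens
  { get = get inner . get outer
  , set = \b s -> set outer (set inner b (get outer s)) s }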

Getter and setter can be unified into a single interface, Lens', which uses functorial trickery. Trivial-seeming Functors turn out to be useful here: Identity and Const.

Interesting connection between type level and value level:

fix, id, const at the value level and Fix, Identity, Const at the type level.
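My rough picture of the Functor trick, as a self-contained sketch (the van Laarhoven shape; the real lens library generalizes this a lot further):

{-# LANGUAGE RankNTypes #-}

newtype Const a b  = Const { getConst :: a }
newtype Identity a = Identity { runIdentity :: a }

instance Functor (Const a) where fmap _ (Const x)    = Const x
instance Functor Identity  where fmap f (Identity x) = Identity (f x)

-- s is the big object, a is the focus inside it
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

first' :: Lens' (a, b) a
first' f (x, y) = fmap (\x' -> (x', y)) (f x)

-- Const throws the update away, so it acts as a getter
view' :: Lens' s a -> s -> a
view' l s = getConst (l Const s)

-- Identity is a do-nothing wrapper, so it acts as a setter/updater
over' :: Lens' s a -> (a -> a) -> s -> s
over' l f s = runIdentity (l (Identity . f) s)

So view' first' (1,2) gives 1 and over' first' (+1) (1,2) gives (2,2), all from the single definition of first'.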

Recursion Schemes:

http://blog.sumtypeofway.com/an-introduction-to-recursion-schemes/

Nutso shit.

Maybe a start is to try to write a right fold in terms of fix, with no recursion allowed except fix itself (sketch below).

Bananas and Barbed Wire territory. The paper is incomprehensible due to its Squiggol notation.

fix f = let x = f x in x

let is a primitive construct of its own.
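The exercise from above, as a sketch: a right fold where the only recursion comes from fix.

import Data.Function (fix)

foldrViaFix :: (a -> b -> b) -> b -> [a] -> b
foldrViaFix f z = fix (\go xs -> case xs of
                                   []     -> z
                                   (y:ys) -> f y (go ys))

-- foldrViaFix (+) 0 [1,2,3] ==> 6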

bottom is the value a function "returns" when it gets into an infinite loop (if we insist on things being real mathematical functions that have to return something). bottom is always a possible value. You can't really test for bottom, in some sense, because the halting problem makes it impossible in general to determine whether stuff will halt (although in special cases it's possible, btw).

https://en.wikibooks.org/wiki/Haskell/Denotational_semantics

Profunctors:

Seem really important. This is the most promising area for me to find a good approach to anyon models.

Contravariant functors are functors where the mapping kind of reverses direction: contramap takes b -> a and gives you f a -> f b.

There is a related funniness where you apply f and then apply g, but the composition is written g . f. The order gets swapped in some wonky way between application and composition.

Wrapping isomorphisms up as a type: the product type of the forward map and the inverse map of the isomorphism.
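A minimal sketch of what I mean, with hand-rolled classes (same shapes as Data.Functor.Contravariant and Data.Profunctor, but nothing here depends on those packages):

-- fmap "reversed": the mapped function points the other way
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b

newtype Predicate a = Predicate { runPredicate :: a -> Bool }

instance Contravariant Predicate where
  contramap f (Predicate p) = Predicate (p . f)

-- contravariant in the first argument, covariant in the second
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

instance Profunctor (->) where
  dimap pre post h = post . h . pre   -- pre-composition is where the order flips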

Monads:

Labelled arrows that aren't quite functions: partial functions, linear functions, nondeterministic functions, random functions, state-dependent functions.

bind is an awful lot like function application $, just with the argument order flipped (>>= takes the value first and the function second).

Useful technique: dumbly write out all the stuff you need to pass around, then try to push it all onto the right-hand side. State, dumbly, is s -> a -> (s', b): given the original state s and the argument a, it returns the new state s' and the result b. Push the state over to the right and you get a -> (s -> (b, s')), which is the shape of the State monad.
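Concretely, something like this (using State from mtl's Control.Monad.State; the names are just a toy example):

import Control.Monad.State

-- "dumbly" threading a counter by hand: s -> a -> (s', b)
labelDumb :: Int -> String -> (Int, String)
labelDumb n name = (n + 1, show n ++ ": " ++ name)

-- push the state to the right-hand side: a -> State s b
label :: String -> State Int String
label name = do
  n <- get
  put (n + 1)
  return (show n ++ ": " ++ name)

-- evalState (mapM label ["foo","bar"]) 0  ==>  ["0: foo","1: bar"]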

DRYing up function pipework (e.g. otherwise you're error checking in every single function).
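A toy example of the DRYing: with Maybe's bind the repeated did-it-fail checks disappear (parsePort and the "port" key are made up for illustration).

import Text.Read (readMaybe)

-- without the monad: check for failure after every step
parsePortNoMonad :: [(String, String)] -> Maybe Int
parsePortNoMonad config =
  case lookup "port" config of
    Nothing  -> Nothing
    Just str -> case readMaybe str of
                  Nothing -> Nothing
                  Just n  -> Just (n + 1000)

-- with bind (do notation), the plumbing is gone
parsePort :: [(String, String)] -> Maybe Int
parsePort config = do
  str <- lookup "port" config
  n   <- readMaybe str
  return (n + 1000)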

It might be better to understand and use monads without delving into their definitions. They form their own language (a DSL). Undoing do notation in your head is a nightmare.

CoMonad:

contextual arrow instead of effectful arrow

duplicate instead of join
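My mental model in code (a hand-rolled class with the same shape as Control.Comonad; the pair/"environment" comonad is the simplest instance I know of):

-- dual of Monad: extract instead of return, duplicate instead of join
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- a value with some read-only context attached
instance Comonad ((,) e) where
  extract (_, a)     = a
  duplicate w@(e, _) = (e, w)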

Monad Transformers:

Stacking monads. That’s all I got.
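A sketch of what "stacking" means: a hand-rolled MaybeT that layers failure on top of any monad m (the real one lives in the transformers package).

import Control.Monad (ap)

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Functor m => Functor (MaybeT m) where
  fmap f (MaybeT x) = MaybeT (fmap (fmap f) x)

instance Monad m => Applicative (MaybeT m) where
  pure  = MaybeT . return . Just
  (<*>) = ap

instance Monad m => Monad (MaybeT m) where
  MaybeT x >>= f = MaybeT $ do
    maybeA <- x
    case maybeA of
      Nothing -> return Nothing
      Just a  -> runMaybeT (f a)

-- lift an action from the underlying monad into the stack
liftMaybeT :: Functor m => m a -> MaybeT m a
liftMaybeT = MaybeT . fmap Just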

Some parsing in Haskell

http://dev.stephendiehl.com/fun/002_parsers.html

https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours/Parsing

http://book.realworldhaskell.org/read/using-parsec.html

> import Control.Applicative hiding (many)
> import Control.Monad (liftM, ap, MonadPlus(..))

I'm going based on http://www.cs.nott.ac.uk/~pszgmh/pearl.pdf, the monadic parsing pearl by Hutton and Meijer. More than based. I'm copying code.
I'm just writing down what I think as I'm going through this paper.
So what is going on here.
A parser is a little guy that will try to pull off whatever it can match from the front of a string. If it can't find anything
it will return an empty list. If it can match more than one option, it'll return a list of possibilities.

It feels more like a little regular expression guy.
It's kind of a combo of a state monad and the list monad.
The state is the current string that is left to be chewed up;
the list monad is for nondeterministic computation: it returns all possible results.
One wonders if this could be constructed usefully using monad transformers.
Don't know much about those, but that is how they feel: for composing multiple monads.

> newtype Parser a = Parser (String -> [(a,String)])
> parse :: Parser a -> String -> [(a,String)]
> parse (Parser x) = x

Now, why is the type a function? What is going to happen is we're going to define a way to compose these little
functions so as to build up the big parser from the little ones. The a type parameter is interesting.
Maybe it returns a token representing what it found. Or maybe it just returns the string that matched.

> item :: Parser Char
> item = Parser (\cs -> case cs of
>     ""     -> []
>     (c:cs) -> [(c,cs)])

So item produces a parser that will match any character.
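Concretely, once this file is loaded I'd expect something like:

*Main> parse item "qwerty"
[('q',"werty")]
*Main> parse item ""
[]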

How do we grab all items? Well, we need to compose up a bunch of these item guys.
What we kind of want is something that feels like
item3 = item $ item $ item
This doesn't work because the types in and out don't match. So it needs to be a monad, replacing function application $ with
the monadic bind >>=.

> instance Monad Parser where
>   return a = Parser (\cs -> [(a,cs)])
>   p >>= f = Parser (\cs ->
>     concat [parse (f a) cs' | (a,cs') <- parse p cs])

return makes a trivial parser: a function that doesn't change the string at all and returns the object a no matter what the string said.
The bind operation composes the operations. What does it need to do?
1. It needs to return a function String -> [(a,String)] wrapped in a Parser constructor
2. It needs to strip the leftover string coming out of p's function and pass it into the parser that f makes
3. f might need to know what came before it to decide what parser to make
4. For every possibility that comes out of p you need to try what comes out of (f a)

I think 4 is non obvious. We’ll see if I change my mind.

So he used a list comprehension. Not what I would’ve thought of. It’s clean though. Here’s a very shitty construction process

Needs to return a function from strings
p >>= f = Parser (\cs
Need to use (parse p) to start. Then we have that list [(a,cs')]
p >>= f = Parser (\cs -> (parse p) cs )
Probably map over the list; we need to do something for every element
p >>= f = Parser (\cs -> map (parse p) cs )
What are we applying? f I guess. We need to apply f to the first argument to make a parser. Now we have a list of parsers.
p >>= f = Parser (\cs -> map (f . fst) (parse p) cs )
Now get those functions out
p >>= f = Parser (\cs -> map parse (map (f . fst) (parse p) cs))
This is getting too long. Things are getting out of hand. So I define intermediate expressions.
Now apply them
p >>= f = Parser (\cs -> map fs (map snd acs)
where
acs = (parse p) cs
fs = map parse (map (f . fst) acs)

eh screw it

The list monad has a similar bind.
God, do I have to implement Functor too? Probably. (Yes: GHC requires Functor and Applicative instances for anything with a Monad instance.)
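For comparison, the list bind is morally just:

bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concat (map f xs)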

> instance Functor Parser where
>   fmap = liftM

> instance Applicative Parser where
>   pure  = return
>   (<*>) = ap

A Monad is automatically a functor and applicative (mathematically, at least; GHC still makes you write the instances). I should look up the definitions of those functions.
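For reference, they are (roughly, ignoring how the library actually writes them) definable from bind and return alone:

liftM' :: Monad m => (a -> b) -> m a -> m b
liftM' f m = m >>= \a -> return (f a)

ap' :: Monad m => m (a -> b) -> m a -> m b
ap' mf ma = mf >>= \f -> ma >>= \a -> return (f a)
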
> p :: Parser (Char,Char)
> p = do {c <- item; item; d <- item; return (c,d)}

ghci returns
*Main> (parse p) "qwerty"
[(('q','e'),"rty")]

So adding parsers allows you to take different routes.
The zero parser does jack all (it always fails).

> instance MonadPlus Parser where
>   mzero = Parser (\cs -> [])
>   mplus f g = Parser (\cs -> (parse f) cs ++ (parse g) cs)

> instance Alternative Parser where
>   (<|>) = mplus
>   empty = mzero

> sat :: (Char -> Bool) -> Parser Char
> sat p = do
>   c <- item
>   if (p c) then (return c) else mzero

> findq = sat (== 'q')

*Main> parse findq "qwerr"
[('q',"werr")]
*Main> parse findq "werr"
[]

> char :: Char -> Parser Char
> char c = sat (c ==)

> string :: String -> Parser String
> string ""     = return ""
> string (c:cs) = do
>   char c
>   string cs
>   return (c:cs)

> many :: Parser a -> Parser [a]
> many p = many1 p `mplus` return []

> many1 :: Parser a -> Parser [a]
> many1 p = do
>   a  <- p
>   as <- many p
>   return (a:as)

*Main> parse (many (char 'c')) "yoyoyoc"
[("","yoyoyoc")]
*Main> parse (many (char 'c')) "ccyoyoyoc"
[("cc","yoyoyoc"),("c","cyoyoyoc"),("","ccyoyoyoc")]

> isSpace s = s == ' ' || s == '\t' || s == '\n' || s == '\r'
> space :: Parser String
> space = many (sat isSpace)

> token :: Parser a -> Parser a
> token p = do {a <- p; space; return a}

> symb :: String -> Parser String
> symb cs = token (string cs)

> apply :: Parser a -> String -> [(a,String)]
> apply p = parse (do {space; p})

> data Addop = Plus | Minus   -- filled in so this compiles; the paper's grammar has + and -
> data Expr = Expr Addop Term | Term
> data Term = Val Int

and I'm tired, so I'll stop.

A whole new world. Propositions as types. Idris. Rule-rewriting

I've been going down a rabbit hole lately. I suppose it started with wanting to solve the annihilation/creation operator algebra in a new way (and, more long term, to understand how to program up anyons in an elegant way). So I rebooted my Haskell.

I was hoping (through a misunderstanding of what pattern matching and lazy evaluation achieve) I'd somehow be able to write

This is screwy from a functional programming perspective. I’m no guru, but I think the problem is that really you want f to take a function and return a new function on line 2 whereas you want f to take an integer and return an integer on line 3.

You can hack this together, but I feel it is not an elegant approach.

 

Anyhow, this problem is solvable in Mathematica. Mathematica (or is it called Wolfram now? I'm a tad confused there) at its core is a rule rewriting system. This is a different flavor than a functional programming language, I think. All the currently applicable rules float around somewhere in the environment. When you make an = expression you add a new rule. This is similar to functional programming having all the functions floating around, where you can define new functions to put into the environment. The equals sign and rule replacement /.{x->y} are very, very related.

However, I think there is an element of nondeterminism in the rule rewriting. It can try to apply rules, fail, and then un-try that rule. I think? Mathematica is pretty vague, as far as I can tell, about what it is trying. Functional programs, I think, go more one way: you apply a function but then never unapply it. There is no try; only do. You could definitely build up some kind of history system to achieve this, but it feels much less intrinsic to the basic evaluation procedure. The order of applications is free to be chosen: top down, bottom up, whatever. Some sort of procedure.

So this got me curious about how I could implement such a system in a cute way or what else I could do.

Then I came across the Strange Loop talks.

This one in particular intrigues me.

He mentions Session Types, a typing system for communication that is analogous in some way to linear logic. Linear logic has kind of a no-copy rule, where you use up propositions when you use them rather than them being reusable as much as you like. Like the no-cloning theorem in quantum mechanics. This brought me back to take a peek at Baez's Rosetta Stone paper. Interesting and cryptic. I guess this is good for communicating: messages get sent and then used up when received. Sounds neato. Looking at the concrete specification Scribble makes this a bit more clear. Maybe this would be a good paradigm for a quantum computing programming language? Sort of let the quantum information transfer between gates be treated as a network call. It makes sense in that you would want to run different parts of quantum algorithms on quantum and classical computers, and the interface will probably be network-like. Quantum-quantum sockets and quantum-classical sockets. Wait for a measurement signal on the classical side, then send a classical-valued configuration signal to the gates. The gates send quantum signals to each other. The classical side also needs to set up the gate connections too.

Smart people freak out about types. I don't understand why yet. They seem important in the very roots of mathematics, circumventing paradoxes and such. Types allow the compiler to save you from yourself (and others), I think, is the main point. But they also make you figure out what you actually mean. I have found Haskell's type system to be useful for sketching out what I mean by writing down data structures. It can be a useful part of the thinking process, like writing function declarations without writing the inner code. "Oh yeah, I'll need a vectorizing function. Let's write that down but code it up later." I've also found myself yearning for this in python sometimes. Stronger typing would let me pass stuff in without whatever it is I'm passing in being implied within the code of the function itself. "Error? On what? Oh, the dict should have a 'foo' key? Huh. Whoopsy." I can and should and sometimes do put that stuff in comments, but you have to read those. Which is not guaranteed.

Also, Lambda cube? Huh.

Then I saw some more talks mentioning Coq, Agda, and Idris. Languages that are related somehow. Languages that touch on, or are intended for, theorem proving. This is a subject I've seen mentioned before without understanding what it could mean. Idris is a very close variant of Haskell. This seems like a good road to understanding more. Coq has a long history and a difficult reputation.

Maybe the next thing I need is to learn some basics of logic to be a better programmer.

Propositions = Types.

What does that mean?
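My crude first reading of it, in Haskell terms (just the propositional fragment; types play the role of propositions and programs are the proofs):

-- "A implies (B implies A)": the proof is basically const
weaken :: a -> b -> a
weaken x _ = x

-- modus ponens: from A -> B and A, conclude B; the proof is application
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- conjunction is a pair, and it commutes
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- disjunction is Either
orIntroLeft :: a -> Either a b
orIntroLeft = Left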

https://en.wikipedia.org/wiki/Sequent_calculus

This stuff is very evocative of lambda calculus. How have I somehow not seen this before? I guess I just sort of glazed over when the Curry-Howard isomorphism was mentioned. Assumed that was a very high falutin thing not worth checking out.

I’m still in the middle of it all. We’ll see.

Aruco in opencv

So there isn’t great documentation on the python bindings as far as I can find. There are docs on the c++ bindings.  Trying to do this on a mac was a hellish uphill battle, and opencv in the virtual machine has been… hmm actually pretty okay? Well, I did this on my fresh new triple boot ubuntu flash drive.

Invaluable is to go into the python REPL and type

Then you can see what all the available functions are. They’re more or less self explanatory, especially since they are described in the opencv c++ tutorials.

http://docs.opencv.org/3.1.0/d9/d6d/tutorial_table_of_content_aruco.html

I believe the python bindings are generated programmatically, and they are fairly systematic, but always a touch different from the c++ function calls. A big difference is typically the python calls don’t modify in place.

Anyway, to get you up and running, I cobbled together some really basic code. It can generate a tag and save it

And this is a basic program to detect the markers

They are sprinkled with the requisite garbage and cruft of me wiggling around with print statements to figure out what everything is.

It sounds like more of what I want is to use Aruco boards. They sound good. I’m looking into using this for maybe robot configuration sensing.

 

Installing opencv on ubuntu 16

I'm not finding the instructions hyper easy. I bet sudo apt-get install opencv would work pretty well, but I want some of the latest features (aruco support, which is in the contrib modules?)

This is useful

http://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html#gsc.tab=0

 

okay git clone  both the contrib

https://github.com/opencv/opencv_contrib

and the main project directory

I have a fresh installation of ubuntu.

First I had an error in the cmake process, so I ran

sudo apt-get build-dep opencv

I had an error about no sources. Had to uncomment the deb-src lines in the /etc/apt/sources.list file, then ran sudo apt-get update. Now it installs a ton.

http://askubuntu.com/questions/496549/error-you-must-put-some-source-uris-in-your-sources-list

Running this in the created build directory. Have to adjust the path to modules to suit what you’ve done

sudo make install

 

seems to have installed python bindings. Good.

Triple Booting Macbook for ubuntu on flash drive

So making a live usb of ubuntu seems to restrict you to 4gb space. I don’t want that. I want the full space of my 64gb usb3 flash drive available.

This is harder than it seems it might be.

This was very useful

http://askubuntu.com/questions/525845/how-to-finally-get-ubuntu-to-boot-on-a-mac-from-external-usb-storage

Didn’t do the gdisk stuff?

I have a running ubuntu desktop to piggyback off of.

First I made an install usb drive on a lesser 8gb usb drive using Ubuntu's startup disk creator utility.

Choose "Something else" in the installation menu once plugged in and booted off my mac (hold the option key on startup to pick the usb drive).

Then I installed on the bigger blank usb drive by selecting it as the root partition. Picked ext4. Used some space for swap. Also, I should have made an EFI partition at this step, but I did it later.

Made the EFI partition later using gparted: create a 200MB fat32 partition, then set its flag to boot. That makes it an EFI partition, I guess.

http://unix.stackexchange.com/questions/174495/creating-efi-bootable-gpt-partition-with-gdisk-on-previous-mbr-gpt-damaged

Downloaded the refind binary and ran ./refind-install --usedefault /dev/sdb3, where /dev/sdb3 was the EFI partition I made.

Turn on macbook holding down option key

efi is there

picked the ubuntu icon on the right that mentioned my flash drive. There is another one. Not sure what that is?

Boots into ubuntu!

Okay. so that’s good

My mac now default boots into grub? This panicked me for a moment, since I thought I did nothing to my hard drive. I suppose the ubuntu installer must have done something. Maybe this is what those "delete the MBR with gdisk" instructions are about. It's fine the way it is for me. Now I need to hold option if I want to boot into Mac. This is acceptable.

In ubuntu, my tilde key is mapping to <>. That's weird.

http://askubuntu.com/questions/530325/tilde-key-on-mac-air-with-ubuntu

 

To get the facetime camera working I had to follow these very scary instructions

https://github.com/patjak/bcwc_pcie/wiki/Get-Started

Installed checkinstall (sudo apt-get install checkinstall) to run one of the lines.

Had to

 

as suggested near the bottom in order to find the webcam in cheese.

The color balance appears to be messed up. Futzing around with the hue settings, you can get something that almost looks right. And conversion to black and white is fine. So that's good enough for a lot of my purposes.