## Monads are f’ed up

Coming back from a couple weeks off of not reading about them, I find myself mystified once again.

I think they are a way to chain extra data through functions.

And you need them to make basic GHC programs? That sucks.

getArgs gets the command line arguments as a list of strings

getLine reads a line of input from the user as a string

read converts strings into values like integers that can be added (the result type is inferred from context)

foldr1 is a variant of foldr that uses the last element of the list as the starting accumulator value (no separate seed needed, but the list must be non-empty)

putStrLn prints a string to the console, followed by a newline

putting stuff on separate lines in a do block implies >>

>>= is implied by <- notation

Both are bind operations, but a little different: >> throws away the result of the first action, while >>= passes it along to the next one.
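Here's a minimal sketch of the do-notation sugar in the Maybe monad, where there's no IO muddying the waters (the function names are mine; `readMaybe` comes from `Text.Read`). Both versions do the same thing:

```haskell
import Text.Read (readMaybe)

-- do-notation version: each <- is sugar for >>=
sumDo :: String -> String -> Maybe Int
sumDo s1 s2 = do
  x <- readMaybe s1
  y <- readMaybe s2
  return (x + y)

-- hand-desugared version: the same chain written with >>= directly
sumBind :: String -> String -> Maybe Int
sumBind s1 s2 =
  readMaybe s1 >>= \x ->
  readMaybe s2 >>= \y ->
  return (x + y)

main :: IO ()
main = do
  print (sumDo "2" "3")    -- Just 5
  print (sumBind "2" "3")  -- Just 5
  print (sumDo "2" "oops") -- Nothing: the chain short-circuits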

It all kind of does what you think it should from looking at it, but the monadic backend is deeply puzzling (look at the type definitions). I watched a YouTube video of Brian something explaining how monads are a natural way of achieving function composition for functions that don't take in and output the same type, but I can't really recall how that made so much sense. Monads are slippery.

Save this in a file hello.hs and run:

```
ghc hello.hs
./hello
```

```haskell
module Main where

import System.Environment (getArgs)

main :: IO ()
main = do
  args <- getArgs                  -- command line arguments, as strings
  num  <- getLine                  -- one line from the user
  -- read each argument as an Integer and sum them with foldr1 (+)
  putStrLn ("Hello, " ++ show (foldr1 (+) (map read args :: [Integer])))
  putStrLn num
```

## Green’s Functions: Functions Are Vectors

Matrices are the only math problem man actually knows how to solve.

Everything else is just sort of bullshitting our way through the best we can.

Multiplying matrices like $AB$ is a notation that lets us reason about very large complicated objects at a very simple level.
The convention of matrix multiplication seems kind of arbitrary (rows times columns) but it is simple.
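Spelled out, the rows-times-columns convention is just:

```latex
% Entry (i, j) of AB: row i of A dotted with column j of B
(AB)_{ij} = \sum_k A_{ik} B_{kj}
```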

The two main powerful questions about a matrix we can get an answer to are: what is its inverse, and what are its eigenvalues?

Hence the emphasis on linear differential equations. Any problem phrased in these terms is just a matrix problem.

Here’s the biggest most important point of Green’s Functions. If the differential operator $L$ is considered to be a matrix, then the Green’s function $G$ is its matrix inverse $L^{-1}$.
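In symbols, the analogy reads (schematically, with the Dirac delta playing the role of the identity matrix):

```latex
% G "inverts" the differential operator L:
L \, G(x, x') = \delta(x - x')
\quad \text{analogous to} \quad A A^{-1} = I

% So the solution of L u = f is a continuous "matrix multiply" by G:
u(x) = \int G(x, x') \, f(x') \, dx'
\quad \text{analogous to} \quad u = A^{-1} b
```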

Boom.

## A Haskell is Good?

I am a beginner in functional programming.

Over the past couple weeks, I’ve been revisiting Haskell. As I’ve brought it up in conversation, I’ve been asked a couple times to define what functional programming even is. Honestly, I do not yet know how to express it in words. But I think I know how it feels. A very gooey statement to make about a typically precise field.

I’ve been asked if passing in callback functions as arguments is functional programming. I think it is an aspect but that it doesn’t get to the heart of the matter.

A couple things without poisoning my impression by reading the definition from wikipedia:

• Functions are like, really important
• Ordinary programs are like a very explicit list of instructions for the machine. Functional programming doesn’t feel like that.
• If your mind wants to explode, maybe it’s functional programming
• Seems mathy. In that abstract proofy math kind of way. You build programs kind of like how you’d build a proof. Take axioms and tie them together with minimal lemmas and theorems into more complex programs. Also, the goals are mathy. You try to program things as generally as possible, a task aided by focusing on the function.
• Recursion is not required but it’s got the flavor.
• While most programming languages have functional bits and some functional paradigms in them now, the only ones that really have the taste are the purer ones: Lisp, Scheme, Haskell, etc. The languages that merely have alternatives to the functional style basically end up getting programmed kind of the same way.
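The proof-building flavor above can be sketched in a toy example (the function names here are mine, just for illustration): small "lemma" functions get glued into bigger ones with composition, like $f \circ g$ in math.

```haskell
-- Two tiny "lemmas":
double :: Int -> Int
double = (* 2)

increment :: Int -> Int
increment = (+ 1)

-- A "theorem" built purely by composition with (.):
-- doubleThenIncrement x = increment (double x)
doubleThenIncrement :: Int -> Int
doubleThenIncrement = increment . double

main :: IO ()
main = print (doubleThenIncrement 5)  -- 11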

Stuff I’ve heard but don’t really get:

• Lazy Evaluation: Stuff doesn’t evaluate until it needs to, and then it only evaluates as much as it needs. If you’ve got a thing that builds a list of 1000 elements, but then you only ask for the first 3, it’ll only build those three, and only when you ask for them. I think that’s what’s going down.
• The type system is F’ed up.
• How I could ever conceive of the weirdly clever things they do.
• That functional programming is actually good. The boilerplate of wiring thing to thing is 99% of what programs are, and the algorithm is a minimal component. I am disbelieving but hopeful that functional programming could alleviate that.
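The lazy evaluation bullet above can be checked directly. A minimal sketch (the name `naturals` is mine): the list is conceptually infinite, but only the demanded elements ever get built.

```haskell
-- An "infinite" list: laziness means only the parts we ask for exist.
naturals :: [Integer]
naturals = [1..]

main :: IO ()
main = do
  print (take 3 naturals)            -- [1,2,3]: only 3 elements built
  print (take 3 (map (^ 2) naturals)) -- [1,4,9]: map is lazy too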

Updates as I understand more.

## A Start on Green’s Functions

What is a Green’s Function?

While that doesn’t sound like an esoteric question to me anymore, it must to a new initiate.

First off, I think you can achieve immense clarity by understanding that not everything called a Green’s function is exactly the same thing. For problem solving and calculating, the similarity of all the things called Green’s functions is helpful, since knowledge of one situation lets you understand how to manipulate the solution in another.
They do all have a similar flavor.

Conceptually, however, the Green’s function is a many-faced god. And for that reason, it is best to emphasize the differences between these very conceptually different cases.

To start off, for some slight concreteness, let’s list some problems that will have solutions in the form of something called a Green’s function.

1. You have some weird potato shaped charge and hunks of metal. What is the electric potential everywhere?
2. You have a ball getting randomly kicked around. Given it starts at home plate, what is the probability it gets to first?
3. You have a single quantum particle. What is the amplitude for it to get from one point to another?
4. You have a rumbling tumultuous ocean or rubber sheet. How does the height at one place correlate with the height at another?
5. You have a quantum field. God help you.

## Javascript Test

Checkout the console for a fun message!
Apparently, I have the freedom to put script tags in my posts. Excellent.
```javascript
console.log("Hey There Buddo")
```

Installed Jetpack to piggyback on WordPress.com and get some nice $\LaTeX$ going
$i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$
$\hbar$

BOBBA BUOY