See Also:

  • compilers
  • computer architecture


Performance matters: it unlocks new applications and is important for business. Python -> AVX extensions: a 60,000x speedup in one example. Measurement is really important and hard. The CPU can overclock itself for a little while, so try to control the environment.

Use statistical tests (Student's t-test, for example) to determine if a change is real. Plot your benchmark data. Bimodal? Two different behaviors are happening. Microbenchmarks: be careful. Is the compiler inlining a bunch of stuff? Anything except your exact final application and environment is a proxy, and the idea that the proxy represents the real behavior at all is fishy. Never forget that. System clock and system counters.

Agner Fog

manual 1

Reduce data dependencies: a[i++] may be faster than a[++i], because the load address can use the old value of i, shortening the dependency chain. bool in C++ outputs 0/1, but may have come from a source that didn't guarantee that, which means the compiler needs branching code for simple stuff. Short-circuiting && and ||: try to short-circuit early.


List of interesting optimizations. These are compiler optimizations, so hopefully your compiler does them for you, but maybe it doesn't. Lemire: converting integers to fixed-digit representations. By considering data dependencies and using lookup tables, one conversion went from 25 ns to 2 ns.

See also

  • Computer Architecture
  • Assembly

MIT optimization course. Surprising subtleties of zeroing a register. Agner Fog's optimization manuals. memset and memcpy optimizations. "Go does not need a garbage collector": compares and contrasts Java's GC with others; claims Java's design puts high pressure on the GC. Intel optimization manual. mimalloc - de Moura, Daan Leijen, Ben Zorn.

I feel like most algorithms and data structures are so ordinary they are kind of boring?

Sparse sets (Knuth). Bitvectors: Ullmann's bitvector algorithms for binary constraint satisfaction and subgraph isomorphism.

Books: CLRS

Sorting algorithms. Hash tables. Dynamic programming. Tries. Graph algorithms - shortest path, spanning tree. A hash table in C (some interesting comments, too). Linear search - an assoc list, but he kept it in an array. Hashing from the Z3 source code.

Linear probing vs. linked lists (chaining) in hash tables.

concurrent hash map

What's this? Roaring bitmaps, simdjson, Judy arrays. People mention warming up the branch predictor on purpose, somehow. Branchless programming. Interesting: cache-oblivious binary search, using the "heap" (Eytzinger) ordering or what have you, plus, I think, a branchless comparator. Also a big point: how do you even know when cache behavior is a problem? How do you use feedback and self-correct? How do you organize tight loops? "Smart" ways of keeping structure.

Microbenchmarking with performance counters - cache misses, TLB hits/misses, mispredicted branches. Tools: nanobench, VTune, perf, PAPI, libpfc.

What Every Programmer Should Know About Memory.

Modern Microprocessors: A 90-Minute Guide. Bentley, Writing Efficient Programs. A Rust optimization story. The Handmade Hero guy talking about optimization - the refterm optimization talk; this is fascinating:

  1. optimization - measuring.
  2. non-pessimization - don’t do unnecessary work
  3. fake optimization - people just repeating things they've heard

The uiCA online demo gives info on what's hurting you: cycle counts, micro-ops, ports, queues. Day 112 of Handmade Hero: perf counters, SIMD, converting to SIMD, measuring port usage with IACA.

perf seems great. Works on OCaml, btw. Linux systems performance. The gem5 simulator is a modular platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture. Simulators: gem5, MARSS×86, Multi2Sim, PTLsim, Sniper, and ZSim. gem5 as an alternative to QEMU?

NUMA - non-uniform memory access. Register file? L1 cache: separate instruction and data caches; instruction is one-way. lstopo --no-io tells you how your computer looks.

Large/huge pages: faster for the TLB. hugetlbfs is the Linux support? Check /proc/cpuinfo. Transparent Huge Pages - madvise is a call to say "yes, I'd like huge pages here"; there is also a deferred setting.

Cache lines are 64 bytes: even if you read/write 1 byte you're reading/writing 64. MESI states: M exclusively owned and dirty, E exclusive and clean, S shared, I invalid. __builtin_prefetch. Linear access is good. Splitting into a struct of arrays tends to be better for cache if you're only using one field. Compressed memory is worth it: computation is fast, memory is slow. Array of structs vs. struct of arrays. Compressed pointers?

Pinning: isolcpus boot-time option; pinning of threads or memory to a CPU with taskset (Linux admin stuff); isolate CPUs for certain tasks; numactl and libnuma.

Loop stream decoder. Branch predictor; pipeline stall or bubble; branch target predictor. Ports and execution units: some logic, some arithmetic.

perf - interrogate counters: record, report, annotate, stat. Skid is bad; precision knobs :p :pp :ppp. perf record -b; perf record --call-graph lbr -j any_call,any_ret program; -e intel_pt//u. LBR - last branch record (see the Linux Weekly News article): Intel processors record control flow. Intel Processor Trace.

IPC - instructions per cycle: 4 is roughly the maximum; less than 1 is bad. perf stat performance counters - perf list. TMAM - top-down microarchitecture analysis method; perf stat --topdown. toplev (Andi Kleen): a fancy frontend to perf that goes through the process; -l1, -l2 levels.

__builtin_expect; profile-guided optimization may do the builtin_expect for you. Loop alignment to 32-byte boundaries so code streams straight from the uop cache; LLVM flags align-all-nofallthru-blocks and align-all-functions. Code alignment can change your performance. BOLT - binary optimization and layout tool: defrags your code, puts hot code in the same memory locations at runtime.

Daniel Lemire - SIMD parser; mechanisms for avoiding branching; masking operations.
Summary: cache-aligned / cache-aware data structures; B-trees; compress data; avoid random memory access. Huge pages can help - maybe a 10% speedup just by enabling them. libnuma to source memory. Branch-free and lock-free. perf / toplev. Use vectorization where you can. His blog's reference links.

Blog links neato: