An avenue of ways

Algorithms to Live By

Optimal stopping problem, aka the secretary problem. A search can go on forever if unbounded, so assume the number of candidates is fixed. With no information (relative comparisons only, observe then commit), 37% is optimal: look at the first 37%, then take the first candidate better than everyone seen so far; never hire one who isn't the best so far. With full information (each candidate has a score, and only that score matters), a threshold rule succeeds about 58% of the time.
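A minimal sketch of the 37% rule, assuming candidates arrive in random order and only comparisons against earlier candidates are available (the no-information case):

```python
import random

def secretary_rule(candidates, look_fraction=0.37):
    """Reject the first look_fraction of candidates, then take the
    first one better than everything seen during the look phase."""
    n = len(candidates)
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value
    return candidates[-1]  # forced to take the last candidate

# Rough check: the rule should land on the single best candidate ~37% of the time.
wins, trials = 0, 10_000
for _ in range(trials):
    pool = random.sample(range(1000), 100)
    wins += secretary_rule(pool) == max(pool)
print(f"picked the best {wins / trials:.0%} of the time")
```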

Selling a house: you know an objective dollar value and a rough range for offers. The goal isn't the single best offer, it's the most money, and waiting has a cost. Set a threshold going in and take the first offer that exceeds it.
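A sketch of that threshold, under the assumption that offers land uniformly inside the known range and each round of waiting costs a fixed amount; the closed form comes from setting the expected gain of one more offer equal to the cost of waiting for it:

```python
import math

def sale_threshold(low, high, cost_of_waiting):
    """Take-it-or-leave-it price when offers are assumed uniform on [low, high]
    and each additional round of waiting costs a fixed amount.
    On a 0..1 scale the threshold t solves (1 - t)**2 / 2 = c, so t = 1 - sqrt(2c)."""
    spread = high - low
    c = cost_of_waiting / spread          # waiting cost as a fraction of the range
    if c >= 0.5:
        return low                        # waiting never pays; take anything
    return low + spread * (1 - math.sqrt(2 * c))

# e.g. offers expected between $400k and $500k, $1k lost per round of waiting
print(round(sale_threshold(400_000, 500_000, 1_000)))  # about 485,858
```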

Explore vs. exploit. Always exploring can be frustrating. The multi-armed bandit problem and casinos. Time is the key variable: the value of exploring falls over time while the value of exploiting rises, so discount payoffs based on time. You care more about the meal tonight than the one a month from now, geometric discounting; the Gittins index makes that trade-off optimally. If you don't have time for that, think about regret: total regret always increases, so minimize its growth. Regret-minimization algorithms such as Upper Confidence Bound: choose the option with the highest upper confidence bound, which is always at least its expected value, so optimism fights against regret. That puts a high value on the untried 0/0 (try/win) option. Win-stay, lose-shift explores too much.
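A sketch of the UCB1 variant of the Upper Confidence Bound idea; the two payout rates and the standard sqrt(2 ln t / n) bonus term are illustrative choices, not anything the notes prescribe:

```python
import math
import random

def ucb1_choice(plays, wins, round_number):
    """Pick the arm with the highest upper confidence bound.
    An arm never tried (0 wins out of 0 plays) gets priority,
    so optimism forces at least one try of everything."""
    best_arm, best_bound = None, -math.inf
    for arm, n in enumerate(plays):
        if n == 0:
            return arm
        bound = wins[arm] / n + math.sqrt(2 * math.log(round_number) / n)
        if bound > best_bound:
            best_arm, best_bound = arm, bound
    return best_arm

# Two slot machines with unknown payout rates 0.4 and 0.6 (made up for the demo).
true_rates = [0.4, 0.6]
plays, wins = [0, 0], [0, 0]
for t in range(1, 5001):
    arm = ucb1_choice(plays, wins, t)
    plays[arm] += 1
    wins[arm] += random.random() < true_rates[arm]
print(plays)  # the better arm should end up with the large majority of pulls
```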

Sorting. Sometimes sorting isn't worth the time; it only pays off if you're going to be searching later. Fault-tolerant sorting: when comparisons have error bars, you want sorts that make more comparisons, like bubble sort or comparison counting sort. In animal pecking orders, an established sort protects the group from ever-larger amounts of antagonizing. Cardinal ordering, like race times, lets a field sort itself in O(n) with no pairwise matchups.
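A sketch of comparison counting sort (a round-robin tournament) run against a deliberately noisy comparison, to illustrate why extra comparisons buy fault tolerance; the 10% error rate is an arbitrary choice for the demo:

```python
import random

def noisy_less(a, b, error_rate=0.1):
    """Pairwise comparison that gives the wrong answer error_rate of the time."""
    truth = a < b
    return truth if random.random() > error_rate else not truth

def comparison_counting_sort(items, compare):
    """Round-robin tournament: rank each item by how many others it beats.
    One blown comparison shifts an item by at most one place, which is why
    this style tolerates noise better than divide-and-conquer sorts."""
    beats = [0] * len(items)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if compare(items[j], items[i]):
                beats[i] += 1      # items[i] beat items[j]
            else:
                beats[j] += 1
    return [x for _, x in sorted(zip(beats, items))]

values = list(range(20))
random.shuffle(values)
print(comparison_counting_sort(values, noisy_less))  # roughly sorted despite 10% bad comparisons
```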

Caching. LRU (evict the least recently used) is the most efficient policy, and caching applies along many axes, such as geography: keep things near where they'll be used. Ebbinghaus forgetting curve; the brain has effectively unlimited storage, and the limiting factor is retrieval time. This is also how society organizes information. More memory means slower lookup, an interesting corollary with age: maybe older people are slower because they have to search through more stuff.
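A minimal LRU cache sketch built on Python's OrderedDict; the capacity of 2 and the string keys are just for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: when space runs out, evict the item
    that has gone unused the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)        # touching it makes it "recent" again
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                 # "a" is now the most recently used
cache.put("c", 3)              # evicts "b", the least recently used
print(list(cache.store))       # ['a', 'c']
```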

Scheduling. Inputs/goals are usually due dates, the number of workers, and throughput or latency. From an outsider's perspective the metric is the sum of completion times, so do the shortest thing first; you can also weight tasks by importance, dividing duration by weight and running the smallest ratio first. Procrastination is like a DDoS attack: doing lots of small things of low weight. Weighted shortest processing time with preemption is the most efficient in general. Thrashing and context switching: be "less smart" about how much you take on, and just force a lower limit on responsiveness, like the Pomodoro (tomato) timer. Great analogy for email.
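A sketch of weighted shortest processing time (without preemption) on made-up tasks; "weight" stands in for importance, and the task with the highest importance per unit of time runs first:

```python
def weighted_shortest_processing_time(tasks):
    """Order tasks to minimize the weighted sum of completion times:
    divide each task's weight by its duration and run the densest first."""
    return sorted(tasks, key=lambda t: t["weight"] / t["duration"], reverse=True)

tasks = [
    {"name": "taxes",      "duration": 5, "weight": 10},
    {"name": "email",      "duration": 1, "weight": 1},
    {"name": "big report", "duration": 8, "weight": 4},
]
for t in weighted_shortest_processing_time(tasks):
    print(t["name"])
# taxes (density 2.0), then email (1.0), then big report (0.5)
```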

Bayes' rule. Laplace and Bayes. Laplace's rule of succession: after w successes in n attempts, the chance of the next success is (w+1)/(n+2). Look at the prior distribution for events of that kind and predict accordingly. For a power-law distribution, the further past a certain point something has already gone, the longer you can expect it to keep going.
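A sketch of both prediction rules; the example numbers are made up, and the 2x multiplier is only the uninformative (Copernican) case of the power-law multiplicative rule:

```python
def laplace_rule(successes, attempts):
    """Laplace's rule of succession: after w successes in n tries,
    estimate the chance of the next success as (w + 1) / (n + 2)."""
    return (successes + 1) / (attempts + 2)

def power_law_prediction(amount_so_far, multiplier=2.0):
    """Multiplicative rule for power-law quantities: expect the eventual total
    to be a constant multiple of what you've observed so far. A multiplier of 2
    is the uninformative Copernican guess ('about as much again')."""
    return multiplier * amount_so_far

print(laplace_rule(3, 4))          # 0.666...: 3 wins in 4 tries -> ~67% chance next time
print(power_law_prediction(50))    # something 50 years old? guess ~100 years total
```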