[HN Gopher] Algorithms to Live By - The Computer Science of Human Decisions
___________________________________________________________________
Algorithms to Live By - The Computer Science of Human Decisions
Author : ingve
Score : 200 points
Date : 2022-12-28 15:25 UTC (2 days ago)
(HTM) web link (blog.galowicz.de)
(TXT) w3m dump (blog.galowicz.de)
| QuantumSeed wrote:
| Brian Christian, the author of "Algorithms to Live By", has also
| written "The Alignment Problem" on the technical and moral
| questions of A.I.
| fumblebee wrote:
| The Alignment Problem was a standout read for me this year; it
| should be required reading for anyone training and deploying ML
| models. Incredibly well researched and chock-full of real-world
| examples.
| gcanyon wrote:
| In the section on over-fitting:
|
| "[...] focusing on production metrics led supervisors to neglect
| maintenance and repairs, setting up future catastrophe. Such
| problems can't simply be dismissed as a failure to achieve
| management goals. Rather, they are the opposite: The ruthless and
| clever optimization of the wrong thing."
|
| Southwest Airlines.
| matsemann wrote:
| This claims to be a review, but is mainly just a summary /
| rehashing of the content. Feels a bit disingenuous.
|
| I can warmly recommend the book, though.
| [deleted]
| rel wrote:
| Just want to give a quick shout out to coauthor Tom Griffiths
| for being an amazing educator; I attended his class before this
| book was published and was delighted to see the book cover the
| same general ideas. I'm always happy to recommend it to others
| looking to understand more about computer science in an
| approachable way.
| sonabinu wrote:
| This is a really insightful book. I read it as part of a book
| club. The algorithm that generated the most delightful discussion
| and examples was the optimal stopping algorithm.
| gcanyon wrote:
| For anyone who's curious, the 37% for optimal stopping is the
| rounding of 1/e. https://en.wikipedia.org/wiki/Secretary_problem
| shaftoe444 wrote:
| Interesting article. I would also recommend Russ Roberts's
| 'Wild Problems', which makes the case that algorithms are a bad
| fit for many of life's big decisions.
|
| Podcast and transcript on it here. https://www.econtalk.org/russ-
| roberts-and-mike-munger-on-wil...
| dinosaurdynasty wrote:
| Isn't that just using the subconscious's algorithms instead of
| the conscious's?
| ssivark wrote:
| Calling everything an algorithm rests on some implicit/vague
| assumption of computational universality (that subsumes human
| functioning!) which seems quite non-obvious.
|
| It's a useless (tautological) statement unless we start with a
| good definition of what is and is not an "algorithm". From a
| cursory glance, this is trickier than it looks, and once we
| have a constrained definition it's no longer clear that human
| minds operate in the same framework (strong claims require
| strong evidence).
|
| E.g.: if we define algorithms as what can be implemented on a
| Turing machine, then we're necessarily talking about
| deterministic algorithms (allowing pseudorandomness), etc.
| shaftoe444 wrote:
| Good way of putting it. When does an algorithm become a
| heuristic?
|
| For me the key thing is that when something is too
| complicated to quantify, attempting to quantify it will
| result in worse decisions. A bit like Hayek's calculation
| problem for the economy but for personal decisions.
| oski wrote:
| I'm curious to know if anyone has "implemented" any of these
| approaches in their own life...
| bumby wrote:
| After reading the book, I use the exponential backoff approach
| to relationships that seem one-sided.
| pewpewyouhit wrote:
| I apply some things from the explore/exploit chapter when
| travelling. For the first half of the trip I try as many places
| to eat as I can; for the second half I'm fine with revisiting
| the best ones.
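|
| A rough sketch of that split in code (the function name, the
| trip length, and the rating step are made-up stand-ins, not
| anything from the book):
|
|     import random
|
|     def plan_meals(new_places, n_meals, explore_frac=0.5):
|         """Explore-then-commit: try new places for the first
|         half of the trip, then repeat the best one found."""
|         ratings, plan = {}, []
|         n_explore = int(n_meals * explore_frac)
|         for meal in range(n_meals):
|             if meal < n_explore and new_places:
|                 place = new_places.pop()          # explore somewhere new
|                 ratings[place] = random.random()  # stand-in for your rating
|             else:
|                 place = max(ratings, key=ratings.get)  # exploit the best
|             plan.append(place)
|         return plan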
| [deleted]
| huijzer wrote:
| I often make estimations based on the heuristic that if we
| don't know much about how long something will remain, then
| we're most likely half way currently. For example, McDonalds
| was founded 82 years ago and if we have to guess how long it
| will still exist then probably around 82 years (until 2104).
|
| This also works great, for example, to answer whether you
| should make plans for Christmas 2023 with the girl you have
| been seeing for two months now: probably not yet.
| greenpeas wrote:
| What is your heuristics based on?
|
| Quite often though, you know a little about some thing. How
| do you adjust your heuristics then? What about the job that I
| started two months ago, should I expect to work there by
| December 2023? If the US was founded in 1776, how long will
| it still exist?
| vcxy wrote:
| The heuristic is that the average point of an interval is its
| middle. If you know nothing other than that you're at some
| point on the time interval, assuming you're at the middle is a
| good prior.
|
| When you know more, you certainly should adjust. For the job
| example, you might ask "how long have I usually stayed at jobs
| that lasted at least two months?", or "how long do people
| usually stay in jobs if they make it through the first two
| months?". Generally speaking, Bayes' theorem is the technical
| answer to "how do you adjust". Not that I ever actually do
| that... but I think it's the technically correct answer.
| Bilal_io wrote:
| Not OP and haven't read the book, but maybe this is more about
| survival: if McDonald's survived 82 years, then we can assume
| it can survive another 82; if you've been at the job for 2
| months and there are no signs of trouble, then you can assume
| you'll survive another 2, and reevaluate then to conclude that
| you can survive another 4...
| thenerdhead wrote:
| I used the optimal stopping guidelines to help people I mentor
| stop applying for jobs and change their resumes/approach. It
| worked pretty well for them.
| 0x4d464d48 wrote:
| When dealing with particularly toxic people I find the
| exponential backoff to be an excellent strategy.
|
| In my case, I hate cutting people off because I know people can
| change. What I do to manage relationships is run a forgiving
| version of exponential backoff. Start off friendly and
| forgiving. If someone becomes transgressive, increase the
| latency between interactions. If the transgressions continue,
| double the latency. If bad interactions persist, the time
| latency can go on to months or even years which means you'll
| probably never interact with that person again. Conversely, if
| an interaction goes well, reduce the delay for when you're
| willing to meet again. E.g. say an irritating individual causes
| the latency to go to once a month. If you have an interaction
| that goes well then the latency drops to 2 weeks. If
| interactions continue to go well they drop further to say no
| latency, i.e. you're willing to meet this person whenever.
| Obviously it's not perfect but it suits my needs quite well.
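|
| A minimal sketch of the policy (the doubling/halving factors
| and the caps are just my own choices, nothing canonical):
|
|     MIN_WAIT, MAX_WAIT = 0, 365  # days between interactions
|
|     def next_wait(wait_days, went_well):
|         """Exponential backoff on people: double the wait after
|         a bad interaction, halve it after a good one."""
|         if went_well:
|             return max(MIN_WAIT, wait_days // 2)
|         return min(MAX_WAIT, max(1, wait_days) * 2)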
|
| I also found his chapter on "overfitting" excellent. I like to
| think of it as "smart person disease". The big idea is that
| having more data can actually hamper decision making instead of
| enhancing it, because you wind up solving the wrong problem.
| chasd00 wrote:
| I read the book a while back and realized I do the caching one
| automatically. I have a pretty messy workbench where I build
| rockets and play around with microcontrollers. I purposely
| didn't try to organize it because, over time, it organizes
| itself: all the stuff that has my attention gradually drifts to
| within arm's reach, while the stuff I don't currently need
| gradually drifts to the back of the workbench.
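|
| In cache terms the bench is running something like LRU
| eviction; a toy version (class name and capacity are
| arbitrary):
|
|     from collections import OrderedDict
|
|     class Workbench:
|         """Keep recently used tools within reach; let the
|         least-recently-used ones drift to the back."""
|         def __init__(self, reach=5):
|             self.in_reach = OrderedDict()
|             self.reach = reach
|
|         def use(self, tool):
|             if tool in self.in_reach:
|                 self.in_reach.move_to_end(tool)  # pull it closer
|             else:
|                 self.in_reach[tool] = True
|             if len(self.in_reach) > self.reach:
|                 self.in_reach.popitem(last=False)  # drifts off the bench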
|
| Edit: the stopping and explore/exploit chapters mirror my
| career too
| Mezzie wrote:
| I do the same. My other rule is that wherever I look for it
| when I've lost it is where it belongs.
|
| It causes a fair amount of friction with housemates, though.
| Have you figured out any way to alleviate that when it comes
| to areas used by multiple people?
| RheingoldRiver wrote:
| That sounds like something I do; if I can't find something
| I don't think "where should it be" but rather "if I were
| going to put it down right now, where would I put it"
|
| Honestly, I'm still pretty shit at finding things, but this
| strategy has helped considerably.
| AmpsterMan wrote:
| In some sense you have a race condition. By taking the item
| and misplacing it, you've caused a deadlock. Solutions are
| kinda the same: have a copy of the item for every person
| that might use it, or be strict about freeing all locked
| resources.
| Mezzie wrote:
| In my case the problem is that I need things to be where
| they make sense for my brain or I lose them, and my
| sister (who I live with) has the type of anxiety that
| manifests as needing control over and having a 'tidy'
| space. So it does end up in a deadlock because she wants
| things all nice and 'organized' but then I can't see them
| and have no idea where they are.
| divan wrote:
| I read the book around 3-4 years ago and regularly return to
| the explore vs. exploit and randomness concepts. The 37% rule
| is something I regularly talk about as well, but mostly to help
| other people make sense of "should I continue looking or stop
| now" dilemmas (like searching for a flat, for example).
| fierro wrote:
| I am currently 35% of the way through all my Hinge matches,
| planning on proposing to the next girl I grab drinks with.
| rperez333 wrote:
| I read it years ago, and became more comfortable letting my
| inbox and my files get messier, relying more on search. I wish
| Google Desktop still existed, though.
| [deleted]
| raydiatian wrote:
| Wasn't that good of a book; read it twice.
| cratermoon wrote:
| This image alone is worth the read https://giphy.com/gifs/funny-
| how-task-iCFlLMvzDHIk0
| shubhamjain wrote:
| > Other animal behavior also evokes TCP flow control, with its
| characteristic sawtooth. Squirrels and pigeons going after human
| food scraps will creep forward a step at a time, occasionally
| leap back, then steadily creep forward again.
|
| > Caching gives us the language to understand what's happening.
| We say "brain fart" when we should really say "cache miss".
|
| Sorry, but how can anyone find this book insightful? Doesn't it
| sound dumb to anyone else? It seems as if the author made a
| list of a bunch of algorithms and filled up hundreds of pages
| with lazy analogies. Having read a bunch of similar books
| (classic self-help crap), I must say that these books are a
| giant waste of time. It reminds me of mental models: reading
| about mental models isn't going to magically make you smarter;
| you'll likely develop them on your own from experience. But
| hey, if it helps you, awesome. Just giving my two cents as a
| person who has largely become disillusioned with books like
| these.
| AYBABTME wrote:
| If you read it, you'll find it full of useful strategies to
| leverage in making better decisions in your life. It's also
| amusing for CS-educated folks because it's a fun application of
| the material to everyday life.
| [deleted]
| matsemann wrote:
| This isn't a self-help book, so your whole big rant misses the
| mark.
| leetcodesucks wrote:
| [dead]
| squidgyhead wrote:
| Sounds like the author is slowly discovering microeconomics?
| O__________O wrote:
| The Optimal Stopping Problem always confused me, since it seems
| to assume you're not aware of a meaningful measure of what an
| optimal match would be, yet are aware of what the optimal set
| of potential matches is.
|
| For example, say there's a goose looking for a mate and they
| only look at geese of the opposite sex, but in fact that
| specific goose's optimal mate type is a black swan. Maybe it's
| just me, but at the point you're able to limit yourself to a
| type X, you likely know that Y are the attributes that best
| define it.
|
| Am I missing something other than the obvious point that, as
| the selector, you are aware of a finite set and the spectrum of
| quality within it, but lack control over the order in which
| possible candidates are presented for selection?
| nighthawk454 wrote:
| It's not about optimal matches at all - it's about when to stop
| looking.
|
| The assumption is you don't know the set of potential matches,
| or the order they come in, or anything really. But there is a
| deadline for the decision (or a maximum number of attempts). So
| how do you balance making attempts to gather information with
| committing to a final decision so you don't run out of time?
| All else being equal, the rule is 1/e: spend the first 37% of
| your time/attempts gathering information, then commit to the
| next option that's better than anything you've seen so far.
|
| This doesn't guarantee a good match (or even a match!) but
| probabilistically the strategy is optimal.
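|
| A quick simulation of the rule, if anyone wants to see the
| success rate fall out (the scores are random stand-ins for
| candidate quality):
|
|     import math, random
|
|     def secretary(n, trials=100_000):
|         cutoff = int(n / math.e)  # look at the first ~37%
|         wins = 0
|         for _ in range(trials):
|             scores = [random.random() for _ in range(n)]
|             best_seen = max(scores[:cutoff], default=float("-inf"))
|             pick = scores[-1]  # forced to settle if nothing beats it
|             for s in scores[cutoff:]:
|                 if s > best_seen:
|                     pick = s
|                     break
|             wins += pick == max(scores)
|         return wins / trials
|
|     print(secretary(100))  # ~0.37: finds the single best that often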
| fierro wrote:
| exactly. The optimal stopping solution maximizes the
| probability you find the best candidate. That probability
| ends up being quite low, unfortunately.
| maxminminmax wrote:
| >but probabilistically the strategy is optimal.
|
| For what value function? It is basically never the case that
| my value function is "all choices other than the optimal are
| equally bad" -- which is what this rule is based on.
|
| As a personal opinion, this drives me up the wall. There is a
| great problem here, and there is a whole area (several of
| them, actually!) of applied math dedicated to it (Statistical
| Decision Theory, Reinforcement Learning, you name it).
| Instead we get this toy version -- which at best is an
| oversimplified intro to the subject, and at worst an excuse to
| bamboozle with math-fairy-dust -- brought out as some kind of
| rule "to live by". Your algorithm is bad, and you should feel
| bad.
| taeric wrote:
| I'm confused, isn't this literally one of the founding problems
| of "Statistical Decision Theory"?
|
| That is, this may be a simplified version of the problem,
| but it is a legit problem from that field. And the results
| being presented here don't disagree with the legit problem,
| do they?
|
| Now, is it a simplification of a simplification? Sure. I'm
| not clear on why it is as bad as you are putting forth,
| though.
| bumby wrote:
| I think they address this in a discussion about lacking full
| information.
|
| _" We don't have an objective or preexisting sense of what
| makes for a good or bad applicant; moreover, when we compare
| two of them we know which of the two is better, but not by how
| much."_ (p. 18)
|
| They then go on to explain stopping thresholds in the cases
| when you do have full information.
| chrisweekly wrote:
| ATLB is such an interesting and worthwhile book. My note-taking
| skills have improved a lot since I first read it maybe 6 or 7
| years ago... def time to revisit.
| haffi112 wrote:
| I also recommend other books by Brian Christian, especially The
| Alignment Problem.
___________________________________________________________________
(page generated 2022-12-30 23:00 UTC)