[HN Gopher] Heuristics that almost always work
       ___________________________________________________________________
        
       Heuristics that almost always work
        
       Author : mudro_zboris
       Score  : 297 points
       Date   : 2022-02-08 20:23 UTC (2 hours ago)
        
 (HTM) web link (astralcodexten.substack.com)
 (TXT) w3m dump (astralcodexten.substack.com)
        
       | adamc wrote:
       | The argument in the first example is just wrong. 1) His value
       | might be that it looks like there is a security guard on duty,
       | and that a) encourages customers, b) discourages robbers. A rock
       | cannot do that. 2) He could do that, but he probably won't,
       | because it's boring to just sit there. Once in a while, he will
       | walk around, looking, paying at least a little attention. It
       | makes robbery riskier.
        
       | jppope wrote:
        | One of the amazing subtleties in this article... if you're a fast
        | reader you'll stop paying attention to the stories as they go
        | on... kind of amazing that this article is layered like
        | that.
        
         | [deleted]
        
         | aksss wrote:
         | This article could be replaced by a rock with the words
         | "Heuristics are valuable but will bite you in the ass" chiseled
         | on it. /s
        
           | llaolleh wrote:
           | If he finished with that line with a photoshopped rock it
           | would've been great.
        
         | AlexCoventry wrote:
         | I developed a rock inside my head, and the rock said "THESE
         | PEOPLE WILL ALL FALL AFOUL OF RIGID DICTATES WRITTEN ON A
         | ROCK."
        
           | bentcorner wrote:
           | Second-to-last story should have been about a person that did
           | not fall afoul of the rock. Then in the last segment ask the
           | reader if they developed their own rock.
        
       | csee wrote:
       | This sounds like learned helplessness. I have seen it in
        | difficult R&D jobs where the hit rate from experimenting is very
       | small, and people subconsciously give up. That's why hiring
       | extremely motivated and intrinsically curious people for these
       | roles is important.
        
       | jiggawatts wrote:
       | My favourite one is seeing IT admins plan for "this almost always
       | won't fail."
       | 
        | Substitute "this" with SAN array, core switch, or entire data
       | centre.
       | 
        | I've had someone argue with me at length that simultaneous
        | multi-disk failures in a RAID5 never happen.
       | 
       | Two weeks later it did and the main SAN disk array went up in
       | smoke.
        
         | kqr wrote:
         | It is useful to remember that "almost never" is a synonym for
         | "sometimes".
        
           | wombatpm wrote:
           | "Scientists have calculated that the chances of something so
           | patently absurd actually existing are millions to one. But
           | magicians have calculated that million-to-one chances crop up
           | nine times out of ten."
           | 
           | Terry Pratchett
        
       | yupper32 wrote:
       | I don't understand how articles like this get upvoted. Who is
       | getting value from this?
       | 
       | This reads like some generic LinkedIn CEO post that sounds deep
       | on the surface but actually means nothing.
        
         | TulliusCicero wrote:
         | It seems to make its point pretty clearly to me: "just naysay"
         | is a strategy that is extremely effective/accurate in many
         | domains, but provides no actual value compared to people
         | attempting more honest evaluations or predictions.
         | 
         | I just realized it's possible I'm being whooshed by your
         | comment.
        
           | slothtrop wrote:
           | I don't think he's being meta, just obtuse.
        
         | scoofy wrote:
          | I mean, this is a weird example, but I feel like it's like
          | Scott Aukerman says in Comedy Bang Bang, when he introduces
          | the show. He'll do the welcome, and explain the show, and
          | inevitably some guest will retort that this is silly, since
          | everyone knows what the show is about, but he constantly
          | responds with: "every episode is somebody's first episode."
         | 
          | These ideas should be obvious to anyone who's studied
          | advanced statistics, or formal logic, or read some books about
          | extreme events, but the fact is, not everyone... better yet,
          | _most people_ have never studied these things.
         | 
         | These ideas are inherently interesting, and every year, there
         | are new people coming of age that are introduced to these
         | interesting ideas via an article like this, and then it'll get
         | upvotes. The world is like a fire hose of young people. Add in
         | a popular author who will likely get attention anyway, and here
         | we are at the top of the feed.
        
           | yupper32 wrote:
            | But it's _not_ interesting, it's _not_ introducing anyone to
            | advanced statistics or formal logic, it's _not_ showing any
            | real world uses that can be applied by anyone coming of age.
           | 
           | It's just generalized parables by someone not in any of the
           | fields or positions mentioned, some weak conclusions, and a
           | "Heuristics That Almost Always Works" book title.
        
             | scoofy wrote:
              | I mean _most people I know_ that are "very smart people",
              | including myself, regularly exercise the cool customer vibe
              | of "yea right, that'll never happen." I think it's a good
              | parable. I mean, it's not really important, and I didn't
              | learn anything, but it's a good reminder that "probably
              | not" is a lot different than "definitely not."
        
         | pjc50 wrote:
          | Everyone thinks _they're_ the person who's right that 1% of
          | the time about the thing everyone else is conservatively wrong
          | about.
        
       | dwohnitmok wrote:
       | The last sentence of the edit the author provides is the key
       | insight of this piece.
       | 
       | > the existence of experts using heuristics causes predictable
       | over-updates towards those heuristics.
       | 
       | That's the essence of this piece. If you expect that consultation
       | with experts will leave you with a more accurate picture of
       | things than before consultation, you should first be sure that
       | their heuristics are not equivalent to reading a rock with a
        | single message painted on it; otherwise, no matter what, your
        | conclusions will be biased towards that rock. "X is an expert and
       | X says Y is good so I should have more confidence that Y is good
       | than before" is not a useful conclusion if that conclusion came
       | from X looking at a rock that says "Y is good."
       | 
       | The Queen example in particular, but all of the others as well,
       | is a warning that looking only at the accuracy of predictions is
       | not enough to avoid this problem. In order to make sure that
       | those predictions are useful for yourself, you have to ensure
       | that those predictions actually incorporate new information.
        
         | crent wrote:
          | Which is the rub, right? How can a non-expert reasonably
          | conclude whether an expert's prediction is baseless or
          | actually solid/insightful?
        
           | bigodbiel wrote:
            | consult another expert and hope his heuristics are different
            | from the previous one's, ad infinitum (or you yourself end up
            | creating a meta-heuristic of their heuristics)!
        
       | rthomas6 wrote:
       | This is what Nassim Nicholas Taleb has been writing books about.
       | He calls them black swan events, because if you took a sample of
       | 1000 swans, chances are you'd conclude that all swans are white,
       | but it just isn't so. People tend to round down the probability
       | of very rare events to zero, even when the upside of them is
       | small and the downside is catastrophically bad. Examples: the
       | 2008 housing crisis, Fukushima, and our current supply chain
       | problems.
        
         | CobrastanJorji wrote:
         | If you get to the end of the article, the author explains why
         | black swan events are not what he is talking about.
        
         | scoofy wrote:
          | These are NOT black swan events. These are probably all White
          | Swan events (possibly grey swan events, but I'd have to review
          | stuff that I don't want to right now). E.g. high certainty,
          | just low predictability. From the book, when you know the
          | statistics of a rare event, and then the event occurs, it's
          | absolutely not a black swan event.
         | 
          | For an event to be a Black Swan event, you literally need to
          | have no possibility of the event in your deductive framework
          | (e.g. _the problem of induction_, which is what the book is
          | actually about). In every single one of these examples, the
          | possibility of the event occurring is accepted by everyone.
         | 
         | This is why Taleb _lost his mind_ when people started calling
          | the Covid Pandemic a "black swan event," which it was
         | absolutely not. We know pandemics happen, we know about what
         | power law they happen at. The fact we were not prepared at all
         | is a problem of not being prepared for something _we know will
         | happen with certainty_.
         | 
         | https://medium.com/incerto/corporate-socialism-the-governmen...
        
           | PaulHoule wrote:
           | When I was doing a postdoc in Germany I would go eat
           | mushrooms in Amsterdam and have the same trip about
           | "exceptional events" over and over again.
           | 
            | Then Taleb wrote that book and I wished I'd written something
           | about "exceptional events".
           | 
           | Then Taleb just coasted, drifted and became irrelevant.
        
             | scoofy wrote:
             | Lol, sure, starting Universa after basically inventing
             | tail-risk trading strategies is definitely "becoming
             | irrelevant."
        
           | XorNot wrote:
           | We know pandemics happen but we have no idea which viruses
           | will become pandemic viruses - until one emerges, we're
           | generally confident that there is not an imminent pandemic.
        
         | NewEntryHN wrote:
         | I think it's what he calls "fat tails" : events happening at
         | low frequency at the tail of the probability distribution, but
         | which have a significant impact.
        
         | ppsreejith wrote:
         | IIRC, a black swan event highlights the problem of induction
         | when there is _no_ prior event that you can use to learn from.
         | Eg: humans thought all swans were white until the first black
         | swan was encountered. So not just exceedingly rare but outside
          | what you know AND extremely rare. Eg: Neither the housing
          | crisis, Fukushima, the supply chain crisis, nor the pandemic are
          | black swans (they were seen before and were predicted/theorised),
          | but the internet is.
        
         | stevage wrote:
         | It's such an annoying metaphor for those of us who live in a
         | country where all swans are black.
        
           | pessimizer wrote:
            | It's an annoying metaphor anyway. If you've defined a swan as
            | a particular type of white bird, it's impossible for a black
            | swan to ever come. "Black swan" is just a tautological term
            | for _new thing we've never seen before_, but pretending to
            | be a term for _known thing that suddenly behaved
            | differently._
           | 
           | Sometimes things happen that, in order to make money or cut
           | costs, we convinced people were impossible.
        
         | dan-robertson wrote:
          | Maybe you're right, but I don't see how this addresses the
          | reply to (at least when I loaded it) the first comment on the
          | post, which claims the same thing and is disagreed with by the
          | author.
        
       | [deleted]
        
       | the_af wrote:
       | One interesting twist on the doctor example: we know she is
       | almost always right that it's nothing, that it will all go away
       | and that just two aspirins are ok. The article correctly points
       | out that she will miss one or two cases where it was a terrible
        | disease instead, and that her prescription of aspirin and rest
        | will be misguided.
       | 
       | However... doctors must do more than just cure you. They must
       | also "do no harm"; in fact that is (or should be) their default.
       | What if she intervened more directly in more cases, maybe poked
       | and prodded and recommended more invasive treatments? She would
       | get more cases wrong in the opposite direction (recommending a
       | potentially invasive or even harmful treatment when some rest and
       | an aspirin would have sufficed), maybe resulting in accidental
       | death through _action_ rather than _inaction_.
       | 
       | She must be alert, but hers is a good default/heuristic. It's not
       | the same as a rock with "TAKE ASPIRIN" written on it.
       | 
       | And this is just an example. I think the heuristics that work
       | 99.9% of the time do so because they _do indeed work_. Erring in
       | the opposite direction can, in some cases, be also harmful.
        
         | klik99 wrote:
         | Good point - he assumes that false positives are always less
         | costly than false negatives, but that may not be true in this
         | example.
        
         | alexfrydl wrote:
         | The problem is that people are not statistics. It may sound
         | reasonable on the surface to say that this heuristic minimizes
         | harm on average because she doesn't perform unnecessary
         | interventions on the 99.9%. However, there are still actual
         | human beings in the 0.1% who are harmed. What you're really
         | saying is that if a group of people is small enough, it's fair
         | for them to suffer preventable harm if preventing it would
         | expose the larger group of people to risk.
         | 
         | I'm not going to argue about whether that is true or not,
         | because I think that clearly depends on many factors and may be
          | unanswerable. But I'm a member of a minority group that is
          | often denied health care, and it is often denied for this very
          | reason: if the wrong person is prescribed the treatment, it is
          | harmful.
         | I'm just saying that when you're in the 0.1%, it can be
         | difficult to accept the idea that you have to sacrifice
         | yourself because someone in the 99.9% might be at risk
         | otherwise.
        
           | pessimizer wrote:
           | You have no way to know if you're in the 0.1%. It's not
           | written on your body anywhere. So if an early test can save
           | 1/1000 from dying, but the false positives from an early test
           | kill 3/1000, false positives are more dangerous than the
           | disease you may or may not have.
        
         | pessimizer wrote:
         | I think this is a little different - it's "doing the math." The
         | best contemporary example I think is early/frequent
         | mammograms/checks for prostate cancer. If we check how often
         | what we can detect will develop into a threat to health, and
         | compare that to the consequences of treatment at that early
         | stage, we may determine that under some conditions the results
         | of our diagnostics kill more people than the disease would, and
         | therefore we shouldn't do them.
         | 
          | That's different than not treating people at all - _even if
          | it's not treating people at all_ - because the reason you're
          | not doing it is that your diagnostics and treatments are
          | inadequate.
        
         | robomc wrote:
          | Yeah, this has literally been found to be the case with back
          | injuries, I think?
        
         | AlexCoventry wrote:
         | Ironically, the author is a medical doctor.
        
         | mherdeg wrote:
         | I liked Bob Wachter's story about how taking aspirin
         | accidentally revealed early stomach cancer:
         | https://twitter.com/Bob_Wachter/status/1447972627956391941
        
         | derbOac wrote:
         | This article is a little misleading because it conflates a few
         | things.
         | 
         | The low base rate prediction problem is a problem not just
          | because of lazy application; it's because the numbers make it
         | impossible to do anything else in some situations. With a low
         | enough base rate, you have to have a preternaturally good
         | indicator to make anything but a negative prediction.
         | 
         | Then you have to resort to utility theory and decide if false
         | positives are worth the cost.
         | 
         | Incidentally, the hiring example is poor because it's just not
         | the same situation at all. The fact he's equating it to the
         | other scenarios maybe says as much about the real problem as
          | the scenario itself does.
        
       | travisporter wrote:
       | > But then you consult a bunch of experts, who all claim they
       | have additional evidence that the thing won't happen, and you
       | raise your probability to 99.999%
       | 
       | I lose the line of reasoning here - 99.9 to 99.999 doesn't happen
       | if you don't have new evidence, so why would you raise your
        | probability? Or maybe I'm being too literal?
        
         | doubleunplussed wrote:
         | You _think_ the supposed experts have additional evidence, so
          | you're treating their claim to have evidence as evidence in
         | itself. You're unaware that they have no more evidence than you
         | already did.
        
       | Imnimo wrote:
        | The flip side of this is that if the rock is 99.9% right, and
       | the high-minded rationalist is only 99% right, what does that say
       | about the value of his rationality?
        
         | josephcsible wrote:
         | What if the rock's errors are all false negatives, the high-
         | minded rationalist's errors are all false positives, and false
         | negatives are 100 times worse than false positives?
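          | 
          | Worked out (a sketch; the 100x ratio is the hypothetical
          | above, and the unit cost of a false positive is my
          | assumption):
          | 
          |   # expected cost per case = error rate * cost per error
          |   rock = 0.001 * 100        # 0.1% errors, all false negatives
          |   rationalist = 0.01 * 1    # 1% errors, all false positives
          |   print(rock, rationalist)  # 0.1 vs 0.01: the rock costs 10x more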
        
       | wly_cdgr wrote:
       | The security guard still provides value because he deters would-
       | be robbers who have no way of being certain that he is using the
       | wind heuristic
        
       | etchalon wrote:
        | Generally agree with the article, though the reverse is
        | obviously "Heuristics which almost never work", and all the
        | stories are about the 1 out of 100,000 times the weirdo was right
        | and everyone lauded them as a genius, but really they just got
        | lucky the once.
        
         | ajuc wrote:
          | The reverse is a heuristic that works 50% of the time.
          | 
          | If you have a heuristic that works 0.001% of the time, it's
          | almost as good as one that is correct 99.999% of the time. You
          | will notice and learn to just invert it.
        
         | swagasaurus-rex wrote:
          | There are 100,000 or more weirdos out there claiming all sorts
          | of spurious stuff; it's only natural one of them would be right.
        
       | bryanrasmussen wrote:
       | >The only problem is: he now provides literally no value. He's
       | excluded by fiat the possibility of ever being useful in any way.
       | He could be losslessly replaced by a rock with the words "THERE
       | ARE NO ROBBERS" on it
       | 
       | except for the value of having a security guard visible so that
       | 99% of the robbers who might conceivably want to rob a Pillow
       | Mart decide to go rob Quilting Heaven down the road instead.
        
         | soheil wrote:
          | It's not 99% of the robbers, you're exaggerating. Maybe at best
          | you fool 5% of them; after all, if they were dumb enough to be
          | fooled so easily, they would have been caught by now. But now
         | you have diminishing returns. Just like you, they know a
         | security guard isn't effective, ... and down the game theory
         | rabbit hole we go.
        
         | wahern wrote:
         | A coworker and I were once stuck in an office building for an
         | hour or two. We were working as consultants at a client's
         | building and ended up working rather late. Not particularly
         | late by software programmer standards, but clearly
         | exceptionally late by the culture of the client company.
         | 
         | At some point in the evening all the exit doors, including the
          | front door, became armed, and this became conspicuously
          | apparent when we packed up for the night and tried to exit to
          | the parking lot and realized we couldn't open the door without
          | an alert being sent to the police (not just the security
          | company).
         | There should have been a guard at his station (desk, CCTVs,
         | etc) in the entryway, but we found none.
         | 
          | We waited for a while. Then we walked up, down, and through
         | every corridor and restroom of that 4-5 story building,
         | multiple times, looking for the guard. When that failed, we
         | called the security company to ask them if it was okay to open
         | the door. They swore there was a guard on duty and asked us to
         | wait a little longer in case he was doing rounds. Despite
         | knowing that couldn't possibly be the case, we obligingly
         | passed more time waiting in the entryway. Then we walked up,
          | down, and around the building _again_, but this time splitting
         | up and shouting. Nothing. Nobody.
         | 
          | We went back down and informed the security company that we
          | weren't going to wait any longer and that we'd be triggering
          | the silent alarm as we left. And guess who exited the elevator
          | just as we were about to open the door.... Apparently he had
          | been sound
         | asleep in a cozy nook somewhere in the upper floors--presumably
         | in a conference room or more likely a private office, the
         | former being something we inspected in passing (glass walls),
         | the latter we didn't feel comfortable opening and entering, and
         | both being the last place you'd expect to find a security
         | guard. IIRC, he wouldn't admit it outright, but just played
         | coy. We weren't mad. A little tired and frustrated because as
         | consultants we still had to get in early the next morning, but
         | that was mostly offset by the sheer absurdity of the situation,
         | and by the fact that he seemed quite elderly.
         | 
         | Anyhow, you may assume too much if you assume the security
         | guard actually maintains some kind of useful presence. I guess
         | these days it's more common to have electronic way stations to
         | log a guard doing rounds. I dunno if this building had such
         | measures (this was circa 2001-2002), but as the sole guard he
         | probably was expected to spend most of his time, if not all of
         | his time, manning the security desk, providing ample
          | opportunity to be doing something else instead.
        
           | wheybags wrote:
           | That security guard was performing the important function of
           | allowing the management to legally tick the "we have a
           | security guard" box on the insurance form.
        
         | nabla9 wrote:
         | Simple metaphors and examples always have holes in them. They
          | exist to illuminate the point the author is making. They are not
         | the argument.
        
           | thehappypm wrote:
           | Yeah this was a particularly weak one though. The doctor
           | example was much stronger.
        
         | PaulHoule wrote:
         | Instead of a rock you just need an inflatable security guard.
         | 
         | What's frightening about all this is that this article has
         | gotten 15 upvotes despite 100% of the comments so far being
         | about what a pointless article this is.
        
           | PoignardAzur wrote:
           | People who agree upvote and have nothing more to say, people
           | who disagree come to post their objections.
        
           | sjg007 wrote:
           | Or security cameras.. or a drone..
        
           | joe_the_user wrote:
            | I didn't unvote it, and I think each of the examples is missing
            | subtle aspects.
            | 
            | But I think it's interesting enough to discuss. The main
            | thing is that there are a whole lot of human activities where
            | one can imagine completely rote activity replacing thinking.
            | But in all of these, a deeper look shows that subtle factors
            | actually require a human being to be present.
           | 
           | It's a bit like self-driving cars. 90% of driving is really
           | easy to get working. 99% is moderately hard. 100% looks like
           | it won't arrive for quite a while.
        
           | soheil wrote:
           | Are you saying 15 votes isn't a lot? Also 100% of 2 comments
           | is still 100% of the comments.
        
             | PaulHoule wrote:
             | It's a lot of votes for something for which the best
             | analysis is something like "move along folks, nothing to
             | see here..."
        
           | bryanrasmussen wrote:
            | Yeah, I don't know, I think people might spot the difference.
            | But maybe most of these robbers have a method that says if
            | you see anything like a security guard, don't figure out if it
            | is one, just go rob somewhere that definitely doesn't have
            | one, because there's always somewhere else to rob - but maybe
            | one of the groups of robbers is better than everyone else and
            | they have a real robber mastermind in charge who determines
            | the security guard is inflatable. Then it's on.
           | 
           | on edit: added in missing two words that clarified meaning.
        
           | MattGaiser wrote:
           | People mostly seem to object to the security guard example as
           | missing a part of the analysis.
        
             | isleyaardvark wrote:
              | All of them are missing a big part of the analysis: that the
              | alternative is often not "a rock" but a person with a
              | heuristic that works way less than 99.9% of the time. A
             | security guard that constantly harasses shoppers, a
             | "futurist" that buys into every new fad, the conspiracy
             | theorist that believes everything on Facebook.
             | 
              | Would anyone really think a weatherman that had, say, a 70%
              | correct heuristic was good? Or go to a doctor like that?
        
             | moffkalast wrote:
             | Seems like the article writer has fallen for the very issue
             | they're trying to portray, by writing so many examples that
             | they've stopped thinking and just resorted to writing
             | "PERSON CAN BE REPLACED WITH ROCK". If this isn't ironic
             | then I don't know what is.
        
               | PaulHoule wrote:
               | "Rationalism" is so ironic you might say it is an iron or
               | at least made of iron.
        
             | PaulHoule wrote:
                | I suspect a lot of them are stopping when they see that
                | one example is bogus and aren't bothering to poke holes in
                | the rest. Readers have their own heuristics, but that
                | doesn't mean the rest of the examples aren't full of holes.
        
               | ravi-delia wrote:
               | It's really not about several examples of the same thing,
               | but interrogating our intuitions around each one. I at
               | least found I felt differently about different heuristics
               | being replaced by rocks.
        
       | claudiulodro wrote:
       | Specifically around the security guard example, the mere presence
       | of a security guard should deter thieves (in theory), so I think
       | the analysis is a little more nuanced than "security guards
       | investigate weird noises".
       | 
       | Unless this all went over my head and that's all sort-of the
       | point of what he's getting at . . ?
        
         | PaulHoule wrote:
         | He's not talking to you, he's talking to the other people in
         | his pod.
        
       | zwieback wrote:
       | Heuristics? For amateurs and lazy folk. We find metrics that
       | reliably measure something, find or make a gauge that measures
       | that metric, then make sure our gauge stays good over time by
       | doing R&Rs or recalibrations.
        
       | Traster wrote:
       | I feel like a lot of this is just... bullshit. Yes, if a security
       | guard literally decided to ignore every sound as the wind they
       | would probably be useless - as long as their role is literally
       | only fast response. Except they don't, and it isn't. Their role
       | is just as often to politely escort someone out, or remind
       | someone of their manners. So there's that but also, maybe over
       | time the security guard doesn't ignore everything as wind, maybe
       | the security guard...learns quite well what wind is like. If I
       | hear a vague rumbling on a monday I know it's my neighbour
       | bringing the bins out. Similarly, the security guard probably can
       | go from 5 false positives a day to 0.01 false positives a day
       | without increasing their false negatives at all.
       | 
       | It's like, it's a nice thought, but if you really apply attention
       | to it, it doesn't hold up for long.
       | 
       | >Fast, fun to read, and a 99.9% success rate. Pretty good,
       | especially compared to everyone who "does their own research" and
       | sometimes gets it wrong.
       | 
       | I would wager the "do your own research" crowd hits much lower
       | than 99.9% success rate, so what's the argument here? The person
        | who accepts the mainstream view - which is apparently what this
        | article calls a skeptic - is no better than a rock. But
        | actually, the people who contradict the skeptic are no better
        | than a rock thrown in a glass house.
       | 
       | In my experience though, the skeptic is far more likely to be a
       | "skeptic" of the mainstream view and to advertise themselves as
        | that (see: literally thedailyskeptic.org) and in reality,
       | they're taking the bad side of the bet, by this logic you're
       | 99.9% right by dismissing contrarians and accepting the
        | mainstream view. If you're constantly endorsing the contrarians,
        | you're basically taking the 0.1% side. And this isn't theoretical,
       | the skeptics I listed earlier literally posted an article today
       | telling us how global warming is fine, because the earth was
       | warmer... 50 million years ago when no human life was viable. The
       | problem with skeptics isn't their skepticism, it's where they
       | choose to apply it.
       | 
       | There is value in there being contrarians, in order to hold
       | people's feet to the fire, but that doesn't really apply if
       | they're just bringing up dumb arguments and are no more complex
       | than the people they're questioning.
        
       | toiletfuneral wrote:
        
       | PragmaticPulp wrote:
       | This article uses a trope that is frustratingly common in
        | rationalist articles: It sets up oversimplified straw-man
       | versions of real world scenarios and then knocks them out of the
       | park.
       | 
        | In the real world, a doctor who ignores the complaints of every
        | patient will quickly find themselves the subject of malpractice
       | lawsuits. Not by _every_ wronged patient, but it only takes one
       | or two angry patients with lawyers in the family to cause huge
       | problems. Malpractice insurance is expensive for a reason.
       | 
       | Real-world security guards do a lot more than just catch robbers
       | in the act. I've had security guards catch employees trying to
       | remove company assets late at night, catch doors left open,
       | notice faulty security mechanisms that need to be repaired (e.g.
       | door sticks open), and so on. Not to mention the presence of a
       | security guard is a huge deterrent for getting robbed in the
       | first place.
       | 
       | And so on. Yes, there are situations where you can get away with
       | betting on the most common outcome for a while, but unless the
       | people around you are all oblivious then eventually they'll
       | notice.
        
         | mediaman wrote:
         | And for the robber, crime is a lot more common than he seems to
         | think it is. Metal theft, for example, is extremely frequent:
         | it is rare that an industrial facility would have so little
         | theft that such a proposed heuristic would be reasonable.
         | 
         | If crime were that low, then the guard's position doesn't exist
         | anyway.
         | 
         | So then we're just left with an empty thought exercise with no
         | relation to reality, as an argument for how we should think
         | about reality.
        
         | operator-name wrote:
          | I don't think there's any need to take the examples in the
          | article hyper-literally. You can call them strawmen if you'd
          | like, but simplification for the sake of presenting a point
          | (which you seem to have ignored) is necessary.
        
           | banku_brougham wrote:
           | aka exactly what straw man means
        
       | DiggyJohnson wrote:
        | Great article, really enjoyed it. I second the Taleb connection.
        | 
        | But anyways... Does anyone else struggle with Substack's
        | typeface, specifically its width and spacing between characters?
        | I'm a bit of a typeface nerd, and I genuinely like or enjoy most
        | of our common fonts. Substack is the only site where I find the
        | typeface significantly affects the reading experience.
        
         | AlanYx wrote:
         | I tend to agree with you about the typeface they use, Spectral.
         | I think it's a combination of factors: loose tracking designed
         | to permit slightly over-emphasized differences in weight (which
         | were a design goal for the font) and somewhat selective
         | replacement of inktraps from the inspiration face (Elzevir)
         | with angular serifs, which creates a bit of a visual
         | discordance. I don't find it unreadable, personally, but it
         | does call attention to itself.
        
       | PaulHoule wrote:
       | You'd do a lot better with one strong example rather than seven
       | weak examples.
       | 
       | For instance, if you are interested in Bayes Theorem like a lot
       | of rationalists say they are, you could talk about the medical
       | test which is 99.99% accurate but for which 90% of the positives
       | are false positives.
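        | 
        | A minimal sketch of that arithmetic (assuming "99.99%
        | accurate" means sensitivity and specificity are both 0.9999;
        | the prevalence below is chosen, by me, to make 90% of
        | positives false):
        | 
        |   def ppv(prevalence, sens=0.9999, spec=0.9999):
        |       # P(disease | positive test), via Bayes' theorem
        |       true_pos = sens * prevalence
        |       false_pos = (1 - spec) * (1 - prevalence)
        |       return true_pos / (true_pos + false_pos)
        | 
        |   print(ppv(1 / 90_000))  # ~0.10, i.e. ~90% of positives are false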
        
         | will4274 wrote:
         | > You'd do a lot better with one strong example rather than
         | seven weak examples
         | 
         | Tend to disagree. It's easy to dismiss one example as "well,
         | medicine is special because XYZ." Multiple examples are the
         | core aspect of showing a general pattern.
         | 
         | He could probably have stopped at 3, 4, or 5 though, not 7.
        
         | Corence wrote:
         | Fortunately I have a rock that says "the next example will make
         | the same point as the example I just read" so I saved a bunch
         | of time.
        
         | robbedpeter wrote:
         | Drug testing is a good example for most people to understand
         | why Bayesian thinking is relevant.
         | 
         | https://www.mun.ca/biology/scarr/4250_Bayes_Theorem.html
         | 
          | Imagine that a driver gets hit in an accident. He's tested as
          | part of company policy, and tests positive. He gets fired, even
          | though the test only really tells us there's a 33.2% chance he
          | was actually using the drug.
         | 
         | Real world drug tests are a lot worse than 1% false positive
         | and false negative rate.
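          | 
          | The 33.2% falls out of Bayes' theorem. A sketch with the
          | textbook numbers from the linked page (the 0.5% prevalence
          | and 99% sensitive/specific test are assumptions, not
          | measured values):
          | 
          |   prevalence = 0.005       # 0.5% of drivers actually use the drug
          |   sens, spec = 0.99, 0.99  # a "99% accurate" test
          | 
          |   p_pos = sens * prevalence + (1 - spec) * (1 - prevalence)
          |   print(sens * prevalence / p_pos)  # ~0.332: 33.2% chance he used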
         | 
         | Every time someone gets fired for a positive test, or loses
         | custody of their kid, or so on, it reinforces whatever
         | statistics are being collected as if the test were a ground
         | truth. They're hardly ever questioned, and there's usually no
         | recourse without an expensive legal fight.
         | 
          | The false positive rate for drug dogs is higher than 40%, by
          | contrast. When a dog "alerts", it's worse than a flip of a coin.
         | All that matters is if an officer feels like fucking up your
         | day.
         | 
         | Testing used in situations that are legally significant in
         | people's lives should be required to reach a statistically
         | valid threshold of accuracy, like 99.999% of the times this
         | process is performed, it matches reality. A high sensitivity
         | and high specificity aren't enough, but they're framed as
         | highly accurate and reliable by often well intentioned people
         | who simply aren't thinking in a Bayesian way.
        
           | moralestapia wrote:
           | >All that matters is if an officer feels like fucking up your
           | day.
           | 
           | This is what most people don't seem to get. Devices like the
           | ADE 651 or the GT200 were bought by the thousands by law
           | enforcement agencies worldwide, not because they were stupid,
           | but instead, so they could have another "data point" against
           | you that they can use at their discretion.
           | 
           | "Sorry, this dot blinked three times so I'm gonna have to
           | detain you: It's standard procedure, I'm only doing my job."
        
       | Waterluvian wrote:
       | I am completely in love with how this article introduces itself.
        
       | gegtik wrote:
       | What I'm getting here is we should listen to the 0.1% when
       | they're right.
       | 
        | I'll leave it up to the reader to determine when that is.
        
       | s1artibartfast wrote:
        | I think a lot of folks are missing the point (and the author
        | could have been clearer).
        | 
        | The point is not that heuristics are bad, but that we should
        | think about when heuristics should be used and what value they
        | add.
       | 
        | In the examples, heuristics shouldn't be used to reduce a
        | probabilistic occurrence to a binary prediction before deciding
        | to act. Decisions should be informed by the actual data when
        | available. Application of a heuristic results in a loss of
        | information, which reduces accuracy and applicable scope.
        | Sometimes this can entirely defeat the purpose.
       | 
       | Perhaps the recommendation is that if you are tempted to use a
       | heuristic, stop and ask if it is necessary, and what you stand to
       | gain from using it instead of other data or new analysis.
        
       | banku_brougham wrote:
       | Repackaged straw men, in the form of several narratives that
       | describe the exact same thing.
        
         | pdonis wrote:
         | I stopped reading the examples after the second one and
         | scrolled straight to the end, to confirm my suspicion that
         | there was actually no useful information in the article at all.
        
       | leto_ii wrote:
       | I haven't gone through the whole thing, but the point seems
       | belabored and superficial tbh.
       | 
       | I imagine the whole piece is essentially a comment on having a
       | discriminator with great true negative rate and terrible true
       | positive rate in a context where there is a large class imbalance
       | (very rarely do positives occur). In real life this is quite easy
       | to account for (just fill in your confusion matrix and see how
       | you stand). I also strongly suspect that it doesn't really happen
       | that much. People do have a conscience, professional pride etc.
        | At the very least they will get bored and actually do something
        | different from time to time.
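        | 
        | A sketch of that confusion-matrix check (numbers invented to
        | match the article's 99.9% setup):
        | 
        |   n, positives = 100_000, 100   # 0.1% base rate
        |   # the "rock": predict negative every single time
        |   tn, fn = n - positives, positives
        |   accuracy = tn / n             # 0.999 -- looks impressive
        |   recall = 0 / positives        # 0.0 -- it never catches anything
        |   print(accuracy, recall)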
       | 
       | > The Security Guard [...] The only problem is: he now provides
       | literally no value. He's excluded by fiat the possibility of ever
       | being useful in any way. He could be losslessly replaced by a
       | rock with the words "THERE ARE NO ROBBERS" on it.
       | 
       | At the very least such a security guard would act as a human
       | scarecrow. More realistically, the guard would actually look for
        | robbers from time to time, if for no other reason than because
       | he misses them he might be out of a job.
       | 
       | > The doctor [...] "It's nothing, take two aspirin and call me in
       | a week if it doesn't improve"
       | 
        | In my experience this sums up the Dutch medical system (the
        | country where I currently reside) quite well :)) Somehow they
        | manage to have good health results.
       | 
       | EDIT: moved concluding paragraph up.
        
       | born-jre wrote:
        | No, we are not going to die, hit by some asteroid. Believe me.
        
       | tempestn wrote:
       | I agree with this, with the caveat that you don't want to fall
       | into the opposite trap of believing the people who just use the
       | contrarian heuristic.
        
       | somethingAlex wrote:
       | I'm surprised how many people are just nitpicking the examples
       | like they are supposed to be rigorous analogies.
       | 
       | The point that I walked away with is that oftentimes experts use
        | these same heuristics even when people assume they aren't.
       | People think that experts don't have to use them because they
       | have better tools and skills at their disposal. However, for
       | reasons involving human factors, they oftentimes do use them.
       | Finally, these opinions then get thrown into the body of evidence
       | as if they are ground truth values.
        
       | moralestapia wrote:
       | One could add:
       | 
       |  _The Barking Dog_
       | 
        | It barks at everything all the time, so people learn to ignore
        | it. Then, when there's real danger, no one cares; the dog
        | provides literally no value and becomes only an annoyance.
       | 
        | It's kind of like a dual of the security guard one.
        
         | kuratkull wrote:
         | I think the best known illustration for this concept is
         | actually called "The boy who cried wolf"
        
           | moralestapia wrote:
            | Yes, but one important difference is that in OP's examples
            | there is some sort of system that you put in place for a
            | specific purpose, and it fails to serve that purpose in
            | practice.
           | 
           | It would be akin to that lying boy being the officially
           | appointed wolf-spotter for his village.
        
         | myfavoritedog wrote:
         | That's only true if the burglar knows that the owners ignore
         | the dog's barking.
        
       | verisimi wrote:
       | This is an interesting article. In fact I really do think _all_
       | any of us do is heuristics - where  "A heuristic is a mental
       | shortcut that allows an individual to make a decision, pass
       | judgment, or solve a problem quickly and with minimal mental
       | effort."
       | 
       | I think this is how all of us are when we are doing something.
       | 
        | I also think this existence is a free-will experience.
       | 
       | That might seem incompatible, but I think our free will is
        | engaged in selecting which heuristics we choose.
       | Alternatively, you can say we are programmable creatures, but we
       | get to choose what programming we run.
       | 
       | I would also say, that the heuristics/programming we mostly run
       | is that which has been provided to us by default - a consequence
       | of our education and situation.
       | 
       | Not many of us take the time to review our programming - or to
       | engage with the more 'meta' elements of our experience. I
        | daresay that the principles we run are not really coherent. Eg,
       | we are reasonable, we do the right thing, we also take shortcuts
       | to get what we want, or save time, etc. These are examples of
       | principles that cannot all be true!
       | 
       | If you were to ask me, the best thing we can do in this life is
        | to consider our values. Are they truth and reason, personal
        | gain, saving time, etc.? What means most to us? And then review
        | the heuristics/programming we run according to those values. At
        | least
       | we have a foundation for our programs that we selected ourselves,
       | rather than running the programs that were provided to us by
       | default!
        
       | efitz wrote:
       | A big straw man.
       | 
       | In real life some people are more or less diligent about their
       | jobs, and more or less contrarian, and have different expertise,
       | strengths and weaknesses.
       | 
       | Each of the vignettes portrays the counter position as stupid
       | (literally using a rock as a metaphor).
       | 
        | The reality is much different. In each of the cases there's an
        | argument to be made that the proposition was flawed - the security
        | guard never finds anything, but instead of just not looking
        | anymore, maybe they propose installing cameras. The
       | volcanologists aren't very helpful if they don't have predictive
       | value - if they are always waffling then they are no more useful
       | than the rock cult. And if they are over-activated, then they run
       | the risk of "boy who cried wolf" or of being dismissed because
       | too frequent false positives cost the rest of the society too
       | much.
       | 
       | Overall I think the essay is shallow and not a useful treatment
       | of the subject.
        
         | ravi-delia wrote:
         | I think you're correct in thinking the situation is
         | complicated, and wrong in thinking the author disagrees! Each
         | situation is subtly different, and some seem more wrong than
         | others. Regardless, a security guard recommending cameras is
         | different from a rock precisely because they suggested that!
        
         | PaulHoule wrote:
         | What I really don't get is how this article is getting so many
         | upvotes but the comments are unanimous about how vacuous it
         | is... Unless Aella asked her "fans" to vote it up or something.
         | (I for one manufacture my flying monkeys with a statistical
         | model but it's been a really long time..)
        
       | AnimalMuppet wrote:
       | This is like the turkey on the day before Thanksgiving. "They fed
       | me today. They fed me yesterday. They fed me the day before that.
        | _Of course_ they're going to feed me tomorrow!" Except they
       | aren't.
       | 
       | So, yeah. Our heuristics fail on black swan events. There needs
       | to be a balance between "trust your heuristics" and "watch out
       | for black swans".
        
       | prakhar897 wrote:
        | 1. I think the author forgot the term "Anomaly Detection" and is
        | trying to reinvent it. Also, anomaly detection, or "sensing
        | something is wrong", is one of the most basic human instincts.
       | 
       | 2. >By this time they were 100% cultists, so they all consulted
       | the rock and said "No, the volcano is not erupting". The sulfur
       | started to smell different, and the Queen asked "Are you sure?"
       | 
        | Even the queen deciphered that something was wrong without any
        | volcano knowledge. The author himself is providing an example of
        | human instinct without acknowledging it.
       | 
        | 3. The author assumes all guards are the same when in fact they
        | are all different individuals. Sure, most of them might be lousy
        | at their jobs, but there will be some who understand how rare the
        | "robbery event" is and so will still look when there's a sound.
       | 
        | 4. The examples suffer from a cold start problem. What if a
        | robbery happens in a new guard's first month? Will he still be
        | asleep? If not, then Utility (hiring a guard in all cases) >
        | Utility (not hiring a guard).
       | 
        | 5. As another commenter mentioned, the value of having a
        | security guard visible is that 99% of the robbers who might
        | conceivably want to rob decide to go someplace else instead.
       | 
        | 6. Contrarianism is seen as a virtue by certain people, hence the
        | "I don't like this generation's music" and "Popular thing bad"
       | phenomenon. Also, they make sure to be as loud as possible
       | whenever their contradiction is right. This helps humanity in
       | mentally modelling rare events.
       | 
       | All in all, the author is underestimating the capabilities of
       | humans and humanity.
        
       | AceJohnny2 wrote:
       | 99.9% right isn't great: that means 1 in 1000 is bad. If this
       | were a medical condition, that'd be a pretty high rate.
       | 
        | I work with large-scale testing, and we use a measure called
        | "DPPM", or Defective Parts Per Million (manufactured). For my
        | team, a DPPM in the 10s is noise/acceptable loss, ~100 we keep an
        | eye on, and 100s-1000 is cause for investigation. Going back to
        | percentages, that translates to: a 0.001% fail-rate is noise,
        | 0.01% we keep an eye on, and 0.01-0.1% is cause for investigation
        | (and 1% is "Stop The Line").
       | 
       | My point is that the "percentage" scale of failure/risk is one
       | tuned to human perception: "1 in 100 is nothing!", but at the
       | scale of events that we deal with in many areas of our modern
       | life, it's actually huge.
       | 
       | Or: don't use humans for dealing with large numbers.
       | Alternatively, exercise them with the occasional positive result
       | to keep them on their toes.
        
         | XorNot wrote:
         | I've generally stopped using percentages to communicate these
         | days because it's apparent people don't understand them.
         | Someone will say 80% like it's a sure thing - until you point
         | out the failure rate is 1 in 5.
        
       | spongechameleon wrote:
       | I find it so comforting to believe in Heuristics That Almost
       | Always Work even when I know I ought to employ more scrutiny.
       | It's too easy to jump on board.
        
       | RcouF1uZ4gsC wrote:
        | This is where Bayesian inference really shines.
       | 
       | For the security guard, hearing a single noise is likely to be
       | nothing. However, what if you heard two noises, and the sound of
       | tires outside?
       | 
        | Same thing with the doctor. Most good doctors I know have a
        | sixth sense about when something is off and needs further tests
        | beyond "just take an aspirin". So maybe the person had a stomach
       | ache, and they had lost some weight, and they were looking a
       | little yellow. All of a sudden the probabilities start looking a
       | lot different.
        
         | PaulHoule wrote:
          | You would think Bayesian inference is good at integrating
          | multiple information sources, but practically you have to model
          | the dependencies between different information sources, and
          | even doing a good job of that doesn't save you from effects
          | such as "explaining away". In real life people use
          | Naive Bayes a lot, because properly modelling a Bayesian
          | network is hard and trying to learn the network gets you into
          | all sorts of problems -- allow arbitrary dependencies between N
          | inputs and you are talking on the order of 2^N coefficients in
          | your model and you'll never solve it.
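          | 
          | What using Naive Bayes amounts to in practice: treat the
          | sources as independent and just add log-likelihood ratios (a
          | sketch; the numbers are made up):
          | 
          |   import math
          | 
          |   def posterior(prior_odds, likelihood_ratios):
          |       # naive Bayes: sources assumed independent given the hypothesis
          |       log_odds = math.log(prior_odds)
          |       for lr in likelihood_ratios:
          |           log_odds += math.log(lr)
          |       return 1 / (1 + math.exp(-log_odds))
          | 
          |   # prior odds 1:999; three sources, each 10x likelier under H
          |   print(posterior(1 / 999, [10, 10, 10]))  # ~0.5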
         | 
          | This is one of the reasons why people got frustrated with
          | Expert Systems: real-life reasoning requires reasoning with
          | uncertainty, and we don't have a satisfactory general way to do
          | it.
        
           | zozbot234 wrote:
           | The whole point of Bayesian networks is to have something
           | that's asymptotically simpler than "arbitrary dependencies
           | between N inputs" while still being able to model useful
           | scenarios.
        
       | political12345 wrote:
        | Lol, this article can be replaced exactly with a rock with the
        | same title.
        | 
        | This is why we have 2727272 self-help books that I can't read
        | past chapter 3, as they regurgitate the same idea in every
        | sentence.
        
       | guyzero wrote:
       | This essay, but not ironically.
       | 
       | If you think you have something that no one else in the world has
       | noticed, you're probably wrong. You're going to need a LOT of
       | evidence to prove yourself right. Lots of people & companies
       | spend years, decades even, proving themselves right. You're not
       | going to do it overnight and you're not going to do it with a
        | Wikipedia article.
        
       | stevage wrote:
       | I think my main takeaway is how profitable it would be to be one
       | of these naysayers. Good career move.
        
         | dools wrote:
         | I think history has shown it is far more profitable to amplify
         | fringe theories and capitalise on everyone's fears that we are
          | being deliberately misled by corrupt authorities claiming
         | false expertise.
        
       | tablespoon wrote:
       | > The Futurist
       | 
       | > He comments on the latest breathless press releases from tech
       | companies. This will change everything! say the press releases.
       | "No it won't", he comments. This is the greatest invention ever
       | to exist! say the press releases. "It's a scam," he says.
       | 
       | He's got the name backwards on this one. What he's describing is
        | more of an anti-futurist. IMHO, futurists are the ones that make
       | implausibly grand predictions about the future that almost always
       | end up not being true.
        
         | SolarNet wrote:
         | I think you are listening to the wrong futurists then.
        
         | quocanh wrote:
         | I think the irony here is more pleasing.
        
         | _Nat_ wrote:
         | Legitimate futurists are objecting to some pop-culture
         | nonsense with NFTs, cryptocurrency, etc., often describing
         | such things as scams. I suspect that the author may've had
         | that in mind there.
         | 
         | It may be sorta like the problem with science: there's real
         | science and pop-culture science-flavored junk, and pop-culture
         | audiences may perceive real scientists as dismissive because
         | they're always so critical of the latest pop-culture fads.
         | 
         | So while futurists may love new-tech and scientists may love
         | science, pop-culture may see things differently because they
         | see futurists/scientists dismissing (what they perceive to be)
         | new-tech/science.
        
         | ducttapecrown wrote:
         | Your idea that there's a format to the names is a great
         | heuristic that works most of the time :-)
        
         | AlexCoventry wrote:
         | Bruce Sterling did an OK job. _Heavy Weather_ reads like
         | prophecy, these days.
        
         | koboll wrote:
         | This seems like the weakest example. I immediately thought of
         | that Paul Krugman quote from the 1990s where he pooh-poohed the
         | internet, something that's stuck with him forever and made him
         | a laughingstock where futurism is concerned.
        
       | [deleted]
        
       | inglor_cz wrote:
       | This story reminded me of an account by a Czech biologist who
       | studied animals in Papua New Guinea and went on a hunt with a
       | group of local tribesmen.
       | 
       | Dusk was approaching, they were still in the forest, and he
       | proposed that they sleep under a tree. The hunters were
       | adamant in their refusal: _no, this is dangerous, a tree might
       | fall on you in your sleep and kill you_. He relented, but
       | silently considered them irrational, given that his assessment
       | of the chance of a tree falling on you overnight was less than
       | 1:5000.
       | 
       | Only later did he realize that for a lifelong hunter, 1:5000 are
       | pretty bad odds that translate to a very significant probability
       | of getting killed over a 30-40 year long hunting career.
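       | 
       | A quick check of that realization, as a Python sketch (the
       | nights-per-year and career-length figures are assumptions, not
       | from the story):
       | 
       |   p_night = 1 / 5000      # per-night risk, as he estimated
       |   nights_per_year = 100   # assumed nights slept in the forest
       |   years = 35              # assumed length of a hunting career
       | 
       |   n = nights_per_year * years
       |   p_killed = 1 - (1 - p_night) ** n
       |   print(f"P(killed by a tree over {n} nights) = {p_killed:.0%}")
       |   # ~50% -- 1:5000 really is terrible odds for a lifelong hunter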
        
         | reactspa wrote:
         | Also: in the version of the story I remember reading
         | somewhere, all night they kept hearing trees fall, which
         | significantly shaped their impression of the probability.
        
         | EGreg wrote:
         | I am just wondering what skydivers must think every time
         | they do it... given so many trials, the odds of nothing
         | happening are going down exponentially, right?
        
         | beaned wrote:
         | Reminds me of the fact that on average, in any given year,
         | one out of every hundred places you know will be
         | experiencing a once-in-a-hundred-years event.
         | 
         | When you hear once-in-a-hundred-year event, it makes it sound
         | quite rare. One might look around and say (for example, in
         | relation to climate) "why are so many of these happening?"
         | 
         | But it is unsurprising statistically. If you know just a
         | thousand distinct geographic places, about 10 of them would
         | experience such an event each year.
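         | 
         | The arithmetic, as a quick Python sketch (independence and
         | the 1/100 annual rate per place are the assumptions):
         | 
         |   p_event = 1 / 100   # once-in-a-hundred-years, per place
         |   places = 1000       # distinct places you know of
         | 
         |   print("expected events per year:", places * p_event)  # 10
         |   p_none = (1 - p_event) ** places
         |   print(f"P(at least one this year) = {1 - p_none:.3%}")
         |   # ~99.996% -- hearing about such events is the norm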
        
           | [deleted]
        
         | 323 wrote:
         | Taleb has a similar example: the difference between playing
         | Russian roulette once for a chance of winning $10 million,
         | versus playing it repeatedly.
        
         | memish wrote:
        
         | robaato wrote:
         | Jared Diamond talks about dead trees:
         | https://www.openculture.com/2015/08/jared-diamond-underscore...
        
         | kadoban wrote:
         | Is 1:5000 an actually good assessment? I'd suspect it's very
         | much not, especially if you avoid obviously dead trees and
         | maybe move if a storm comes.
        
           | OscarCunningham wrote:
           | You could calculate it by taking the average lifetime of a
           | tree and dividing by the length of time slept and the number
           | of trees in squashing distance.
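           | 
           | A sketch of that calculation in Python, with every input
           | invented for illustration:
           | 
           |   tree_lifetime_years = 200  # assumed average lifetime
           |   trees_in_range = 5         # assumed trees close enough
           | 
           |   nights = tree_lifetime_years * 365
           |   p_tree = 1 / nights        # a given tree falls tonight
           |   p_hit = 1 - (1 - p_tree) ** trees_in_range
           |   print(f"per-night odds ~ 1 in {1 / p_hit:,.0f}")
           |   # ~1 in 14,600 -- so 1:5000 needs shorter-lived trees
           |   # or more trees in range, which is gpm's point below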
        
           | gpm wrote:
           | 1:5000 would suggest an average lifetime of 13.7 years for a
           | grown tree (i.e. not counting the years where it's too young
            | to sleep under), and that's before the ability to avoid
            | trees that look more likely to fall over.
           | 
           | I don't know anything about trees around there, maybe they're
           | really short-lived? For forests around here, it's a gross
           | overestimate.
        
             | thehappypm wrote:
              | Somehow I suspect trees in Papua New Guinea are
              | different than the trees where you are.
        
         | PragmaticPulp wrote:
         | Great anecdote.
         | 
         | The tricks in this article might work in the short term.
         | However, over the length of a career, it's difficult to outrun
         | a negative reputation forever. Especially in the age of the
         | internet, people will eventually catch on to what you're doing.
        
         | gringoDan wrote:
         | Great example of a non-ergodic event. The outcome (odds of
          | dying) when considering one individual longitudinally is
         | entirely different from the outcome when considering a
         | population of individuals at a single point in time.
         | 
         | https://taylorpearson.me/ergodicity/
        
         | bpodgursky wrote:
          | Isn't this from Jared Diamond? I read this in The World
          | Until Yesterday, I think, from his time in New Guinea.
        
         | kbuchanan wrote:
         | It's a great example. This is the very reason I have scaled
         | back the amount of time I rock climb as I've gotten older --
         | not because any individual outing is dangerous, but there's an
         | element of Russian roulette wherein the mere act of doing it
         | more often dramatically changes the risk.
        
           | segmondy wrote:
           | I would imagine that if you scale back enough tho, you won't
           | be as sharp. Sure the odds increase the more you do it, but
           | not just because you do it, but often because of other
            | variables, such as the weather, not listening to your
            | body, overconfidence, etc.
        
           | kqr wrote:
           | Also known as the Kelly criterion. If one possible outcome of
           | an action is associated with a great enough loss, it doesn't
           | make sense to perform the action no matter how unlikely the
           | loss.
        
             | Fernicia wrote:
             | Isn't that called Pascal's Wager?
        
               | AlexCoventry wrote:
               | Pascal's Wager is a fallacy resulting from plugging
                | infinity into your risk analysis and interpreting ∞/0
               | in a way that suits you.
        
               | wongarsu wrote:
               | Which of course directly leads to Pascal's Mugging: I can
               | simply say "I'm a god, give me $10000 or you will burn in
               | hell for all eternity". Now if you follow Pascal's Wager
               | or GP's logic you have to give me the money: I'm probably
               | lying, but the potential downside is too great to risk
               | upsetting me.
        
               | kqr wrote:
               | There's actually a rational explanation for that: humans
               | don't care very much about burning in hell for all
               | eternity, when it comes down to it.
               | 
                | There's actually a similar thought experiment that might
               | seem even more bizarre: I could tell you "give me $100 or
               | I will kill you tomorrow" and you probably wouldn't give
               | me the $100. That's because when it comes down to it,
               | humans don't see the loss of their life as that big a
               | deal as one might think. It's a big deal, of course, but
               | in combination with the low likelihood, still not big
               | enough to forgo the $100.
        
             | anter wrote:
             | If anyone wants to play around with an interactive
              | explanation of the Kelly criterion:
             | https://explore.paulbutler.org/bet/
        
             | PaulHoule wrote:
             | No, Kelly is about what fraction of your bankroll you
             | should bet if you want to maximize your rate of return for
             | a bet with variable odds.
             | 
             | It's essential if you want to:
             | 
             | * make money by counting cards at Blackjack (the odds are a
             | function of how many 10 cards are left in the deck)
             | 
             | * make money at the racetrack with a system like this
             | https://www.amazon.com/Dr-Beat-Racetrack-William-
             | Ziemba/dp/0...
             | 
             | * turn a predictive model for financial prices into a
             | profitable trading system
             | 
             | In the case where the bet loses money you can interpret
             | Kelly as either "the only way to win is not to play" or
             | "bet it all on Red exactly once and walk away " depending
             | on how you take the limit.
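              | 
              | A minimal sketch of the bet-sizing rule in Python (the
              | textbook f* = p - q/b form; the 51% edge is an invented
              | example):
              | 
              |   def kelly_fraction(p, b):
              |       """Fraction of bankroll to bet at win
              |       probability p on a bet paying b-to-1."""
              |       return p - (1 - p) / b
              | 
              |   # Card counter with a 51% edge on even money:
              |   print(f"{kelly_fraction(0.51, 1.0):+.2f}")  # +0.02
              |   # A losing bet: negative => stake nothing.
              |   print(f"{kelly_fraction(0.49, 1.0):+.2f}")  # -0.02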
        
               | kqr wrote:
               | That is a much narrower view of the Kelly criterion than
               | the general concept.
               | 
               | The general idea is about choosing an action that
               | maximises the expected logarithm of the result.
               | 
                | In practice this means, among other things, not choosing
               | an action that gets you close to "ruin", however you
               | choose to measure the result. Another way to phrase it is
               | that the Kelly criterion leads to actions that avoid
               | large losses.
        
               | PaulHoule wrote:
               | Actually
               | 
               | https://en.wikipedia.org/wiki/Kelly_criterion
               | 
               | "The Kelly bet size is found by maximizing the expected
               | value of the logarithm of wealth, which is equivalent to
               | maximizing the expected geometric growth rate"
               | 
                | In real life people often choose to make bets smaller
                | than the Kelly bet. Part of that is that even if you
                | have a good model there are still "unknown unknowns" that
                | will make your model wrong some of the time. Also most
                | people aren't comfortable with the sharp ups and downs
                | and probability of ruin you have with Kelly.
        
               | kqr wrote:
               | I've long found that Wikipedia article woefully lacking
               | in generality.
               | 
               | 1) The Kelly criterion is a general decision rule not
               | limited to bet sizing. Bet sizing is just a special case
               | where you're choosing between actions that correspond to
               | different bet sizes. The Kelly criterion works very well
               | also for other actions, like whether to pursue project A
               | or B, whether to get insurance or not, and indeed whether
               | to sleep under a tree or on a rock.
               | 
               | 2) The Kelly criterion is not limited to what people
               | would ordinarily think of as "wealth". It applies just as
               | well to anything you can measure with some sort of
               | utility where compounding makes sense.
               | 
               | The best overview I've found so far is The Kelly Capital
               | Growth Investment Criterion[1], which unfortunately is a
               | thick collection of peer-reviewed science, so it's very
               | detailed and heavy on the maths, too.
               | 
               | [1]: https://www.amazon.com/KELLY-CAPITAL-GROWTH-
               | INVESTMENT-CRITE...
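                | 
                | For instance, the insurance decision under the
                | expected-log rule, as a Python sketch (all the
                | figures are invented):
                | 
                |   import math
                | 
                |   wealth = 100_000
                |   loss, p_loss = 80_000, 0.01  # assumed disaster
                |   premium = 1_200              # assumed price
                | 
                |   def e_log(outcomes):
                |       # outcomes: (probability, resulting wealth)
                |       return sum(p * math.log(w)
                |                  for p, w in outcomes)
                | 
                |   no_ins = e_log([(p_loss, wealth - loss),
                |                   (1 - p_loss, wealth)])
                |   ins = e_log([(1.0, wealth - premium)])
                |   print("insure" if ins > no_ins else "don't")
                | 
                | Note the premium costs more than the expected loss
                | (1,200 vs. 800), yet maximizing E[log wealth] still
                | says to insure, because it weights the large loss
                | heavily. That's the avoid-large-losses behaviour.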
        
           | vincentmarle wrote:
            | This is related to ergodic theory: the more you repeat an
            | action that could result in catastrophe, the closer the
            | likelihood of that catastrophe occurring gets to 100%,
            | provided the number of events is high enough.
        
           | derbOac wrote:
           | "There are bold X, and old X, but no old, bold X."
           | 
           | Replace X with any practitioners subject to sufficient risk
           | as a result of their practice.
           | 
           | I first heard it in the context of mushroom foraging.
        
             | thehappypm wrote:
             | Mountaineers is where I heard it.
        
           | JadeNB wrote:
           | > It's a great example. This is the very reason I have scaled
           | back the amount of time I rock climb as I've gotten older --
           | not because any individual outing is dangerous, but there's
           | an element of Russian roulette wherein the mere act of doing
           | it more often dramatically changes the risk.
           | 
           | Indoor climbing, and especially bouldering, can be a lot of
           | fun at the right gym, and with dramatically reduced risk of
           | death (though injury is still a very real possibility, I say,
           | recalling all the time I spent nursing my sprained ankle).
        
           | HWR_14 wrote:
           | > the mere act of doing it more often dramatically changes
           | the risk.
           | 
           | Kind of. However, you already know that the first N outings
           | didn't have a disaster. So those should be discarded from
           | your analysis.
           | 
           | Doing it N times _more_ has a lot of risk, doing it the N+1th
           | time has barely any.
        
             | montebicyclelo wrote:
             | The parent comment talks about scaling back the amount of
              | rock climbing they do in order to reduce risk... And now you
             | are saying that they should go one more time, because a
             | single climb is low risk?
        
               | HWR_14 wrote:
               | Yes. I am saying their analysis of risk is incorrect, and
               | therefore if that's the only reason they aren't climbing
               | then they should climb more often.
        
             | srcreigh wrote:
              | Under this assumption, by the principle of mathematical
              | induction, you can do it K more times for any K while
              | taking on barely any risk at each step of the way.
        
             | jvanderbot wrote:
              | In a skill-based game, the (N+1)th trial carries less
              | incremental risk than the Nth did, because skill grows
              | with practice.
        
             | 323 wrote:
              | This is called the turkey fallacy: the turkey was fed by
              | humans for 1000 days, and after each feeding he updated
              | his belief that humans care for him, until it was almost
              | a statistical certainty.
        
               | lapetitejort wrote:
               | Is this the reverse of the Gambler's Fallacy? Instead of
               | "The numbers haven't hit in a while, therefore they're
               | going to hit soon." it's "The numbers haven't hit yet,
               | therefore they're never gonna hit."
        
               | jaggederest wrote:
               | Also known as complacency. Working in a woodshop, one of
               | the things you are most vulnerable to is failing to
               | respect the danger you're in. This is why many
               | experienced woodworkers have been injured by e.g. a table
               | saw - you stop being as careful after such long exposure.
        
               | 323 wrote:
               | A related thing is normalization of deviance. You start
               | removing safety because you see nothing bad happened
               | before, until you are at a point where almost no safety
               | rules are respected anymore. You can see this a lot in
               | construction videos.
        
               | ska wrote:
                | That only applies if you are updating priors. In this
                | case the odds are fixed, so the GP is correct.
        
               | 323 wrote:
                | The odds of a rock-climbing accident are known and fixed?
        
             | mrow84 wrote:
              | This assumes a lot about the underlying process,
              | particularly independence. While assuming independence
              | might hold reasonably well for small numbers of samples,
              | the assumption can become increasingly (and dangerously)
              | misleading. The intuition expressed by GP captures that.
        
             | pessimizer wrote:
             | If you'll die if a roll of three dice comes up sixes,
             | you're not really in a lot of danger. If you do it every
             | day, you have about 15 months to live.
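              | 
              | A Python sketch of those numbers (1/216 per day; my
              | reading of "about 15 months" is the point by which death
              | is nearly certain, since the mean is shorter):
              | 
              |   p = (1 / 6) ** 3            # all three dice: 1/216
              |   alive = lambda d: (1 - p) ** d
              | 
              |   print(f"mean lifetime: {1/p:.0f} days")  # 216 (~7 mo)
              |   print(f"alive after 5 mo:  {alive(150):.0%}")  # ~50%
              |   print(f"alive after 15 mo: {alive(456):.0%}")  # ~12%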
        
               | [deleted]
        
               | jrochkind1 wrote:
               | If you've already done it for 12 months without it
               | happening though, the next 3 months are no more dangerous
               | for you than for someone starting from scratch.
        
               | moralestapia wrote:
               | >you have about 15 months to live
               | 
               | Or a few minutes ... or 20 years.
               | 
               | That's the thing w/ statistically independent trials.
        
               | ska wrote:
                | That's like the difference between:
                | 
                | "You could win 100mm in the lottery" (a true statement!)
                | 
                | "Lottery tickets are a good investment" (an almost
                | always false statement).
                | 
                | Planning on "well, it could happen, technically" isn't
                | a good approach.
        
             | HPsquared wrote:
             | The "slippery slope" principle applies here though: N+1
             | enables N+2, which enables N+3 and so on.
        
               | aaronblohowiak wrote:
                | But the risk is independent. So once you do the N+1th
                | time safely, you are back to N and your next time is
                | _also_ just an N+1.
        
               | twobitshifter wrote:
                | True, but it would be incorrect to assume that you can
                | safely keep basejumping every day for a year just
                | because you haven't died in the last 50 days. Eventually the
               | stats say you will be 87% likely to have an accident when
               | you consider your choice at the beginning of the year. It
               | might be day 20 or day 300, but you won't know what case
               | you end up in. The chance of your next jump being your
               | last is always the same, but that doesn't decrease the
               | risk of repeated trials.
        
               | jrochkind1 wrote:
               | Not exactly. If you've done it 50 days without an
               | accident, your current chances of the accident happening
               | in the remainder of the year are NOW _less_ than 87%.
               | 
                | If you've made it from Jan 1 to July 1 without an
               | accident, the chances of you making it to Dec 31 _are_
               | now better than they were on Jan 1 -- because now they
               | are just the chances of you making it six months, not a
               | year.
               | 
               | The chances of flipping 6 heads in a row are 1/64. But if
               | I've already flipped _3_ in a row... the chances of
                | flipping three _more_ heads in a row is 1/8, the same as
                | always for flipping 3 heads in a row. The ones that
                | already happened don't affect your future chances.
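                | 
                | Both halves of this, as a Python sketch (backing the
                | per-jump risk out of the 87%-per-year figure above is
                | my own step):
                | 
                |   p_year = 0.87
                |   # per-jump risk implied by 87% over 365 jumps
                |   p_jump = 1 - (1 - p_year) ** (1 / 365)
                |   print(f"1 in {1 / p_jump:.0f}")  # ~1 in 179
                | 
                |   risk = lambda n: 1 - (1 - p_jump) ** n
                |   print(f"{risk(365):.0%}")  # 87%: the Jan 1 view
                |   print(f"{risk(315):.0%}")  # ~83%: after 50 safe days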
        
               | pessimizer wrote:
               | Continuing to do something regularly doesn't ever mean
               | you're just going to do it once more.
        
               | nefitty wrote:
               | Psychologically, behaving in a certain way makes it more
               | likely that you'll behave in the same way in the future.
               | That's an integral idea underpinning justice systems.
        
               | Stronico wrote:
                | Risk is independent of prior events, habits are not - I
                | think that's what the anthropologist's story is about.
        
               | mkolodny wrote:
               | Slippery slope is a fallacy, not a principle. Just
               | because you took N steps, that doesn't necessarily mean
               | you will take N+1 steps.
               | 
               | It's a convincing fallacy because sometimes you do take
               | N+1 steps. But just like in the article, heuristics
               | aren't always right.
        
               | vojvod wrote:
               | Slippery slope arguments aren't inherently fallacious. If
               | you can justify one more climb on the grounds that
               | probability of injury or death is very low then you will
               | be able to justify every subsequent climb on the same
               | basis.
        
       | charcircuit wrote:
       | I'm confused. These heuristics don't almost always work. The
       | security guard has a 0% chance of investigating. How is 0% almost
       | always?
       | 
       | If you make a confusion matrix, its precision and recall are
       | 0. If it almost always worked, then its precision and recall
       | would be close to 1.
        
         | Enginerrrd wrote:
         | You're only counting the positives. When positives are rare,
         | just guessing the result will be negative is usually a really
         | good starting place.
        
           | charcircuit wrote:
           | >is usually a really good starting place
           | 
           | No, you are getting misleading results because you have an
           | imbalanced dataset.
        
           | PaulHoule wrote:
            | This is how you can throw most of the COVID tests in the
            | trash, say they were all negative, and get away with
            | it... for a while!
        
         | jstx1 wrote:
         | 99.9% chance of being right (i.e. no robbers) => the heuristic
         | almost always works
         | 
         | (This is accuracy. You can get recall of 1 if he always
         | investigates; for precision to be 1 he needs a way to discern
         | wind from robbers based on the noise)
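         | 
         | As a Python sketch (the 999 windy nights and 1 robbery are
         | invented counts):
         | 
         |   # The "THERE ARE NO ROBBERS" rock, scored on 1000 noises.
         |   tn, fp, fn, tp = 999, 0, 1, 0   # it never raises an alarm
         | 
         |   accuracy = (tp + tn) / (tp + tn + fp + fn)
         |   recall = tp / (tp + fn)
         |   print(f"accuracy = {accuracy:.1%}")  # 99.9%
         |   print(f"recall   = {recall:.0%}")    # 0%: misses the robbery
         |   # precision = tp / (tp + fp) is 0/0: it never says "robber"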
        
       | operator-name wrote:
       | There's some very interesting discussion here and in the
       | comments. Many have pointed out the similarity of ideas to
       | Taleb's Black Swan, and to extremisation, which was also
       | brought up in Superforecasting by Gardner and Tetlock.
       | 
       | Instead of such a discussion, I'd like to highlight a book
       | that provides the "opposite" perspective: Gerd Gigerenzer's
       | Rationality for Mortals. Gigerenzer presents an anti-hyper-
       | rationalist case for heuristics, arguing that they're not only
       | human, but necessary and inevitable for time- and compute-
       | bounded beings.
        
       | woodruffw wrote:
       | The reasoning in this post is completely backwards: just
       | _because_ a job could be completely replaced with a rock without
       | affecting the majority of cases doesn't mean that the _actual
       | practitioners_ of that job are either completely useless or are
       | themselves on autopilot.
       | 
       | Siskind assumes the latter and reasons towards the former, which
       | isn't aligned at all with _what actually happens_ in the world:
       | we do predict hurricanes and exploding volcanoes, and there's no
       | particular evidence that the average doctor is ignoring their
       | patients. We're all subject to biases and fatigue, but neither of
       | those supports the claim that we're all phoning it in all the
       | time.
       | 
       | Edit: I will also note that "when nothing happens at all, a
       | person can be replaced with a note on a rock" is not an
       | interesting statement to make. Dressing it up with eloquent prose
       | (and he is indeed eloquent!) does not change this, and does not a
       | poignant observation make.
        
         | three14 wrote:
         | His whole point is that the occasional person who IS "phoning
         | it in all the time" will appear to be very good at their job,
         | possibly better than the people who are really trying their
         | best to get it right.
        
         | ravi-delia wrote:
         | Where are you seeing the author suggest most people in the
         | described jobs could actually be replaced by rocks?
        
           | woodruffw wrote:
           | > Where are you seeing the author suggest most people in the
           | described jobs could actually be replaced by rocks?
           | 
           | I really think it's harder to get more literal than this:
           | 
           | > He could be losslessly replaced by a rock with the words
           | "THERE ARE NO ROBBERS" on it.
        
             | s1artibartfast wrote:
             | I think the point is that the person who inappropriately
             | uses a heuristic instead of doing their job, is in fact,
             | not doing their job.
        
             | andrewla wrote:
             | A guard who adopts the heuristic "there are no robbers" can
             | be replaced with a rock, but has adopted a heuristic that
             | almost always works.
             | 
             | Guards that do not adopt that heuristic cannot be replaced
             | by that rock.
        
       | mgas wrote:
       | I feel like this article does a much better job of reviewing
       | Don't Look Up than the article that passed through here last
       | week. As an allegorical reading, it calls out the major plot
       | points of the film and hits on the ulterior motivations of both
       | those using heuristics to naysay experts, and the experts who
       | inevitably fall out of grace because of them.
        
       | im3w1l wrote:
       | One way to solve this is that instead of asking for a
       | yes-or-no answer, you ask for a ranking, and you disallow
       | "equally likely". Is bigfoot more or less likely than
       | telepathy? Is telepathy more or less likely than the vaccines
       | being dangerous? Are the vaccines being dangerous more or less
       | likely than some guy achieving cold fusion in his garage?
       | 
       | A ranking forcibly brings the metric away from accuracy (which
       | the heuristic can score well on) to something based around
       | precision-recall (which it cannot).
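       | 
       | A Python sketch of the idea (toy data, invented scores): both
       | "judges" are 90% accurate if forced to answer yes/no, but the
       | ranking exposes the rock.
       | 
       |   def auc(labels, scores):
       |       # probability a random true claim outranks a false one
       |       pos = [s for l, s in zip(labels, scores) if l == 1]
       |       neg = [s for l, s in zip(labels, scores) if l == 0]
       |       wins = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
       |       return sum(wins) / len(wins)
       | 
       |   labels = [0] * 9 + [1]         # ten claims, one true
       |   rock = [0.01] * 10             # same "nope" for everything
       |   skeptic = [0.01] * 9 + [0.30]  # ranks the true claim first
       | 
       |   print("rock AUC:   ", auc(labels, rock))     # 0.5 -- chance
       |   print("skeptic AUC:", auc(labels, skeptic))  # 1.0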
        
       | bsuvc wrote:
       | Sure, relying too much on heuristics can be a bad idea in tail
       | risk situations.
       | 
       | But other times, they make perfect sense and save a lot of time
       | and effort.
       | 
       | This post reads like a series of straw men created to show that
       | heuristics are dangerous. I'm not sure who is going to argue that
       | heuristics are appropriate in those situations.
        
       | pards wrote:
       | The doctor rings true. I had 3 separate doctors on 3 separate
       | occasions diagnose my 21 month old son with an ear infection,
       | instead of the plum-sized malignant brain tumour that it was.
       | 
       | From their point of view, pediatric brain tumours are very rare
       | and ear infections are common.
       | 
       | Their 99% heuristic almost killed him.
       | 
       | That was 2010. He survived and is now a vibrant 13 year old, but
       | only because of one curious intern/fellow at the children's
       | hospital ER who decided to order a CT to rule out the remote
       | possibility. Her diligence got him admitted and into surgery
       | within a day.
        
       | osrec wrote:
       | While the security guard doesn't actively catch criminals, he
       | still may act as a deterrent. In that sense, he's still somewhat
       | useful.
       | 
       | Interesting article nonetheless!
        
       | ghostly_s wrote:
       | What is the impetus for this writing style that repeats different
       | versions of the same analogy 10 times when one would have
       | sufficed? Surely Substack doesn't have a word count minimum.
        
         | yccs27 wrote:
         | I actually found the ramp-up from security guard to sceptic
         | pretty clever. Demonstrate the principle on an easy,
         | constructed case; verify it on a real-world example; then
         | present the applications you care about, where the content
         | is somewhat more controversial. Although I agree that the
         | number of repetitions is higher than optimal here.
        
         | AlexCoventry wrote:
         | A long article is a sign of effort expended, which rationalists
         | value as a sign that a lot of thought has gone into the ideas
         | the article puts forward. (They have a rock in their heads
         | saying "DON'T BOTHER YOURSELF WITH BRIEF EXPOSITIONS.")
        
       | drzoltar wrote:
       | Homer: Not a bear in sight. The Bear Patrol must be working like
       | a charm.
       | 
       | Lisa: That's specious reasoning, Dad.
       | 
       | Homer: Thank you, dear.
       | 
       | Lisa: By your logic I could claim that this rock keeps tigers
       | away.
       | 
       | Homer: Oh, how does it work?
       | 
       | Lisa: It doesn't work.
       | 
       | Homer: Uh-huh.
       | 
       | Lisa: It's just a stupid rock.
       | 
       | Homer: Uh-huh.
       | 
       | Lisa: But I don't see any tigers around, do you?
       | 
       | Homer: Lisa, I want to buy your rock.
       | 
       | [0]: https://youtu.be/xSVqLHghLpw
        
       | deltaonefour wrote:
       | Sounds like he's describing software architects. I feel software
       | architects follow a slightly more complicated heuristic. They
       | can't do the same thing every time when they draw the line
       | connecting all the boxes. It has to be a little different every
       | time with different boxes and a different set of lines.
        
       | joosters wrote:
       | A pedantic criticism of the title: ' _Heuristics that almost
       | always work_ ' is a truism, or a tautology. If a heuristic worked
       | 100% of the time, it would no longer be a heuristic, it would be
       | a rule!
        
         | pessimizer wrote:
         | But if a heuristic worked 75% of the time, it wouldn't almost
         | always work.
        
       | rakejake wrote:
       | Funnily enough, you could replace this article with a rock that
       | said, "When people are confronted with a very skewed probability
       | distribution, after a while they become complacent and default to
       | the most probable outcome".
        
         | marcosdumay wrote:
         | Hum... You mean you can replace some text with some other text?
        
         | banku_brougham wrote:
         | This is the best summary of the article. How is this #1 on
         | HN right now?
        
       | dools wrote:
       | This 2189-word essay can be replaced with a rock that says "ALARM
       | FATIGUE" on it and a QR code pointing to this Wikipedia article:
       | 
       | https://en.wikipedia.org/wiki/Alarm_fatigue
        
       | asplake wrote:
       | > Whenever someone pooh-poohs rationality as unnecessary, or
       | makes fun of rationalists
       | 
       | Fun and clever article, but for it all to land on that was
       | jarring and disappointing. Preaching to the choir I guess.
        
         | _Microft wrote:
         | He's not complaining about those criticizing rationalists but
         | warns fellow rationalists to not fall into this heuristics trap
         | themselves.
        
       ___________________________________________________________________
       (page generated 2022-02-08 23:00 UTC)