[HN Gopher] Pascal's Scams (2012)
___________________________________________________________________
Pascal's Scams (2012)
Author : walterbell
Score : 56 points
Date : 2025-07-12 17:41 UTC (4 days ago)
(HTM) web link (unenumerated.blogspot.com)
(TXT) w3m dump (unenumerated.blogspot.com)
| praptak wrote:
| It has a wiki page under a slightly different name (but the
| concept is the same):
| https://en.wikipedia.org/wiki/Pascal%27s_mugging
| skrebbel wrote:
| Kinda hilarious that this got invented by the same people who
| later Pascal's-Mugged half the world with AI doomerism.
| Symmetry wrote:
| If someone thinks there's only a 1% chance of AI doom, they're
| definitely not a doomer, much less if they put it at 1 in 1,000.
| thornewolf wrote:
| I argue that it's not that hilarious, because the thinking is
| very tightly related. The very contemplations that lead to AI
| doomerism lead to Pascal's mugging.
|
| One of my main gripes with AI doomerism is that it is
| downstream of being Pascal's-mugged into being a doomer.
| elcapitan wrote:
| Came to read a programming language related polemic, stayed to
| read about philosophy.
| BigChemical wrote:
| Pascal's wager wasn't just about probability; it was about
| storytelling: the promise of nearly infinite payoff with minimal
| risk. That same allure is still at play today whenever people
| chase "moonshot" returns on crypto or get-rich-quick schemes.
|
| It underscores a timeless lesson: no matter how much data or
| logic we have, we're still wired to fall for well-crafted
| optimism, and that means skepticism remains the best defense.
| hollerith wrote:
| >people chase "moonshot" returns on crypto
|
| Your comment would have been better if you'd chosen an example
| that did not create hundreds of thousands of millionaires.
| empath75 wrote:
| > Your comment would have been better if you'd chosen an
| example that did not create hundreds of thousands of
| millionaires.
|
| Lotteries have also produced lots of millionaires. Crypto
| could produce lots of winners just from wealth transfer even
| if it was a zero sum or net negative game in terms of wealth
| creation.
| RodgerTheGreat wrote:
| ...and plenty more folks who lost their shirts - or even just
| their pizza money - on crypto scams in order to subsidize
| those millionaires.
| analog31 wrote:
| Not to mention countries that subsidized the electricity.
| alach11 wrote:
| A lower-key variant of this frequently comes into play with
| consulting or other sales pitches. "You spend <big number> per
| year on this <necessary business expense>. Our service will
| easily shave 2% off this, making the cost of our service
| completely negligible and this purchase an obviously good
| decision."
| michaelcampbell wrote:
| You've lost me; can you explain how these two relate?
| mturmon wrote:
| For reasons explained in the article, we are bad at
| estimating small probabilities.
|
| Similarly, we are bad at estimating small proportions
| ("easily shave 2%"). What is being claimed in the parentheses
| here is that there's a probability distribution of "how much
| costs are shaved" and that we can estimate where the bulk of
| its support is.
|
| But we're not really good at making such estimates. Maybe
| there is some probability mass around 2%, but the bulk is
| around 0.5%. It seems like that's a small difference (just 1.5
| percentage points!) but it's a factor of 4 in terms of savings.
|
| So now we have a large number (annual spend), multiplied by a
| very uncertain number (cost shave, with poor experimental
| support), leading to a very uncertain outcome in terms of
| savings.
|
| And it can be that, in reality, the costs of changing service
| turn out to overwhelm this outcome.
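|
| A minimal sketch of that multiplication in Python (the spend
| figure is invented for illustration):
|
|   annual_spend = 5_000_000           # the "big number" in the pitch
|   pitched_shave = 0.02               # the claimed 2%
|   realistic_shave = 0.005            # where the bulk of the mass may sit
|
|   print(annual_spend * pitched_shave)    # 100000.0 -- the pitched savings
|   print(annual_spend * realistic_shave)  # 25000.0  -- a factor of 4 less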
| aspenmayer wrote:
| When modern advertising is a spectrum of "lies, damn lies,
| and statistics," I don't blame folks for crying foul and
| demanding a baseline level of truth in advertising. When
| folks trust but verify, this is seen as a change in the
| status quo by folks, and some of those folks who protest
| about it in those terms are trying to sell you something.
| munchler wrote:
| This is exactly how I feel when Effective Altruism starts talking
| about the wellbeing of trillions of humans living in the far
| distant future, whom we should be devoting ourselves to now.
|
| https://www.effectivealtruism.org/articles/cause-profile-lon...
| clueless wrote:
| How is "trillions of humans living in the far distant future"
| improbable, or based on poor information or a lack of
| information? Seems pretty obvious.
| munchler wrote:
| It seems obvious to you that there will be trillions of
| people alive in the far distant future? Please explain how
| you know this.
| tasty_freeze wrote:
| The claim isn't that there will be trillions of people
| alive at the same time; they are integrating over the
| course of tens and hundreds of thousands of years.
|
| Although we are at a peak population of a bit over 8B
| people at the moment, it is estimated that more than 100B
| people we would classify as human have ever lived. The
| population long ago was much smaller than 1B, but thousands
| of generations have lived and died.
| andsoitis wrote:
| What is something specific that an individual should have
| done or should not have done, say, 2000 years ago, that
| would have made a positive impact on your life?
| nancyminusone wrote:
| 1. Suppose you have a chart of the total past and future
| history of human population.
|
| 2. Cover up the chart so only the data from the past to
| present day is visible.
|
| 3. Note that most humans in that subset exist near or at the
| present. You are one of these people today; it should make
| sense that you were born in one of the densest parts of the
| graph.
|
| 4. Now uncover the graph. If there are trillions of humans in
| the future, it seems almost impossibly unlikely that you
| would be born in a part of the graph with "so few" humans as
| today, and not in the far future.
|
| Therefore, you must conclude that the actual graph rapidly
| drops to zero in the near future. QED.
|
| This "doomsday argument" is a pretty shit one, but not worse
| than others I've seen arguing the opposite.
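|
| For what it's worth, the self-sampling step of the argument is
| easy to simulate; a minimal sketch in Python (both population
| totals are hypothetical):
|
|   import random
|
|   past_humans = 100e9              # the ~100B estimated to have lived
|   for total in (120e9, 10e12):     # "doom soon" vs "trillions to come"
|       trials = 100_000
|       early = sum(random.uniform(0, total) < past_humans
|                   for _ in range(trials)) / trials
|       print(f"total={total:.0e}: P(born in first 100B) ~ {early:.3f}")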
| breuleux wrote:
| Regardless, it is an extremely uncertain proposition that we
| can do anything in the present that would have a reliably
| positive impact on their lives in the far future. It's hard
| enough to figure something out for the billions of people who
| actually exist right now.
| tasty_freeze wrote:
| You have personified EA, but it isn't a person. Some EA people
| are into long-termism, but it is an error to pretend EA is a
| monolith that speaks with one voice.
|
| I think the core idea is simply: since resources for helping
| the poor/sick are not unlimited, we should try to allocate those
| resources in the most effective way. Before EA charity
| evaluation came along, the only metric for most people was
| simply looking at the charity overhead via Charity Navigator.
| But that isn't a great metric. A charity with only a 1%
| overhead with a mission to make balloon animals for children
| dying in a famine will score well on Charity Navigator but does
| nothing to help the problem.
|
| To be honest I haven't looked deeply into long-termism, but
| from what I've heard (eg, hearing Will MacAskill on a few
| podcasts) it seems to ignore a few things. Just like a bird in
| the hand is worth two in the bush, long-termers have no good
| way to estimate the likelihood of future events, and
| discounting needs to increase greatly the further out one
| looks. At best many of these estimates are like the Drake
| Equation -- better than nothing, but with multiple orders of
| magnitude error bars.
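|
| As a toy illustration of those error bars in Python (the number
| of factors and their spreads are invented):
|
|   point_estimate = 1.0
|   factors = 4                             # four multiplied terms, each
|                                           # uncertain by a factor of 10
|   low  = point_estimate * 0.1**factors    # every term at its low end
|   high = point_estimate * 10**factors     # every term at its high end
|   print(low, high)   # 0.0001 10000.0 -- eight orders of magnitude apart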
|
| There are other second-order reasons which don't seem to factor
| in, or at least haven't come across in the few hours of
| listening to long-termers talk about the issue. One is that
| working to make a better world now affects the trajectory
| of future events much more directly than the low-probability
| guesswork they think may have an impact in the distant future.
| rwmj wrote:
| That was always a convenient excuse to justify amassing lots of
| money, if necessary by theft (see SBF). They have no intention
| of actually doing good with it.
| Elextric wrote:
| Let me reframe it. Among these trillions of people, there will
| be many who are 99% similar to you. Wouldn't you want that
| version of yourself to live a great life?
| munchler wrote:
| By that kind of logic, I'm actually a Boltzmann brain
| floating alone in an infinite void, so I don't have to worry
| about anyone but myself.
| Elextric wrote:
| You can certainly rationalize anything, but I fail to see
| what help that is to us.
|
| "The great subverter of Pyrrhonism [radical skepticism] is
| action, and employment, and the occupations of common life.
| [...] I dine, I play a game of back-gammon, I converse, and
| am merry with my friends; and when after three or four
| hour's amusement, I wou'd return to these speculations,
| they appear so cold, and strain'd, and ridiculous, that I
| cannot find in my heart to enter into them any farther."
|
| Hume
| andsoitis wrote:
| > Wouldn't you want that version of yourself to live a great
| life?
|
| The biggest positive change you can make, even for future
| generations, is to uplevel the people who are alive today.
| jerf wrote:
| One of the things you need to do for this situation, as well as
| the stuff in the blog post, is apply what is commonly called
| "the time value of money" to the concept. However the concept
| extends beyond money into any attempt to modify the future,
| whether or not it involves money. Money just happens to
| function as a good, quantifiable example in its "score keeping"
| role here. Your ability to model the future and take actions
| based on it exponentially decays, and really quite rapidly.
|
| Or to put it another way, everything fuzzes out into noise for
| me much sooner than humanity will have trillions of new
| members. There's no way for me to predict whatsoever what
| effect any action I take today will have a thousand years from
| now. Even in extreme cases, like, I push a magic button that
| instantly copies whatever you, the reader, believe is the
| optimal distribution of ideological beliefs out into the world
| (ignoring for the moment the possibility that your ideology
| might consider that unethical, this is just a thought
| experiment anyhow so no need to go that meta), you really don't
| know what that would do 1000 years from now, what the
| seventeenth-order effects of such a thing would be. I'm not
| even saying that it might not be as good as you think or
| something; I'm saying you just have no idea what it would be at
| all. So there's no way to hold people responsible for that, and
| no way to build plans based on it.
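|
| A minimal sketch of that decay, assuming a simple exponential
| discount (the 5% annual rate is invented; the point is the
| shape, not the number):
|
|   import math
|
|   def weight(years, annual_decay=0.05):
|       # how much predictive weight an action retains at horizon t
|       return math.exp(-annual_decay * years)
|
|   for t in (1, 10, 100, 1000):
|       print(t, weight(t))   # by t=1000 the weight is ~2e-22: pure noise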
| jollyllama wrote:
| That's more of a Bentham's mugging.
| superb-owl wrote:
| As with most cognitive biases, there's an inverse to this, where
| we ignore low-probability high-impact scenarios. E.g. people
| drive drunk or without a seatbelt, because it'll *probably* be
| fine. And they repeatedly have that assumption confirmed--until
| one day it isn't.
|
| I had one friend who would leave his bike chained partially
| blocking a fire exit, because "what are the odds the fire
| inspector will come today?" But the fire inspector comes once a
| year, and if your bike is chained there 99% of the time, odds are
| you're going to get a fine. He couldn't see the logic. He got
| fined.
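|
| The arithmetic he couldn't see, as a minimal sketch in Python
| (assuming one inspection per year, independent of the bike):
|
|   p_caught_per_inspection = 0.99   # bike is there 99% of the time
|   for years in (1, 2, 3):
|       p_fined = 1 - (1 - p_caught_per_inspection) ** years
|       print(years, p_fined)        # ~0.99 after the very first year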
| sfn42 wrote:
| Traffic in general is riddled with this. People don't
| understand the risks they're taking during their everyday
| driving and get offended when you comment on it.
| jajko wrote:
| Typical: folks cutting in front of me while I am barely at a
| safe braking distance from the car ahead, at speeds over
| 100 km/h. This is of course always in at least semi-dense
| traffic, and since they immediately obscure the view further,
| I have less than a second to react to any harder braking
| before I slam into them.
|
| I honk at them, and then they often get aggressive that I
| dared to react to their perfectly cool maneuver that gained
| them those precious extra 5 seconds. Bloody a-holes. Had a few
| near-collisions even this year due to overly aggressive
| drivers riding too close, some literally the car in front of
| us or the next one behind. Keep your distance; I can't
| emphasize this enough.
| quickthrowman wrote:
| I highly suggest paying less attention to the car directly
| in front of you and more to the cars that are 3-6 cars
| ahead; you can react well before the driver directly in
| front of you when you see that a car 6 ahead of them is
| braking.
| skrebbel wrote:
| My hand gesture for "Hey did you hear about the inverse
| Pascal Scam? It suggests that low-probability high-impact
| risks are easy to ignore, and I think that's what you're
| doing right now and that's not going to be good for your
| health, or mine for that matter, so maybe think about that a
| bit more in the future" is to raise my middle finger.
| Unfortunately it inevitably makes the situation worse
| somehow.
| smogcutter wrote:
| I've switched to a thumbs down in traffic and can't
| recommend it enough. Lets them know how they should feel
| without escalating like a middle finger.
| skrebbel wrote:
| Nice! I tried thumbs up (ie sarcasm) but that's snarky
| too, and somehow never realized that you could actually
| do the same thing non-sarcastically. Srsly wow :-) Gonna
| try, thanks
| rangerelf wrote:
| "Odds vs. Stakes"
|
| "The odds of X happening are so low that what's the point?", to
| which I respond "It only needs to happen once for me to be
| dead, so, the stakes are way too high for me to risk the odds".
| andsoitis wrote:
| > low-probability high-impact
|
| People often equate "risk" with "likelihood", when it would be
| more effective to view risk = impact * likelihood.
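|
| A one-liner makes the point; in Python (all numbers invented):
|
|   def risk(impact, likelihood):
|       return impact * likelihood
|
|   print(risk(impact=1_000_000, likelihood=1e-4))  # 100.0 -- rare but severe
|   print(risk(impact=100, likelihood=0.5))         # 50.0  -- common but mild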
| omoikane wrote:
| In a similar spirit, I knew someone who claimed to not pay for
| parking permits at our university, and just parked wherever he
| liked. The parking permits were $100+ per month and the parking
| fines were ~$300 per citation, so if he got caught less than
| once per quarter, he would come out ahead.
|
| He told me later that it didn't quite work out in terms of
| saving money, but because he sometimes parked in spots that he
| could not get permits for, it actually saved time.
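|
| The break-even he was betting on, in a couple of lines of
| Python (using the figures quoted above):
|
|   permit_per_quarter = 100 * 3      # ~$100/month over a quarter
|   fine = 300                        # per citation
|   print(permit_per_quarter / fine)  # 1.0 -- fewer than one citation
|                                     # per quarter and he comes out ahead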
| atomic_cowprod wrote:
| Up until recently, fares for the LRT system in my city were
| enforced by a random check by transit police, typically by
| having an officer board trains and check riders' tickets at
| random times during random days and handing out fines to fare
| evaders who they caught.
|
| Between around mid-2006 and the end of 2008 I rode the train
| to work downtown every day. The trains were so crowded during
| rush hour that it was impossible for Transit police to board
| trains to check fares, and even outside rush hour, fare
| checks were _very_ occasional. A monthly pass at the time was
| around $75 and a fine for fare evasion was around $200 (the
| first violation was less than $200, and I think it increased
| until a cap of something like $250 for repeat offenders). I'd
| worked it out that if I was caught without paying a fare
| less than once every three months, it would be cheaper to
| just pay the fine if/when I got caught rather than buy a
| pass. So I didn't buy a pass and decided to see how long it
| would take to actually get caught.
|
| The answer was about 18 months. Got a $170 fine. Which I then
| forgot about and never actually paid. The statute of
| limitations on that fine has long since expired.
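|
| Working out that bet with the figures above (Python; the fine
| varied by offense, so $200 is the rough figure quoted):
|
|   monthly_pass = 75
|   fine = 200
|   print(fine / monthly_pass)   # ~2.7 -- break-even: one citation
|                                # every ~2.7 months
|
|   # actual outcome: one $170 fine over ~18 months of riding
|   print(18 * monthly_pass, "vs", 170)   # 1350 vs 170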
| jonas21 wrote:
| After reading the first half of your comment, I was afraid the
| second half was going to end with something like "Then there
| was a fire and 3 people died because the exit was blocked."
|
| Getting fined doesn't sound so bad -- if it was like $100, your
| friend could just be treating it as a $0.30/day fee for
| convenient parking. But you both seemed to ignore the really
| high-impact potential outcome. So I guess that proves your
| point.
| b450 wrote:
| It's amusing to consider how much of a Rorschach test this
| article must be. But it's a great point, even if it arms us to
| abusively write off unwelcome ideas as scams. As the author
| points out, Pascal's reasoning is easily applied to an infinity
| of conceivable catastrophes - alien invasions, etc. That Pascal
| specifically applied his argument to the possibility of
| punishment by a biblical God was due to the psychological
| salience of that possibility in Pascal's culture - a truly
| balanced application of his fallacious reasoning would be
| completely paralyzing.
| aspenmayer wrote:
| I often like to pair Pascal's wager with Hitchens's razor:
|
| > Hitchens's razor is an epistemological razor that serves as a
| general rule for rejecting certain knowledge claims. It states:
|
| > > What can be asserted without evidence can also be dismissed
| without evidence.
|
| https://en.wikipedia.org/wiki/Hitchens%27s_razor
| taeric wrote:
| This is a funny read in contrast to the latest Nate Silver book.
| He seems to have gone all in on justifying EV bets.
| empath75 wrote:
| Rationalists and Effective Altruism people fall for this stuff
| _constantly_. Roko's Basilisk being the canonical example of it.
|
| They assign infinite negative or positive values to outcomes,
| and then it doesn't matter what the likelihood is or how much
| uncertainty they have everywhere else; they insist that they
| need to do everything possible to cause or prevent whatever
| that outcome is.
|
| Aside from other problems with it, there are a vast number of
| highly improbable and near-infinitely bad or good outcomes that
| might possibly occur, which would require completely different
| actions if you're concerned about them.
___________________________________________________________________
(page generated 2025-07-16 23:01 UTC)