[HN Gopher] A critique of longtermism: why you shouldn't worry a...
___________________________________________________________________
A critique of longtermism: why you shouldn't worry about the far
future
Author : gautamcgoel
Score : 26 points
Date : 2022-01-26 21:14 UTC (1 hours ago)
(HTM) web link (gautamcgoel.substack.com)
(TXT) w3m dump (gautamcgoel.substack.com)
| mensetmanusman wrote:
| It's hard to reason about care for the future in a nihilistic
| framework, which seems to be an issue at the present moment, i.e.
| humanity may have a critical mass of people who say 'meh'.
|
| What if 'meh' is the great filter?
| ouid wrote:
| Wait, people use longtermism to argue that man-made climate
| change isn't important?
| gautamcgoel wrote:
| See here:
|
| https://www.currentaffairs.org/2021/07/the-dangerous-ideas-o...
| dotsam wrote:
| If you have the choice between taking an action that may
| potentially save billions and billions of future lives, or
| taking an action that may have an extremely marginal impact on
| a very bad, and yet not absolutely apocalyptic, climate change
| disaster, a longtermist would probably encourage you to
| maximise your impact by focusing at least some of your
| resources on preventing the more deadly scenarios (though not
| necessarily to the exclusion of dealing with climate change).
| SkyMarshal wrote:
| _> Longtermists use this reasoning to justify many controversial
| stances; for example, longtermists rate climate change as a
| relatively unimportant concern, since it will only have a direct
| effect on the billions of people who live on the Earth, and not
| the countless multitudes, who, they believe, will one day live in
| various star systems across the galaxy._
|
| There's longtermism and then there's _longtermism_. Climate
| change is a problem that will affect our kids and grandkids in
| this century and the next, hence it's a longterm problem.
|
| But worrying about the future diaspora of _"potentially hundreds
| of trillions or more"_ humans across the Milky Way Galaxy is so
| far off and unforeseeable I'm not even sure "longtermism" is the
| right word for it. That's more SciFi territory.
| marginalia_nu wrote:
| Who is he even arguing against, Hari Seldon?
|
| It's kinda tiresome, all these -isms that people invent to argue
| against. More often than not they're strawman positions that few
| to no people actually hold. We should expect people to counter
| specific arguments made by specific people, not vague ideas held
| by nebulous -ists.
| gautamcgoel wrote:
| Longtermism is increasingly viewed as a respectable moral
| movement; one of its best-known proponents is Nick Bostrom,
| director of the Future of Humanity Institute at Oxford, which
| received a $10M donation from Elon Musk. You can read more
| about longtermism (and some of the really bizarre views its
| adherents espouse) in this Current Affairs piece:
|
| https://www.currentaffairs.org/2021/07/the-dangerous-ideas-o...
| lacker wrote:
| _Who is he even arguing against, Hari Seldon?_
|
| It's common for people concerned about an impending AI
| apocalypse to use this sort of "longtermist" argument. For
| example this article:
|
| https://oxfordpoliticalreview.com/2019/08/25/is-ai-safety-ra...
|
| _If we reduce existential risk by mere one-millionth of one
| percentage point, it will be worth more than 100 times the
| value of saving a million human lives. The expected value of
| any other good actions - like helping people here and now -
| will be trivial compared to even the slightest reduction in
| existential risk._
|
| Similarly, the recent book The Precipice argued that the most
| important risk for humanity is AI risk because the badness of
| AI destroying all humanity is much worse than, say, climate
| change killing off 99% of humans.
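|
| As a rough sanity check of the quoted arithmetic (this assumes the
| ~10^16 future-lives figure Bostrom has used as a lower bound,
| which the quote itself doesn't state):
|
|       # expected lives saved by a tiny cut in extinction risk,
|       # assuming ~10^16 future lives (an assumed figure, not
|       # stated in the quote)
|       future_lives = 1e16
|       risk_reduction = 1e-6 * 0.01  # a millionth of a % point
|       print(future_lives * risk_reduction)  # 1e8 = 100x a million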
| Barrin92 wrote:
| Nick Bostrom, for example, who seems fairly respected, with some
| money behind his institution.
|
| Ideologically 'longtermism' functions the same way as
| arguments about 'existential risk'. They're often highbrow
| ways to justify authoritarian stances. To give you a concrete
| example, Bostrom's writing contains a lot of arguments along
| the lines of "if there is an infinitesimally small risk of
| some terrorist making a bioweapon that wipes out life on
| earth, it's worth introducing widespread surveillance
| measures".
|
| I'd agree that it's not a particularly common belief system
| and it's not like the government will be full of draconian
| longtermists but it has gained some traction.
|
| https://cherwell.org/2019/04/27/mass-surveillance-could-save...
| MattGaiser wrote:
| Indeed. I want to meet these people who say not to worry
| about climate change because we will have space travel.
| teekert wrote:
| You can even use this to reason the other way: Climate change
| will lead to a planet that can sustain fewer humans 100 years
| from now, so those lost people won't have their hundreds of
| trillions of offspring thousands of years in the future. We
| should stop climate change.
| fleddr wrote:
| Well, I'm quite confident that a species that figures out how
| to travel at the speed of light (which is still very slow if
| you consider that the Milky Way is 100,000 light years
| across) will no longer depend on a nine-month biological
| process for reproduction.
|
| In fact, it's really questionable whether the human species
| will still exist in its biological form. AI might match human
| intelligence within decades and then grow exponentially in
| capability from there. Add this exponential function to a
| timeline of thousands of years and where do you end up?
|
| Current humans? I don't think so.
| lolsal wrote:
| I agree with your sentiment - longtermism as a term needs to be
| tempered with a dose of reality. When does longtermism become
| another word for "fantasy"?
| fleddr wrote:
| A fair warning, but I don't think anybody was worrying about this
| kind of far future anyway.
|
| Just adjusting and staying relevant in our fast-paced world
| requires an increasingly short-term focus, if anything. Paycheck
| to paycheck and business quarter to business quarter. Further,
| many people aren't very hopeful for the future as everything
| seems to be breaking down.
|
| We're also absolutely terrible at even "short term" problems.
| We've been warned about climate change since the 70s. We had
| decades to curb it but did nothing. We can foresee population
| booms as well as aging decades before they happen, but again do
| absolutely nothing.
|
| We're a short-term species. And getting even more short-term.
| Instant gratification and all that.
|
| Personally, I don't care at all about the prospect of people
| populating the galaxy. Imagine the mess they make. It wouldn't be
| long before the sky looked like an ad.
| conformist wrote:
| This feels a bit like a straw man, in that proponents of
| longtermism appear to be aware of improbable outcomes,
| discounting, and things like that.
|
| One of the problems seems to be that even if you agree that the
| utility of unlikely but great outcomes for humanity in the far
| future is something like (very small probability of perfect
| scenario) x (very small discount factor) x (very large utility),
| people might disagree fundamentally about the order of magnitude
| of all three quantities, which will lead to vastly different
| judgements. I'm curious whether this sort of argument is itself
| too naive, and whether there are convincing arguments, prevalent
| in the longtermist community, for why this ought not to be the
| case.
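|
| A minimal sketch of that sensitivity (the numbers below are purely
| illustrative, not taken from any longtermist source):
|
|       # expected value = P(perfect scenario) * discount * utility;
|       # shifting each factor by a few orders of magnitude flips
|       # the comparison with a present-day intervention worth ~1
|       for p, d, u in [(1e-9, 1e-3, 1e16), (1e-12, 1e-5, 1e14)]:
|           print(p * d * u)  # 1e4 vs 1e-3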
| ALittleLight wrote:
| I recently read The Precipice, which advocates for longtermism.
| I'd recommend the Substack author read that book, as it deals
| with the objections raised here.
|
| It's true that longtermists view climate change as relatively
| less of a threat than other existential risks. The reason is that
| climate change seems unlikely to literally make humanity extinct
| or to trap us in an unrecoverable and inescapable bad position.
| Plus, climate change is relatively slow-moving compared to other
| existential risks. The book does a better job of explaining this
| than I can do here.
|
| I could see a grease fire on my kitchen stove and decide to put
| it out even while acknowledging that a fire is relatively less
| important than villains abducting me and my family and torturing
| us to death.
| gautamcgoel wrote:
| I haven't read it. Can you please summarize the
| counterarguments presented in the book?
| Hokusai wrote:
| https://en.wikipedia.org/wiki/The_Precipice:_Existential_Ris...
|
| I find the logic flawed, as it seems to imply that extreme
| suffering of some is justified if it benefits many others.
| "You can kill a person to harvest their organs to save many"
| seems implicit, but I only read the summary. The book may
| contain much more nuance.
| jrpt wrote:
| Have you read Beckstead's dissertation?
| https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1...
| zopa wrote:
| When we think about the time value of money, we're thinking about
| the value of money to an individual. At bottom, it isn't about
| market returns; it's about the near-universal human preference
| for something now over something later. Once you have that
| concept in mind you can do some calculations and get the
| wealth-weighted aggregate time preferences of everyone in the
| market. But notice that that's an average: particular people can
| and do have wildly different time preferences, and there's no
| abstract Platonic "true" value of money aside from its value to
| people.
|
| Which is all well and good, but not really relevant to effective
| altruism or utilitarian ethics. Effective altruism starts with
| the premise that people have inherent worth in some universal
| sense. If we're just thinking about people's value relative to
| me, then sure, humanity thousands or millions of years from now
| isn't worth much. I'll never know or interact with them, and nor
| will anyone I know or care about. But the whole appeal of
| utilitarian ethics is that it's not based on who I happen to
| like.
| gautamcgoel wrote:
| I see the point you're making, but part of what I was trying to
| get at is that economic utility and social welfare should be
| discounted, not necessarily moral utility. For example, a
| life-saving invention created today is worth more than if it's
| created a century from now, because it could potentially save
| many lives during that century.
| satellite2 wrote:
| So we should discount the value of future human life using the
| DCF method? This is ridiculous.
|
| The DCF makes sense because receiving a dollar tomorrow doesn't
| have the same value as receiving one today: if I receive it
| today, I can invest it in the risk-free asset and have more than
| one dollar tomorrow. This doesn't apply to human life.
|
| I understand the author's feeling about designing policy aimed
| at future people, but the argument is just plain wrong. I'm sure
| there is a non-fallacious argument supporting this point though.
| Maybe a risk / black-swan-based one?
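|
| For concreteness, here is what constant DCF-style discounting does
| to far-future value (the 2% rate is an arbitrary illustration, not
| a figure from the article):
|
|       # present value of 1 unit of value realized t years from
|       # now, discounted at a constant annual rate r
|       def present_value(t, r=0.02):
|           return 1 / (1 + r) ** t
|
|       print(present_value(100))   # ~0.14
|       print(present_value(1000))  # ~2.5e-9, effectively zero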
| gautamcgoel wrote:
| "I don't agree with this argument but have no substantive
| criticism, so I'll call it ridiculous. Also, I'm sure there is
| a good argument against longtermism, I just don't know what it
| is. Maybe something involving <buzzword>?"
| randallsquared wrote:
| "No substantive criticism" seems to be filling the same role
| as "ridiculous". Applying DCF doesn't seem ridiculous, but
| there is still an argument to be made about whether humans
| that exist now are each more important to our current plans
| than humans that (may) exist in two hundred, two thousand, or
| two million years.
| adamisom wrote:
| The substantive criticism is clear: discounting is
| questionable when it comes to human lives.
| csdvrx wrote:
| > This doesn't apply to human life.
|
| Yes it does: the certainty, even for someone healthy and alive
| today, that they will remain alive can only decrease as time
| goes on.
|
| Look at how the best plans go to waste for people who save when
| they're young, with grand plans to retire early and gorge on
| leisure/travel/etc, and then suddenly find out they have a
| terminal disease or some other health problem that at best
| reduces their ability to travel or enjoy their leisure (trekking
| in the desert in a wheelchair at 70? Or on painkillers? Might
| be possible, but certainly not as fun as doing the same thing
| at 20, if only because you'll find fewer people of your own age
| group willing to join in on the adventure).
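|
| A rough sketch of that survival-based discount (the flat 1%
| annual risk is a made-up round number, just to show the shape):
|
|       # chance of still being alive and able t years from now,
|       # assuming a flat 1% annual risk of death or disability
|       def p_still_able(t, annual_risk=0.01):
|           return (1 - annual_risk) ** t
|
|       print(p_still_able(10))  # ~0.90
|       print(p_still_able(50))  # ~0.61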
| adamisom wrote:
| Good point, yet the appropriate discount rate has gone down
| over time with better medicine and technology. So it comes
| down to: do you believe medicine and tech will keep advancing,
| and if so, doesn't that mean the discount rate will get ever
| smaller over time? I think it does.
| csdvrx wrote:
| While I agree with you that the discount rate should decrease,
| I also believe that eventually we all get sick and then die.
|
| Nothing short of a few radical breakthroughs can fix a
| strictly decreasing function.
| bell-cot wrote:
| I've no idea how legit, or popular, this notion of longtermism
| is.
|
| But it sounds like a great excuse to ignore almost all of the
| world's current or "in our lifetimes" problems, focus on a few
| very theoretical problems - of serious interest to only a few
| SciFi fans and longtermists - and proclaim it a virtue to do
| that. (While probably _doing_ nothing whatever about any problem,
| beyond talking big and puffing themselves up.)
___________________________________________________________________
(page generated 2022-01-26 23:00 UTC)