[HN Gopher] The illusion of AI's existential risk
       ___________________________________________________________________
        
       The illusion of AI's existential risk
        
       Author : headalgorithm
       Score  : 28 points
       Date   : 2023-07-19 20:51 UTC (2 hours ago)
        
 (HTM) web link (www.noemamag.com)
 (TXT) w3m dump (www.noemamag.com)
        
       | jiggawatts wrote:
       | I'm just re-reading Blindsight by Peter Watts, a novel that is
       | amazingly prescient in -- amongst other things -- its predictions
       | of this coming economic upheaval.
       | 
       | Speaking of... one scene in the book has humans communicating
       | with an alien that appears to have "learned" human speech by
       | training a non-sentient LLM on human ship-to-ship transmissions,
       | and then "fine tuned" it to achieve the desired communication
       | goal without ever actually understanding what the LLM is saying.
       | 
        | This is a book from 2006 accurately anticipating the salient
        | features of LLMs popularised in the 2020s! That's proper
        | science fiction, right there.
       | 
       | Back to the economic aspect: Several characters in the book had
       | to "butcher themselves" with implants and enhancements to remain
       | economically relevant in the age of AIs. It's that... or you're
       | packed away in storage. Useless.
       | 
       | PS: Imagine training an LLM on cetacean recordings and then fine-
       | tuning on "orca attack imminent". You could use this to scare
       | whales away from ships without understanding what specifically
       | the LLM was singing to them!
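        | 
        | To make that concrete, here's a toy sketch (mine, not
        | anything from the book) of the first step such a pipeline
        | would need: turning raw audio into discrete tokens that a
        | language model can be trained on. Real systems would use a
        | learned audio codec; this uses WaveNet-style mu-law
        | quantization, and the "whale song" is just a synthetic chirp:
        | 
        |   import numpy as np
        | 
        |   def mu_law_tokens(waveform, mu=255):
        |       # Compress a [-1, 1] waveform and quantize it to 256
        |       # discrete tokens -- the WaveNet trick that makes raw
        |       # audio look like a sequence a language model can
        |       # predict.
        |       w = np.clip(waveform, -1.0, 1.0)
        |       c = np.sign(w) * np.log1p(mu * np.abs(w)) / np.log1p(mu)
        |       return np.round((c + 1) / 2 * mu).astype(np.int64)
        | 
        |   # Stand-in for whale song: a one-second chirp at 16 kHz.
        |   t = np.linspace(0.0, 1.0, 16000)
        |   song = np.sin(2 * np.pi * (200 + 300 * t) * t)
        | 
        |   tokens = mu_law_tokens(song)
        |   print(tokens[:20])  # ints in [0, 255], ready for training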
        
       | martythemaniak wrote:
        | We live in a very confusing, transitory time. Right now
        | there's a large chunk of people who are convinced that we are
        | a few years away from extinction if we do not ration out GPUs
        | and bomb illegal clusters of compute, while at the same time
        | there's a large chunk that believes a car can never navigate
        | a single-lane one-way tunnel safely without a driver (i.e.,
        | The Boring Company). Absolutely wild.
        
         | RC_ITR wrote:
          | > a car can never navigate a single-lane one-way tunnel
          | safely without a driver (i.e., The Boring Company).
         | 
         | I think the criticisms of the Boring Company are more "small
         | diameter tunnels are a bad way to build high-throughput
         | underground infrastructure, and autonomous cars will not solve
         | that"
        
         | version_five wrote:
         | I've only seen grifters, hysterics, or the ignorant talking
          | about existential AI risk. A lot of people just nod their
          | heads and go along, but I don't think there are many who
          | have really thought about it (and understand what modern AI
          | is) and really believe in the risk.
         | 
          | It's more like political polarization, where people dwell
          | on something they disagree with until they treat it as if
          | the world will end because of it, rather than seeing it in
          | context. AI is more like the political party they don't
          | like winning: potentially undesirable from a certain point
          | of view, but they frame it as "existential".
         | 
         | I think it's important to see it in this light, instead of
         | trying to actually debate the points raised, which are absurd
         | and will only give credibility where none is due.
        
           | jefftk wrote:
            | _> I've only seen grifters, hysterics, or the ignorant
            | talking about existential AI risk._
           | 
           | Which category would you put Geoffrey Hinton in?
           | https://www.utoronto.ca/news/risks-artificial-
           | intelligence-m...
        
             | cristiancavalli wrote:
              | Hysterics. Remember when he said we should stop
              | training radiologists? That's aged pretty poorly given
              | that there's actually a need for more:
             | 
             | https://www.rsna.org/news/2022/may/global-radiologist-
             | shorta...
             | 
             | https://mindmatters.ai/2022/08/turns-out-computers-are-
             | not-v...
        
         | hollerith wrote:
         | "Convinced that we are a few years from extinction," is an
         | exaggeration of the pessimistic/alarmed end of the spectrum of
         | opinion.
         | 
         | All the forecasts I've seen have the period of danger spread
         | out over at least 20 or 30 years, and no one claims to
         | understand the topic well enough to be able to tell whether a
         | particular model is dangerous even if given complete freedom to
         | examine the source code and interview the creators.
         | 
          | Our inability to predict is in large part the _basis_ of
          | the pessimism/alarm, because it means that the leading labs
          | won't know when to stop.
        
           | acqq wrote:
           | Exactly. And here is an example of that inability in the much
           | clearer case:
           | 
           | "How many years after 1945 ... would it take for the Soviet
           | Union to get the bomb? Framed this way, it was a much easier
           | question than forecasting the advent of AGI: The Americans
           | were concerned with only one adversary and the development of
           | a single technology which they already knew all the details
           | of. American predictions of Soviet proliferation offer a
           | highly constrained case study for those who would undertake
           | technological forecasting today."
           | 
           | https://asteriskmag.com/issues/03/how-long-until-armageddon
        
       | somewhereoutth wrote:
        | Things would probably become clearer if we termed what we
        | currently describe as 'AI' a 'Statistical Pattern
        | Regurgitator' or similar.
       | 
       | Any serious attempt at AI would: need to be trained on reality -
       | like people are trained; need to overcome the cardinality barrier
       | - digital systems can only approximate analogue systems; and need
       | to demonstrate spontaneous language emergence driven by survival
       | in a social environment.
        
       | ggm wrote:
       | AGI doesn't ever have to happen for rapid deployment of AI based
       | systems to cause harm.
       | 
        | Google "robodebt australia" for what happens when government
        | uses machine-derived decisions to penalise the poor.
        
         | __loam wrote:
         | That point is basically what the article is about.
        
       | janalsncm wrote:
        | I don't understand why economic disruption and human
        | obsolescence aren't considered an existential risk. We could
        | end up in a world where 99.99% of people are redundant and
        | useless cost centers for our GDP-maximizing economic
        | paradigm. In that case, you don't need killer robots in order
        | to push humanity towards near-extinction. The _invisible
        | hand_ will smite them.
       | 
        | But don't worry, TED Talk attendee. The _obsoletariat_ mostly
        | won't be Americans. The median member of this class will
        | probably be Chinese or Indian. So you can continue your
        | performative concerns about Roko's Basilisk or whatever topic
        | of distant, paralyzing uncertainty is overflowing from the
        | next room into mine.
        
         | TheOtherHobbes wrote:
         | It's a bizarre and blinkered article. The most immediate
         | dangers are economic. We already have a hopelessly unstable
         | economic system, with increasing swathes of the population
         | economically disenfranchised. AI is more likely to accelerate
         | that than prevent it.
         | 
         | The other immediate dangers are social and political. When one
         | person with resources can run an AI-enhanced troll farm and
         | social media PR engine - not even remotely science fiction - we
         | have a serious problem.
         | 
         | Those threats are already politically and culturally
         | existential.
         | 
         | And that's before anyone has even fitted a gun to an AI-powered
         | autonomous drone. Or gunship.
         | 
         | AI is inherently conservative because it reinforces hierarchy.
         | The AI-poor will have far less political, cultural, and
         | personal leverage than the AI-rich.
         | 
          | Essentially it will have the same effects that money has as
          | a cultural and political practice - but much more so.
        
           | RandomLensman wrote:
           | AI could also be used for counter PR. Eventually, societies
           | could shut down social media or ban use of AI content for
           | political purposes, for example.
           | 
           | I would also expect a huge uptick in bureaucracy from AI, so
           | lots of new jobs will spring up.
        
         | rootusrootus wrote:
         | > We could end up in a world where 99.99% of people are
         | redundant and useless cost centers for our GDP maximizing
         | economic paradigm
         | 
         | If 99% of consumers no longer have money to spend, GDP will
         | definitely not be maximized.
        
           | janalsncm wrote:
            | They don't need to be consumer goods. $1000 spent on
            | clothes is the same as $1000 spent on GPUs as far as GDP
            | is concerned. Headless businesses swapping dollars around
            | create plenty of GDP with few to no people involved.
        
             | RC_ITR wrote:
              | A common fallacy is thinking that transactions exist
              | without a human at either end of them (usually due to
              | many layers of complexity in between).
             | 
             | You can talk all you want about companies selling to
             | companies, etc., but at the end of the day _everything on
             | earth_ is owned by a human eventually.
             | 
             | Even high-frequency trades between two hedge funds are done
             | with capital that was supplied by Limited Partners who
             | probably are acting on behalf of pensioners.
             | 
             | The scenario you're describing is one of extreme wealth
             | inequality, which is a real problem, but one that's
             | supposed to be solvable through democratic means. Stopping
             | technological progress isn't going to solve it.
        
               | janalsncm wrote:
               | Sure, everything is owned by a human. But does it have to
               | be 8 billion humans? Why not 8 million or even 8
               | thousand? If humans provide literally zero economic value
               | and their costs are significant, what can our economic
               | systems say about whether they should even exist? It's
               | Macroeconomic Changes Have Made it Impossible for Me to
               | Want to Pay You [1] but on a global level.
               | 
               | [1] https://www.mcsweeneys.net/articles/macroeconomic-
               | changes-ha...
        
               | RC_ITR wrote:
               | The scenario you're describing is one of extreme wealth
               | inequality, which is a real problem, but one that's
               | supposed to be solvable through democratic means.
               | Stopping technological progress isn't going to solve it.
        
             | semi-extrinsic wrote:
             | Just a tinge of broken window fallacy there?
        
               | RC_ITR wrote:
               | The broken window fallacy is a controversial axiom that
               | implies demand generation isn't a meaningful way to
               | increase GDP (it comes from the camp that believes output
               | capacity is the only true measure of GDP).
               | 
               | This concept became particularly controversial during the
               | Cambridge Capital Controversy [0].
               | 
               | If you believe that GDP is an exogenous measure of what
               | society can produce, you run into oddities like global
               | GDP changing trajectory in 2008 [1] (i.e. we never
               | 'caught up' but did the US housing crisis really cause
               | the human race to lose our ability to create more goods
               | and services?).
               | 
               | On the other hand, if you believe demand drives GDP, then
               | why don't we just demand ourselves into more wealth?
               | Shouldn't natural constraints like resource scarcity then
               | drive GDP (i.e. the original underpinning of the broken
               | window fallacy)?
               | 
               | In either case, the broken window fallacy is far from an
               | agreed-upon axiom.
               | 
                | [0] https://www.aeaweb.org/articles?id=10.1257/08953300332116501...
                | [1] https://data.worldbank.org/indicator/NY.GDP.MKTP.PP.CD
        
         | [deleted]
        
         | __loam wrote:
         | This is a problem with capitalism in general, not just with AI.
        
           | atq2119 wrote:
           | Reminds me of the point made by Ted Chiang that (roughly,
           | from memory) when people express fear of technology, quite
           | often what they really fear is capitalism and how it will use
           | the technology.
           | 
           | This goes back all the way to the Luddites, who weren't
           | actually anti-technology/progress. They were opposed to how
           | its benefits were captured and by whom.
        
           | version_five wrote:
            | We're seeing this happen a lot with "AI" - now that it's
            | a super popular topic, people use it as the lens through
            | which they project their general political grievances. It
            | gets more attention.
        
           | janalsncm wrote:
           | Well, it's a problem with any economic system which uses
           | labor as a proxy for value/rights. If my political power is
           | predicated on my ability to withdraw labor or move to a new
           | country, it's not going to be good for me when my services
           | are no longer required.
        
         | Aerroon wrote:
          | > _I don't understand why economic disruption and human
          | obsolescence aren't considered an existential risk._
          | 
          | Because that's the _goal_ of our economic system! If
          | everything could be made with no human labor required, then
          | the price of these goods is going to trend towards zero.
          | The disruption is going to cause pain, but ultimately
          | that's what we've always been after.
        
           | MattPalmer1086 wrote:
           | Well, no. The goal is to maximise profit. One way to do that
            | is to minimise labour costs. But at the extreme, if no
            | consumers are left who can buy your goods, there is no
            | profit...
        
         | gumballindie wrote:
         | It is, just not by the media. They only worry when their jobs
         | are threatened. The rest of us can have cake.
        
       | hashstring wrote:
       | Yes, this is a good take.
       | 
        | The general discussion is definitely prematurely focused on
        | some sort of end-boss fight, while AI and large-scale data
        | collection are already causing serious harm and privacy
        | problems for real humans. It's only reasonable to expect that
        | this will increase in the future.
       | 
       | I would like to see more discussion focussing on these pressing
       | issues.
       | 
        | Also, a pause on AI development is not an option, not a
        | solution, and is a digression from the issue at hand. Big
        | capital with its hands on big, valuable data will support
        | anything that distracts here.
        | 
        | Finally, I do think that AI also brings many positives; I am
        | not against the technology itself at all, and we shouldn't
        | be.
        
       | more_corn wrote:
        | The distant future. Like five years. Self-aware (or a
        | simulation thereof) AGI with self-directed motivation and the
        | ability to self-modify is barely a half step from ASI. And
        | AGI could happen in the coming year. Five if you wanna be
        | pessimistic. Never if you're extraordinarily pessimistic, but
        | that's looking unlikely. If it's possible, it's possible now.
       | 
        | This isn't the Yellowstone supervolcano that might blow in
        | the next billion years.
       | 
       | People who talk like this are idiotic at best.
       | 
        | Granted, the other concerns are real too, but the accusation
        | that concern over existential risk is being used to hide the
        | other known problems is dangerous in the extreme.
        
         | JimtheCoder wrote:
         | "And AGI could happen in the coming year."
         | 
         | "People who talk like this are idiotic at best."
        
           | __loam wrote:
            | Someone built what is essentially a fancy Markov chain
            | that was trained to look convincing to people, and people
            | think that's going to lead to AGI lol.
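            | 
            | For reference, a bare-bones character-level Markov chain
            | fits in a few lines (a toy sketch with a made-up corpus;
            | the "fancy" part the analogy glosses over is that an LLM
            | conditions on vastly longer contexts through learned
            | representations rather than a lookup table):
            | 
            |   import random
            |   from collections import defaultdict
            | 
            |   def build_chain(text, order=4):
            |       # Map each `order`-char context to every char that
            |       # followed it in the training text.
            |       chain = defaultdict(list)
            |       for i in range(len(text) - order):
            |           chain[text[i:i + order]].append(text[i + order])
            |       return chain
            | 
            |   def generate(chain, seed, order=4, n=60):
            |       # Sample a follower of the last `order` chars.
            |       out = list(seed)
            |       for _ in range(n):
            |           nxt = chain.get("".join(out[-order:]))
            |           if not nxt:
            |               break
            |           out.append(random.choice(nxt))
            |       return "".join(out)
            | 
            |   corpus = "the cat sat on the mat and the rat sat"
            |   print(generate(build_chain(corpus), seed="the "))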
        
             | version_five wrote:
             | "Introducing the AI Mirror Test, which very smart people
             | keep failing"
             | 
             | https://www.theverge.com/23604075/ai-chatbots-bing-
             | chatgpt-i...
        
         | atq2119 wrote:
         | I'm pretty sure you could have expressed your opinion without
         | resorting to insults.
        
       ___________________________________________________________________
       (page generated 2023-07-19 23:00 UTC)