[HN Gopher] AI's big rift is like a religious schism
       ___________________________________________________________________
        
       AI's big rift is like a religious schism
        
       Author : anigbrowl
       Score  : 61 points
       Date   : 2023-12-12 19:13 UTC (3 hours ago)
        
 (HTM) web link (www.programmablemutter.com)
 (TXT) w3m dump (www.programmablemutter.com)
        
       | cs702 wrote:
       | I don't agree with all of the OP's arguments, but wow, what a
       | great little piece of writing!
       | 
       | As the OP points out, the "accelerators vs doomers" debate in AI
       | has more than a few similarities with the medieval debates about
       | the nature of angels.
        
         | ssss11 wrote:
         | You sound like you have some knowledge to share and I know
         | nothing about the medieval debates about the nature of angels!
         | Could you elaborate please?
        
           | dllthomas wrote:
           | I believe parent is referencing https://en.wikipedia.org/wiki
           | /How_many_angels_can_dance_on_t...
        
             | mmcdermott wrote:
             | That same Wikipedia article casts some doubt about whether
             | the question "How many angels can dance on the head of a
             | pin?" was really a question of serious discussion.
             | 
             | From the same entry:
             | 
             | > However, evidence that the question was widely debated in
             | medieval scholarship is lacking.[5] One theory is that it
             | is an early modern fabrication,[a] used to discredit
             | scholastic philosophy at a time when it still played a
             | significant role in university education.
        
               | hoerensagen wrote:
               | The answer is: One if it's the gavotte
        
               | skeaker wrote:
               | Sure, but the point of the phrase is that the question
               | itself is a waste of time.
        
               | empath-nirvana wrote:
                | It's really just shorthand for rationalist debate in
               | general, which is what the scholastics were engaged in.
               | Once you decide you _know_ certain things, then you can
               | end up with all kinds of frankly nutty beliefs based on
               | those priors, following a rock solid chain of rational
               | logic all along, as long as your priors are _wrong_.
               | Scholastics ended up having debates about the nature of
               | the universal intellect or angels or whatever, and
                | rationalists today argue about superhuman AI. That's
               | really the problem with rationalist discourse in general.
               | A lot of them start with what they want to argue, and
               | then use whatever assumptions they need to start building
               | a chain of rationalist logic to support that outcome.
               | 
                | Clearly a lot of "effective altruists", for example, want
               | to argue that the most altruistic thing they could
               | possibly be doing is to earn as much money as they
                | possibly can and hoard as much wealth as they possibly
               | can, so they'll come up with a tower of logic based on
               | far-fetched ideas like making humans an interplanetary
               | species or hyperintelligent AIs, or life extension or
               | whatever so they can come up with absurd arguments like:
                | If we don't end up an interplanetary species, billions
                | and billions of people will never be born, so
               | that's obviously the most important thing anybody could
               | ever be working on, so who cares about that kid starving
               | in Africa right now. He's not going to be a rocket
               | scientist, what good is he?
               | 
               | One thing most philosophers learned at some point is that
               | you need to temper rationalism with a lot of humility
               | because every chain of logic has all kinds of places
               | where it could be wrong, and any of those being wrong is
               | catastrophic to the outcome of your logic.
        
               | aprilthird2021 wrote:
                | The point is that religions at the time had a logical
                | framework which scholars liked to interrogate and play
                | with, even if that served no real-world purpose.
                | Likewise, fighting about doom vs. accel when current-day
                | gen AI is nowhere close to that kind of capability (and
                | hasn't shown it can ever be) is kind of pointless.
        
         | gumby wrote:
         | > wow, what a great little piece of writing!
         | 
         | If you like this essay from The Economist, note that this is
         | the standard level of quality for that magazine (or, as they
         | call themselves for historical reasons, "newspaper"). I've been
         | a subscriber since 1985.
        
           | cs702 wrote:
           | Long-time occasional reader. The level of quality is
           | excellent, I agree.
        
         | nradov wrote:
          | Belief in the imminent arrival of super-intelligent AGI that
          | will transform society is essentially a new secular religion.
          | The technological cognoscenti who believe in it dismiss the
          | doubters who insist on evidence as fools.
         | 
         | "Surely I come quickly. Amen."
        
           | vlovich123 wrote:
            | I would say a definition for AGI is a system that can improve
           | its own ability to adapt to new problems. That's a more
           | concrete formulation than I've typically seen.
           | 
           | Currently humans are still in the loop, but we already have
            | AI enabling advancements in its own functioning at a very
            | primitive level. Extrapolating from previous growth is a form
            | of belief without evidence, since past performance is not
            | indicative of future results. But that's generally true of
           | all prognostication and I'm not sure what kind of evidence
           | you'd be looking for aside from past performance.
           | 
            | The doubters are dismissed as naively thinking that something
            | is outside our ability to achieve, but that's only
           | if you keep moving goalposts and treat it like Zeno's
           | paradox. Like yes, there are weaknesses to our current
           | techniques. At the same time we've also demonstrated an
           | uncanny ability to step around them and reach new heights.
           | For example, our ability to beat Go took less time than it
           | took to develop techniques to beat humans at chess.
            | Automation now outcompetes humans at many, many things that
            | seemed impossible before. Techniques and solutions will also
            | be combined to solve even harder problems (e.g. LLMs are now
            | being researched to take over executive command-and-control
            | of robots, instead of classical control-systems algorithms
            | that were hand-built and hand-tuned).
        
           | ben_w wrote:
           | Automation has been radically changing our societies since
           | before Marx wrote down some thoughts and called it communism.
           | 
           | Things which used to be considered AI before we solved them,
           | e.g. automated optimisation of things like code compilation
           | or CPU layouts, have improved our capacity to automate design
           | and testing of what is now called AI.
           | 
            | It could stop at any point. I'll be _very surprised_ if someone
           | makes a CPU with more than one transistor per atom.
           | 
           | But even if development stops right now, our qualification
            | systems haven't caught up (and IMO _can't_ catch up) with
            | LLMs. We might need to replace them with mandatory 5-year
           | internships to get people beyond what is now the "junior"
           | stage in many professions -- junior being approximately the
           | level which the better existing LLMs can respond at.
           | 
           | "Transform society" covers a lot more than anyone's idea of
           | the singularity.
        
           | concordDance wrote:
           | Do you doubt that copy-pasteable human level intelligence
           | would transform society or that it will come quickly?
        
       | heyitsguay wrote:
        | This piece frames the rift as a debate between broad camps of AI
       | makers, but in my experience both the accelerationist and doomer
       | sides are basically media/attention economy phenomena --
       | narratives wielded by those who know the power of compelling
       | narratives in media. The bulk of the AI researchers, engineers,
       | etc I know kind of just roll their eyes at both. We know there
       | are concrete, mundane, but important application risks in AI
       | product development, like dataset bias and the perils of
       | imperfect automated decision making, and it's a shame that tech-
       | weak showmen like Musk and Altman suck up so much discursive
       | oxygen.
        
         | zamfi wrote:
         | > it's a shame that tech-weak showmen like Musk and Altman suck
         | up so much discursive oxygen
         | 
         | Is it that bad, though? It does mean there's lots of attention
         | (and thus funding, etc.) for AI research, engineering, etc. --
         | unless you are expressing a wish that the discursive oxygen
         | were instead spent on other things. In which case, I ask: what
         | things?
        
           | sonicanatidae wrote:
           | What things?
           | 
           | The pauses to consider _if_ we should do  <action>, before we
           | actually do <action>.
           | 
           | Tesla's "Self-Driving" is an example of too soon, but fuck
           | it, we gots PROFITS to make and if a few pedestrians die,
           | we'll just throw them a check and keep going.
           | 
           | Imagine the trainwreck caused by millions of people
            | leveraging AI like the SCOTUS lawyers, whose brief was
            | written by AI and cited imagined cases in support of its
            | argument.
           | 
           | AI has the potential to make great change in the world, as
           | the tech grows, but it's being guided by humans. Humans
            | aren't known for altruism or kindness (source: history), and
           | now we're concentrating even more power into fewer hands.
           | 
           | Luckily, I'll be dead long before AI gets crammed into every
           | possible facet of life. Note that AI is inserted, not because
           | it makes your life better, not because the world would be a
            | better place for it, and not even to free humans of mundane
           | tasks. Instead it's because someone, somewhere can earn more
            | profits, whether it works right or not, and humans are the
           | grease in the wheels.
        
             | pixl97 wrote:
             | >The pauses to consider if we should do <action>, before we
             | actually do <action>.
             | 
              | Unless there has been an effective gatekeeper, that's
              | almost never happened in history. With nuclear, the
              | gatekeeper is that it's easy to detect. With genetics,
              | there's pretty universal revulsion, to the point that a
              | large portion of most populations are concerned about it.
              | 
              | But with AI, to most people it's just software. And it
              | pretty much is; if you want a universal ban on AI, you
              | really are asking for authoritarian-type controls on it.
        
               | JoshTriplett wrote:
               | > But with AI, to most people it's just software.
               | 
               | Practical AI involves cutting-edge hardware, which is
               | produced in relatively few places. AI that runs on a CPU
               | will not be a danger to anyone for much longer.
               | 
               | Also, nobody's asking for a universal ban on AI. People
               | are asking for an upper bound on AI capabilities (e.g.
               | number of nodes/tokens) until we have widely proven
               | techniques for AI alignment. (Or, in other words, until
               | we have the ability to reliably tell AI to do something
               | and have it do _that thing_ and not entirely different
               | and dangerous things).
        
             | pc86 wrote:
             | Is a Tesla FSD car a worse driver than a human of median
             | skill and ability? Sure we can pull out articles of
             | tragedies, but I'm not asking about that. Everything I've
             | seen points to cars being driven on Autopilot being quite a
             | bit safer than your average human driver, which is
             | admittedly not a high bar, but I think painting it as
             | "greedy billionaire literally kills people for PROFITS" is
             | at best disingenuous to what's actually occurring.
        
           | permanent wrote:
           | It is very bad. There's more money and fame to be made by
            | taking these two extreme stances. The media and the general
            | public are eating up this discourse, which is polarizing
            | society instead of educating it.
           | 
           | > What things?
           | 
           | There are helpful developments and applications that go
           | unnoticed and unfunded. And there are actual dangerous AI
           | practices right now. Instead we talk about hypotheticals.
        
             | zamfi wrote:
             | Respectfully, I don't think it's _AI hype_ that is
             | "polarizing the society".
        
           | heyitsguay wrote:
           | They're talking about shit that isn't real because it
           | advances their personal goals, keeps eyes on them, whatever.
           | I think the effect on funding is overhyped -- OpenAI got
           | their big investment before this doomer/e-acc dueling
           | narrative surge, and serious investors are still determining
           | viability through due diligence, not social media front
           | pages.
           | 
           | Basically, it's just more self-serving media pollution in an
           | era that's drowning in it. Let the nerds who actually make
            | this stuff have their say and argue it out; it's a shame
           | they're famously bad at grabbing and holding onto the
           | spotlight.
        
             | pixl97 wrote:
              | Just to play devil's advocate to this type of response.
             | 
             | What if tomorrow I drop a small computer unit in front of
             | you that has human level intelligence?
             | 
             | Now, you're not allowed to say humans are magical and
             | computers will never do this. For the sake of this
             | theoretical debate it's already been developed and we can
             | make millions of them.
             | 
             | What does this world look like?
        
               | AnimalMuppet wrote:
               | > What does this world look like?
               | 
               | It looks imaginary. Or, if you prefer, it looks
               | hypothetical.
               | 
               | The point isn't how we would respond if this were real.
                | The point is, _it isn't real_ - at least not at this
               | point in time, and it's not looking like it's going to be
               | real tomorrow, either.
               | 
               | I'm not sure what purpose is served by "imagine that I'm
               | right and you're wrong; how do you respond"?
        
               | pixl97 wrote:
                | Thank god you're not in charge of military planning.
               | 
               | "Hey the next door neighbors are spending billions on a
               | superweapon, but don't worry, they'll never build it"
        
               | RandomLensman wrote:
               | On some things that is not a bad position: The old SDI
               | had a lot of spending but really not much to show for it
               | while at the same time forcing the USSR into a reaction
               | based on what today might be called "hype".
        
             | zamfi wrote:
             | The "nerds" are having their say and arguing it out, mostly
             | outside of the public view but the questions are too
             | nuanced or technical for a general audience.
             | 
             | I'm not sure I see how the hype intrudes on that so much?
             | 
             | It seems like you have a bone to pick and it's about the
             | attention being on Musk/Altman/etc. but I'm still not sure
             | that "self-serving media pollution" is having that much of
             | an impact on the people on the ground? What am I missing,
             | exactly?
        
               | heyitsguay wrote:
               | My comment was about wanting to see more (nerds) ->
               | (public) communication, not about anything (public) ->
               | (nerds). I understand they're not good at it, it was just
               | an idealistic lament.
               | 
               | My bone to pick with Musk and Altman and their ilk is
               | their damage to public discourse, not that they're
               | getting attention per se. Whether that public discourse
               | damage really matters is its own conversation.
        
           | fallingknife wrote:
           | Very bad. The Biden admin is proposing AI regulation that
           | will protect large companies from competition due to all the
           | nonsense being said about AI.
        
             | jazzyjackson wrote:
             | Alternatively:
             | 
             | there is nonsense being said about AI _so that_ the Biden
             | admin can protect large companies from competition
        
             | dragonwriter wrote:
             | > The Biden admin is proposing AI regulation that will
             | protect large companies from competition
             | 
             | Mostly, the Biden Administration is proposing a bunch of
             | studies by different agencies of different areas, and some
             | authorities for the government to take action regarding AI
             | in some security-related areas. The concrete regulation
             | mostly is envisioned to be drafted based on the studies,
             | and the idea that it will be incumbent protective is mostly
             | based on the fact that certain incumbents have been pretty
             | nakedly tying safety concerns to proposals to pull up the
             | ladder behind themselves. But the Administration is, at a
              | minimum, resisting the lure of relying on those incumbents'
              | presentation of the facts and alternatives out of the gate,
             | and also taking a more expansive view of safety and related
             | concerns than the incumbents are proposing (expressly
             | factoring in some of the issues that they have used
             | "safety" concerns to distract from), so I think prejudging
             | the orientation of the regulatory proposals that will
             | follow on the study directives is premature.
        
         | pixl97 wrote:
         | The problem with humanity is we are really poor at recognizing
         | all the ramifications of things when they happen.
         | 
          | Did the indigenous people of North America recognize the threat
          | that they'd be driven to near extinction in a few hundred years
          | when a boat showed up? Even if they did, could they have done
          | anything about it? The germs and viruses that would lead to
          | their destruction had already been planted.
         | 
         | Many people focus on the pseudo-religious connotations of a
         | technological singularity instead of the more traditional "loss
         | of predictability" definition. Decreasing predictability of the
         | future state of the world stands to destabilize us far more
         | likely than the FOOM event. If you can't predict your enemies
         | actions, you're more apt to take offensive action. If you can't
         | (at least somewhat) predict the future market state then you
         | may pull all investment. The AI doesn't have to do the hard
         | work here, with potential economic collapse and war humans have
         | shown the capability to put themselves at risk.
         | 
          | And the existential risks are the improbable ones. The "Big
          | Brother LLM", where you're watched by a sentiment-analysis AI
          | for your entire life and disappear forever if you try to hide
          | from it, is a much more likely, and very terrible, outcome.
        
           | MichaelZuo wrote:
           | > The problem with humanity is we are really poor at
           | recognizing all the ramifications of things when they happen.
           | 
           | Zero percent of humanity can recognize "all the
           | ramifications" due to the butterfly effect and various other
           | issues.
           | 
            | Some small fraction of bona fide super geniuses can likely
           | recognize the majority, but beyond that is just fantasy.
        
             | pixl97 wrote:
             | And by increasing uncertainty the super genius recognizes
             | less...
        
         | JohnFen wrote:
         | Yes. I frequently get asked by laypeople about how likely I
         | think adverse effects of AI are. My answer is "it depends on
         | what risk you're talking about. I think there's nearly zero
         | risk of a Skynet situation. The risk is around what people are
         | going to do, not machines."
        
           | ben_w wrote:
           | I don't know the risk of Terminator robots running around,
            | but automatic systems in both the USA and the USSR (and
            | post-Soviet Russia) have been triggered by stupid things like
           | "we forgot the moon didn't have an IFF transponder" and "we
           | misplaced our copy of your public announcement about planning
           | a polar rocket launch".
        
             | JohnFen wrote:
             | Sure, but that's an automation problem, not an AI-specific
             | one.
        
             | pdonis wrote:
             | But the reason those incidents didn't become a lot worse
             | was that the humans in the loop exercised sound judgment
             | and common sense and had an ethical norm of not
             | inadvertently causing a nuclear exchange. That's the GP's
             | point: the risk is in what humans do, not what automated
             | systems do. Even creating a situation where an automated
             | system's wrong response is _allowed_ to trigger a
             | disastrous event because humans are taken out of the loop,
              | is still a _human_ decision; it won't happen unless humans
              | who _don't_ exercise sound judgment and common sense or
             | who don't have proper ethical norms make such a disastrous
             | decision.
             | 
             | My biggest takeaway from all the recent events surrounding
             | AI, and in fact from the AI hype in general, including hype
             | about the singularity, AI existential risk, etc., is that I
             | see _nobody_ in these areas who qualifies under the
             | criteria I stated above: exercising sound judgment and
             | common sense and having proper ethical norms.
        
           | concordDance wrote:
           | What timescale are you answering that question on? This
           | decade or the next hundred years?
        
             | JohnFen wrote:
             | In the decades to come. Although if you asked me to predict
             | the state of things in 100 years, my answer would be pretty
             | much the same.
             | 
             | I mean, all predictions that far out are worthless,
             | including this one. That said, extrapolating from what I
             | know right now, I don't see a reason to think that there
             | will be an AGI a hundred years from now. But it's entirely
             | possible that some unknown advance will happen between now
             | and then that would make me change my prediction.
        
             | pdonis wrote:
             | I don't think it matters. Even if within a hundred years an
             | AI comes into existence that is smarter than humans and
             | that humans can't control, that will only happen if humans
             | make choices that make it happen. So the ultimate risk is
             | still human choices and actions, and the only way to
             | mitigate the risk is to figure out how to _not_ have humans
             | making such choices.
        
         | concordDance wrote:
         | Does Ilya count as a "tech-weak" showman in your book too?
        
         | twinge wrote:
         | The media also doesn't define what it means to be a "doomer".
         | Would an accelerationist with a p(doom) = 20% be a "doomer"?
        
       | gumby wrote:
       | The reference to the origin of the concept of a singularity was
       | better than most, but still misunderstood it:
       | 
       | > In 1993 Vernor Vinge drew on computer science and his fellow
       | science-fiction writers to argue that ordinary human history was
       | drawing to a close. We would surely create superhuman
       | intelligence sometime within the next three decades, leading to a
       | "Singularity", in which AI would start feeding on itself.
       | 
       | Yes it was Vernor, but he said something much more interesting:
       | that as the speed of innovation itself sped up (the derivative of
       | acceleration) the curve could bend up until it became essentially
       | vertical, literally a singularity in the curve. And then things
       | on the other side of that singularity would be incomprehensible
        | to those of us on our side of it. This is reflected in _The
        | Peace War_ and _A Fire Upon the Deep_ and others of his novels
        | going back before the essay.
       | 
        | You can see that this idea is itself rooted in ideas from Alvin
       | Toffler in the 70s (Future Shock) and Ray Lafferty in the 60s
       | (e.g. Slow Tuesday Night).
       | 
       | So AI machines were just part of the enabling phenomena -- the
       | most important, and yes the center of his '93 essay. But the core
       | of the metaphor was broader than that.
       | 
       | I'm a little disappointed that The Economist, of all
        | publications, didn't get this quite right, but in their defense,
        | it was a bit tangential to the point of the essay.
        
         | elteto wrote:
         | Thank you for this great explanation of where "singularity"
         | comes from in this context. Always wondered.
        
         | gardenhedge wrote:
         | TIL, thanks
        
         | galangalalgol wrote:
         | Rainbows End is another good one where he explores the earlier
         | part of the curve, the elbow perhaps. Some of that stuff is
         | already happening and that book isn't so old.
        
         | ghaff wrote:
         | A related concept comes from social progression by historical
          | measures. Based on pretty much any metric, _Why the West Rules
         | for Now_ shows that the industrial revolution essentially went
         | vertical and that prior measures--including the rise of the
         | Roman Empire and its fall--were essentially insignificant.
        
         | stvltvs wrote:
         | > derivative of acceleration
         | 
         | Was this intended literally? I'm skeptical that saying
         | something so precise about a fuzzy metric like rate of
         | innovation is warranted.
         | 
         | https://en.wikipedia.org/wiki/Jerk_(physics)
        
           | dougmwne wrote:
           | I believe the point being made is that the rate of innovation
           | over time would turn asymptotic as the acceleration
           | increased, creating a point in time of infinite progress. On
            | one side would be human history as we know it, and on the
            | other, every innovation possible would happen in a moment. The
           | prediction was specifically that we were going to infinity in
           | less than infinite time.
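            | 
            | A minimal sketch of "infinity in less than infinite time"
            | (hypothetical numbers, assuming growth whose rate scales
            | with the square of current capability, dx/dt = x^2):
            | 
            |     # dx/dt = x**2 has the closed form x(t) = x0/(1 - x0*t),
            |     # which diverges at the finite time t = 1/x0.
            |     x0 = 1.0  # capability at t = 0, in arbitrary units
            |     for t in [0.0, 0.5, 0.9, 0.99, 0.999]:
            |         print(t, x0 / (1 - x0 * t))  # ~1, 2, 10, 100, 1000
            | 
            | Plain exponential growth (dx/dt = k*x) never does this: it
            | stays finite at every finite t, so a true singularity needs
            | faster-than-exponential feedback.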
        
             | bloppe wrote:
             | You only reach a vertical asymptote if every derivative up
             | to the infinite order is increasing. That means
             | acceleration, jerk, snap, crackle, pop, etc. are all
             | increasing.
             | 
             | The physical world tends to have certain constraints that
             | make such true singularities impossible. For example, the
              | universal speed limit: c. But you could argue that we
             | could approximate a singularity well enough to fool us
             | humans.
        
         | JumpCrisscross wrote:
         | > _In 1993 Vernor Vinge drew on computer science and his fellow
         | science-fiction writers to argue that ordinary human history
         | was drawing to a close_
         | 
         | Note that this category of hypothesis was common in various
         | disciplines at the end of the Cold War [1]. (Vinge's being
         | unique because the precipice lies ahead, not behind.)
         | 
         | [1]
         | https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...
        
         | dekhn wrote:
         | I think it's worth going back and reading Vinge's "The Coming
         | Technological Singularity"
         | (https://edoras.sdsu.edu/~vinge/misc/singularity.html) and then
          | follow it up by reading The Peace War, but most importantly its
         | unappreciated detective novel sequel, Marooned In Realtime,
         | which explores some of the interesting implications about
         | people who live right before the singularity. I think this book
          | is even better than A Fire Upon the Deep.
         | 
         | When I read the Coming Technological Singularity back in the
         | mid-90s it resonated with me and for a while I was a
          | singularitarian -- basically, dedicated to learning enough
         | technology, and doing enough projects that I could help
         | contribute to that singularity. Nowadays I think that's not the
         | best way to spend my time, but it was interesting to meet Larry
          | Page and see that he had concluded something similar (for
         | those not aware, Larry founded Google to provide a consistent
         | revenue stream to carry out ML research to enable the
         | singularity, and would be quite happy if robots replaced
         | humans).
        
         | bloppe wrote:
         | > I'm a little disappointed that The Economist, of all
         | publications, didn't get ths quite right
         | 
         | It's a guest essay. The Economist does not edit guest essays.
         | They routinely publish guest essays from unabashed
         | propagandists as well.
        
         | leereeves wrote:
         | > Vernor...said something much more interesting: that as the
         | speed of innovation itself sped up (the derivative of
         | acceleration) the curve could bend up until it became
         | essentially vertical, literally a singularity in the curve.
         | 
         | In other words, Vernor described an exponential curve. But are
         | there any exponential curves in reality? AFAIK they always hit
         | resource limits where growth stops. That is, anything that
         | looks like an exponential curve eventually becomes an S-shaped
         | curve.
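          | 
          | A small sketch (illustrative parameters only) of how a curve
          | that looks exponential early on flattens into an S-shape at a
          | resource limit K:
          | 
          |     import math
          |     K, r, x0 = 1000.0, 1.0, 1.0  # limit, growth rate, start
          |     A = (K - x0) / x0            # logistic: K / (1 + A*e^-rt)
          |     for t in [0, 2, 4, 6, 8, 10, 12]:
          |         print(t, round(K / (1 + A * math.exp(-r * t)), 1))
          |     # -> 1.0, 7.3, 51.8, 287.7, 749.0, 956.6, 993.9
          | 
          | Early values grow like e^(r*t); later ones crowd up against
          | the carrying capacity K.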
        
       | ctoth wrote:
       | > They didn't suggest a Council of the Elect. Instead, they
       | proposed that we should "make AI work for eight billion people,
       | not eight billionaires". It might be nice to hear from some of
       | those 8bn voices.
       | 
       | Good sloganizing, A+++ would slogan with them.
       | 
       | But any concrete suggestions?
        
         | kolme wrote:
         | The Hollywood writers had some very good suggestions!
        
         | _heimdall wrote:
         | If there was any serious concern over the 8bn voices, I'd
         | assume we would first have been offered some say in whether we
         | even wanted this research done in the first place. Getting to
         | the point of developing an AGI and only then asking what we
         | collectively want to do with it seems pointless.
        
           | arjun_krishna1 wrote:
           | I'd like to bring up that most people in the developing world
           | (China, India, Pakistan) are absolutely thrilled with AI and
           | ChatGPT as long as they are allowed to use them. They see it
           | as a plus
        
         | Animats wrote:
         | > It might be nice to hear from some of those 8bn voices.
         | 
         | They won't matter.
         | 
         | Where AI is likely to take us is a society where a few people
         | at the top run things and most of the mid-level bureaucracy is
         | automated, optimizing for the benefit of the people at the top.
         | People at the bottom do what they are told, supervised by
         | computers. Amazon and Uber gig workers are there now. That's
         | what corporations do, post-Friedman. AI just makes them better
         | at it.
         | 
         | AI mostly replaces the middle class. People in offices. People
         | at desks. Like farmers, there will be far fewer of them.
         | 
         | Somebody will try a revolution, but the places that revolt will
         | get worse, not better.
        
       | thriftwy wrote:
       | "Pandem" is still not translated into English in 2023. A great
       | book about singularity and the most important that I have ever
       | read.
        
         | BWStearns wrote:
         | Have a link? Due to uh, recentish events, this book appears
         | ungoogleable.
        
           | thriftwy wrote:
           | https://fantlab.ru/work1052
        
       | arisAlexis wrote:
        | Good to have impartial articles, but it should be noted that the
        | top 3 most-cited AI researchers all have the same opinion.
       | 
       | That's Hinton, Bengio and Sutskever.
       | 
        | Their voices should carry more weight than Andreessen and other
        | VCs with vested interests and no relevance to AI.
        
         | kesslern wrote:
         | What is that opinion?
        
         | nwiswell wrote:
         | I'm not sure how you are getting the citation data for Top 3,
         | but LeCun must be close and he does not agree.
        
           | downWidOutaFite wrote:
           | Not that it means anything for this question but LeCun has
           | half as many citations as Hinton
           | 
           | Hinton 732,799
           | https://scholar.google.com/citations?hl=en&user=JicYPdAAAAAJ
           | 
           | Bengio 730,391
           | https://scholar.google.com/citations?hl=en&user=kukA0LcAAAAJ
           | 
           | He 508,365
           | https://scholar.google.com/citations?hl=en&user=DhtAFkwAAAAJ
           | 
           | Sutskever 454,430
           | https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ
           | 
           | Girshick 418,751
           | https://scholar.google.com/citations?hl=en&user=W8VIEZgAAAAJ
           | 
           | Zisserman 389,748
           | https://scholar.google.com/citations?hl=en&user=UZ5wscMAAAAJ
           | 
           | LeCun 332,027
           | https://scholar.google.com/citations?hl=en&user=WLN3QrAAAAAJ
        
       | neonate wrote:
       | https://web.archive.org/web/20231212220138/https://www.progr...
        
       | concordDance wrote:
       | Damn, there's a lot of nonsense in that essay.
       | 
       | "Rationalists believed that Bayesian statistics and decision
       | theory could de-bias human thinking and model the behaviour of
       | godlike intelligences"
       | 
       | Lol
        
         | nullc wrote:
          | That's actually the lite version told to non-believers; the
         | full Xenu version is that they're going to construct an AI God
         | that embodies "human" (their) values which will instantly take
          | over the world and maximize all the positive things, protect
         | everyone from harm, and (most importantly) prohibit the
         | formation of any competing super-intelligence.
         | 
         | And if they fail to do this, all life on earth will soon be
         | extinguished (maybe in the next few years, probably not more
         | than 50 years), and potentially not just destroyed but
         | converted into an unspeakable hell.
         | 
         | An offshoot of the group started protesting the main group for
         | not doing enough to stop the apocalypse, but has been set back
         | a bit by the attempted murder and felony-murder charges they're
         | dealing with. Both the protests and the murder made the news a
          | bit, but the media hasn't managed to note the connections
          | between the events or to the AI doomer cult yet.
         | 
         | I worry that it's going to evolve to outright terrorist
         | attacks, esp with their community moralizing the use of nuclear
          | weapons to shut down GPU clusters... but even if it doesn't,
          | it's still harming people by convincing them that they likely
          | have no future and that they're murdering trillions of future
          | humans by not doing everything they can to stop the doom, and
          | by influencing product and policy discussions in weird ways.
        
           | turnsout wrote:
           | Uh, what? Please write about this somewhere.
        
           | ttt11199907 wrote:
           | > their community moralizing the use of nuclear weapons to
           | shut down GPU clusters
           | 
           | Did I miss some news?
        
         | EamonnMR wrote:
         | That's a pretty fair characterization of the Roko's Basilisk
         | crowd though, isn't it?
        
         | ImaCake wrote:
         | That seems like a pretty accurate take on the rationalist
         | movement though? My skepticism with it is that awareness is not
         | enough to overcome bias.
        
       | zzzeek wrote:
        | The billionaires can't decide if AI will create for them a god or
        | a demon. But they do know it's going to make them boatloads of
        | cash at everyone else's expense no matter which way it goes; they
        | aren't debating that part.
        
         | tharne wrote:
         | There's nothing to debate. This is the way these folks have
         | been operating for decades. Why change now?
        
       | rambambram wrote:
        | I only see the word 'AI'; it's mentioned exactly 27 times. The
       | word 'LLM' is used nowhere in this article.
        
       | skepticATX wrote:
       | Eschatological cults are not a new phenomenon. And this is what
       | we have with both AI safety and e/acc. They're different ends of
       | the same horseshoe.
       | 
       | Quite frankly, I think for many followers, these beliefs are
       | filling in a gap which would have been filled with another type
       | of religious belief, had they been born in another era. We all
       | want to feel like we're part of something bigger than ourselves;
       | something world altering.
       | 
       | From where I stand, we are already in a sort of technological
       | singularity - people born in the early 1900s now live in a world
       | that has been completely transformed. And yet it's still an
       | intimately familiar world. Past results don't guarantee future
       | results, but I think it's worth considering.
        
       ___________________________________________________________________
       (page generated 2023-12-12 23:00 UTC)