[HN Gopher] AI Companies and Advocates Are Becoming More Cult-Like
___________________________________________________________________
AI Companies and Advocates Are Becoming More Cult-Like
Author : legrande
Score : 28 points
Date : 2024-01-30 19:24 UTC (3 hours ago)
(HTM) web link (www.rollingstone.com)
(TXT) w3m dump (www.rollingstone.com)
| LiquidSky wrote:
| Becoming?
| lainga wrote:
| SV Pivots from Bunkers in Nauru to Calling Enemies Murderers
|
| I mean, really? "effective accelerationism"?
| rightbyte wrote:
| "Deaths that were preventable by the AI that was prevented from
| existing is a form of murder."
|
| Is he trying to dehumanize critics?
| JohnFen wrote:
| It's just a pure emotional manipulation attempt. The statement
| itself is illogical on its face, and presents the tech as
| having only upsides, ignoring the possibility of deaths caused
| by AI.
|
| Any time I see appeals like this, my takeaway is that the
| person has no actual argument.
| eli_gottlieb wrote:
| Well yeah, it's just plain moral blackmail. If it weren't so
| common in the discourse, I'd say he should actually be stopped
| from doing it.
| add-sub-mul-div wrote:
| I have murdered so many people by not becoming a doctor.
| floren wrote:
| Give me money or you're a murderer.
| AnimalMuppet wrote:
| Two can play that game. "Taking all this money you're wasting
| on AI and not spending it on mosquito nets is a form of
| murder."
| EA-3167 wrote:
| Without an affective death spiral, how can an overpromised,
| overfunded technology keep a dominant market position? Think of
| the shareholders!
| lumost wrote:
| One emerging challenge with AI is that we are not seeing strong
| adoption outside of chat applications. While every company has
| pivoted to trying to leverage GenAI, applications outside of
| OpenAI's have seen ... mixed adoption.
|
| This is not dissimilar to mobile in 2008, or cloud circa 2006.
| It's ok that new applications take time to emerge, but the
| valuation of firms implies that these technologies have _already_
| emerged. Perhaps this is fine for core AI development, but
| placing these valuations on SaaS firms could be problematic in
| the future.
| floren wrote:
| Driving 101 through the peninsula, it's shocking how many
| billboards are advertising AI <something>. I think the article
| correctly diagnoses it as a desperate attempt to tap VC money
| that's getting scarcer and scarcer.
| kaycebasques wrote:
| I was expecting more analysis of specific cult-like behavior,
| of which I agree there is quite a lot. After a close call with a
| borderline cult in the 2010s I spent a fair bit of time
| processing that experience and reading up on cult dynamics. Some
| of the stuff OpenAI employees were saying on Twitter during the
| weekend crisis raised a lot of yellow flags for me.
|
| We should not fool ourselves into thinking that cult-like
| behavior is isolated to AI, though. It often pops up alongside
| new tech. On that note, this is a pretty remarkable quote:
|
| > "If your product isn't amenable to spontaneously producing a
| cult, it's probably not impactful enough."
| EA-3167 wrote:
| I'm curious, given your experiences, if you've read 'Terror,
| Love and Brainwashing' by Stein? And if so, what did you think
| of it?
| kaycebasques wrote:
| Have not read it, will check it out
| RGamma wrote:
| People are scary :/
| fjoireoipe wrote:
| Can we finally talk about this site, and how it relates to these
| cults? There are a number of lesswrong, e/acc, and other
| pseudo-"rationalist" blogs that get shared and upvoted on this
| site. Most of their assumptions go unchallenged. I'm not saying
| they shouldn't be read, or debated. But their writing should be
| viewed in context: it's fringe stuff, written by people from a
| peculiar subculture whose values are out of whack with those of
| most people.
|
| I'd like to make some modest assertions, to push back on fringe
| ideas I've seen repeated here:
|
| 1. For "the singularity" to happen, we probably need something
| more than chatGPT ingesting more data or using more processing
| power.
|
| 2. Even if, somehow, chatGPT turned into skynet, we'd hopefully
| be able to unplug the computer.
|
| 3. If you want to save lives, it's probably more useful to think
| about lives saved today than hypothetical lives you could save
| 100 years from now. Not that you shouldn't consider the long
| term, but your ability to predict the future gets worse and worse
| the farther you project out.
|
| 4. If you want to save lives, it's probably more useful to save
| actual lives than, say, hypothetical simulated lives that exist
| inside of a computer.
|
| 5. The argument that "we're killing more people by delaying the
| invention of the hypothetical life-saving technology" is not
| very useful either, because you can't actually say how many
| lives would be saved versus harmed. And mostly it just sounds
| like a front for "give me more money and fewer regulations".
|
| 6. Reading a bunch of science fiction and getting into fights on
| an internet forum is not a substitute for education and
| experience. Unless you've spent a good amount of time studying
| viruses, you are not a virologist, and while the consensus among
| virologists can be wrong, you should have the intellectual
| humility to realize that, without expertise in the field, you
| are probably not equipped to tell when it is.
|
| 7. Anything that smacks of eugenics is most likely pseudoscience.
|
| 8. If someone talks like a racist / sexist / nazi, or acts like a
| racist / sexist / nazi, they probably are one. It's probably not
| a joke, or a test.
| AnimalMuppet wrote:
| Well, this site often deals with fringe stuff. Take Lisp, for
| example. It is a fringe language, no matter how zealous its
| advocates are. So is Haskell. But you see articles about them
| posted here far out of proportion to their real-world use and
| impact.
|
| And I disagree that "most of their assumptions go
| unchallenged". I often see challenges here to the assumptions
| of lesswrong, e/acc, and other such stuff. (Maybe less often
| than is warranted, but still fairly often.)
|
| "It is the mark of an educated mind to be able to entertain a
| thought without accepting it." - often attributed to Aristotle,
| but apparently not actually from him. Still a good thought. We
| can post about this stuff, and discuss it, without buying it.
|
| I agree with your pushbacks.
| two_in_one wrote:
| > 1. For "the singularity" to happen, we probably need
| something more than chatGPT ingesting more data or using more
| processing power.
|
| It's not actually clear what "the singularity" is. Is it
| something running out of control, or is it still controllable?
| The line is blurry. People are afraid because they imagine it
| as a sort of uncontrollable explosion.
|
| The second question is about AGI. What is it? Is it something
| "alive", or just a generic AI calculator with no "creature"
| features, like self-preservation?
|
| I think our view of these two things will change soon, as we
| get a close-up picture. Much like the Turing test doesn't look
| great anymore, now that even dumb chatbots can pass it.
| com2kid wrote:
| > If you want to save lives, it's probably more useful to think
| about lives saved today than hypothetical lives you could save
| 100 years from now. Not that you shouldn't consider the long
| term, but your ability to predict the future gets worse and
| worse the farther you project out.
|
| This reasoning, taken to its logical extent, would mean
| completely defunding all medical research and redirecting that
| money toward making the best use of our current technology.
|
| That is, hopefully, an obviously faulty conclusion.
|
| We must balance research with using current findings to help
| people. Stopping all drug development and telling people with
| currently untreatable diseases "haha, too bad, we aren't even
| going to try" is cruel.
|
| The ultimate goal of many technology utopians is the end of all
| death. We have examples from the natural world of incredibly
| long-lived higher life forms, and of simple life forms that are
| immortal, so it isn't scientifically impossible. The number of
| lives saved is, literally, infinite. It is really hard to argue
| with "infinite upside".
|
| (The absolute societal shit show that such a technology would
| bring about is a rather important question...)
|
| > "we're killing more people by delaying time inventing the
| hypothetical life saving technology" is not very useful either,
| because you can't actually say how many lives would be saved
| versus harmed.
|
| For many future inventions, estimates can of course be made of
| the number of lives saved.
|
| The more pie-in-the-sky the research is, the larger those error
| bars become. "LLMs may one day help us sift through research
| and cure all forms of cancer" is technically true, but the
| error bars on it are so wide that you don't want to devote all
| the world's resources to LLMs just on the hope of eliminating
| all forms of cancer 50+ years from now.
|
| "This company is working on AI Vision+Robotics technology so
| people don't have to do this super hazardous job that claims a
| bunch of lives. They are an estimated 5 years from productizing
| and they already have purchase contracts in place" is a much
| different statement.
| frogamel wrote:
| I've worked in many research scientist/MLE roles over the
| years, and I haven't met any IC who has this much of a fixation
| on AI being a moral evil/good. The ones who do are invariably
| nontechnical hucksters, usually just trying to get money or
| self-promote.
| ChrisArchitect wrote:
| "Related" rollingstone, from 3 years ago:
|
| _Welcome to the Church of Bitcoin_
|
| https://news.ycombinator.com/item?id=27871038
___________________________________________________________________
(page generated 2024-01-30 23:01 UTC)