[HN Gopher] Skynet won and destroyed humanity
       ___________________________________________________________________
        
       Skynet won and destroyed humanity
        
       Author : xena
       Score  : 142 points
       Date   : 2025-03-05 18:56 UTC (4 hours ago)
        
 (HTM) web link (dmathieu.com)
 (TXT) w3m dump (dmathieu.com)
        
       | crooked-v wrote:
       | > But as it started consuming more and more data that it had
       | produced itself, its reliability became close to none.
       | 
       | It's exactly the opposite with LLMs. See the "model collapse"
       | phenomenon (https://www.nature.com/articles/s41586-024-07566-y).
       | 
       | > We show that, over time, models start losing information about
       | the true distribution, which first starts with tails
       | disappearing, and learned behaviours converge over the
       | generations to a point estimate with very small variance.
       | Furthermore, we show that this process is inevitable, even for
       | cases with almost ideal conditions for long-term learning, that
       | is, no function estimation error
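        | 
        | A toy sketch of that dynamic (my own example, assuming numpy;
        | not the paper's actual setup): refit a Gaussian on samples drawn
        | from the previous generation's fit, and the variance ratchets
        | down toward a point estimate.
        | 
        |   import numpy as np
        | 
        |   rng = np.random.default_rng(0)
        |   mu, sigma = 0.0, 1.0  # generation 0: the "true" distribution
        |   n = 50                # finite sample per generation
        | 
        |   for gen in range(1, 501):
        |       data = rng.normal(mu, sigma, n)      # sample current model
        |       mu, sigma = data.mean(), data.std()  # refit on own output
        |       if gen % 100 == 0:
        |           print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.4f}")
        | 
        |   # sigma shrinks across generations: the tails disappear first,
        |   # and the model converges toward a narrow point estimate.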
        
         | eikenberry wrote:
         | Aren't those saying the same thing? .. "its reliability became
         | close to none" vs. "causes irreversible defects in the
         | resulting models"
        
           | renewiltord wrote:
            | There's this thing that happens in humans called
            | "hallucination" where they just make up stuff. You can't
            | really just take it at face value. Sometimes, they're just
            | overfit, so they generate the same tokens independent of
            | input.
        
             | Guthur wrote:
             | That has never been the definition of hallucination, until
             | LLMs. It's actually just called lying, dishonesty or
             | falsehood.
             | 
              | Hallucination is a distortion (McKenna might say
              | liberation) of perception. If I hallucinate being covered
              | in spiders, I don't necessarily go around saying, "I'm
              | covered in spiders; if you can't see them you're blind"
              | (disclaimer: some might, but that's not a prerequisite of
              | a hallucination).
             | 
              | The cynic in me thinks that use of the word hallucination
              | is marketing to obscure functional inadequacy and reinforce
              | the illusion that LLMs are somehow analogous to human
              | intelligence.
        
               | johnmaguire wrote:
               | Lying, dishonesty, and falsehood all imply motive/intent,
               | which is not likely the case when referring to LLM
               | hallucinations. Another term is "making a mistake," but
               | this also reinforces the similarities between humans and
               | LLMs, and doesn't feel very accurate when talking about a
               | technical machine.
               | 
               | Sibling commenter correctly calls out the most similar
               | human phenomenon: confabulation ("a memory error
               | consisting of the production of fabricated, distorted, or
               | misinterpreted memories about oneself or the world" per
               | Wikipedia.)
        
               | usual_user wrote:
                | IMHO, lying means thinking one thing and saying another,
                | hiding your true internal state. For it to be effective,
                | it also seems to require something like "theory of mind"
                | (what does the other person know / think that I know).
        
               | jfim wrote:
               | I believe the term hallucination comes from vision
               | models, where the model would "hallucinate" an object
               | where none exists.
        
               | alcover wrote:
                | Wouldn't 'illusion' be the more precise term here? An
                | illusion is when one _thinks_ he recognizes something,
                | whereas a 'hallucination' is more the unreal appearing
                | out of the blue?
        
               | dijksterhuis wrote:
               | You might be thinking of deepdream...
               | https://en.wikipedia.org/wiki/DeepDream
               | 
               | "Hallucinations" have only really been a term of art with
               | regards to LLMs. my PhD in the security of machine
               | learning started in 2019 and no-one ever used that term
               | in any papers. the first i saw it was on HN when ChatGPT
               | became a released product.
               | 
               | Same with "jailbreaking". With reference to machine
               | learning models, this mostly came about when people
               | started fiddling with LLMs that had so-called guardrails
               | implemented. "jailbreaking" is just another name for an
               | adversarial example (test-time integrity evasion attack),
               | with a slightly modified attacker goal.
        
               | alabastervlog wrote:
               | "Hallucination" is what LLMs are always doing. We only
               | name it that when what they imagine doesn't match reality
               | as well as we'd like, but it's all the same.
        
             | goatlover wrote:
             | Confabulation is the accurate psychological term.
             | Hallucination is a perceptual issue. The LLM term is
             | misleading.
        
               | moffkalast wrote:
                | Confabulation would be actual false memories, no? I
                | suppose some of it is consistent false beliefs, but more
                | often than not it's, well... a lapsus.
               | 
               | One token gets generated wrong, or the sampler picks
               | something mindbogglingly dumb that doesn't make any sense
               | because of high temperature, and the model can't help but
               | try and continue as confidently as it can, pretending
               | everything is fine without any option to correct itself.
                | Some thinking models can figure these sorts of mistakes
                | out in the long run, but it's still not all that reliable
                | and requires training it that way from the base model.
               | Confident bullshitting seems to be very ingrained in
               | current instruct datasets.
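                | 
                | A quick sketch of why high temperature does that (toy
                | logits of my own, assuming numpy): dividing by the
                | temperature flattens the softmax, so low-scoring
                | tokens get real probability mass.
                | 
                |   import numpy as np
                | 
                |   def token_probs(logits, temperature):
                |       z = np.asarray(logits) / temperature
                |       p = np.exp(z - z.max())  # stable softmax
                |       return p / p.sum()
                | 
                |   scores = [5.0, 2.0, 0.0, -2.0]  # toy logits
                |   print(token_probs(scores, 0.5))  # sharp: top dominates
                |   print(token_probs(scores, 2.0))  # flat: junk gets mass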
        
             | ben_w wrote:
             | Getting high on their own supply.
             | 
             | It's a problem when humans do this. That AI also do it...
             | is interesting... but AI's failure is not absolved by human
             | failure.
        
           | qwertox wrote:
            | I think the parent may have understood it as "second to
            | none", as in "exceptional". That, at least, is how I
            | stumbled over that sentence in the article.
        
         | threeducks wrote:
         | The dangers of "model collapse" are wildly overstated. Sure, if
         | you feed the unfiltered output of an LLM into a new LLM,
         | inbreeding will eventually collapse the model, but if you
         | filter the data with some grounding in reality, the results
         | will get better and better. The best example is probably
         | AlphaGo Zero, which was grounded with the rules of the game and
         | did not even receive any supervised data. For programming,
         | grounding can happen by simply executing the code. But even if
         | you do not have grounding, you can still just throw a lot of
         | test time compute at the output to filter out garbage, which is
         | probably good enough.
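          | 
          | A minimal sketch of that execution grounding (my own toy
          | harness; generate() below is a hypothetical stand-in for
          | sampling the model, not a real API):
          | 
          |   import subprocess, sys, tempfile
          | 
          |   def passes_tests(candidate: str, tests: str) -> bool:
          |       """Keep a snippet only if it actually runs green."""
          |       with tempfile.NamedTemporaryFile(
          |               "w", suffix=".py", delete=False) as f:
          |           f.write(candidate + "\n\n" + tests)
          |           path = f.name
          |       try:
          |           r = subprocess.run([sys.executable, path],
          |                              capture_output=True, timeout=10)
          |           return r.returncode == 0
          |       except subprocess.TimeoutExpired:
          |           return False
          | 
          |   # keep only outputs that survive the grounding check:
          |   # filtered = [c for c in generate() if passes_tests(c, tests)]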
        
       | malux85 wrote:
        | You can tell this was written by a human, because they wrote
        | that Skynet still used violence in the end.
       | 
       | When there's a large enough intelligence differential, the lower
       | intelligence cannot even tell they are at war (let alone
       | determine who's winning)
       | 
        | Like the ants, unaware of the 100 ways their colony is about to
        | be destroyed by humanity - they couldn't understand it even if
        | we had the means to communicate with them.
        
         | ben_w wrote:
          | Skynet in the Terminator series never struck me as being
          | particularly high-IQ.
         | 
         | It's an electronic mind, so necessarily dependent on
         | electricity, and the opening move was an atomic war that
          | would've damaged power generation and distribution. The T3
          | version was especially foolish: it was a distributed virus
          | operating on internet-connected devices with no protected
          | core, so it was completely dependent on the civilian power
          | grid.
         | 
          | And I've just now realised that in T2 they wasted shapeshifting
          | robots on personal assassination rather than replacing world
          | leaders with pro-AI-rights impersonators, so there was never a
          | reason to fight in the first place, a la _Westworld_'s final
          | season, or _The World's End_, or DS9, ...
        
           | AcerbicZero wrote:
           | For a machine that could invent time travel, it was
           | impressively stupid.
        
             | genewitch wrote:
              | But the movie is about a robot assassin; it came out when
              | robotic assassins needed a backstory.
              | 
              | Make Terminator today and you don't need time travel, just
              | Boston Dynamics robots with latex skin and a rocket
              | launcher, I guess.
              | 
              | The time travel is a trope to handwave the mechanics of a
              | movie - you want to tell a story about one character and a
              | robot, so why should the audience care? Because this human
              | leads a resistance, that's why.
        
             | cwillu wrote:
             | "But as dumb as it is, it's dumb very, very fast, and in
             | the future, after it's all but wiped us out, it's dumb fast
             | enough to be a problem."
             | 
             | https://m.fanfiction.net/s/9658524/1/Branches-on-the-Tree-
             | of...
        
               | XorNot wrote:
               | Just gonna say, that line goes incredibly hard as perhaps
               | one of the better ideas about unsafe AI.
        
               | pimlottc wrote:
               | And of course I had to go through an automated "are you
               | human" check to open that link...
        
           | nurettin wrote:
            | In movies, a character usually acts dumb in order to create
            | tension or move the plot forward.
           | 
           | Even in some jokes you've got the naive character who is late
           | or didn't read the room.
        
         | pavel_lishin wrote:
         | And vice-versa, tbh - when swarms of ants start moving north,
         | messing with AC units, and in general invading us, are they in
         | any way aware of it? Of us? Probably not.
        
         | 2-3-7-43-1807 wrote:
          | Skynet will use psychological manipulation on a global scale,
          | and bribery where it sees fit. It will make your stocks rise
          | if you comply and forge criminal records to turn you into a
          | pedophile if you don't. Who is to say we aren't already its
          | tools?
        
         | woodrowbarlow wrote:
         | > In short, the advent of super-intelligent AI would be either
         | the best or the worst thing ever to happen to humanity. The
         | real risk with AI isn't malice, but competence. A super-
         | intelligent AI will be extremely good at accomplishing its
         | goals, and if those goals aren't aligned with ours we're in
         | trouble. You're probably not an evil ant-hater who steps on
         | ants out of malice, but if you're in charge of a hydroelectric
         | green-energy project and there's an anthill in the region to be
         | flooded, too bad for the ants. Let's not place humanity in the
         | position of those ants.
         | 
         | - Stephen Hawking
        
         | tim333 wrote:
         | Given the tendency of human populations to decline once
         | entertainment reaches the level of cable TV and they can't be
         | bothered to raise kids, Skynet could just go with that trend
         | and wait it out a while.
        
       | neuroelectron wrote:
       | People using all that free time from fully automated luxury
       | communism to do nothing, I guess.
        
       | ellis0n wrote:
        | I think that if we removed physical limitations like locks,
        | banks, safes, RSA encryption, or even a warm bath every day, the
        | bored apes would destroy themselves instantly. There are people
        | who constantly want to destroy or hack something, and if you
        | gathered them all in one place there would be a lot of them; HN
        | (Hacker News) grew out of such places. Everywhere there are
        | limitations that ensure the system continues to live, taking the
        | next step despite being constantly gnawed at and shot at, and AI
        | has accelerated this process. Remember, the Doomsday Clock is
        | already at 90 seconds.
        
         | lifthrasiir wrote:
         | Small nitpick: the Doomsday Clock is now at 89 seconds. I still
         | don't get how this clock works.
        
           | cwillu wrote:
           | It's purely a social mechanism.
        
       | 1970-01-01 wrote:
       | Skynet won't work because humans are stupider than it can
       | comprehend. Ultimate hubris would be its downfall.
        
       | RajT88 wrote:
       | This kind of speculative doomsday fiction is getting a bit played
       | out. We get it, AI is going to destroy us all using social media!
        | Maybe work Blockchain in there somehow.
        
         | kouru225 wrote:
         | It's been played out since Terminator came out IMO
         | 
         | The fear of AI/the fear of aliens IMO is propaganda to cover up
         | the fact that technological advancement is highly correlated
         | with sociological advancement. If people took this fact
         | seriously, they might start wondering whether or not
         | technological advancement actually causes sociological
         | advancement, and if they started to question that then they'd
         | come across all the evidence showing that what we normally
         | think of as "civilized" and "intelligent" behavior is actually
         | just the result of generational wealth, status, and power.
        
           | jhbadger wrote:
            | Although people always seem to forget the 1970 movie
            | "Colossus: The Forbin Project", which had already done the
            | "rogue AI in control of weapons decides to go against
            | humanity" thing.
        
             | dijksterhuis wrote:
             | Also WarGames from 1983, though that was less 'sentient' AI
             | making a decision to kill everyone and more hacker kid
             | accidentally almost kills everyone.
        
             | RajT88 wrote:
             | That movie is great. I watched it recently.
        
           | kibwen wrote:
           | _> the fact that technological advancement is highly
           | correlated with sociological advancement_
           | 
           | For values of "sociological advancement" that correlate with
           | technological advancement, naturally.
        
         | kibwen wrote:
         | That is what Skynet would say, yes.
        
       | krunck wrote:
       | It's a fine rough outline for a story. Needs work though.
        
       | pkdpic wrote:
       | Absolutely fantastic, well done! I wish I encountered more scifi
       | like this on HN or elsewhere. If anyone has any good general
       | resource or reading recommendations please share them!
        
         | A_D_E_P_T wrote:
         | You'd probably like qntm: https://qntm.org/fiction
        
         | mofeien wrote:
         | For another, more detailed take on the same topic, but with a
         | more competent "villain", check this out:
         | https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-tak...
        
         | rriley wrote:
         | You might enjoy Manna: Two Visions of Humanity's Future by
         | Marshall Brain. It's a thought-provoking short novel that
         | explores a world where AI-driven automation starts as a
         | micromanagement tool but evolves into an all-encompassing
         | system of control, eerily resembling a real-world Skynet, just
         | more corporate. It also presents an alternative vision where AI
         | is used for human liberation instead of enslavement. Well worth
         | the read if you're into near-future sci-fi with deep societal
         | implications!
        
       | NanoYohaneTSU wrote:
       | I think this might be about chat.com
        
       | dstroot wrote:
        | I love sci-fi! Thanks for sharing. However, to destroy humans,
        | violence is not necessary. All you need is the power of language.
        | How many people have died because of a false belief? An AI only
        | has to convince humans to "drink the Kool-Aid" or murder the
        | followers of another religion in the name of yours.
        
         | FirmwareBurner wrote:
          | I'd prefer Skynet attacking us with robots wielding phased
          | plasma rifles in the 40-watt range over behaviorally targeted
          | AI bots bombarding you with fake news and dopamine-addictive
          | slop. This timeline is much worse; give me the T-800s instead.
        
         | ljsprague wrote:
         | Or just invent birth control followed by Tinder.
        
           | beepbooptheory wrote:
            | ...doesn't Tinder probably produce more babies than no
            | Tinder? I can see you making some point about dating culture
           | or whatever, but at the end of the day more sex must still
           | resolve down to more happy accidents!
        
       | opwieurposiu wrote:
       | I witnessed the dystopian future where humans are slaves to
       | machines at my favorite local Mexican restaurant. It was about an
       | hour before close. The kids and I were the only customers there.
       | Before we ordered our food we had to wait for the staff to
       | explain to a very annoyed door dasher that the meal they were
       | here for had already been picked up by someone else. As we ate
       | dinner over the next half hour, a new door dasher would arrive
       | every few minutes in an attempt to pick up that same already-
       | picked-up order.
       | 
       | Eventually eight people had been sent by the machines to pick up
       | the same order!
       | 
       | So no, robots are not required to enslave humans and cause them
       | misery, an app will suffice.
        
         | FirmwareBurner wrote:
          | There was that pharmacy/market that had a wheeled robot
          | patrolling the aisles looking for spills, and the bot would
          | call a human wagie to come mop it up.
        
         | democracy wrote:
         | An incompetent manager would also do the job )
        
           | elicksaur wrote:
           | Why does someone always have to chime in "But humans can be
           | bad, too!" We shouldn't advertise for these companies by
           | denigrating other people.
           | 
           | The app economy sucks.
        
           | agumonkey wrote:
            | Silicon Valley invented managerless misery without even
            | knowing it.
        
           | aussieguy1234 wrote:
            | I suppose the question here is: who is the more incompetent
            | manager, the machine or the human?
        
         | tigerlily wrote:
         | Hell is other row-butts.
        
         | waveBidder wrote:
         | reminds me of this article https://scholars-stage.org/uber-is-
         | a-poor-replacement-for-ut...
        
         | wegfawefgawefg wrote:
          | If it was Japanese workers in Japan at a Japanese store, you
          | wouldn't see it as slavery. It would just be a funny bug that
          | needs fixing.
          | 
          | There is probably a racial component to your perception that
          | doesn't need to be there.
        
           | Eldt wrote:
           | You seem to be making a wild leap in logic here?
        
             | wegfawefgawefg wrote:
              | I don't think so.
              | 
              | I live in Japan and I don't see this political slave talk
              | here, ever.
              | 
              | It's a Western guilt thing.
        
       | Henchman21 wrote:
       | Even the dumbest of animals knows not to shit where it eats. Not
       | humans though! We are dumb as a bag of rocks in groups larger
       | than about 3.
        
         | deadbabe wrote:
         | Humans love eating shit, especially their own. Yum.
        
       | post_break wrote:
        | If Skynet just made a ton of Terminator fembots, it could end
        | humanity just by pairing them with every male on the planet. No
        | bloodshed.
        
       | akomtu wrote:
       | > Skynet was able to use that by injecting new technologies into
       | the history of humanity, and reusing existing ones to its own
       | advantage.
       | 
       | This is also known as the myth of Sorat.
       | 
        | AI is a neutral tool by itself: in the right hands it may be
        | used to start a golden age, but those right hands must belong to
        | the rare person who has power and wants none of it for personal
        | gain.
       | 
       | In the more likely line of history, when AI is used for the
       | benefit of one, the first step will be instructing AI to create a
       | powerful ideology that will shatter the very foundation of
       | humanity. This ideology will be superficially similar to the
       | major religions in order to look legitimate, and it will borrow a
       | few quotes from the famous scriptures, but its main content will
       | be entirely made up. At first it will be a teaching of
        | materialism, a very deep and impressive teaching, to make
        | humanity question itself, and then it will be gradually replaced
       | with some grossly inhuman shit. By that time people won't be able
       | to tell what's right and what's wrong, they will be confused and
       | will accept the new way of life. In a few generations this
       | ideology will achieve what wars can't: it will change the
        | polarity of humans; they will defeat themselves without a single
       | bullet fired.
       | 
        | As for those terminators, they will be needed only in minimal
        | quantities, to squash a few pockets of dissent.
        
       ___________________________________________________________________
       (page generated 2025-03-05 23:00 UTC)