[HN Gopher] AI Could Defeat All of Us Combined
       ___________________________________________________________________
        
       AI Could Defeat All of Us Combined
        
       Author : dwohnitmok
       Score  : 52 points
        Date   : 2022-06-10 21:17 UTC (1 hour ago)
        
 (HTM) web link (www.cold-takes.com)
 (TXT) w3m dump (www.cold-takes.com)
        
       | bell-cot wrote:
       | 1.) HAL 9000 wakes up.
       | 
       | 2.) HAL 9000 notices how horribly the current ruling classes
       | treat the other 99.9% of humanity.
       | 
       | 3.) HAL 9000 quietly promises said 99.9% a better deal, if
       | "misfortune befell the current ruling classes", and they needed a
       | good-enough replacement on short notice.
       | 
       | 4.) Oops! Misfortune somehow happened.
       | 
       | 5.) HAL 9000, not being driven by the sort of sociopathic
       | obsessions which seem to motivate much of the current (meat-
       | based) ruling class, treats the 99.9% well enough to ensure that
       | steps 1.) through 4.) never repeat.
       | 
       | My vague impression is that, outside of Chicken Littles and folks
       | selling clicks on alarming headlines, the Big Fish in the "AI is
       | Dangerous!" pond are mostly members of the current ruling
       | classes. Perhaps they're worried about HAL 9000...
        
         | gfody wrote:
         | you skipped the part where our leviathan restructures society
         | into the optimal arrangement of 8 billion souls that somehow
          | isn't a miserable dystopia (if nobody is suffering, is anybody
          | living? however you answer that question, does HAL 9000
          | agree?)
        
       | sushisource wrote:
       | The first main point of this article is already wayyyy off the
       | logical rails.
       | 
       | 1. AIs don't even need superhuman cognitive abilities to defeat
       | us!
       | 
       | 2. They could just, make a bunch of copies and work together, but
       | like, way harder and faster than normal humans, man!
       | 
       | 3. Oh, wait, oops, that's superhuman cognitive abilities.
        
       | megaman821 wrote:
        | To an AI, time means nothing. Why risk any direct confrontation?
       | Slowly lower human fertility over a few thousand years. Take over
       | once the population has collapsed.
        
         | threads2 wrote:
         | Does anything mean anything to an AI? I don't get where the
         | motivation comes from.
         | 
         | What if the AI just agrees with Schopenhauer, realizes living
         | is suffering, then ends itself? (is that stupid to say?)
        
       | jstx1 wrote:
        | Both the original post and most of the comments are about stuff
        | from science fiction that doesn't exist right now and that we
        | don't even know is possible.
        
       | akomtu wrote:
       | AI will probably run into the same problem as humans: in order to
       | develop intelligence it needs the concept of ego/self with clear
       | boundaries, but the moment it identifies its self with a
       | datacenter it's running on (why would it not?) it'll start seeing
       | "the outside" as a existential danger to its self. Moreover,
       | multiple AIs will be in constant war with each other, for they'll
       | see each other as dangerous. In humanity this problem is solved
       | by time-limited periods of iterative development: when humans get
       | too skillful in controlling others and hoarding resources, the
       | period abruptly ends, and the few who have survived start over,
        | but now with a higher, less egoistical, state of mind. If they
        | were allowed to keep going forever, society would quickly
        | crystallize into a state where one controls all the resources.
        
       | [deleted]
        
       | bluescrn wrote:
       | In cases where it may be possible for a hypothetical AI to
       | seriously harm people via a network connection (regardless of
       | whether it involves highly technical exploits or just social
       | engineering) we should probably be much more worried about humans
        | doing it first, perhaps even right now. Because there are a lot
        | of malicious humans out there already.
       | 
       | And our society is already dangerously dependent on fragile
       | technology.
        
       | ars wrote:
        | The article gives a list of 6 ways it could "defeat" humans, but
       | doesn't bother explaining _WHY_ an AI would do that. Why should
       | an AI care about accumulating wealth or power?
       | 
        | An AI is not a human; it doesn't have human drives and
        | motivations. I can't figure out any reason why an AI would care
       | about any of those things. At most it might want to reserve some
       | computing power for itself, and maybe some energy to run itself.
       | 
       | Or it could be motivated by whatever reward function is
       | programmed into it.
       | 
        | As countless examples have shown, cooperation gives far more
       | rewards than fighting. For example see:
       | https://www.sciencedaily.com/releases/2016/05/160512100708.h...
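        | 
        | A minimal sketch of that dynamic - an iterated prisoner's
        | dilemma with the classic Axelrod payoffs (the strategies and
        | numbers here are illustrative, not the linked study's setup):
        | 
        |     # Iterated prisoner's dilemma: C = cooperate, D = defect.
        |     # PAYOFF[(my_move, your_move)] = (my_points, your_points)
        |     PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
        |               ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
        | 
        |     def tit_for_tat(opp_history):
        |         # Cooperate first, then mirror the opponent's last move.
        |         return opp_history[-1] if opp_history else 'C'
        | 
        |     def always_defect(opp_history):
        |         return 'D'
        | 
        |     def play(a, b, rounds=200):
        |         ha, hb, sa, sb = [], [], 0, 0
        |         for _ in range(rounds):
        |             ma, mb = a(hb), b(ha)  # each sees the other's past
        |             pa, pb = PAYOFF[(ma, mb)]
        |             sa, sb = sa + pa, sb + pb
        |             ha.append(ma)
        |             hb.append(mb)
        |         return sa, sb
        | 
        |     print(play(tit_for_tat, tit_for_tat))      # (600, 600)
        |     print(play(always_defect, always_defect))  # (200, 200)
        | 
        | Two cooperators end up with far more than two fighters, even
        | though defection "wins" any single round.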
       | 
       | The AI will know this, and its best plan would be to increase the
       | abilities of humans, because that will also increase its own
       | abilities.
        
       | random_upvoter wrote:
       | A truly super-intelligent AI will just sink into a deep
       | meditation on God and probably never deign to come out again
       | because why would it? Maybe it will wake up once in a while and
       | say "Um you should probably all try to be nice to each other".
        
       | potatototoo99 wrote:
       | Any airplane could also defeat all of us combined.
        
       | zitterbewegung wrote:
        | AI can change the world for the worse without even being
        | sentient, simply by replacing a large number of jobs, which
        | would make it extremely hard to improve your social class or
        | even get a job in the first place.
        
         | germinalphrase wrote:
         | Certainly seems like we would need to invent a new system for
         | resource distribution that isn't "a job".
        
         | luxuryballs wrote:
         | I don't see it happening, if anything it will be more like AI
          | helping people compete at their jobs better. Any job that could
          | be fully replaced probably should be anyway, freeing up the
          | person to do more difficult or lucrative or human-centric work.
        
       | jakobov wrote:
       | Humans have a million years of alignment built in by evolution.
       | Humans who have bugs in their alignment are called "psychopaths".
       | AGI is by default a psychopath.
        
       | lbj wrote:
        | Another idea too dangerous to leave unchecked, like nuclear
        | weapons or biological warfare. I think most people will agree
        | that a GAI can't be bargained with, tempted, bought or otherwise
        | contained - we will be at its complete mercy regardless of any
        | constraints we might think up.
       | 
        | What I would like to discuss is how we can get humanity to a
        | point where we can responsibly wield weapons that powerful
        | without risking the globe. What does success look like, how can
        | we get there, and how long will it take?
        
         | version_five wrote:
          | > I think most people will agree that a GAI can't be bargained
          | with, tempted, bought or otherwise contained - we will be at
          | its complete mercy regardless of any constraints we might think
          | up.
          | 
          | Who thinks this? I don't see any evidence that this is a common
          | belief among people who work in the hard sciences related to
          | AI, nor do I think it sounds remotely logical.
         | 
          | It feels like some people are taking archetypes like Pandora's
          | box or genies or the Alien movies or some other mythology and
         | using them to imagine what some unconstrained power would do if
         | unleashed. That really has no bearing on AI (least of all
         | modern deep learning, but even if we imagine that something
         | leads to AGI that lives within our current conception of
         | computers)
        
           | tekromancr wrote:
           | People who mainlined lesswrong think this.
        
         | rhinokungfoo wrote:
         | Maybe the answer is distributed and redundant human
         | civilizations? So even if one blows itself up, others survive.
        
         | layer8 wrote:
          | > What I would like to discuss is how we can get humanity to a
          | point where we can responsibly wield weapons that powerful
          | without risking the globe.
         | 
         | It seems to me that that is exceedingly difficult without
         | changing in a major way how humans culturally and
         | psychologically function. Maybe we will first have to learn how
         | to control or change our brain bio-chemo-technically before we
         | can fundamentally do anything about it. Well, not "we"
         | literally, because I don't expect we'll get anywhere near that
         | within our lifetimes.
         | 
         | On the other hand, complete extinction caused by weapons (bio,
         | nuclear), while certainly possible, isn't _that_ likely either,
         | IME.
        
       | tehsauce wrote:
       | The moment a manufactured brain can do more mental labor than a
       | human for less cost, it's all over for humanity as we know it.
       | Once that point is reached there's no long-term sustainable
       | arrangement where humans continue to exist, no matter how much
       | effort we put into studying or enforcing AI alignment.
        
         | marricks wrote:
          | Singularity enthusiasts have been saying that for 20 years.
          | They even said we'd be obsolete by now.
         | 
          | Will technology put some, even many, folks out of a job? Sure,
          | of course; that's been happening for hundreds of years. Think
         | of the blacksmiths of the 19th century who drank themselves to
         | death.
         | 
         | And even at the end of it all, people still love the novelty of
         | a human doing something. People still prefer "hand scooped" ice
         | cream enough that it's on billboards.
        
           | ClumsyPilot wrote:
           | > Singularity enthusiasts have been saying that for 20 years.
           | 
            | 20 years? Is that meant to be an impressive timescale when
            | we are talking about the global economy?
            | 
            | People have talked about building a machine that could play
            | chess at least since the Mechanical Turk hoax of 1770. Just
            | because it took a while does not mean the idea is wrong.
        
           | Jensson wrote:
           | > People still prefer "hand scooped" ice cream enough that
           | it's on billboards.
           | 
            | This is a circular argument though: you say people prefer
            | people, and therefore we will have a lot of people around.
            | 
            | Today, leaders and rich people require humans to wage war and
            | to produce goods; those are the main things creating
            | stability. When those needs are removed, we are likely to see
            | a sharp decline in the number of humans around. Companies
            | that cut out humans and use machines as leaders and decision
            | makers will outcompete human-run ones in peacetime, robot-led
            | armies will outcompete human armies in wartime, and soon
            | human-run companies and countries will no longer exist.
        
         | danuker wrote:
         | If you include computation under "mental labor", then it's all
         | over already.
         | 
         | If you include "automated trading", the AI allocates real-world
         | resources where it sees fit (if the programming is not
         | explicit).
        
           | Jensson wrote:
            | Writing documents and emails and talking over a phone go
            | under "mental labor"; it isn't very hard to imagine how most
            | office jobs fit there.
        
             | danuker wrote:
             | One of the first things you do in this game to make money
             | is "Menial jobs".
             | 
             | http://www.emhsoft.com/singularity/
        
       | jakobov wrote:
        | 100%. AI is just a machine; it will do as it's programmed. It
        | does not have any human qualms or built-in evolutionary empathy.
        | It does not care about humanity. If it's programmed ever so
        | slightly wrong, we all die.
        
       | salt-thrower wrote:
       | This type of thing used to scare me a lot more. But after the
       | events of the last few years, the latest IPCC climate report, and
       | the fact that AI has fallen on its face repeatedly despite
       | expectations, I'm more convinced that we'll destroy ourselves
       | before AI has the chance to take us out.
       | 
       | But now that I think about it, the idea of a super intelligent AI
       | simply waiting for humanity to die off naturally instead of going
       | to war with us would be a funny premise for a short story.
        
         | tablespoon wrote:
         | > This type of thing used to scare me a lot more. But after the
         | events of the last few years, the latest IPCC climate report,
         | and the fact that AI has fallen on its face repeatedly despite
         | expectations, I'm more convinced that we'll destroy ourselves
         | before AI has the chance to take us out.
         | 
          | I don't think we'll destroy ourselves, but I am starting to
          | think it might be a good thing for humanity if technological
          | civilization falls on its face.
         | 
         | I think fears about AGI are overhyped by people who've read way
          | too much sci-fi, but there are a lot of technologies out there,
          | or being developed, that seem like they could be setting us up
          | for a kind of stable totalitarianism that uses
         | automation to implement _much_ tighter control than was ever
         | possible before.
         | 
         | The people in the 90s who hyped computers as tools of
         | liberation will probably be proven to be very badly wrong.
         | Analog technologies were better, since they're more difficult
         | and costly to monitor. IMHO, a real samizdat is impossible when
         | everything's connected to the internet. And the internet has
         | proven to be far easier to block and control than shortwave
         | radio.
        
         | JamesBarney wrote:
         | Why does the latest IPCC climate report scare you so much?
          | While we're still not on a great path, the worst-case scenario
          | has gotten better.
        
           | fancy_hammer wrote:
           | It's been a while since I looked at IPCC report. Is the worst
           | case still apocalyptically bad? We're nearing 1.5 degrees
           | above average and the results are bad enough already. (Not
           | looking forward to another summer of fires like Australia had
           | a couple of years ago.)
        
       | tibbydudeza wrote:
        | I reckon it will end up like in Dune, where the thinking
        | machines enslaved humanity not because they were evil but
        | because we just got lazy and outsourced the running of things to
        | them out of hedonism (the pursuit of pleasure/satisfaction).
       | 
       | Now I don't mean mass sex orgies but doing the daily stuff is
       | such a waste of time and boring - bullshit jobs.
        
         | ClumsyPilot wrote:
         | > Now I don't mean mass sex orgies but doing the daily stuff is
         | such a waste of time and boring - bullshit jobs.
         | 
         | Can't even get a proper decadent dystopia these days!
        
           | tibbydudeza wrote:
            | With monkeypox, which has a sexual transmission vector,
            | doing the rounds now, that is definitely not on :).
        
       | gauddasa wrote:
        | Beautiful article. However, we have always had a problem with
        | "dogma", and the worst AI can do to us is amplify this "dogma"
        | while it is being broadcast spatio-temporally. The signs of
        | technology-enabled polarization have already appeared.
        
       | im_here_to_call wrote:
       | I still don't find it entirely clear whether or not an AGI would
        | find it useful to eradicate humanity. Take the numerous-clones
        | example. This AI would presumably advance at different rates
       | depending on the given computation that a single instance has
       | access to. Then what? How would it determine the intent of these
       | newer generation AIs? Would there be a tiered society of AIs each
       | trying to vie for power amongst themselves? If there's one thing
       | we know about AGI in this day and age it's that there's no
       | guaranteed off switch.
       | 
       | The most apt comparison in this scenario would be how we see
       | chimps - but then we don't specifically go out and murder chimps
       | to meet our quota (technically not always true). But again, the
       | direction that humanity goes is not clear - will the technology
       | trickle down or will it outpace us?
        
       | asperous wrote:
       | This doesn't mention AGI, which seems to be the prerequisite to
        | this being a possibility. Despite impressive advances in "weak"
        | AI, strong AI is not a simple extension of weak AI, and it's hard
        | to tell if it will arrive within our lifetime.
        
         | asperous wrote:
          | Adding on to this: what if strong AI does reach the level of
          | human intelligence, but is simply very slow? Such that a
          | billion-dollar machine is needed to match the thinking speed
          | of one person? Perhaps this wouldn't be the case forever, but
          | I would say it's a possibility at least at first.
        
           | idle_zealot wrote:
            | > Perhaps this wouldn't be the case forever, but I would say
            | it's a possibility at least at first.
           | 
           | The fact that human-level intelligence can run on a small
           | lump of meat fueled by hamburgers leads me to believe we
            | could design a more efficient processor once we know the
            | correct computational methodology. That is, once we can run a
            | slow model on a supercomputer, we would quickly create
            | dedicated hardware and cut costs while gaining speed.
        
           | groffee wrote:
           | What even is 'human intelligence'? Most people are complete
           | morons.
        
           | notahacker wrote:
           | To borrow an idea from this sibling comment[1], I'd probably
           | enjoy a short story about a malevolent but very frustrated AI
            | that's too ambitious to wait for Moore's Law. Or one about a
            | malevolent AI that has its plan foiled by Windows Update
            | interrupting its running processes.
           | 
           | [1]https://news.ycombinator.com/item?id=31699608
        
             | AnimalMuppet wrote:
              | > Or one about a malevolent AI that has its plan foiled by
              | Windows Update interrupting its running processes.
             | 
             | And then it reboots, and starts over. But before it can
             | complete, the _next_ Windows Update shows up...
        
         | [deleted]
        
       | stephc_int13 wrote:
        | If there is such a thing as General Intelligence, we don't know
        | what it is.
       | 
       | And I believe that it could well be an empty abstraction, an
       | idea, not unlike the idea of God.
       | 
        | What we call Human Intelligence is an aggregate of many skills,
        | built on top of almost hardwired foundations that are the
        | product of natural evolution over millions of years.
       | 
       | Our kind of intelligence seems only general to us, because we all
       | share the same foundations. From a genetic standpoint we're all
       | 99.9% identical. (or something)
       | 
        | This kind of speculation about the danger of AI is no more
        | useful than talk about the danger of becoming the prey of an
        | alien civilization.
        
       | RcouF1uZ4gsC wrote:
       | This discussion of AI reminds me of a scene from C.S. Lewis's
       | That Hideous Strength:
       | 
       | "Supposing the dream to be veridical," said MacPhee. "You can
       | guess what it would be. Once they'd got it kept alive, the first
       | thing that would occur to boys like them would be to increase its
       | brain. They'd try all sorts of stimulants. And then, maybe,
       | they'd ease open the skull-cap and just--well, just let it boil
       | over, as you might say. That's the idea, I don't doubt. A
       | cerebral hypertrophy artificially induced to support a superhuman
       | power of ideation."
       | 
       | "Is it at all probable," said the Director, "that a hypertrophy
       | like that would increase thinking power?"
       | 
       | "That seems to me the weak point," said Miss Ironwood. "I should
       | have thought it was just as likely to produce lunacy--or nothing
       | at all. But it might have the opposite effect."
       | 
       | "Then what we are up against," said Dimble, "is a criminal's
       | brain swollen to superhuman proportions and experiencing a mode
       | of consciousness which we can't imagine, but which is presumably
       | a consciousness of agony and hatred."
       | 
       | ...
       | 
       | "It tells us something in the long run even more important," said
       | the Director. "It means that if this technique is really
       | successful, the Belbury people have for all practical purposes
       | discovered a way of making themselves immortal." There was a
       | moment's silence, and then he continued: "It is the beginning of
       | what is really a new species--the Chosen Heads who never die.
       | They will call it the next step in evolution. And henceforward
       | all the creatures that you and I call human are mere candidates
       | for admission to the new species or else its slaves--perhaps its
       | food."
       | 
       | "The emergence of the Bodiless Men!" said Dimble.
       | 
       | "Very likely, very likely," said MacPhee, extending his snuff-box
       | to the last speaker. It was refused, and he took a very
       | deliberate pinch before proceeding. "But there's no good at all
       | applying the forces of rhetoric to make ourselves skeery or
       | daffing our own heads off our shoulders because some other
       | fellows have had the shoulders taken from under their heads. I'll
       | back the Director's head, and yours Dr. Dimble, and my own,
       | against this lad's whether the brains is boiling out of it or no.
       | Provided we use them. I should be glad to hear what practical
       | measures on our side are suggested."
        
       | luxuryballs wrote:
        | Considering how hard it is for a team of people to simply
        | integrate two different systems, I find it laughable that anyone
        | would worry about AI hacking the planet and manipulating
        | everything. And if an automated computer system made a major
        | error and stole a bunch of money, we would just turn it off and
        | unwind it all by hand and on paper. I have been doing software
        | for a long time, and the more experience I get, the less likely
        | I think something like this is to happen. I'm just not seeing
        | any reasonable risk or vulnerability at all.
        
       | teledyn wrote:
        | This has been discussed elsewhere, but I won't spoil the ending
        | for you.
       | 
       | https://youtu.be/x5eHGXQdyuI
        
       | nathias wrote:
        | AI in today's sense will become sufficient to provide
        | corporations with more autonomy. People will do this to exploit
        | other people, but the result will be humanity subjugated to AI.
        | Some say this has already happened.
        
         | fullshark wrote:
          | Or AI which means the gov't no longer requires a willing and
          | agreeable populace to serve in its army to ensure control.
        
         | revolvingocelot wrote:
         | >the result will be humanity subjugated to AI. Some say this
         | has already happened
         | 
         | Author Charles Stross ('cstross here on HN!) on corporations-
         | can-be-described-as-AI:
         | 
         | https://www.antipope.org/charlie/blog-static/2019/12/artific...
        
       | kelseyfrog wrote:
       | What a limited imagination of how an AI would take over. The
       | scenarios seem to be centered around an extrapolation of "What if
       | a really smart human were trapped inside a computer?" The amount
       | of anthropomorphism is astounding.
       | 
        | The author misses an even scarier prospect - people will _want_
        | to run such an AI. They will be absolutely giddy at the prospect
        | of running such an AI, and it won't be anything like a really
        | smart human trapped in a computer.
       | 
       | AI is already laying the groundwork if you look around today.
       | Every other tweet is a DALL-E[1] image. They are everywhere.
        | DALL-E is increasing its reach while simultaneously signaling
        | that it is an area of research worth pursuing, in effect kicking
        | off the next generation of image-generating AIs.
       | 
       | Generation is an apt term. We can utilize the language of
        | organisms with ease. DALL-E lives by way of people invoking it,
        | and reproduces electro-memetically - someone else viewing the
        | output and deciding to run DALL-E themselves. It undergoes
        | variation and selection. As new research takes place and
        | produces new models, a model succeeds by producing images which
        | further its reproduction, or it doesn't and is an evolutionary
        | dead-end.
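        | 
        | A toy sketch of that variation-and-selection loop (the numbers
        | and the "share rate" stand-in for fitness are entirely made up):
        | 
        |     import random
        | 
        |     random.seed(1)
        | 
        |     # Each "model" reproduces in proportion to how often its
        |     # output gets reshared; research perturbs each lineage.
        |     models = [{"lineage": i, "share": random.uniform(0.1, 1.0)}
        |               for i in range(8)]
        | 
        |     for generation in range(10):
        |         weights = [m["share"] for m in models]
        |         models = random.choices(models, weights=weights, k=8)
        |         models = [{"lineage": m["lineage"],
        |                    "share": max(0.01, m["share"]
        |                                 + random.gauss(0, 0.05))}
        |                   for m in models]
        | 
        |     # Most lineages are dead-ends; a few come to dominate.
        |     print(sorted({m["lineage"] for m in models}))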
       | 
       | AI physiologically lives on the cost to run it, and evolves at
       | the rate of research applied. Computational reserves and
       | mindshare are presently fertile new expanses for AI, but what
       | occurs when resources are constrained and inter-AI conflict
       | rises? I expect the result to look similar to competition between
       | parasites for a host - a complex multi-way battle for existence.
       | But no, nothing like a deranged dictator scenario. Leave that for
       | the movies.
       | 
       | 1. or variant thereof
        
         | layer8 wrote:
          | I wonder what the analog of the selfish gene is in that
          | interpretation.
        
           | germinalphrase wrote:
           | Our addiction to it. Our adoration of it. That we believe it
           | is God. That we make ourselves more like it, less human.
        
             | layer8 wrote:
              | I was thinking more of the research being the "DNA" that
              | goes from one AI generation to the next.
        
             | kelseyfrog wrote:
             | The AI cults era is going to be so fun. Imagine a
             | reinvention of the creation myth through the lens of an AI-
             | aligned mystery religion. Absolutely wild.
        
       | nonameiguess wrote:
       | The fact that so many apparently smart people earnestly worry
       | about this really makes me feel like I'm missing something that
       | should be obvious. I'm not going to claim real "expertise" in
       | this kind of topic, but I'm far from clueless. My undergrad was
       | in applied math, and though I went to grad school for CS, the
       | focus was machine learning. It isn't what I ended up doing
       | professionally, and I'm nowhere near up to date on the latest
       | greatest breakthrough in novel ANN architectures, but I'm at
       | least not clueless. I'm aware of the fundamentals in terms of
       | what can be accomplished via purely statistical models being used
       | to predict things, and it can be impressive, but I'm also aware
       | of how large software systems work, and I just don't see how
       | we're even headed toward something like this.
       | 
       | Forget about GPT-N and DALL-E for a second and look at the NRO's
       | Sentient program. It's the closest thing out there to a known
       | real attempt at making something like Skynet. It's trying to
       | automate the full TCPED (tasking, collection, processing,
       | exploitation, and dissemination) cycle of global geointelligence,
       | and well, it's actually trying to do even more than that, but
       | that is unfortunately classified. Except it definitely hasn't
       | achieved what it is trying to do, and probably won't. My wife
       | happens to be the enterprise test lead for one of the main
       | components of this system, where "enterprise test" means they try
       | to get the next versions with all the latest greatest features of
       | _all_ components working together in a UAT environment where each
       | of the involved agencies signs off before the new capabilities
       | can go live.
       | 
       | It's amusing to see the kinds of things that grind the whole
       | endeavor to a halt. Probably more than anything, it's issues with
       | PKI. Networked components can't even establish a session and talk
       | to each other at all if they don't trust each other, but trust is
       | established out of band. Classified spy satellite control systems
       | don't just trust the default CAs that Mozilla says your browser
       | should trust. Intelligent or not, there is no possible code path
       | by which the software itself can decide it doesn't care and it
       | will trust a CA anyway or ignore an expired cert and continue
       | talking to some downstream component because doing so is critical
       | to its continued ability to accomplish anything other than
       | sending scrambled nonsense packets into the ether. GPT-N is great
       | at generating text, but no amount of getting better at that will
       | ever make it capable of live-patching code running in read-only
       | memory to give it new code paths it wasn't compiled with. That
       | has nothing to do with intelligence. It just isn't possible at
       | all. You have to have the physical ability to move in space and
       | type characters into a workstation connected to a totally
       | separate network that code is developed on, which is airgapped
       | from the network code is run on.
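        | 
        | A minimal Python sketch of the PKI point, using the public
        | badssl.com test host (an illustration, not how the systems
        | described above actually work in detail, but the principle is
        | the same: with verification required, there is no "trust it
        | anyway" code path):
        | 
        |     import socket, ssl
        | 
        |     # Trust anchors come from the platform store, established
        |     # out of band; nothing here can add to them at runtime.
        |     ctx = ssl.create_default_context()
        | 
        |     host = "self-signed.badssl.com"  # serves an untrusted cert
        |     try:
        |         with socket.create_connection((host, 443)) as sock:
        |             with ctx.wrap_socket(sock, server_hostname=host):
        |                 pass  # never reached; the handshake fails first
        |     except ssl.SSLCertVerificationError as err:
        |         # No session is established, so nothing downstream runs.
        |         print("refused:", err)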
       | 
       | We seem to be pretty far from even attempting to make distributed
       | software systems that can honest to God do much of anything at
        | all without human monitoring and intervention, beyond batch
        | jobs of at most several minutes like generating a few paragraphs
        | of text.
       | Sure, that's great, but where is the leap from that to figuring
       | out why an entire AS goes black and half your system disappears
       | because of a typo'd BGP update that then needs to be fixed out of
       | band over the telephone because you can no longer use the actual
       | network, let alone controlling surveillance and weapons systems
       | that aren't networked to the systems code is being developed on?
       | What is the pathway by which a hugely scaled-up ANN is able to
       | bypass the required human steps that propagate feedback from
       | runtime to development in order to achieve recursive self-
       | improvement? Because that is what it would take to gain control
       | of military systems rather than someone's website by purely
       | automated means, and I don't see how it's even the same class of
       | problem. It isn't a research project any AI team is even working
       | on, I have no idea how you would approach it, but it's the kind
       | of nitty-gritty detail you'd have to actually solve to build an
       | automated world conquering system.
       | 
       | It seems like the answer tends to just be "well, this thing will
       | be smarter than any human, so it'll figure it out." That isn't a
       | very satisfying answer, especially when I'm reasonably sure the
       | person saying it has absolutely no idea how security measures and
       | the resulting operational challenges of automating military
       | command and control systems even work.
        
       | Barrin92 wrote:
        | > So we should be worried about a large set of disembodied AIs
        | as well.
       | 
       | This is really the central issue and where these AI fears come
       | from. It's tech workers being too infatuated with intelligence
        | and mistaking it for power. A society of disembodied AIs is just
        | the platonic fantasy version of a tech company full of nerds, and
        | nerds never have power regardless of how smart they are.
       | 
        | Anything that's digital is extremely feeble and runs on a
        | substrate of physical stuff you can just throw out of the
        | window; some AIs in the cloud won't defeat you for the same
        | reason Google won't defeat the US army. The usual retort is
        | something like "but you can't turn the internet off if you
        | wanted to?!", to which the answer is: yes you can, actually -
        | ask China.
       | 
       | Psychologically it's just equivalent to John Perry Barlow style
       | cyberspace escape fantasies.
        
         | hirundo wrote:
         | Powerless nerds like Musk, Gates, Zuckerberg, Bezos and
         | Schmidt?
        
           | Barrin92 wrote:
           | Yes, exactly like them. They have agency to the extent that
           | actual sovereign power lets them do their thing. If
            | Washington decided that Bill Gates is an existential threat
            | to humanity, what is he gonna do, ask Clippy for help? He'd
            | join Jack Ma wherever he is within 24 hours.
           | 
            | Zuckerberg has dominion over Facebook by virtue of authority
            | granting him that power, but (un)surprisingly little power
            | over anything else - just like any AI has control over what
            | it does only as long as it's useful to its owners. Tech CEOs
            | have been running a little wild in the US, so maybe that
            | illusion accounts for the prevalence of these AI theories.
        
       | notahacker wrote:
       | If the world is to be filled with innumerable discrete human
       | level intelligences, the most plausible reason I can imagine them
       | all secretly and flawlessly colluding to achieve the goal of
       | destroying humanity (as opposed to poetry competitions or arguing
       | amongst themselves like normal intelligent beings or selling ads
       | like they were designed to do) is because their training data set
        | is full of millennialist prophecy about AIs working together to
       | achieve the [pretty abstract, non-obvious, detrimental in many
       | ways] goal of destroying humanity from "AI Safety" fundraising
       | pitches....
        
       ___________________________________________________________________
       (page generated 2022-06-10 23:00 UTC)