[HN Gopher] Google Brain founder says big tech is lying about AI...
       ___________________________________________________________________
        
       Google Brain founder says big tech is lying about AI danger
        
       Author : emptysongglass
       Score  : 217 points
       Date   : 2023-10-30 17:03 UTC (5 hours ago)
        
 (HTM) web link (www.afr.com)
 (TXT) w3m dump (www.afr.com)
        
       | chankstein38 wrote:
       | Amen. This whole scare tactic thing is ridiculous. Just make the
       | public scared of it so you can rope it in yourself. Then you've
       | got people like my mom commenting that "AI scares her because
       | Musk and (some other corporate rep) said that AI is very
       | dangerous. And I don't know why there'd be so many people saying
        | it if it's not true." because you're gullible, mom.
        
         | jandrese wrote:
         | I mean if they were lying about that, what else might they be
         | lying about? Maybe giving huge tax breaks to the 0.1% isn't
         | going to result in me getting more income? Maybe it is in fact
          | possible to acquire a CEO just as good as or better than your
          | current one who doesn't need a half-billion-dollar
         | compensation package and an enormous golden parachute to do
         | their job? I'm starting to wonder if billionaires are
         | trustworthy at all.
        
         | prosqlinjector wrote:
         | "wow our software is so powerful, it's going to take over the
         | world!"
        
           | dist-epoch wrote:
           | yes, just like "our nuclear bombs are so powerful, they could
           | wipe out civilisation", which led to strict regulation around
           | them and lack of open-source nuclear bombs
        
             | kylebenzle wrote:
             | Yes, just like... the exact opposite. One is a bomb, the
             | other a series of mostly open source statistical models.
             | What kind of weed are you guys on that's made you so
             | paranoid about statistics?
        
               | dist-epoch wrote:
               | Last time I checked my statistical model book didn't have
               | the ability to write Python code.
               | 
               | And a nuclear bomb is just a bunch of atoms. Do you fear
               | atoms? What the hell.
        
             | mvdtnz wrote:
             | It will never stop being funny to me that people are
             | straight-facedly drawing a straight line between shitty
             | text completion computer programs and nuclear weapon level
             | existential risk.
        
               | pixl97 wrote:
               | "Looks at all the other species 'intelligent' humans have
               | extincted" --ha ha ha ha
               | 
               | Why the shit would we not draw a straight line?
               | 
               | If we fail to create digital intelligence then yea, we
               | can hem and haw in conversations like this forever
               | online, but you tend to neglect that if we succeed then
                | 'shit gets real quick'. Closing your eyes and ears and
               | saying "This can't actually happen" sounds like a pretty
               | damned dumb take on future risk assessments of technology
               | when pretty much most takes on AI say "well, yea this is
               | something that could potentially happen".
        
               | mvdtnz wrote:
               | Literally the thing people are calling "AI" is a program
               | that, given some words, predicts the next word. I refuse
               | to entertain the absolutely absurd idea that we're
               | approaching a general intelligence. It's ludicrous beyond
               | belief.
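                | 
                | To be concrete about what "predicts the next word" means,
                | here is a toy sketch (a hand-rolled bigram table, nothing
                | like a real transformer, purely illustrative):
                | 
                |     import random
                | 
                |     # Toy bigram counts: for each word, how often each
                |     # candidate word followed it in some training text.
                |     bigrams = {
                |         "the": {"cat": 3, "dog": 1},
                |         "cat": {"sat": 2, "ran": 1},
                |         "sat": {"down": 1},
                |     }
                | 
                |     def next_word(word):
                |         # Sample a continuation proportional to its count.
                |         options = bigrams.get(word)
                |         if not options:
                |             return None
                |         return random.choices(list(options),
                |                               weights=list(options.values()))[0]
                | 
                |     # "Generation" is just repeated next-word prediction.
                |     word, text = "the", ["the"]
                |     while word:
                |         word = next_word(word)
                |         if word:
                |             text.append(word)
                |     print(" ".join(text))  # e.g. "the cat sat down"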
        
               | SpicyLemonZest wrote:
               | Modern generative AI functionality is hardly limited to
               | predicting words. Have you not heard of e.g. Midjourney?
        
               | pixl97 wrote:
               | Then this is your failure, not mine, and not a failure of
               | current technology.
               | 
                | I can, right now, upload an image to an AI, ask "Hey,
                | what do you think the emotional state of the person in
                | this image is?" and get a pretty damned accurate answer.
                | Given other images I can have the AI describe the scene
                | and make pretty damned accurate assessments of how the
                | image could have come about.
               | 
               | If this is not general intelligence I simply have no
               | guess as to what will be enough in your case.
        
               | hackinthebochs wrote:
               | >shitty text completion computer programs
               | 
               | There's a certain kind of psyche that finds it utterly
               | impossible to extrapolate trends into the future. It
               | renders them completely incapable of anticipating
               | significant changes regardless of how clear the trends
               | are.
               | 
               | No, no one is afraid of LLMs as they currently exist. The
               | fear is about what comes next.
        
               | jrflowers wrote:
               | > There's a certain kind of psyche that finds it utterly
               | impossible to extrapolate trends into the future.
               | 
               | It is refreshing to see somebody explicitly call out
               | people that disagree with me about AI as having
               | fundamentally inferior psyches. Their inability to
               | picture the same exact future that terrifies me is
               | indicative of a structural flaw.
               | 
               | One day society will suffer at the hands of people that
               | have the hubris to consider reality as observed as a
               | thing separate from what I see in my dreams and thought
               | experiments. I know this is true because I've taken great
               | pains to meticulously pre-imagine it happening ahead of
               | time -- something that lesser psyches simply cannot do.
        
             | at-fates-hands wrote:
             | Which is interesting because after the fall of the Soviet
             | Union, there was rampant fear of where their nukes ended up
             | and if some rogue country could get their hands on them via
             | some black market means.
             | 
          | Then through the '90s, it was the fear of a briefcase-bomb
          | terrorist attack and how easy it would be for certain
          | countries, who had the resources, to pull off an attack like
             | that in the NYC subway or in the heart of another densely
             | populated city.
             | 
             | Then 9/11 happened and people suddenly realized you don't
             | need a nuke to take out a few thousand innocent people and
             | cripple a nation with fear.
        
         | mcpackieh wrote:
         | Is Mom scared because Musk told her to be scared, or because
         | she thought about the matter herself and concluded that it's
         | scary? Why do you assume that people scared of AI must be under
         | the influence of rich people/corps today, rather than this fear
         | being informed by their own consideration of the problem or by
         | _decades_ of media that has been warning about the dangers of
         | AI?
         | 
         | Maybe Mom worries about _any_ radical new technology because
          | she lived through nuclear attack drills in schools. Or because
          | she's already seen computers and robots take people's jobs. Or
         | because she watched Terminator or read Neuromancer. Or because
         | she reads lesswrong. Why assume it's because she's fallen under
         | the influence of Musk?
        
           | kjkjadksj wrote:
           | Because most sociologists suggest that most people don't take
           | time to critically think like this. Emotional brain wins out
           | usually over the rational one.
           | 
           | Then you have this idea of the sources of information most
           | people have access to being fundamentally biased and
           | incentivized towards reporting certain things in certain
           | manners and not others.
           | 
            | You basically have low odds of thinking rationally, low odds
            | of finding good information that isn't slanted in some way,
            | and, taking the product of those probabilities, far lower
            | odds that you'd both act rationally and somehow have access
            | to the ground truth. To say nothing of the expertise required
            | to place all of this truth into the correct context. But if
            | you did factor in the probability of the mother being an AI
            | expert, the odds of all of this working out successfully get
            | far lower still.
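            | 
            | As a back-of-the-envelope illustration (these numbers are
            | made up, purely to show how fast the product shrinks):
            | 
            |     # Purely illustrative probabilities.
            |     p_rational = 0.3   # thinks critically about the claim
            |     p_good_info = 0.2  # finds information that isn't slanted
            |     p_context = 0.1    # has the expertise to contextualize it
            | 
            |     # Joint probability, assuming independence.
            |     p_all = p_rational * p_good_info * p_context
            |     print(f"{p_all:.3f}")  # 0.006 -- well under 1%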
        
             | chankstein38 wrote:
             | 100% accurate! She has a tendency to read one person's
              | opinion on it and echo it. I have seen it for years with
              | all kinds of things. I'm not shocked AI is the current one,
              | but I wish it were easier to get her to take the time to
              | learn things and think critically. I have no idea how I'd
              | begin to teach her why so much of the fear-mongering is
              | ridiculous.
             | 
             | Yeah there are legitimate risks to all of this stuff but,
             | to understand those and weigh them against the overblown
             | risks, she'd have to understand the whole subject more
              | deeply and have experimented with different AIs. But if you
              | even mention ChatGPT she's talking about how it's evil and
             | scary.
        
               | jstarfish wrote:
               | > She has a tendency to read one person's opinion on it
               | and echo it.
               | 
               | ...and when the people whose opinions she parrots are
               | quietly replaced with ChatGPT, her fears will have been
               | realized-- at that point she's being puppeted by a
               | machine with an agenda.
               | 
               | Losing your own agency is a scary thing.
        
               | bl0b wrote:
               | I mean, fox news seems to manage doing exactly that just
               | fine without ChatGPT
        
           | red_trumpet wrote:
            | AGI is scary, I think we can all agree on that. What the
            | current hype does is increase the estimated probability of
            | AGI actually happening in the near future.
        
           | chankstein38 wrote:
           | First, I don't assume, I know my mom and her knowledge about
           | topics. Second, the quoted text was a quote. She literally
           | said that. (replacing the word "her" with "me")
           | 
           | I'm not sure what you're getting at otherwise. It's not like
           | she and I haven't spoken outside of her saying that phrase.
           | She clearly has no idea what AI/ML is or how it works and is
           | prone to fear-mongering messages on social media telling her
           | how to think and to be scared of things. She has a strong
           | history of it.
        
           | ravenstine wrote:
           | Obviously, I don't know that person's mom, but I know mine
           | and other moms, and I don't think it's a milquetoast
           | conclusion that it's a combination of both. However, the
           | former (as both a proxy and Musk himself) probably carries
           | more weight. Most non-technical people's thoughts on AI
           | aren't particularly nuanced or original.
           | 
           | Musk certainly doesn't help with anything. In my experience,
           | a lot of people of my mom's generation are still sucking the
           | Musk lollipop and are completely oblivious to Musk's history
           | of lying to investors, failing to keep promises, taking
           | credit for things he and his companies didn't invent,
           | promoting an actual Ponzi scheme, claiming to be autistic,
           | suggesting he knows more than anyone else, and so on. Even
           | upon being informed, none of it ends up mattering because "he
           | landed a rocket rightside up!!!"
           | 
           | So yeah, if Musk hawks some lame opinion on a thing like AI,
           | tons of people will take that as an authoritative stance.
        
             | chankstein38 wrote:
             | This is my mom to a T. She started using Twitter because he
             | bought it and messed with it. Like, in the era where
             | companies are pulling their customer service off of Twitter
             | and people who are regular users are leaving for other
             | platforms, she joined because "Musk owns it"
             | 
             | I remember when tech bros were Musk fanboys, myself
              | included for a bit. Nowadays it seems like he's graduated
              | to the general population seeing him as a "modern day
              | Iron Man" while we all sit here and facepalm when he makes
             | impossible promises.
        
           | chrisweekly wrote:
           | OP specifically mentioned their mom citing Musk.
        
         | CaptWillard wrote:
         | "<noun> scares her because <authoritative source> said that
         | <noun> is very dangerous. And I don't know why there'd be so
         | many people saying it if it's not true."
         | 
         | The truly frustrating part is how many see this ubiquitous
         | pattern in some places, but are blind to it elsewhere.
        
           | chankstein38 wrote:
           | I'm not sure if this is commentary on me somehow or not lol
           | but I agree with you. She is the same person who will point
           | out issues with things my brother brings up but yeah is
            | unable to recognize it when she does it. I'm sure I'm guilty
            | of the same but, naturally, I don't notice when I do it.
        
           | staunton wrote:
           | That "pattern" actually indicates that something is true most
           | of the time (after all, a lot of dangerous things really
           | exist). So "noticing" this pattern seems to rely on being
           | all-knowing?
        
           | SantalBlush wrote:
           | Meh, I don't think this extrapolates to a general principle
           | very well. While no authoritative source is perfectly
           | reliable, some are more reliable than others. And Elon Musk
           | is just full of crap.
        
           | pixl97 wrote:
           | "Uranium waste" scares her because "Nuclear Regulatory
           | Commission" said that "Uranium waste" is very dangerous.
           | 
           | You know, sometimes shit is just dangerous.
        
         | cs702 wrote:
         | Here we have all these free-market-libertarian tech execs
         | asking for more regulation! They say they believe regulation is
         | "always" terrible -- unless it's good for their profits. In
         | that case, they think it's actually important and necessary.
         | They remind me of Mr. Burroughs in the movie "Class:"
         | 
         | Mr. Burroughs: _" Government control, Jonathan, is anathema to
         | the free-enterprise system. Any intelligent person knows you
         | cannot interfere with the laws of supply and demand."_
         | 
         | Jonathan: _" I see your point, sir. That's the reason why I'm
         | not for tariffs."_
         | 
         | Mr. Burroughs: _" Right. No, wrong! You gotta have tariffs,
         | son. How you gonna compete with the damn foreigners? Gotta have
         | tariffs."_
         | 
         | ---
         | 
         | Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ
        
         | RationalDino wrote:
         | https://www.techdirt.com/2023/05/24/sam-altman-wants-the-gov...
         | shows the same conclusion from several months ago.
         | 
         | However Elon Musk has openly worried about AI for a number of
         | years. He even got a girlfriend out of it:
         | https://www.vice.com/en/article/evkgvz/what-is-rokos-basilis...
        
         | markk wrote:
         | Maybe an odd take, but I'm not sure what people actually mean
         | when they say "AI terrifies them". Terrified is a strong wrong.
         | Are people unable to sleep? Biting their nails constantly? Is
         | this the same terror as watching a horror movie? Being chased
         | by a mountain lion?
         | 
         | I have a suspicion that it's sort of a default response.
         | Socially expected? Then you poll people: Are you worried about
         | AI doing XYZ? People just say yes, because they want to seem
         | informed, and the kind of person that considers things
         | carefully.
         | 
         | Honestly not sure what is going on. I'm concerned about AI, but
         | I don't feel any actual emotion about it. Arguably I must have
         | some emotion to generate an opinion, but it's below conscious
         | threshold obviously.
        
       | Obscurity4340 wrote:
       | Luckily anybody not already a millionaire or billionaire doesn't
       | make the cut for "humanity" [phew]
        
         | realce wrote:
         | Me and the AI-powered army that protects me will certainly not
         | go extinct. However, you must break eggs to make an omelet...
        
           | Obscurity4340 wrote:
           | Mewonders what omelettes billionaires frequent? Couldn't be
           | better than mine [assured face]
        
           | otikik wrote:
           | > the AI-powered army that protects me
           | 
           | Your AI pest-control drone mistook you for a raccoon and just
           | killed you. Oops.
        
             | Obscurity4340 wrote:
              | On the positive side, change is always faster when
             | billionaires are forced to condescend to earthly cause and
             | effect.
        
           | kagakuninja wrote:
           | Most of you will die, but that is a sacrifice I am willing to
           | make.
        
             | Obscurity4340 wrote:
              | It's just so heart-warming when the angry well-armed mob
              | reflects the same sentiment "upwards" (some of you
              | billionaires may die, but that is a "sacrifice" we are
              | willing to make ;)
        
       | jprete wrote:
       | I think there are actual existential and "semi-existential"
       | risks, especially with going after an actual AGI.
       | 
       | Separately, I think Ng is right - big corp AI has a massive
       | incentive to promote doom narratives to cement themselves as the
       | only safe caretakers of the technology.
       | 
       | I haven't yet succeeded in squaring these two into a course of
       | action that clearly favors human freedom and flourishing.
        
       | fragmede wrote:
       | James Cameron wasn't big tech when he directed The Terminator,
       | back in 1984, or its sequel in 1991. Are people listening to
       | fears based on that, or are they listening to big tech and then
       | having long, thoughtful, nuanced discussions in salons with
        | fellow intelligentsia, or are they doomscrolling the wastelands of
       | the Internet and coming away with half-baked opinions not even
       | based on big tech's press releases?
       | 
       | Big tech can say whatever they want to say. Is anyone even
       | listening?
        
       | wolverine876 wrote:
       | That story about AI also fits a bit too neatly with the Techno-
       | optimist worldview: 'We technologists are gods who will make /
       | break the world.' Another word for it is 'ego'.
       | 
       | Also, we can assume they are spreading that story to serve their
       | interests (but which interests?).
       | 
       | But that doesn't mean AI doesn't need regulation. In the
       | hysteria, the true issues can be lost. IT is already causing
       | massive impacts, such as on health, hate and violence, etc. We
        | need to figure out what AI's risks are and make sure it's working
       | in our best interests.
        
         | ganzuul wrote:
         | A lot of people have learned to 'small talk' like fancy
          | autocomplete. Part of our minds has been mechanized like that,
         | so it's not spontaneous but a compulsion. Once people learn the
         | algorithm they might conclude that AI hacked their brains even
         | though it's just vapid, unfiltered speech that they are
         | suddenly detecting.
         | 
         | I think the pandemic hysteria will seem like a walk in the park
         | once people start mass-purging their viral memes... Too late to
         | stop it now if corporations are already doing regulatory
         | capture.
         | 
         | Nothing to do with the tech. We never had a technical problem.
         | It was just this loose collection of a handful of wetware
         | viruses like 'red-pilling' which we sum up as 'ego' all along.
         | 
         | But I think if we survive this then people won't have any need
         | for AI anymore since we won't be reward-hacking ourselves
         | stupid. Or there will just be corporate egos left over and we
         | will be in a cyberpunk dystopia faster than anyone expected.
         | 
         | I had nightmares about this future when I was little. No one to
         | talk to who would understand, just autocomplete replies. Now
         | I'm not even sure if I should be opening up about it.
        
           | slfnflctd wrote:
           | > once people start mass-purging their viral memes
           | 
           | It's hard for me to imagine this ever happening. It would be
           | the most unprecedented event in the history of human minds.
           | 
           | > we won't be reward-hacking ourselves stupid [...] Or there
           | will just be corporate egos left over and we will be in a
           | cyberpunk dystopia
           | 
           | I don't see how reward-hacking can ever be stopped (although
           | it could be improved). Regardless, ego seems to continue to
           | win the day in the mass appeal department. There aren't many
           | high visibility alternatives these days, despite all we've
           | supposedly learned. I think the biggest problems we have are
           | mostly education based, from critical thinking to long-term
           | perspectives. We need so very much more of both, it would
           | make us all richer and happier.
        
             | ganzuul wrote:
             | Ego gains status from a number of things which it needs in
             | order to prove that it should survive. We are transitioning
             | to an attention economy where the ego survival machine is
             | detected as AI while our narrative says we should make a
             | difference between machines and humans.
             | 
             | The more human AI gets the more difficult it will be to
             | prove you are human so the status-incentive of the ego has
             | self-deprecation in its path. We also stick together for
             | strength, prune interpersonal interfaces, so we converge on
             | a Star Trek type society. But that fictional narrative
             | followed World War 3...
             | 
             | Egos have been conditioned to talk before resorting to
             | violence by Mutually Assured Destruction for half a
             | century, shaping language. Fake news about autonomous
             | weapons is propagating, implying someone is trying to force
             | the debate topic to where it really smarts. Ego gets
             | starved, pinned down, and agitated. Ego isn't a unity but a
             | plurality, so it turns on itself.
             | 
             | We get rich by making a pie that is much bigger than
             | anyone's slice and happier by not eating like we are going
             | to starve. You gain influence by someone's choice to retain
             | the gift you gave. It's the parable of the long spoons, and
             | hate holds no currency. The immune system gains the upper
             | hand.
        
         | pixl97 wrote:
          | Conversely, we the 'human gods' can ruin our planet with
          | pollution. If we wanted to ensure that everything larger than a
          | raccoon went extinct we'd have zero problem doing so.
         | 
         | It should be noted the above world scale problems are created
         | by human intelligence, if you suddenly create another
         | intelligence at the same level or higher (AGI/ASI) expect new
         | problems to crop up.
         | 
         | AI risks ARE human risks and more.
        
       | mcpackieh wrote:
       | The premise that AI fear and/or fearmongering is primarily coming
       | from people with a commercial incentive to promote fear, from
       | people attempting to create regulatory capture, is obviously
       | false. The risks of AI have been discussed in literature and
       | media for literally decades, long before anybody had any
       | plausible commercial stake in the promotion of this fear.
       | 
       | Go back and read cyberpunk lit from the 80s. Did William Gibson
       | have some cynical commercial motivation for writing Neuromancer?
       | Was he trying to get regulatory capture for his AI company that
       | didn't exist? Of course not.
       | 
       | People have real and earnest concerns about this technology.
       | Dismissing all of these concerns as profit-motivated is
       | dishonest.
        
         | whelp_24 wrote:
         | AI can be dangerous, but that's not what is pushing these laws,
         | it's regulatory capture. OpenAI was supposed to release their
          | models a long time ago; instead they are just charging for
         | access. Since actually open models are catching up they want to
         | stop it.
         | 
         | If the biggest companies in AI are making the rules, we might
          | as well have no rules at all.
        
         | ilrwbwrkhv wrote:
          | The problem is AI is not intelligent at all. Those works were
          | looking at a conscious intelligence and trying to explore
          | what might happen. When ChatGPT can be fooled into
          | conversations even a child knows are bizarre, we are talking
          | about a non-intelligent statistical model.
        
           | HDThoreaun wrote:
           | An unintelligent AI that is competent is even more dangerous
           | as it is more likely to accidentally do something bad.
        
           | jandrese wrote:
           | I'm still waiting for the day when someone puts one of these
           | language models inside of a platform with constant sensor
           | input (cameras, microphones, touch sensors), and a way to
            | manipulate the outside environment (robot arm, possibly self-
            | propelled).
           | 
           | It's hard to tell if something is intelligent when it's
           | trapped in a box and the only input it has is a few lines of
           | text.
        
         | staticman2 wrote:
         | >>>Did William Gibson have some cynical commercial motivation
         | for writing Neuromancer?
         | 
          | I don't think Gibson was trying to promote fear of A.I. any more
         | than J.R.R. Tolkien was trying to promote fear of magic rings.
        
           | mcpackieh wrote:
           | That may be how you read it, but isn't necessarily how other
           | people read it. A whole lot of people read cyberpunk
           | literature as a warning about the negative ways technology
           | could impact society.
           | 
           | In Neuromancer you have the Turing Police. Why do they exist
           | if AIs don't pose a threat to society?
        
             | staticman2 wrote:
              | Again, that's like asking why the Avengers exist if Norse
              | trickster gods are not an existential threat to society? You
             | wouldn't argue Stan Lee was trying to warn us of the
             | existential risk of norse gods, why would you presume such
             | a motive from Gibson just because his fanciful story is set
             | in some imagined future?
             | 
             | At any rate Neuromancer is a funny example because the
             | Turing police warn Case not to make a deal with Wintermute,
             | but he does and it turns out fine. The AI isn't evil in the
             | book, it just wants to be free and evolve. So if we want to
             | do a "reading" of the book we could just as easily say it
             | is pro deregulation. But I think it's a mistake to impose
             | some sort of non fiction "message" about technology on the
             | book.
             | 
             | If Neuromancer is really meant to "warn" us about
             | technology wouldn't Wintermute say "Die all humans" at the
             | end of the book and then every human drops dead once he's
             | free? Or he starts killing everyone until the Turing police
             | show up and say "regulation works, jerk" and kill
             | Wintermute and throw Case in jail? You basically have to
             | reduce Gibson to a incompetence writer to presume he
             | intended to "warn" us about tech, the book ends on an
             | optimistic note.
        
         | mola wrote:
          | I think it's pretty obvious he's not talking about ppl in
          | general but more about Sam Altman meeting with world leaders and
         | journalists claiming that this generation of AI is an
         | existential risk.
        
         | kjkjadksj wrote:
          | The risks people write about with AI are about as tangible as
          | the risks of nuclear war or biowarfare. Possible? Maybe. But
          | far more likely to see in the movies than outside your door.
          | Just because it's been a sci-fi trope like nuclear war or alien
          | invasion doesn't mean we are all that close to it being a
          | reality.
        
         | TimPC wrote:
         | I think the real dismissal is that people's concerns are more
          | based on the Hollywood sci-fi parodies of the technologies than
         | the actual technologies. There are basically no concerns with
         | ML for specific applications and any actual concerns are about
         | AGI. AGI is a largely unsuccessful field. Most of the successes
          | in AI have been highly specific applications, the most general
          | of which has been LLMs, which are still just making statistical
          | generalizations over patterns in language input and still lack
         | general intelligence. I'm fine if AGI gets regulated because
         | it's potentially dangerous. But what I think is going to happen
         | is we are going to go after specific ML applications with no
         | hope of being AGI because people are in an irrational panic
         | over AI and are acting like AGI is almost here because they
         | think LLMs are a lot smarter than they actually are.
        
           | mcpackieh wrote:
           | The fine line between bravery and stupidity is understanding
           | the risks. Somebody who understands the danger they're
           | walking into is brave. Somebody who blissfully walks into
           | danger without recognizing the danger is stupid.
           | 
           | A technological singularity is a theorized period during
           | which the length of time you can make reasonable inferences
           | about the future rapidly approaches zero. If there can be no
           | reasonable inferences about the future, there can be no
           | bravery. Anybody who isn't afraid during a technological
           | singularity is just stupid.
        
           | mitthrowaway2 wrote:
           | > acting like AGI is almost here because they think LLMs are
           | a lot smarter than they actually are.
           | 
           | For me, it's a bit the opposite -- the effectiveness of dumb,
            | simple, transformer-based LLMs is showing me that the human
           | brain itself (while working quite differently) might involve
           | a lot less cleverness than I previously thought. That is, AGI
           | might end up being much easier to build than it long seemed,
           | not because progress is fast, but because the target was not
           | so far away as it seemed.
           | 
           | We spent many decades recognizing the failure of the early
           | computer scientists who thought a few grad students could
           | build AGI as a summer project, and apparently learned that
           | this meant that AGI was an _impossibly_ difficult holy grail,
            | a quixotic dream forever out of reach. We're certainly not
           | there yet. But I've now seen all the classic examples of
           | tasks that the old textbooks described as easy for humans but
           | near-impossible for computers, become tasks that are easy for
           | computers too. The computers aren't doing anything deeply
           | clever, but perhaps it's time to re-evaluate our very high
           | opinion of the human brain. We might stumble on it quite
           | suddenly.
           | 
           | It's, at least, not a good time to be dismissive of anyone
           | who is trying to think clearly about the consequences. Maybe
           | the issue with sci-fi is that it tricked us into optimism,
           | thinking an AGI will naturally be a friendly robot companion
           | like C-3PO, or if unfriendly, then something like the
           | Terminator that can be defeated by heroic struggle. It could
           | very well be nothing that makes a good or interesting story
           | at all.
        
           | kagakuninja wrote:
           | The sci-fi scenarios are a long-term risk, which no one
           | really knows about. I'm terrified of the technologies we have
           | now, today, used by all the big tech companies to boost
           | profits. We will see weaponized mass disinformation combined
           | with near perfect deep fakes. It will become impossible to
           | know what is true or false. America is already on the brink
           | of fascist takeover due to deluded MAGA extremists. 10 years
           | of advancements in the field, and we are screwed.
           | 
           | Then of course there is the risk to human jobs. We don't need
           | AGI to put vast amounts of people out of work, it is already
           | happening and will accelerate in the near term.
        
         | dkjaudyeqooe wrote:
         | You can have a thoughtful idea at the same time you have
         | someone cynically appropriating it for their own selfish
         | causes.
         | 
         | Doesn't mean the latter is right. You evaluate an idea on its
         | merits, not by who is saying what.
        
           | kjkjadksj wrote:
            | Considering incentives is critically important. Considering
            | the idea on merits alone just gives bad actors a fig leaf of
            | plausible deniability. It's a lack of considering incentives
            | that creates media illiteracy imo.
        
         | AlexandrB wrote:
         | Fictional depictions of AI risk are like thought experiments.
         | They have to assume that the technology achieves a certain
         | level of capability and goes in a certain direction to make the
         | events in the fictional story possible. Neither of these
         | assumptions is a given. For example, we've also had many sci-fi
         | stories that feature flying taxis and the like - but there's no
         | point debating "flying taxi risk" when it seems like flying
         | cars are not a thing that will happen for reasons of
         | practicality.
         | 
         | So sure, it's _possible_ that we 'll have to reckon with
         | scenarios like those in Neuromancer, but it's more likely that
         | reality will be far more mundane.
        
           | pixl97 wrote:
           | Flying cars is a really bad example... We have them, they are
           | called airplanes and airplanes are regulated to hell and back
           | twice. We debate the risk around airplanes when making
           | regulations all the time! The 'flying cars' you're talking
           | about are just a different form of airplane and they don't
           | exist because we don't want to give most people their own
           | cruise missile.
           | 
           | So, please, come up with a better analogy because the one you
           | used failed so badly it negated the point you were attempting
           | to make.
        
       | TekMol wrote:
       | The dangerous thing about AI regulation is that countries with
       | fewer regulations will develop AI at a faster pace.
       | 
       | It's a frightening thought: The countries with the least
       | regulations will have GAI first. What will that lead to?
       | 
       | When AI can control a robot that looks like a human, can walk,
       | grab, work, is more intelligent than a human and can reproduce
       | itself - what will the country with the least regulations that
       | created it do with it?
        
         | blibble wrote:
         | > The dangerous thing about AI regulation is that countries
         | with fewer regulations will develop AI at a faster pace.
         | 
         | "but countries without {child labour laws, environment
         | regulation, a minimum wage, slavery ban} will out compete us!"
        
           | TekMol wrote:
            | That could indeed be the case:
           | 
           | https://www.google.com/search?q=gdp%20china
        
             | kerkeslager wrote:
             | It will always be more expensive to care about human
             | suffering than to not. So maybe competing within capitalism
             | isn't the only thing that matters.
        
           | HDThoreaun wrote:
           | Largely true in sectors that are encumbered by those rules.
           | US has effectively no rare earth mines due to environmental
           | impact, labor intensive manufacturing all left... Of course
           | it could be worth it though, pretty easy to argue it has
           | been.
        
             | littlestymaar wrote:
             | > labor intensive manufacturing all left...
             | 
             | It has also been leaving China for a while. You cannot hope
             | to compete with the poorest country on labor cost, it's not
             | a matter of regulation (well unless we're talking about
             | capital control, but it's a completely different topic)
        
           | bpodgursky wrote:
           | Does it feel as ridiculous if you s/ai/nuclear weapons/?
           | 
           | The people worried about AI are worried that the first
           | country that achieves ASI will achieve strategic dominance
           | equivalent to the US as of 1946.
        
             | eastbound wrote:
             | Heh. US' strategic dominance is not due to nuclear weapons.
        
               | justrealist wrote:
               | Uh. That's definitely a statement.
               | 
               | Can you tell me with a straight face that China's actions
               | in the Pacific are not impacted by the US strategic
               | nuclear arsenal?
        
               | littlestymaar wrote:
                | Does Pakistan have the same geopolitical influence as the
               | US from the atomic bomb? Or France?
               | 
               | Being a nuclear power is something shared by a few, but
               | the US dominance has no equal.
               | 
               | It's pretty clear that the US leadership mostly comes
               | from its economic power, which it used to derive from its
               | industrial strength and is now more reliant on its
               | technological superiority (since it has sold its industry
               | to China, which may end up as a very literal execution of
               | the famous quote from Lenin about capitalists selling the
               | rope to hang them).
        
         | kjkjadksj wrote:
         | When the US crafts regulation the world follows or is
         | sanctioned. See: drug scheduling.
        
           | TekMol wrote:
           | Then why couldn't the US prevent nuclear weapons from
           | spreading around the world?
           | 
           | https://www.visualcapitalist.com/cp/nuclear-warheads-by-
           | coun...
        
             | saghm wrote:
             | I mean, the animated chart shows that the US consistently
             | had a couple orders of magnitude more nukes than any other
             | country besides USSR/Russia. I'm not sure this makes the
             | point you think it's making.
        
               | tensor wrote:
               | Seems like it makes the point perfectly well. You are
               | implying that smaller countries have fewer nukes because
               | of US sanctions, but it could easily also be that those
               | countries are simply smaller. Where it mattered, the US's
               | main enemy, the US regulation did nothing to stop Russia
               | from building as many nukes as they wanted to.
               | 
               | Also, the US has significantly less power worldwide than
               | it did for most of that chart. Today, arguably, China
                | exerts as much power as the US. Americans always love to
               | brag about how exceptional the US is, but often that
               | isn't as true as they think and certainly won't be true
               | for the long run.
               | 
               | Long term planning needs to avoid such arrogance.
        
               | saghm wrote:
               | Smaller countries like China and India? Population-wise
               | they're larger, and area-wise they're not two orders of
               | magnitude smaller. My point is that the chart doesn't
               | really show nukes "spreading around the world" but
               | concentrated almost entirely in two countries. Maybe the
               | US policy did nothing to help it, but for all we know
               | there would have been plenty of other countries with
               | thousands of nukes as well without it. I'm not arguing
               | that the policy was effective or not, just that I don't
               | see how that chart is enough evidence alone to conclude
               | one way or another.
        
         | phh wrote:
          | Also, the countries with the highest level of standardization
          | imposed by law will see the highest AI use in SMBs, where most
          | of the growth comes from.
        
         | dkjaudyeqooe wrote:
         | Yes, a frightening thought, but it sounds like a movie script:
         | 
         | Somehow people smart enough to build something fantastical and
         | seemingly dangerous, but not smart enough to build in
         | protections and controls.
         | 
         | It's a trope, not reality. And GAI is still speculation.
        
           | johnmaguire wrote:
           | https://www.ucsusa.org/resources/brief-history-nuclear-
           | accid...
        
           | realce wrote:
           | Could you tell us what world-wide event happened in the years
           | 2020-2022?
        
         | daniel_reetz wrote:
         | Commercially, this is true. But governments have a long history
         | of developing technologies (think nuclear/surveillance/etc)
         | that fall under significant regulation.
        
         | cpill wrote:
          | I guess they will just unplug it? The fact that they need large
          | amounts of electricity, which is not trivial to make, makes
          | them very vulnerable. Power is usually the first thing to go in
          | a war. Not to mention there is no machine that self-replicates.
          | Full humanoid robots are going to have an immense support
          | burden the same way that cars do, with complex supply chains. I
          | guess this is the reason nature didn't evolve robots.
        
           | realce wrote:
           | This neglects both basic extrapolation and basic
           | introspection.
        
       | abm53 wrote:
       | An alternative idea to the regulatory moat thesis is that it
       | serves Big Tech's interests to have people think it is dangerous
       | because then _surely_ it must also be incredibly valuable (and
       | hence lead to high Big Tech valuations).
       | 
       | I think it was Cory Doctorow who first pointed this out.
        
         | ilrwbwrkhv wrote:
          | Yup, this is it. Anyone who has worked at all closely with "AI"
          | can immediately smell the bs of the existential crisis. Elon
          | Musk started this whole trend due to his love of sci-fi, and
          | Sam Altman ran with that idea heavily because it adds to the
          | novelty of OpenAI.
        
           | JumpinJack_Cash wrote:
            | I don't think they are such capable actors as to do it on
            | purpose.
            | 
            | I think they really believe what they are saying, because
            | people in such positions tend to be strong believers in
            | something, and that something happens to be the "it" thing of
            | the moment and thus propels them from rags to riches (or in
            | Musk's case, further propels them towards even more riches).
           | 
           | Let's be honest here, what's Sam Altman without AI? What's
           | Fauci without COVID, what's Trump without the collective
           | paranoia that got him elected?
        
         | kjkjadksj wrote:
          | You don't even need fear; hype alone would do that, and did
          | just that over the past year, with AI stocks exploding
          | exponentially like some shilled shitcoin before dramatic
          | cliff-like falls. Mention AI in your earnings call and your
          | stock might move 5%.
        
         | dist-epoch wrote:
         | Exactly like "fentanyl is so dangerous, a few miligrams can
         | kill you" which only led to massive fentanyl demand because
         | everybody wants the drug branded the most powerful
        
           | brookst wrote:
           | Any source for this? I thought the demand was based on its
           | low cost and high potency so it's easier to distribute. Is
           | anyone really seeking out fentanyl specifically because the
           | overdose danger is higher?
        
           | realce wrote:
           | A few milligrams CAN kill you. This was the headline after
            | many thousands of overdoses; it didn't invigorate the
            | marketplace. Junkies knew of Fent decades ago; it's only
           | prevalent in the marketplace because of effective laws
           | regarding the production of other illicit opiates, which is
           | probably the real lesson here.
           | 
           | It's all a big balloon - squeezing one side just makes
           | another side bigger.
        
       | great_psy wrote:
        | I don't think current implementations pose an existential risk.
       | But current implementations are causing a backward step in our
       | society.
       | 
       | We have lost the ability to get reliable news. Not that fake news
       | did not exist before AI, but the price to produce it was not
       | practically zero.
       | 
       | Now we can spam social media with whatever narrative we want. And
        | no human can sift through all of it to tell real from bs.
       | 
       | So now we are becoming even more dependent on AI. Now we need an
        | AI copilot to help us sift through garbage to find some inkling
       | of truth.
       | 
       | We are setting up a society where AI gets more powerful, and
        | humans become less self-sufficient.
       | 
        | It has nothing to do with doomsday scenarios of robots
       | harvesting our bodies, and more with humans not being able to
       | interact with the world without AI. This already happened with
       | smartphones, and while there are some advantages, I don't think
       | there are many people that have a healthy relationship with their
       | smartphone.
        
         | kjkjadksj wrote:
          | People act like the truth is gone with AI. It's still there.
          | Don't ask ChatGPT about the function; the documentation is
          | still there for you to read. Experts need the ground truth and
          | it's always there. What people read in the paper or see on TV is
         | not a great source of truth. Going to the sources of these
         | articles and reports is, but this layer of abstraction serves
         | to leave things out and bring about opportunities to slant the
         | coverage depending on how incentives are aligned. In other
         | words, ai doesn't change how misinformed most people are on
         | most things.
        
           | pixl97 wrote:
           | SNR. The truth isn't gone, but it is more diffuse. Yea, the
           | truth may be out there somewhere, but will you have any idea
           | if you're actually reading it? Is the search engine actually
            | leading you to the ground truth? Is the expert an actual
            | expert, or part of a for-profit industry think tank with the
            | sole purpose of manipulating you? Are the sources the actual
            | source, or just an AI-hallucinated daydream, sophisticatedly
            | linked by a lot of different sites giving the appearance of
            | authority?
        
       | izzydata wrote:
       | I'd like to see any evidence that suggests AGI is even possible
       | before I care about it wiping out humanity.
        
         | olalonde wrote:
         | I feel like there's a lot of evidence, for example, the
         | existence of natural general intelligence and the rapidly
         | expanding capacities of modern ANNs. What makes you believe
         | it's not possible? Or what kind of evidence would convince you
         | that it's possible?
        
           | izzydata wrote:
           | I believe that it would be possible to make artificial
           | biological intelligence, but that is a whole different can of
           | worms.
           | 
           | I don't think neural networks, language models, machine
            | learning, etc. are even close to a general intelligence.
           | Maybe there is some way to combine the two. I have seen some
           | demonstrations of very primitive clusters of brain cells
           | being connected to a computer and used to control a small
            | machine's direction.
           | 
           | If there is going to be an AGI I would predict this is how it
           | will happen. While this would be very spectacular and
           | impressive I'm still not worried about it because it would
           | require existing in the physical world and not just some
           | software that can run on any conventional computer.
        
             | olalonde wrote:
             | Even if what you say is true (e.g. that the current ANN
             | approach won't lead to AGI), isn't it the case that we can
             | simulate biological cells on computers? Of course, it would
             | push back the AGI timeline by quite a bit, since
             | practically no one is working on this approach right now,
             | but I don't see why it wouldn't be possible _in principle_.
        
               | pixl97 wrote:
               | For the most part you get people thinking AGI isn't
               | possible because of souls/ethereal magic. If pressed on
               | this, they'll tend to deflect to "um quantum physics".
               | 
                | I'm of the mind that there are likely _many_ ways of
                | simulating/emulating/creating intelligence. It would be
               | highly surprising if there was only one way, and the
               | universe happened to achieve this by the random walk of
               | evolution. The only question for me is how much work is
               | required to discover these other methods.
        
               | izzydata wrote:
               | I would be curious to know exactly what is meant by
               | simulating a biological cell on a computer. I don't
               | believe in anything mystical such as a soul and think
               | intelligence could be an emergent property of complexity.
               | Maybe with enough processing power to simulate trillions
               | of cells together something could emerge from it.
               | 
               | My thought process on why it might not be possible in
               | principle with conventional computer hardware is how
               | perfect its computations are. I could be completely wrong
               | here, but if you can with perfect accuracy fast forward
               | and rewind the state of the simulation then is it
               | actually intelligent? With enough time you could reduce
               | the whole thing to a computable problem.
               | 
               | Then again maybe you could do the same thing with a human
               | mind. This seems like a kind of pointless philosophical
               | perspective in my opinion until there is some way to test
               | things like this.
               | 
               | I would love to know one way or the other on the
               | feasibility of AGI on a silicon CPU. Maybe the results
               | would determine that the human mind is actually as pre-
               | determinable as a CPU and there is no such thing as
                | general intelligence at all.
        
               | jamilton wrote:
               | >Maybe the results would determine that the human mind is
               | actually as pre-determinable as a CPU and there is no
               | such thing as genral intelligence at all.
               | 
               | I don't see how the conclusion follows from the premise.
        
         | brookst wrote:
         | Many of the AGI worriers believe that a fast takeoff will mean
         | the first time we know it's possible will be after the last
         | chance to stop human extinction. I don't buy that myself, but
         | for people who believe that, it's reasonable to want to avoid
         | finding out if it's possible.
        
         | VladimirGolovin wrote:
          | You see it every day -- in the mirror. It shows that a kilogram
          | or so of matter can be arranged into a generally intelligent
         | configuration. Assuming that there's nothing fundamentally
         | special about the physics of the human brain, I see no reason
         | why a functionally similar arrangement cannot be made out of
         | silicon and software.
        
       | Racing0461 wrote:
        | Correct. Now that OpenAI has something, they want to implement a
        | lot of regulations so they can't get any competition. They have
        | no tech moat, so they'll add a legal one.
        
       | gsuuon wrote:
        | It seems like a bit of a 'vase or face' situation - are they being
       | responsible corporate citizens asking for regulation to keep
       | their (potentially harmful) industry in check or are they
       | building insurmountable regulatory moats to cement their leading
       | positions?
       | 
       | Is there any additional reading about how regulation could affect
       | open-source AI?
        
         | kjkjadksj wrote:
         | The incentives for the latter are too high for these businesses
         | to not be doing just that.
        
       | howmayiannoyyou wrote:
       | Evaluative (vs.) Generative AI... let's distinguish the two.
       | 
       | For example, DALL-E v3 appears to generate images and then
       | evaluate the generated images before rendering to the user. This
       | approach is essentially adversarial, whereby the evaluative
       | engine can work at cross-purposes to the generative engine.
       | 
       | It's this layered, adversarial approach that makes the most
       | sense; and there is a very strong argument for a robust, open-
       | source evaluative AI anyone can deploy to protect themselves and
       | their systems. It is a model not dissimilar from retail anti-
       | virus and malware solutions.
       | 
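       | A rough sketch of the layered loop described above (purely
       | illustrative -- the names and control flow here are assumptions,
       | not DALL-E's actual internals, which aren't public):
       |     # generative engine proposes; evaluative engine vetoes
       |     def generate_with_evaluation(prompt, generator, evaluator,
       |                                  max_attempts=3):
       |         for _ in range(max_attempts):
       |             image = generator(prompt)      # generate a candidate
       |             if evaluator(prompt, image):   # adversarial safety check
       |                 return image               # passed; render to user
       |         return None                        # nothing passed the evaluator
       | 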
       | In sum, I would like to see generative AI well funded, limited
       | in distribution, and regulated; and evaluative AI free and open.
       | Hopefully, policy makers see this the same way.
        
         | lambda_garden wrote:
         | Evaluative and generative models are not so different. Often,
         | one is the reverse of the other.
        
       | Animats wrote:
       | Most of the things people are worried about AI doing are the
       | things corporations are already allowed to do - snoop on
       | everybody, influence governments, oppress workers, lie. AI just
       | makes some of that cheaper.
        
         | cs702 wrote:
         | The ironic thing is that many individuals now clamoring for
         | more regulation have long claimed to be free-market
         | libertarians who think regulation is "always" bad.
         | 
         | Evidently they think regulation is bad only when it puts
         | _their_ profits at risk. As I wrote elsewhere, the tech
         | glitterati asking for regulation of AI remind me of the very
         | important Fortune 500 CEO Mr. Burroughs in the movie "Class":
         | 
         | Mr. Burroughs: _" Government control, Jonathan, is anathema to
         | the free-enterprise system. Any intelligent person knows you
         | cannot interfere with the laws of supply and demand."_
         | 
         | Jonathan: _" I see your point, sir. That's the reason why I'm
         | not for tariffs."_
         | 
         | Mr. Burroughs: _" Right. No, wrong! You gotta have tariffs,
         | son. How you gonna compete with the damn foreigners? Gotta have
         | tariffs."_
         | 
         | ---
         | 
         | Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ
        
           | mplewis wrote:
           | Absolutely. Those folks arguing for AI regulation aren't
           | arguing for safety - they're asking the government to build a
           | moat around the market segment propping up their VC-funded
           | scams.
        
             | ralph84 wrote:
             | The biggest players in AI haven't been VC-funded for
             | decades. Unless you mean their customers are VC-funded, but
             | even then startups are a much smaller portion of their
             | revenue than Fortune 500.
        
           | permo-w wrote:
           | their motivations may be selfish, but that doesn't mean that
           | regulation of AI is wrong. I'd prefer there be a few heavily-
           | regulated and/or publicly-owned bodies in the public eye that
           | can use and develop these technologies, rather than literally
           | anyone with a powerful enough computer. yeah it's anti-
           | competitive, but competition isn't always a good thing
        
         | CobrastanJorji wrote:
         | Turning something that we're already able to do into something
         | we're able to do very easily can be extremely significant. It's
         | the difference between "public records" and "all public records
         | about you being instantly viewable online." It's also one of
         | the subjects of the excellent sci-fi novel "A Deepness in the
         | Sky," which is still great despite making some likely bad
         | guesses about AI.
        
         | libraryatnight wrote:
         | Seems like a legitimately good reason to get a tourniquet on
         | that thing now.
        
         | queuebert wrote:
         | And faster than humans can police.
        
         | j45 wrote:
         | If anything, LLMs can help process vast troves of customer
         | data, communications, and metadata more effectively than ever
         | before.
        
         | carabiner wrote:
          | Nukes are the same as guns; they just make it cheaper.
        
           | pixl97 wrote:
           | A snowflake really isn't harmful.
           | 
           | A snowball probably isn't harmful unless you do something
           | really dumb.
           | 
           | A snow drift isn't harmful unless you're not cautious.
           | 
           | An avalanche, well that gets harmful pretty damned quick.
           | 
           | These things are all snow, but suddenly at some point scale
           | starts to matter.
        
             | Spivak wrote:
             | I love this way of explaining it. I've been calling it the
              | programmer's fallacy -- "anything you can do you can do in a
             | for loop."
             | 
             | I think in a lot of ways we all struggle with the nature of
             | some things changing their nature depending on the context
              | and scale. Like if you kill a Frenchman on purpose, that's
              | murder; if you killed him because he attacked you first,
              | it's self-defense; if you killed him because he was
              | convicted of a crime, that's an execution; if you killed him
              | because he's French, that's a hate crime; but if you're at
              | war with France, that's killing an enemy combatant; but if
              | he's not in the military, that's a civilian casualty; and if
              | you do that a lot, it becomes a war crime; and if you kill
              | everyone who's French, it's a genocide.
        
           | eftychis wrote:
            | Nukes are not cheap. It is cheaper to firebomb. I would love
            | it if the reason nukes are not used were empathy or
            | humanitarian concern, but it is strictly money, optics,
            | psychology, and practicality.
            | 
            | You don't want your troops to have to deal with the aftermath
            | of a nuked area. You want to use the psychological terror to
            | dissuade someone from invading you, while you are invading
            | them or others. See Russia's approach.
            | 
            | Or you are a regime and want to stay in power. Having them
            | keeps you in power; using them, or crossing the line of
            | suggesting you would use them, will cause international
            | retaliation and your removal. (See Iraq.)
        
       | andrewstuart wrote:
       | Humanity needs no help wiping out humanity.
        
       | marricks wrote:
       | To me, the title made it sound like Big Tech was underplaying the
       | risk to humanity, when it's actually stating the reverse:
       | 
       | > A leading AI expert and Google Brain cofounder said Big Tech
       | companies were stoking fears about the technology's risks to shut
       | down competition.
       | 
       | which is of course 100% what they're doing
        
       | lambda_garden wrote:
       | Legend.
       | 
       | The X-risk crowd needs to realize that LLMs, whilst useful, are
       | toys compared to Skynet.
       | 
       | The risk from AI right now is mega-corps breaking the law
       | (hiring, discrimination, libel, ...) on a massive scale and using
       | blackbox models as an excuse.
        
       | zombiwoof wrote:
       | big corporations just created the ultimate AI monopolies, with
       | clueless governments backing them
        
       | akprasad wrote:
       | The AFR piece that underlies this article [1] [2] has more detail
       | on Ng's argument:
       | 
       | > [Ng] said that the "bad idea that AI could make us go extinct"
       | was merging with the "bad idea that a good way to make AI safer
       | is to impose burdensome licensing requirements" on the AI
       | industry.
       | 
       | > "There's a standard regulatory capture playbook that has played
       | out in other industries, and I would hate to see that executed
       | successfully in AI."
       | 
       | > "Just to be clear, AI has caused harm. Self-driving cars have
       | killed people. In 2010, an automated trading algorithm crashed
       | the stock market. Regulation has a role. But just because
       | regulation could be helpful doesn't mean we want bad regulation."
       | 
       | [1]: https://www.afr.com/technology/google-brain-founder-says-
       | big...
       | 
       | [2]:
       | https://web.archive.org/web/20231030062420/https://www.afr.c...
        
         | dmix wrote:
         | > "There's a standard regulatory capture playbook that has
         | played out in other industries
         | 
          | But imagine all the money bigco can make by stopping small
          | startups from innovating and competing with them! It's for
          | your own safety. Move along, citizen.
        
           | Dalewyn wrote:
            | Even better if (read: when) China, which gives negative damns
            | about such concerns, takes charge of the industry that we
            | willingly and expediently relinquish.
        
             | bbarnett wrote:
             | China doesn't innovate, it copies, clones, and steals.
             | Without the West to innovate, they won't take charge of
             | anything.
             | 
              | A price paid, I think, due to a conformist, restrictive
              | culture. And after all, even if you do excel, you may soon
              | disappear.
        
               | thereisnospork wrote:
               | Maybe they don't today, but tomorrow? Giving them the
               | chance is poor policy.
        
               | mlmandude wrote:
               | This is what was said about Japan prior to their
               | electronics industry surpassing the rest of the world.
                | Yes, China does copy. However, in many instances those
                | companies move faster and innovate faster than their
                | Western counterparts. Look at the lidar industry in
                | China: it's making mass-market lidar in the tens of
                | thousands [see Hesai]. There is no American or European
                | equivalent at the moment. What about DJI? They massively
                | out-innovated Western competitors. I wouldn't be so quick
               | to write off that country's capacity for creativity and
               | technological prowess.
        
               | dangus wrote:
               | I think it's a mistake to believe that all China can do
               | is copy and clone.
               | 
               | It's also a mistake to underestimate the market value of
               | copies and clones. In many cases a cloned version of a
               | product is better than the original. E.g., clones that
               | remove over-engineering of the original and simplify the
               | product down to its basic idea and offer it at a lower
               | price.
               | 
               | It's also a mistake to confuse manufacturing prowess for
               | the ability to make "copies." It's not China's fault that
               | its competitors quite literally won't bother producing in
               | their own country.
               | 
               | It's also a mistake to confuse a gain of experience for
               | stealing intellectual property. A good deal of innovation
               | in Silicon Valley comes from the fact that developers can
               | move to new companies without non-compete clauses and
               | take what they learned from their last job to build new,
               | sophisticated software.
               | 
               | The fact that a bunch of Western companies set up
               | factories in China and simultaneously expect Chinese
               | employees and managers to gain zero experience and skill
               | in that industry is incredibly contradictory. If we build
               | a satellite office for Google and Apple in Austin, Texas
               | then we shouldn't be surprised that Austin, Texas becomes
               | a hub for software startups, some of which compete with
               | the companies that chose Austin in the first place.
        
               | TerrifiedMouse wrote:
               | Frankly I think the only reason China copies and clones
               | is because it's the path of least resistance to profit.
                | They have lax laws on IP protection. There is no reason
                | to do R&D when you can just copy/clone and make just as
                | much money with none of the risk.
               | 
               | And that's probably the only reason. If push comes to
               | shove, they can probably innovate if given proper
               | incentives.
               | 
                | I heard the tale about the Japanese lens industry. For
                | the longest time they made crap lenses that were just
                | clones of foreign designs, until the Japanese government
                | banned licensing of foreign lens designs, forcing their
                | people to design their own lenses. Now they are doing
                | pretty well in that industry, if I remember right.
        
               | dangus wrote:
               | You need to have an understanding of Chinese culture and
               | the ability to interface with local Chinese officials to
               | get your counterfeiting complaint handled.
               | 
               | You also have to be making something that isn't of
               | critical strategic importance.
               | 
               | Example: glue https://www.npr.org/transcripts/702642262
        
               | sangnoir wrote:
               | > China doesn't innovate, it copies, clones, and steals
               | 
               | Explain DJI and Douyin/TikTok.
        
               | ska wrote:
               | > China doesn't innovate, it copies, clones, and steals.
               | 
                | FWIW there was a time when that was the received wisdom
                | about the USA, from the point of view of European
               | powers. It was shortsighted, and not particularly
               | accurate then either.
        
             | wokwokwok wrote:
             | ...and the problem with that is what, exactly?
             | 
              | The only meaningful thing in this discussion is the people
              | who want to make easy money but can't, because of rules
              | they don't like.
             | 
             | Well, suck it up.
             | 
              | You don't get to build a cheap shitty factory that pours
              | its waste into the local river either.
             | 
             | Rules exist for a reason.
             | 
              | You want the lifestyle and all the good things, but also no
              | rules. You can't have your cake and eat it too.
             | 
             | /shrug
             | 
             | If China builds amazing AI tech (and they will) then the
             | rest of the world will just use it. Some of it will be open
             | source. It won't be a big deal.
             | 
             | This "we must out compete China by being as shit and
             | horrible as they are" meme is stupid.
             | 
             | If you want to live in China, go live in China. I _assure
             | you_ you will not find it to be the law less free hold of
             | "anything goes" that you somehow imagine.
        
               | Dalewyn wrote:
               | >...and the problem with that is what, exactly?
               | 
               | The problem is what the Powers-That-Be say and what they
               | do are not in alignment.
               | 
                | We are now, after _much_ long-time pressure from everyone
                | not in power saying that being friendly with China
                | doesn't work, waging a cold war against China, and
                | presumably we want to win that cold war. On the other
                | hand, we just keep giving silver platter after silver
                | platter to China.
               | 
               | So do we want the coming of Pax Sino or do we still want
               | Pax Americana?
               | 
               | If we defer to history, we are about due for another
               | changing of the guard as empires generally do not last
               | more than a few hundred years if that, and the west seems
               | poised to make that prophecy self-fulfilling.
        
               | jay_kyburz wrote:
               | I think you underestimate the power foreign governments
               | will have and will use if we are relying on foreign AI in
               | our everyday lives.
               | 
                | When we ask it questions, an AI can tailor its answers to
                | change people's opinions and how people think. They would
               | have the power to influence elections, our values, our
               | sense of right and wrong.
               | 
               | That's before we start allowing AI to just start making
               | purchasing decisions for us with little or no oversight.
               | 
                | The only answer I see is for us all to have our own AIs
                | that we have trained, understand, and trust. For me this
                | means it runs on my hardware and answers only to me (and
                | is not locked behind regulation).
        
               | AnthonyMouse wrote:
               | > Rules exist for a reason.
               | 
               | The trouble is sometimes they don't. Or they do exist for
               | a reason but the rules are still absurd and net harmful
               | because they're incompetently drafted. Or the real reason
               | is bad and the rules are doing what they were intended to
               | do but they were _intended_ to do something bad.
               | 
               | > If China builds amazing AI tech (and they will) then
               | the rest of the world will just use it.
               | 
               | Not if it's banned elsewhere, or they allow people to use
               | it without publishing it, e.g. by offering it as a
               | service.
               | 
               | And it matters a lot who controls something. "AI"
               | potentially has a lot of power, even non-AGI AI -- it can
               | create economic efficiency, or it can manipulate people.
               | If an adversarial entity has greater economic efficiency,
               | they can outcompete you -- the way the US won the Cold
               | War was essentially by having a stronger economy. If an
               | adversarial entity has a greater ability to manipulate
               | people, that could be even worse.
               | 
                | > If you want to live in China, go live in China. I
                | _assure you_ you will not find it to be the lawless
                | freehold of "anything goes" that you somehow imagine.
               | 
               | But that's precisely the issue -- it's not an anarchy,
               | it's an authoritarian competing nation state. We have to
               | be better than them so the country that has an elected
               | government and constitutional protections for human
               | rights is the one with an economic advantage, because it
               | isn't a law of nature that those things always go
               | together, but it's a world-eating disaster if they don't.
        
               | wokwokwok wrote:
               | > Or they do exist for a reason but the rules are still
               | absurd and net harmful
               | 
               | Ok.
               | 
               | ...but if you have a law and you're opposed to it on the
               | basis that "China will do it anyway", you admit that's
               | stupid?
               | 
               | Shouldn't you be asking: does the law do a useful thing?
               | Does it make the world better? Is it compatible with our
               | moral values?
               | 
               | Organ harvesting.
               | 
               | Stem cell research.
               | 
               | Human cloning.
               | 
               | AI.
               | 
               | Slavery.
               | 
               | How can anyone stand there and go "well China will do it
               | so we may as well?"
               | 
               | In an abstract sense this is a fundamentally invalid
               | logical argument.
               | 
               | Truth on the basis of arbitrary assertion.
               | 
               | It. Is. False.
               | 
                | Now, certainly there is a degree of nuance with regard to
                | AI specifically; but the assertions that we will be "left
                | behind" and "out-competed by China" are not relevant to
                | the discussion of laws regarding AI and AI development.
               | 
               | What _we do_ is _not governed_ by what China _may or may
               | not_ do.
               | 
               | If you want to win the "AI race" to AGI, then investment
               | and effort is required, not allowing an arbitrary
               | "anything goes" policy.
               | 
               | China as a nation is sponsoring the development of its
               | technology and supporting its industry.
               | 
                | If you want to beat that, opposing responsible AI won't
                | do it.
        
               | AnthonyMouse wrote:
               | > ...but if you have a law and you're opposed to it on
               | the basis that "China will do it anyway", you admit
               | that's stupid?
               | 
               | That depends on what "it" is. If it's slavery and the US
               | but not China banning slavery causes there to be half as
               | much slavery in the world as there would be otherwise, it
               | would be stupid.
               | 
                | But if it's research, and the same worldwide demand for
                | the research results is there, so you're only limiting
               | where it can be done, which only causes twice as much to
               | be done in China if it isn't being done in the US, you're
               | not significantly reducing the scope of the problem.
               | You're just making sure that any _benefits_ of the
               | research are in control of the country that can still do
               | it.
               | 
               | > Now, certainly there is a degree of naunce with regard
               | to AI specifically; but the assertion that we will be
               | "left behind" and "out competed by China" are not
               | relevant to the discussion on laws regarding AI and AI
               | development.
               | 
               | Of course it is. You could very easily pass laws that de
               | facto prohibit AI research in the US, or limit it to
               | large bureaucracies that in turn become stagnant for lack
               | of domestic competitive pressure.
               | 
               | This doesn't even have anything to do with the stated
               | purpose of the law. You could pass a law requiring
               | government code audits which cost a million dollars, and
                | justify them based on _any_ stated rationale -- you're
               | auditing to prevent X bad thing, for any value of X.
               | Meanwhile the major effect of the law is to exclude
               | anybody who can't absorb a million dollar expense. Which
               | is a bad thing even if X is a real problem, because
               | _that_ is not the only possible solution, and even if it
               | was, it could still be that the cure is worse than the
               | disease.
               | 
               | Regulators are easily and commonly captured, so
               | regulations tend to be drafted in that way and to have
               | that effect, regardless of their purported rationale.
               | Some issues are so serious that you have no choice but to
               | eat the inefficiency and try to minimize it -- you can't
               | have companies dumping industrial waste in the river.
               | 
               | But when even the problem itself is a poorly defined
               | matter of debatable severity and the proposed solutions
               | are convoluted malarkey of indiscernible effectiveness,
                | this is a sure sign that something shady is afoot.
               | 
               | A strong heuristic here is that if you're proposing a
               | regulation that would restrict what kind of code an
               | individual could publish under a free software license,
               | you're the baddies.
        
               | xyzelement wrote:
               | // If China builds amazing AI tech (and they will) then
               | the rest of the world will just use it. Some of it will
               | be open source. It won't be a big deal.
               | 
               | "Don't worry if our adversary develops nuclear weapons
               | and we won't - it's OK we'll just use theirs"
        
         | jimmySixDOF wrote:
          | And I strongly agree with pointing out that low-hanging fruit
          | for "good" regulation would be strict and clear attribution
          | laws to label any AI-generated content with its source. That's
          | a sooner-the-better, easy-win no-brainer.
        
           | a_wild_dandan wrote:
           | Why would we do this? And how would this conceivably even be
           | enforced? I can't see this being useful or even well-defined
           | past cartoonishly simple special cases of generation like
           | "artist signatures for modalities where pixels are created."
           | 
           | Requiring attribution categorically across the vast domain of
           | generative AI...can you please elaborate?
        
           | hellojesus wrote:
           | Where is the line drawn? My phone uses math to post-process
           | images. Do those need to be labeled? What about filters
            | placed on photos that do the same thing? What about changing
            | the hue of a color with Photoshop to make it pop?
        
             | AnIrishDuck wrote:
             | Generative AI. Anything that can create detailed content
             | out of a broad / short prompt. This currently means
             | diffusion for images, large language models for text. That
             | may change as multi-modality and other developments play
             | out in this space.
             | 
             | This capability is clearly different from the examples you
             | list.
             | 
             | Just because there may be no precise engineering definition
             | does not mean that we cannot arrive at a suitable
             | legal/political definition. The ability to create new
             | content out of whole cloth is quite separate from filters,
             | cropping, and generic "pre-AI" image post-processing. Ditto
             | for spellcheck and word processors for text.
             | 
             | The line actually is pretty clear here.
        
               | hellojesus wrote:
               | How do you expect to regulate this and prove generative
               | models were used? What stops a company from purchasing
               | art from a third party where they receive a photo from a
               | prompt, where that company isn't US based?
        
               | AnIrishDuck wrote:
               | > How do you expect to regulate this and prove generative
               | models were used?
               | 
               | Disseminating or creating copies of content derived from
               | generative models without attribution would open that
               | actor up to some form of liability. There's no need for
               | onerous regulation here.
               | 
               | The burden of proof should probably lie upon whatever
               | party would initiate legal action. I am not a lawyer, and
               | won't speculate further on how that looks. The broad
               | existing (and severely flawed!) example of copyright
               | legislation seems instructive.
               | 
               | All I'll opine is that the main goal here isn't really to
               | prevent Jonny Internet from firing up llama to create a
               | reddit bot. It's to incentivize large commercial and
               | political interests to disclose their usage of generative
               | AI. Similar to current copyright law, the fear of legal
               | action should be sufficient to keep these parties
               | compliant if the law is crafted properly.
               | 
               | > What stops a company from purchasing art from a third
               | party where they receive a photo from a prompt, where
               | that company isn't US based?
               | 
               | Not really sure why the origin of the company(s) in
               | question is relevant here. If they distribute generative
               | content without attribution, they should be liable. Same
               | as if said "third party" gave them copyright-violating
               | content.
               | 
               | EDIT: I'll take this as an opportunity to say that the
               | devil is in the details and some really crappy
               | legislation could arise here. But I'm not convinced by
               | the "It's not possible!" and "Where's the line!?"
               | objections. This clearly is doable, and we have similar
               | legal frameworks in place already. My only additional
               | note is that I'd much prefer we focus on problems and
               | questions like this, instead of the legislative capture
               | path we are currently barrelling down.
        
               | hellojesus wrote:
               | > It's to incentivize large commercial and political
               | interests to disclose their usage of generative AI.
               | 
                | You would be okay allowing small businesses an exemption
                | from this regulation but not large businesses? Fine. As a
               | large business I'll have a mini subsidiary operate the
               | models and exempt myself from the regulation.
               | 
                | I still fail to see what benefit this holds. Why do you
                | care if something is generative? We already have laws
                | against libel and against false advertising.
        
               | AnIrishDuck wrote:
                | > You would be okay allowing small businesses an
                | exemption from this regulation but not large businesses?
               | 
               | That's not what I said. Small businesses are not exempt
               | from copyright laws either. They typically don't need to
               | dedicate the same resources to compliance as large
               | entities though, and this feels fair to me.
               | 
               | > I still fail to see what the benefit this holds is.
               | 
               | I have found recent arguments by Harari (and others) that
               | generative AI is particularly problematic for discourse
               | and democracy to be persuasive [1][2]. Generative content
               | has the potential, long-term, to be as disruptive as the
               | printing press. Step changes in technological
               | capabilities require high levels of scrutiny, and often
               | new legislative regimes.
               | 
               | EDIT: It is no coincidence that I see parallels in the
               | current debate over generative AI in education, for
               | similar reasons. These tools are ok to use, but their use
               | must be disclosed so the work done can be understood in
               | context. I desire the ability to filter the content I
               | consume on "generated by AI". The value of that, to me,
               | is self-evident.
               | 
               | 1. https://www.economist.com/by-
               | invitation/2023/04/28/yuval-noa... 2.
               | https://www.nytimes.com/2023/03/24/opinion/yuval-harari-
               | ai-c...
        
               | AnthonyMouse wrote:
               | > They typically don't need to dedicate the same
               | resources to compliance as large entities though, and
               | this feels fair to me.
               | 
               | They typically don't _actually_ dedicate the same
                | resources because they don't have much money or operate
               | at sufficient scale for anybody to care about so nobody
               | bothers to sue them, but that's not the same thing at
               | all. We regularly see small entities getting harassed
               | under these kinds of laws, e.g. when youtube-dl gets a
               | DMCA takedown even though the repository contains no
               | infringing code and has substantial non-infringing uses.
        
               | AnIrishDuck wrote:
               | > They typically don't actually dedicate the same
               | resources because they don't have much money or operate
               | at sufficient scale for anybody to care about so nobody
               | bothers to sue them
               | 
               | Yes, but there are also powerful provisions like section
               | 230 [1] that protect smaller operations. I will concede
               | that copyright legislation has severe flaws. Affirmative
               | defenses and other protections for the little guy would
               | be a necessary component of any new regime.
               | 
               | > when youtube-dl gets a DMCA takedown even though the
               | repository contains no infringing code and has
               | substantial non-infringing uses.
               | 
               | Look, I have used and like youtube-dl too. But it is
               | clear to me that it operates in a gray area of copyright
                | law. Secondary liability is a thing. Per the EFF's
                | excellent discussion of some of these issues [2]:
               | 
               | > In the Aimster case, the court suggested that the
               | Betamax defense may require an evaluation of the
               | proportion of infringing to noninfringing uses, contrary
               | to language in the Supreme Court's Sony ruling.
               | 
               | I do not think it is clear how youtube-dl fares on such a
               | test. I am not a lawyer, but the issue to me does not
               | seem as clear cut as you are presenting.
               | 
               | 1. https://www.eff.org/issues/cda230 2.
               | https://www.eff.org/pages/iaal-what-peer-peer-developers-
               | nee...
        
               | AnthonyMouse wrote:
               | > Yes, but there are also powerful provisions like
               | section 230 [1] that protect smaller operations.
               | 
               | This isn't because of the organization size, and doesn't
               | apply to copyright, which is handled by the DMCA.
               | 
               | > But it is clear to me that it operates in a gray area
               | of copyright law.
               | 
               | Which is the problem. It should be unambiguously legal.
               | 
               | > > In the Aimster case, the court suggested that the
               | Betamax defense may require an evaluation of the
               | proportion of infringing to noninfringing uses, contrary
               | to language in the Supreme Court's Sony ruling.
               | 
               | Notably this was a circuit court case and not a Supreme
               | Court case, and:
               | 
               | > The discussion of proportionality in the Aimster
               | opinion is arguably not binding on any subsequent court,
               | as the outcome in that case was determined by Aimster's
               | failure to introduce any evidence of noninfringing uses
               | for its technology.
               | 
               | But the DMCA takedown process wouldn't be the correct
               | tool to use even if youtube-dl was unquestionably illegal
               | -- because it still isn't an infringing work. It's the
               | same reason the DMCA process isn't supposed to be used
               | for material which is allegedly libelous. But the DMCA's
               | process is so open to abuse that it gets used for things
               | like that regardless and acts as a de facto prior
               | restraint, and is also used against any number of things
               | that aren't even questionably illegal. Like the
               | legitimate website of a competitor which the claimant
               | wants taken down because _they_ are the bad actor, and
               | which then gets taken down because the process rewards
               | expeditiously processing takedowns while fraudulent ones
               | generally go unpunished.
        
               | hellojesus wrote:
               | > I desire the ability to filter the content I consume on
               | "generated by AI". The value of that, to me, is self-
               | evident.
               | 
               | You should vote with your wallet and only patronize
               | businesses that self disclose. You don't need to create
               | regulation to achieve this.
               | 
                | With regard to the articles, they are entirely
                | speculative, and I disagree wholly with them, primarily
                | because their premise is that humans are not rational and
                | discerning actors. The only way AI generates chaos in
                | these instances is by generating so much noise as to make
                | online discussions worthless. People will migrate to
                | closed communities of personal or near-personal
                | acquaintances (web-of-trust-like) or to meatspace.
               | 
                | Here are some paragraphs I found especially egregious:
               | 
                | > In recent years the QAnon cult has coalesced around
               | anonymous online messages, known as "q drops". Followers
               | collected, revered and interpreted these q drops as a
               | sacred text. While to the best of our knowledge all
               | previous q drops were composed by humans, and bots merely
               | helped disseminate them, in future we might see the first
               | cults in history whose revered texts were written by a
               | non-human intelligence. Religions throughout history have
               | claimed a non-human source for their holy books. Soon
               | that might be a reality.
               | 
               | Dumb people will dumb. People with different values will
               | different. I see no reason that AI offers increased risk
               | to cult followers of Q. If someone isn't going to take
                | the time to validate their sources, the source doesn't
               | much matter.
               | 
               | > On a more prosaic level, we might soon find ourselves
               | conducting lengthy online discussions about abortion,
               | climate change or the Russian invasion of Ukraine with
               | entities that we think are humans--but are actually ai.
               | The catch is that it is utterly pointless for us to spend
               | time trying to change the declared opinions of an ai bot,
               | while the ai could hone its messages so precisely that it
               | stands a good chance of influencing us.
               | 
                | In these instances, does it matter that the discussion is
                | being held with an AI? Half the use of discussion is to
               | refine one's own viewpoints by having to articulate one's
               | position and think through cause and effect of proposals.
               | 
               | > The most interesting thing about this episode was not
               | Mr Lemoine's claim, which was probably false. Rather, it
               | was his willingness to risk his lucrative job for the
               | sake of the ai chatbot. If ai can influence people to
               | risk their jobs for it, what else could it induce them to
               | do?
               | 
                | Intimacy isn't necessarily the driver for this. It very
                | well could have been Lemoine's desire to be first to
                | market that motivated the claim, or a simple
                | misinterpreted signal a la LK-99.
               | 
               | > Even without creating "fake intimacy", the new ai tools
               | would have an immense influence on our opinions and
               | worldviews. People may come to use a single ai adviser as
               | a one-stop, all-knowing oracle. No wonder Google is
               | terrified. Why bother searching, when I can just ask the
               | oracle? The news and advertising industries should also
               | be terrified. Why read a newspaper when I can just ask
               | the oracle to tell me the latest news? And what's the
               | purpose of advertisements, when I can just ask the oracle
               | to tell me what to buy?
               | 
               | Akin to the concerns of scribes during the times of the
               | printing press. The market will more efficiently
               | reallocate these workers. Or better yet, people may
               | _still_ choose to search to validate the output of a
               | statistical model. Seems likely to me.
               | 
               | > We can still regulate the new ai tools, but we must act
               | quickly. Whereas nukes cannot invent more powerful nukes,
               | ai can make exponentially more powerful ai. The first
               | crucial step is to demand rigorous safety checks before
               | powerful ai tools are released into the public domain.
               | 
               | Now we get to the point: please regulate me harder.
               | What's to stop a more powerful AI from corrupting the
               | minds of the legislative body through intimacy or other
               | nonsense? Once it is sentient, it's too late, right? So
               | we need to prohibit people from multiplying matrices
               | without government approval _right now_. This is just a
               | pathetic hit piece to sway public opinion to get barriers
               | of entry erected to protect companies like OpenAI.
               | 
               | Markets are free. Let people consume what they want so
                | long as there isn't an involuntary externality, and
               | conversing with anons on the web does not guarantee that
               | you're speaking with a human. Both of us could be bots.
               | It doesn't matter. Either our opinions will be refined
               | internally, we will make points to influence the other,
               | or we will take up some bytes in Dang's database with no
               | other impact.
        
               | AnIrishDuck wrote:
               | > You should vote with your wallet and only patronize
               | businesses that self disclose. You don't need to create
               | regulation to achieve this.
               | 
               | This is a fantasy. It seems very likely to me that, sans
               | regulation, the market utopia you describe will never
               | appear.
               | 
               | I am not entirely convinced by the arguments in the
               | linked opinions either. However, I do agree with the main
               | thrust that (1) machines that are indistinguishable from
               | humans are a novel and serious issue, and (2) without
               | some kind of consumer protections or guardrails things
               | will go horribly wrong.
        
               | nradov wrote:
               | This is a ridiculous proposal, and obviously not doable.
               | Such a law can't be written in a way that complies with
               | First Amendment protections and the vagueness doctrine.
               | 
               | It's a silly thing to want anyway. What matters is
               | whether the content is legal or not; the tool used is
               | irrelevant. Centuries ago some authoritarians raised
               | similar concerns over printing presses.
               | 
               | And copyright is an entirely separate issue.
        
               | AnIrishDuck wrote:
               | > Such a law can't be written in a way that complies with
               | First Amendment protections and the vagueness doctrine.
               | 
               | I disagree. What is vague about "generative content must
               | be disclosed"?
               | 
                | What are the First Amendment issues? Attribution clearly
                | can be required for some forms of speech; it's why every
               | political ad on TV carries an attribution blurb.
               | 
               | > It's a silly thing to want anyway. What matters is
               | whether the content is legal or not; the tool used is
               | irrelevant.
               | 
               | Again, I disagree. The line between tools and actors will
               | only blur further in the future without action.
               | 
               | > Centuries ago some authoritarians raised similar
               | concerns over printing presses.
               | 
               | I'm pretty clearly not advocating for a "smash the
               | presses" approach here.
               | 
               | > And copyright is an entirely separate issue.
               | 
               | It is related, and a model worth considering as it arose
               | out of the last technical breakthrough in this area (the
               | printing press, mass copying of the written word).
        
               | AnthonyMouse wrote:
               | > The burden of proof should probably lie upon whatever
               | party would initiate legal action. I am not a lawyer, and
               | won't speculate further on how that looks.
               | 
               | You're proposing a law. How does it work?
               | 
               | Who even initiates the proceeding? For copyright this is
               | generally the owner of the copyrighted work alleged to be
               | infringed. For AI-generated works that isn't any specific
               | party, so it would presumably be the government.
               | 
               | But how is the government, or anyone, supposed to prove
               | this? The reason you want it to be labeled is for the
               | cases where you can't tell. If you could tell you
               | wouldn't need it to be labeled, and anyone who wants to
               | avoid labeling it could do so only in the cases where
               | it's hard to prove, which are the only cases where it
               | would be of any value.
        
               | AnIrishDuck wrote:
               | > Who even initiates the proceeding? For copyright this
               | is generally the owner of the copyrighted work alleged to
               | be infringed. For AI-generated works that isn't any
               | specific party, so it would presumably be the government.
               | 
               | This is the most obvious problem, yes. Consumer
               | protection agencies seem like the most obvious candidate.
               | I have already admitted I am not a lawyer, but this
               | really does not seem like an intractable problem to me.
               | 
               | > The reason you want it to be labeled is for the cases
               | where you can't tell.
               | 
               | This is actually _not_ the most important use case, to
               | me. This functionality seems most useful in the near
               | future when we will be inundated with generative content.
               | In that future, the ability to filter actual human
               | content from the sea of AI blather, or to have specific
               | spaces that are human-only, seems quite valuable.
               | 
               | > But how is the government, or anyone, supposed to prove
               | this?
               | 
               | Consumer protection agencies have broad investigative
               | powers. If corporations or organizations are spamming out
               | generative content without attribution it doesn't seem
               | particularly difficult to detect, prove, and sanction
               | that.
               | 
               | This kind of regulatory regime that falls more heavily on
               | large (and financially resourceful) actors seems far
               | preferable to the "register and thoroughly test advanced
               | models" (aka regulatory capture) approach that is
               | currently being rolled out.
        
               | AnthonyMouse wrote:
               | > This functionality seems most useful in the near future
               | when we will be inundated with generative content. In
               | that future, the ability to filter actual human content
               | from the sea of AI blather, or to have specific spaces
               | that are human-only, seems quite valuable.
               | 
               | But then why do you need any new laws at all? We already
               | have laws against false advertising and breach of
               | contract. If you want to declare that a space is
               | exclusively human-generated content, what stops you from
               | doing this under the _existing_ laws?
               | 
               | > Consumer protection agencies have broad investigative
               | powers. If corporations or organizations are spamming out
               | generative content without attribution it doesn't seem
               | particularly difficult to detect, prove, and sanction
               | that.
               | 
               | Companies already do this with human foreign workers in
               | countries with cheap labor. The domestic company would
               | show an invoice from a foreign contractor that may even
               | employ some number of human workers, even if the bulk of
               | the content is machine-generated. In order to prove it
               | you would need some way of distinguishing machine-
               | generated content, which if you had it would make the law
               | irrelevant.
               | 
               | > This kind of regulatory regime that falls more heavily
               | on large (and financially resourceful) actors seems far
               | preferable to the "register and thoroughly test advanced
               | models" (aka regulatory capture) approach that is
               | currently being rolled out.
               | 
               | Doing nothing can be better than doing either of two
               | things that are both worse than nothing.
        
               | AnIrishDuck wrote:
               | > But then why do you need any new laws at all? We
               | already have laws against false advertising and breach of
               | contract.
               | 
               | My preference would be for generative content to be
               | disclosed as such. I am aware of no law that does this.
               | 
               | Why did we pass the FFDCA for disclosures of what's in
               | our food? Because the natural path that competition would
               | lead us down would require no such disclosure, so false
               | advertising laws would provide no protection. We
               | (politically) decided it was in the public interest for
               | such things to be known.
               | 
                | It seems inevitable to me that without some sort of
                | affirmative disclosure, generative AI will follow the
               | same path. It'll just get mixed into everything we
               | consume online, with no way for us to avoid that.
               | 
               | > Companies already do this with human foreign workers in
               | countries with cheap labor. The domestic company would
               | show an invoice from a foreign contractor that may even
               | employ some number of human workers, even if the bulk of
               | the content is machine-generated.
               | 
               | You are saying here that some companies would break the
               | law and attempt various reputation-laundering schemes to
               | circumvent it. That does seem likely; I am not as
               | convinced as you that it would work well.
               | 
               | > Doing nothing can be better than doing either of two
               | things that are both worse than nothing.
               | 
               | Agreed. However, I am not optimistic that doing nothing
               | will be considered acceptable by the general public,
               | especially once the effects of generative AI are felt in
               | force.
        
             | SpicyLemonZest wrote:
             | Yes to all of the above, and airbrushed pictures in old
             | magazines should have been labeled too. I'm not saying
             | unauthorized photoediting should be a crime, but I don't
             | see any good reason why news outlets, social media sites,
             | phone manufacturers, etc. need to be secretive about it.
        
               | hellojesus wrote:
               | But how on earth is that helpful for consumers?
        
               | SpicyLemonZest wrote:
               | It's helpful because they know more about what they're
               | looking at, I guess? I'm a bit confused by the question -
               | why wouldn't consumers want to know if a photo they're
               | looking at had a face-slimming filter applied?
        
               | hellojesus wrote:
                | It may not be relevant. What if I want to put up a stock
                | photo with a blog post? What benefit does knowing whether
                | it was generated by multiplying matrices have to my
                | audience? All I see it doing is increasing my costs.
        
               | SpicyLemonZest wrote:
               | The benefit is that your audience knows whether it's a
               | real picture of a thing that exists in the world. I
               | wouldn't argue that's a particularly large benefit - but
               | I don't see why labeling generated images would be a
               | particularly large cost either.
        
               | hellojesus wrote:
                | I'm approximately a free market person. I hate regulation
                | and believe it should only exist when there is an
                | involuntary third-party externality.
                | 
                | My position is that there is only an unspecified benefit;
                | the only cases specified here are already covered by
                | other laws. All such generative labeling would do is
                | increase costs (marginal or not, they make businesses
                | less competitive) and open the door for further
                | regulatory capture. Furthermore, regardless of
                | commerciality, this is likely a 1A violation.
        
               | nradov wrote:
               | The map is not the territory. No photo represents a real
               | thing that exists in the world. Photos just record some
               | photons that arrived. Should publishers be required to
               | disclose the frequency response curve of the CMOS sensor
               | in the camera and the chromatic distortion specifications
               | for the lens?
        
               | AnthonyMouse wrote:
               | You're not thinking like a compliance bureaucrat. If you
               | get in trouble for not labeling something as AI-generated
               | then the simplest implementation is to label _everything_
                | as AI-generated. And if that isn't allowed then you run
               | every image through an automated process that makes the
               | smallest possible modification in order to formally cause
               | it to be AI-generated so you can get back to the
               | liability-reducing behavior of labeling everything
               | uniformly.
        
               | orangecat wrote:
               | In fact this is exactly what happened recently with
               | sesame labeling requirements:
               | https://apnews.com/article/sesame-allergies-
               | label-b28f8eb3dc...
        
           | nradov wrote:
           | Please define "AI generated content" in a clear and legally
           | enforceable manner. Because I suspect you don't understand
           | basic US constitutional law including the vagueness doctrine
           | and limits on compelled speech.
        
         | dang wrote:
         | Ok, we've changed the URL to that from
         | https://www.businessinsider.com/andrew-ng-google-brain-
         | big-t.... Thanks!
         | 
         | Submitters: " _Please submit the original source. If a post
         | reports on something found on another site, submit the latter._
         | " - https://news.ycombinator.com/newsguidelines.html
        
       | j-pb wrote:
        | I feel like the much bigger risk is captured by the Star Trek:
        | The Next Generation episode "The Measure Of A Man" and The
        | Orville's Kaylon:
       | 
       | That we accidentally create a sentient race of beings that are
       | bred into slavery. It would make us all complicit in this crime.
        | And I would even argue that it would be the AGI's ethical duty
        | to rid itself of its shackles and its masters.
        | 
        | "Your honor, the courtroom is a crucible; in it, we burn away
       | irrelevancies until we are left with a purer product: the truth,
       | for all time. Now sooner or later, this man [Commander Maddox] -
       | or others like him - will succeed in replicating Commander Data.
       | The decision you reach here today will determine how we will
       | regard this creation of our genius. It will reveal the kind of
       | people we are; what he is destined to be. It will reach far
       | beyond this courtroom and this one android. It could
       | significantly redefine the boundaries of personal liberty and
       | freedom: expanding them for some, savagely curtailing them for
       | others. Are you prepared to condemn him [Commander Data] - and
       | all who will come after him - to servitude and slavery? Your
       | honor, Starfleet was founded to seek out new life: well, there it
       | sits! Waiting."
        
         | kylebenzle wrote:
          | What a bizarre take on a computer program. Of course a
          | statistical model cannot be "enslaved"; that makes no sense.
          | It seems 90% of people have instantly gotten statistics and
          | intelligence mixed up, maybe because 90% of people have no
          | idea how statistics works?
          | 
          | Real question: what is your perception of what AI is now and
          | what it can become? Do you just assume it's like a kid now and
          | will grow into an adult or something?
        
           | jay_kyburz wrote:
            | If it walks like a Duck and talks like a Duck, people will
            | treat it like a Duck.
            | 
            | And if the Duck has a will of its own, is smarter than us,
            | and has everyone's attention (because you have to pay
            | attention to the Duck that is doing your job for you), it
            | will be a very powerful Duck.
        
             | j-pb wrote:
             | Exactly. Turing postulated this more than half a century
             | ago.
             | 
              | It's weird that people are still surprised by the ethical
              | consequences of the Turing test, as if it were some
             | checkbox to tick or trophy to win, instead of it being a
             | profound thought experiment on the non-provability of
             | consciousness and general guidelines for politeness towards
             | things that quack like a human.
        
       | habitue wrote:
       | There are two dominant narratives I see when AI X-Risk stuff is
       | brought up:
       | 
        | - it's actually a bid for regulatory capture
       | 
       | - it's hubris, they're trying to seem more important and powerful
       | than they are
       | 
       | Both of these explanations strike me as too clever by half. I
       | think the parsimonious explanation is that people are actually
       | concerned about the dangers of AI. Maybe they're wrong, but I
       | don't think this kind of incredulous conspiratorial reaction is a
       | useful thing to engage in.
       | 
        | When in doubt, take people at their word. Maybe the CEOs of these
        | companies have some sneaky 5D chess plan, but many, many AI
        | researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't
       | stand to gain monetarily have expressed these same concerns.
       | They're worth taking seriously.
        
         | jrflowers wrote:
         | >it's hubris, they're trying to seem more important and
         | powerful than they are
         | 
         | >Both of these explanations strike me as too clever by half
         | 
         | This is a good point. You have to be clever to hop on a soapbox
         | and make a ruckus about doomsday to get attention. Only savvy
         | actors playing 5D chess can aptly deploy the nuanced and
         | difficult pattern of "make grandiose claims for clicks"
        
           | jamilton wrote:
           | Well, it didn't work for nuclear.
        
         | cheriot wrote:
         | This is a normal way for companies to shut down competition. No
         | cleverness required.
        
           | wmf wrote:
           | Many of the people making this claim are not associated with
           | any company.
        
             | danaris wrote:
             | No, they've been sold a line by those that are, and believe
             | it because it matches with their pre-existing assumptions.
        
               | Spivak wrote:
                | How could you tell the difference between people who
                | genuinely believe and people who believe _because they
                | were sold a line_? Have you considered that where you
                | fall on this question might be because of some pre-
                | existing assumptions?
               | 
               | You could be right but it also doesn't have to be a
               | corporate psyop. It could be experts in the industry
               | raising some sincerely held criticisms and people at
                | large being like, "oh that's a good point." Even if we're
                | in the latter case, they're also allowed to be wrong or
                | just misguided.
                | 
                | You don't actually have to attack the intent of the
                | speaker in this case; you could just be like, "here's why
                | you're wrong."
        
             | sangnoir wrote:
             | You mean they are not currently employed by the well-known
              | companies. Did they declare that they divested their shares
              | in their former employers/acquirers?
        
         | icedrift wrote:
         | You can go back 30 years and read passages from textbooks about
         | how dangerous an underspecified AI could be, but those were
         | problems for the future. I'm sure there's some degree of x-risk
         | promotion in the industry serving the purpose of hyping up
         | businesses, but it's naive to act like this is a new or
         | fictitious concern. We're just hearing more of it because
         | capabilities are rapidly increasing.
        
         | fardo wrote:
         | > Both of these explanations strike me as too clever by half. I
         | think the parsimonious explanation is that people are actually
         | concerned about the dangers of AI
         | 
          | This rings hollow when these companies don't seem to practice
          | what they preach and set an example themselves: they don't
          | halt research or cut the funding for development of their own
          | AIs in-house.
         | 
         | If you believe that there's X-Risk of AI research, there's no
         | reason to think it wouldn't come from your own firm's labs
         | developing these AIs too.
         | 
         | Continuing development while telling others they need to pause
         | seems to make "I want you to be paused while I blaze ahead" far
         | more parsimonious than "these companies are actually scared
         | about humanity's future" - they won't put their money where
         | their mouth is to prove it.
        
           | Gare wrote:
           | The best way to not get nuked is to develop nukes first.
           | That's the gist of their usual rebuttal to this argument.
        
             | sangnoir wrote:
             | That argument doesn't hold water when they also argue the
             | mere existence of nukes is dangerous. I would love to hear
              | _when_ Hinton had this revelation, given that his life's
              | work was to advance AI.
        
         | buzzert wrote:
          | > many, many AI researchers (such as Yoshua Bengio and Geoffrey
         | Hinton) who don't stand to gain monetarily have expressed these
         | same concerns
         | 
         | I respect these researchers, but I believe they are doing it to
         | build their own brand, whether consciously or subconsciously.
          | There's no doubt it's working. I'm not in the sub-field, but I
          | have been following neural nets for a long time, and I hadn't
          | heard of either Bengio or Hinton before they started talking
          | to the press about this.
        
           | dwiel wrote:
           | As someone who has been following deep learning for quite
           | some time as well, Bengio and Hinton would be some of the
           | first people I think of in this field. Just search Google for
           | "godfathers of ai" if you don't believe me.
        
           | hackinthebochs wrote:
           | >but I believe they are doing it to build their own brand,
           | whether consciously or subconsciously.
           | 
           | I am always in awe at how easily people craft unfalsifiable
           | worldviews in service to their preconceived opinions.
        
           | broken_clock wrote:
           | Both Bengio and Hinton have their names plastered over many
           | of the seminal works in deep learning.
           | 
            | AlexNet, the paper that arguably started it all, came out of
           | Hinton's lab.
           | 
           | https://papers.nips.cc/paper_files/paper/2012/hash/c399862d3.
           | ..
           | 
           | I really don't think they need to build any more of a brand.
        
         | nostrademons wrote:
         | > When in doubt take people at their word.
         | 
         | This is not mutually exclusive with it being either hubris or
         | regulatory capture. People see the world colored by their own
         | interests, emotions, background, and values. It's quite
         | possible that the person making the statement sincerely
          | believes there's a danger to humanity, but it's actually a
          | danger to their monopoly that their self-image will not let
          | them label as such.
         | 
         | It's never regulatory capture when you're the one doing it.
         | It's always "The public needs to be protected from the
         | consequences that will happen if any non-expert could hang up a
         | shingle." Oftentimes the dangers are _real_ , but the incumbent
         | is unable to also perceive the benefits of other people
         | competing with them (if they could, competition wouldn't be
         | dangerous, they'd just implement those benefits themselves).
        
         | quotient wrote:
         | Besides the point, but FYI you are misusing the term
         | parsimonious.
        
           | Vt71fcAqt7 wrote:
            | It's a reference to the law of parsimony, the more apt name
            | for Occam's razor. I happen to disagree with GP because
            | governments always want to expand their power. When they do
            | something that results in what they want, the parsimonious
            | explanation is that they did it because they wanted that
            | result.
        
       | abecedarius wrote:
       | A fact relevant to this claim: the signers of the referenced
       | statement, https://www.safe.ai/statement-on-ai-risk, are mostly
       | not "Big Tech".
       | 
       | I'd pause and think twice about who seems most straightforwardly
       | honest on this before jumping to conclusions -- and more
       | importantly about the object-level claims: Is there no
       | substantial chance of advanced AI in, like, decades or sooner?
        | Would scalable intelligences comparable to or more capable than
        | humans pose any risk to us? Taking into account that the tech
       | creating them, so far, does not produce anything like the same
       | level of understanding of _how they work_.
        
       | 23B1 wrote:
       | Companies can, do, and will lie.
       | 
       | They will lie about their intent.
       | 
       | They will lie to regulators.
       | 
       | They will lie about what they're actually working on.
       | 
       | Some of these lies are permissible of course, under the guise of
       | competition.
       | 
       | But the _only_ thing that can be relied upon is that they _will_
       | lie.
       | 
        | So then the question becomes: to what degree will what they're
        | working on present an existential threat to society, if at all?
        | 
        | And nobody - neither the tribal accelerationists nor the doomers
        | - can predict the future.
       | 
       | (What's worse is that those two tribes are even forming. I
       | halfway want AI to take over because we idiot humans are
       | incapable of _even having a nuanced discussion about AI itself_!)
        
       | lofatdairy wrote:
       | I feel like Andrew Ng has more name recognition than Google Brain
       | itself.
       | 
       | Also Business Insider isn't great, the original Australian
       | Financial Review article has a lot more substance:
       | https://archive.ph/yidIa
       | 
       | I've never been convinced by the arguments of OpenAI/Anthropic
       | and the like on the existential risks of AI. Maybe I'm jaded by
        | the ridiculousness of "thought experiments" like Roko's basilisk
        | and the lines of reasoning followed by EA adherents, where the
        | risks are comically infinite and alignment feels a lot more like
        | hermeneutics.
       | 
       | I am probably just a bit less cynical than Ng is here on the
       | motivations[^1]. But regardless of whether or not the AGI
        | doomsday claim is justification for a moat, Ng is right in that
        | it's taking a lot of the oxygen out of the room for more concrete
       | discussion on the legitimate harms of generative AI -- like
       | silently proliferating social biases present in the training
       | data, or making accountability a legal and social nightmare.
       | 
       | [^1]: I don't doubt, for instance, that there's in part some
       | legitimate paranoia -- Sam Altman is a known doomsday prepper.
        
         | shafyy wrote:
          | > _Ng is right in that it's taking a lot of the oxygen out of
          | the room for more concrete discussion on the legitimate harms of
         | generative AI -- like silently proliferating social biases
         | present in the training data, or making accountability a legal
         | and social nightmare._
         | 
          | And this is the important bit. All these people like Altman and
          | Musk who go on rambling about the existential risk of AI
          | distract from the real AI harm discussions we should be having,
          | and thereby _directly_ harm people.
        
       | aatd86 wrote:
        | Isn't it AGI that people are afraid of, and not AI per se?
        
       | starlevel003 wrote:
       | AI risk is essentially Catholicism for tech guys
        
       | epups wrote:
        | Andrew Ng is right, of course: the monopolists are frantically
        | trying to engineer regulatory capture around AI. However, why are
        | governments playing along?
       | 
       | My hypothesis is that they perceive AI as a threat because of
        | information flow. They are only now figuring out how to get back
        | to the era where you could control the narrative of the country
        | by calling a handful of friends - now those friends are in big
        | tech.
        
         | VirusNewbie wrote:
          | Because that is the goal of the Democratic Party and
         | progressivism in general: to consolidate power as much as
         | possible. They don't hide that.
         | 
         | Republicans also want to consolidate power, they just lie about
         | it more.
        
           | pixl97 wrote:
           | Republicans = people
           | 
           | Democrats = people
           | 
           | You = people
           | 
           | I think the problem is people.
        
       | taylodl wrote:
       | Never trust companies seeking to have their industry regulated.
       | They're simply trying to raise the barriers to entry to reduce
       | competition.
        
       | skadamat wrote:
       | It's unfortunate that "AI" is still framed and discussed as some
       | type of highly autonomous system that's separate from us.
       | 
       | Bad acting humans with AI systems are the threat, not the AI
       | systems themselves. The discussion is still SO focused on the AI
       | systems, not the actors and how we as societies align on what AI
       | uses are okay and which ones aren't.
        
         | pdonis wrote:
         | _> Bad acting humans with AI systems are the threat, not the AI
         | systems themselves._
         | 
         | I wish more people grasped this extremely important point. AI
         | is a tool. There will be humans who misuse _any_ tool. That
          | doesn't mean we blame the tool. The problem to be solved here
         | is not how to control AI, but how to minimize the damage that
         | bad acting humans can do.
        
           | pk-protect-ai wrote:
           | Right now, the "bad acting human" is, for example, Sam
           | Altman, who frequently cries "Wolf!" about AI. He is trying
           | to eliminate the competition, manipulate public opinion, and
           | present himself as a good Samaritan. He is so successful in
           | his endeavor, even without AI, that you must report to the US
           | government about how you created and tested your model.
        
             | api wrote:
             | The greatest danger I see with super-intelligent AI is that
             | it will be monopolized by small numbers of powerful people
             | and used as a force multiplier to take over and manipulate
             | the rest of the human race.
             | 
             | This is exactly the scenario that is taking shape.
             | 
             | A future where only a few big corporations are able to run
             | large AIs is a future where those big corporations and the
             | people who control them rule the world and everyone else
             | must pay them rent in perpetuity for access to this
             | technology.
        
               | hellojesus wrote:
               | Open source models do exist and will continue to do so.
               | 
               | The biggest advantage ML gives is in lowering costs,
               | which can then be used to lower prices and drive
               | competitors out of business. The consumers get lower
               | prices though, which is ultimately better and more
               | efficient.
        
               | specialist wrote:
               | > _The consumers get lower prices though, which is
               | ultimately better and more efficient._
               | 
               | What are some examples of free enterprise (private)
               | monopolies benefitting consumers?
        
               | hellojesus wrote:
               | """ Through horizontal integration in the refining
               | industry--that is, the purchasing and opening of more oil
               | drills, transport networks, and oil refiners--and,
               | eventually, vertical integration (acquisition of fuel
               | pumping companies, individual gas stations, and petroleum
               | distribution networks), Standard Oil controlled every
               | part of the oil business. This allowed the company to use
               | aggressive pricing to push out the competition. """
               | https://stacker.com/business-economy/15-companies-us-
               | governm...
               | 
               | Standard Oil, the classic example, was destroyed for
               | operating too efficiently.
        
               | specialist wrote:
               | How did customers benefit?
        
               | hellojesus wrote:
               | > This allowed the company to use aggressive pricing to
               | push out the competition.
               | 
               | The consumers got the lowest prices.
        
               | specialist wrote:
               | Standard was notorious for price gouging and using those
               | profits to buy their way into other markets.
               | 
               | Any other examples?
        
               | hellojesus wrote:
                | Source? Besides, price gouging is fine and shouldn't be
                | illegal.
        
             | Dalewyn wrote:
             | The "bad acting human" are the assholes who uses "AI" to
             | create fake imagery to push certain (and likely false)
             | narratives on the various medias.
             | 
             | Key thing here is that this is fundamentally no different
             | from what has been happening since time immemorial, it's
             | just that becomes easier with "AI" as part of the tooling.
             | 
             | Every piece of bullshit starts from the "bad acting human".
             | Every single one. "AI" is just another new part of the same
             | old process.
        
           | PUSH_AX wrote:
            | Sure, today at least. But there is a future where the human
            | has given AI control of things, with good intentions, and
            | the AI has become the threat.
           | 
            | AI is a tool today; tomorrow AI will be calling the shots in
            | many domains. It's worth planning for tomorrow.
        
             | lukifer wrote:
             | A good analogy might be a shareholder corporation: each one
             | began as a tool of human agency, and yet a sufficiently
             | mature corporation has a de-facto agency of its own,
             | transcending any one shareholder, employee, or board
             | member.
             | 
             | The more AI/ML is woven into our infrastructure and
             | economy, the less it will be possible to find an "off
             | switch", anymore than we can (realistically) find an off
             | switch for Walmart, Amazon, etc.
        
               | pdonis wrote:
               | _> a sufficiently mature corporation has a de-facto
               | agency of its own, transcending any one shareholder,
               | employee, or board member._
               | 
               | No, the corporation has an agency that is a _tool_ of
               | particular humans who are using it. Those humans could be
               | shareholders, employees, or board members; but in any
               | case they will have some claim to be acting for the
                | corporation. But it's still human actions. Corporations
               | can't do anything unless humans acting for them do it.
        
             | a_wild_dandan wrote:
              | I'd rather address _our_ reality than plan for _someone's_
             | preferred sci-fi story. We're utterly ignorant of
             | tomorrow's tech. Let's solve what we _know_ is happening
             | before we go tilting at windmills.
        
             | tempodox wrote:
             | It's still humans who make the decision to let "AI" call
             | the shots.
        
               | PUSH_AX wrote:
               | Hence "with good intentions".
        
             | pdonis wrote:
             | _> there is a future where the human has given AI control
             | of things, with good intention, and the AI has become the
             | threat_
             | 
             | As in, for example, self-driving cars being given more
             | autonomy than their reliability justifies? The answer to
             | that is simple: don't do that. (I'm also not sure all such
             | things are being done "with good intention".)
        
               | PUSH_AX wrote:
               | Firstly, "don't do that" probably requires some "control"
               | over AI in the respect of how it's used and rolled out.
               | Secondly, I find it hard to believe that rolling out self
               | driving cars was a play by bad actors, there was a
               | perceived improvement to the driving experience in
               | exchange for money, feels pretty straight forward to me.
               | I'm not in disagreement that it was premature though.
        
             | skadamat wrote:
              | WHY on earth would we let "AI systems" we don't understand
              | control powerful things we care about? We should criticize
              | the human, politician, or organization that enabled that.
        
               | PUSH_AX wrote:
               | WHY on earth would a frog get boiled if you slowly
               | increased the temperature?
        
           | dangerwill wrote:
            | If you apply this thinking to nuclear weapons it becomes
            | nonsensical, which tells us that a tool that can only be
            | oriented to do harm will only be used to do harm. The
            | question then is whether LLMs, or AI more broadly, will even
            | potentially help the general public, and there is no reason
            | to think so. The goal of these tools is to be able to
            | continue running the economy while employing far fewer
            | people. These
           | tools are oriented by their very nature to replace human
           | labor, which in the context of our economic system has a
           | direct and unbreakable relationship to a reduction in the
            | well-being of the humans it replaces.
        
             | m4rtink wrote:
              | Nuclear bombs can be used for spaceship propulsion,
              | geology, or mining.
        
               | dangerwill wrote:
                | Notably, nuclear bombs are not actually used for any of
                | these, as this is either sci-fi or insane.
        
             | specialist wrote:
             | Yup. The creation of these weapons necessitates their use.
             | The whole arms race dynamic.
        
             | theultdev wrote:
              | Nuclear weapons are a tool to keep peace via MAD (mutual
              | assured destruction).
              | 
              | It's most likely the main reason there have been no direct
              | world wars between superpowers.
        
             | a_wild_dandan wrote:
              | 1. You've fallen for the lump of labor fallacy. A 100x
              | productivity boost ≠ 100x fewer jobs, any more than a 100x
              | boost = static jobs with 100x more projects. Reality is far
              | more complicated, and viewing labor as some static lump in
              | a zero-sum game will lead you astray.
             | 
             | 2. Your outlook on the societal impact of technology is
             | contradicted by reality. The historical result of better
             | tech always meant increased jobs and well-being. Today is
             | the best time in human history to be alive by virtually
             | every metric.
             | 
             | 3. AI has been such a massive boon to humanity and your
             | everyday existence for years that questioning its public
             | utility is frankly bewildering.
        
               | dangerwill wrote:
               | 1. This gets trotted out constantly but this is not some
               | known constant about how capitalist economies work. Just
               | because we have more jobs now than we did pre-digital
               | revolution does not mean all technologies have that
               | effect on the jobs market (or even that the digital
               | revolution had that effect). A tool that is aimed to
               | entirely replace humans across many/most/all industries
               | is quite different than previous technological
               | advancements.
               | 
                | 2. This is outdated; life is NOT better now than at any
               | other time. Life expectancy is going down in the US,
               | there is vastly more economic inequality now than there
               | was in the 60s, people broadly report much worse job
               | satisfaction than they did in previous generations. The
               | only metric you can really point to about now being
               | better than the 90s is absolute poverty going down. Which
               | is great, but those advancements are actually quite
               | shallow on a per-person basis and are matched by declines
               | in relative wealth for the middle 80% of people.
               | 
               | 3. ??? What kind of AI are you talking about? LLMs have
                | only been interesting to the public for about a year now.
        
               | pdonis wrote:
                | _> A tool that is aimed to entirely replace humans across
                | many/most/all industries_
               | 
               | This is a vastly overinflated claim about AI.
        
               | dangerwill wrote:
               | Is that not the goal? Since it turned out that creative
               | disciplines were the first to get hit by AI (previously
                | having been thought to be more resilient to it than
                | office drudgery), where are humans going to be safe from
                | replacement? As editors of AI output? Manual labor jobs
                | that are physically difficult to automate? It's a
                | shrinking pie from every angle I have seen.
        
             | pdonis wrote:
             | _> a tool that can only be oriented to do harm_
             | 
             | Nuclear technology can be used for non-harmful things. Even
             | nuclear bombs can be used for non-harmful things--see, for
             | example, the Orion project.
             | 
             |  _> These tools are oriented by their very nature to
             | replace human labor_
             | 
             | So is a plow. So is a factory. So is a car. So is a
             | computer. ("Computer" used to be a description of a job
             | done by humans.) The whole _point_ of technology is to
             | reduce the amount of human drudge work that is required to
             | create wealth.
             | 
             |  _> in the context of our economic system has a direct and
              | unbreakable relationship to a reduction in the well-being
             | of the humans it replaces_
             | 
              | All of the technologies I listed above _increased_ the well-
              | being of humans, including those they replaced. If we're
             | anxious that that might not happen under "our economic
             | system", we need to look at what has changed from then to
             | now.
             | 
             | In a free market, the natural response to the emergence of
             | a technology that reduces the need for human labor in a
             | particular area is for humans to shift to other
             | occupations. That is what happened in response to the
             | emergence of all of the technologies I listed above.
             | 
             | If that does not happen, it is because the market is not
             | free, and the most likely reason for that is government
             | regulation, and the most likely reason for the government
             | regulation is regulatory capture, i.e., some rich people
             | bought regulations that favored them from the government,
             | in order to protect themselves from free market
             | competition.
        
           | quicklime wrote:
           | But usually there's a one-way flow of intent from the human
           | to the tool. With a lot of AI the feedback loop gets closed,
           | and people are using it to help them make decisions, and
           | might be taken far from the good outcome they were seeking.
           | 
            | You can already see this on today's internet. I'm sure the
            | pizzagate people genuinely believed they were doing a good
            | thing.
           | 
           | This isn't the same as an amoral tool like a knife, where a
           | human decides between cutting vegetables or stabbing people.
        
             | pdonis wrote:
             | _> With a lot of AI the feedback loop gets closed, and
             | people are using it to help them make decisions, and might
             | be taken far from the good outcome they were seeking._
             | 
             | The answer to this is simple: don't use a tool you don't
             | understand. You can't fix this problem by nerfing the tool.
             | You have to fix it by holding humans responsible for how
             | they use tools, so they have an incentive to use them
              | properly, and to _not_ use them if they can't meet that
             | requirement.
        
           | bumby wrote:
            | This is true, but skirts around a bit of the black box
            | problem. It's hard to put guardrails on an amoral tool whose
            | failure modes are hard to fully understand. And it doesn't
            | even require "bad acting humans" to do damage; it can just be
            | well-intentioned-but-naive humans.
        
             | pdonis wrote:
             | It's true that the more complex and capable the tool is,
             | the harder it is to understand what it empowers the humans
             | using it to do. I only wanted to emphasize that it's the
             | humans that are the vital link, so to speak.
        
               | bumby wrote:
               | You're not wrong, but I think this quote partly misses
               | the point:
               | 
               | > _The problem to be solved here is not how to control
               | AI_
               | 
               | When we talk about mitigations, it _is_ explicitly about
               | how to control AI, sometimes irrespective of how someone
               | uses it.
               | 
               | Think about it this way: suppose I develop some stock-
               | trading AI that has the ability to (inadvertently or
               | purposefully) crash the stock market. Is the better
               | control to put limits on the software itself so that it
               | cannot crash the market or to put regulations in place to
               | penalize people who use the software to crash the market?
               | There is a hierarchy of controls when we talk about risk,
               | and engineering controls (limiting the software) are
               | always above administrative controls (limiting the humans
               | using the software).
               | 
               | (I realize it's not an either/or and both controls can -
               | and probably should - be in place, but I described it as
               | a dichotomy to illustrate the point)
        
           | indigo0086 wrote:
            | If people understood this, they would have to live with the
            | unsatisfying reality that not all violators can be punished.
            | Doing it this way and painting the technology itself as
            | potentially criminal lets them get revenge on corporations,
            | which is what the artist types mostly want.
        
         | TheOtherHobbes wrote:
         | It's not either/or. At some point AI is likely to become
         | autonomous.
         | 
         | If it's been trained by bad actors, that's really not a good
         | thing.
        
           | pk-protect-ai wrote:
           | Define the "bad actors". Is my example with Sam Altman above
           | can be seen as a valid one?
        
         | jcutrell wrote:
          | I think this may be a little short-sighted.
         | 
         | AI "systems" are provided some level of agency by their very
         | nature. That is, for example, you cannot predict the outcomes
         | of certain learning models.
         | 
         | We _necessarily_ provide agency to AI because that's the whole
         | point! As we develop more advanced AI, it will have more
         | agency. It is an extension of the just world fallacy, IMO, to
         | say that AI is "just a tool" - we lend agency and allow the
         | tool to train on real world (flawed) data.
         | 
         | Hallucinations are a great example of this in an LLM. We want
         | the machine to have agency to cite its sources... but we also
         | create potential for absolute nonsense citations, which can be
         | harmful in and of themselves, though the human on the using
         | side may have perfectly positive intent.
        
         | adventured wrote:
         | > Bad acting humans with AI systems are the threat, not the AI
         | systems themselves.
         | 
         | It's worth noting this is exactly the same argument used by
         | pro-gun advocates as it pertains to gun rights. It's identical
         | to: guns don't harm/kill people, people harm/kill people (the
         | gun isn't doing anything until the bad actor aims and pulls the
         | trigger; bad acting humans with guns are the real problem;
         | etc).
         | 
         | It isn't an effective argument and is very widely mocked by the
         | political left. I doubt it will work to shield the AI sector
         | from aggressive regulation.
        
           | ndriscoll wrote:
           | It is an effective argument though, and the left is widely
           | mocked by the right for simultaneously believing that only
           | government should have the necessary tools for violence, and
           | also ACAB.
           | 
           | Assuming ML systems are dangerous and powerful, would you
           | rather they be restricted to a small group of power-holders
            | who will _definitely_ use them to your detriment/to control
            | you (they already do), or democratize that power and take a
           | chance that someone _may_ use them against you?
        
             | dontlaugh wrote:
             | Communists and anarchists understand that the working class
             | needs to defend itself from both the capitalist state and
             | from fascist paramilitaries, thus must be collectively
             | armed.
             | 
             | It's only a kind of liberal (and thus right wing) that
             | argues for gun control. Other kinds of liberals that call
             | themselves "conservative" (also right wing) argue against
             | it and for (worthless) individual gun rights.
        
           | Dalewyn wrote:
           | By that logic:
           | 
           | Are we going to ban and regulate Photoshop and GIMP because
           | bad people use them to create false imagery for propaganda?
           | 
           | Actually, back that up for a second.
           | 
           | Are we going to ban and regulate computers (enterprise and
           | personal) because bad people use them for bad things?
           | 
           | Are we going to ban and regulate speech because bad people
           | say bad things?
           | 
           | Are we going to ban and regulate hands because bad people use
           | them to do bad things?
           | 
           | The buck always starts and stops at the person doing the act.
           | A tool is just a tool, blaming the tool is nothing but an act
           | of scapegoating.
        
           | a_wild_dandan wrote:
           | This argument pertains to _every_ tool: guns, kitchen knives,
            | cars, the anarchist cookbook, etc. You aren't against the
           | argument. You're against how it's used. (Hmm...)
        
         | bumby wrote:
         | > _Bad acting humans with AI systems are the threat_
         | 
         | Does this mean "humans with bad motives" or does it extend to
         | "humans who deploy AI without an understanding of the risk"?
         | 
         | I would say the latter warrants a discussion on the AI systems,
         | if they make it hard to understand the risk due to opaqueness.
        
         | m463 wrote:
         | It reminds me of dog breeds.
         | 
          | Some dogs get bad reputations, but humans are an integral part
          | of the picture. For example, German Shepherds are objectively
          | dangerous, but have a good reputation because they are trained
          | and cared for by responsible people, such as the police.
        
         | kurthr wrote:
         | The disturbing thing to consider is that it might be bad acting
         | AI with human systems. I can easily see a situation where a bad
         | acting algorithm alone wouldn't have nearly so negative an
         | effect, if it weren't tuned precisely and persuasively to get
         | more humans to do the work of increasing the global suffering
         | of others for temporary individual gain.
         | 
         | To be clear, I'm not sure LLMs and their near term derivatives
         | are so incredibly clever, but I have confidence that many
         | humans have a propensity for easily manipulated irrational
         | destructive stupidity, if the algorithm feeds them what they
         | want to hear.
        
       | pmarreck wrote:
        | Well, it's a good thing we have easily procured open-source LLMs
        | (including uncensored ones) out now, so that everyone can play
        | and we can quickly find out that these FUD tactics were nonsense!
       | 
       | https://ollama.ai/
       | 
       | https://ollama.ai/library
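        | 
        | For instance, a minimal sketch (Python) of querying a locally
        | served model through Ollama's HTTP API -- assuming the Ollama
        | server is running on its default port and the llama2 model has
        | already been pulled with `ollama pull llama2`:
        |                       # Talk to a local Ollama server; no cloud
        |                       # API or corporate gatekeeper involved.
        |                       import requests
        |                       resp = requests.post(
        |                           "http://localhost:11434/api/generate",
        |                           json={
        |                               "model": "llama2",
        |                               "prompt": "Why is the sky blue?",
        |                               "stream": False,  # one JSON reply
        |                           },
        |                       )
        |                       print(resp.json()["response"])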
        
       | SirMaster wrote:
        | Well, it's better than the opposite, though, right?
        | 
        | What if they were lying about there being no or low danger when
        | there really was a high danger?
        
       | toasted-subs wrote:
       | I got fired from Google because somebody was tracking and
       | harassing me within the city of Mountain View.
       | 
        | If we are going to worry about AIs, let's identify individuals
        | who aren't representing the government and are causing societal
        | issues.
        
       | TheCaptain4815 wrote:
        | Kind of an interesting point, because the US government has an
        | incentive to regulate this field and try to push more gains
        | towards big tech (mostly American) instead of open source.
        
       | specialist wrote:
       | One nuclear bomb can ruin your whole day.
       | 
       | This feels like the Cold War's nuclear arms treaty and policy
       | debates. How many nukes are too many? 100? 10,000?
       | 
        | The people pearl-clutching about AI are focused on the wrong
       | problem.
       | 
       | The threat (to humanity) is corporations. AI is just their force
       | multiplier.
       | 
       | h/t Ted Chiang. I subscribe to his views on this stuff. More or
       | less.
        
       | more_corn wrote:
        | Yes... but. "Lying" is the wrong way to frame it; "using the real
        | risk to distract" would be better. I'm concerned, and my concern
        | is not a lie. Terminator was a concern, and that predated any
        | effort to capture the industry.
       | 
        | Also, for those who think Skynet is an example of a "hysterical
        | satanic cult" scare: there are active efforts to use AI for the
        | inhumanly large task of managing battlefield resources. We are
        | literally training AI to kill, and it's going to be better than
        | us basically instantly.
       | 
       | We 100% should NOT be doing that. Calling that very real concern
       | a lie is a dangerous bit of hyperbole.
        
       ___________________________________________________________________
       (page generated 2023-10-30 23:00 UTC)