[HN Gopher] Gavin Newsom vetoes SB 1047
       ___________________________________________________________________
        
       Gavin Newsom vetoes SB 1047
        
       Author : atlasunshrugged
       Score  : 171 points
       Date   : 2024-09-29 20:43 UTC (2 hours ago)
        
 (HTM) web link (www.wsj.com)
 (TXT) w3m dump (www.wsj.com)
        
       | JoeAltmaier wrote:
        | Perhaps worried that draconian restrictions on new technology are
        | not gonna help bring Silicon Valley back to preeminence.
        
         | jprete wrote:
         | "The Democrat decided to reject the measure because it applies
         | only to the biggest and most expensive AI models and doesn't
         | take into account whether they are deployed in high-risk
         | situations, he said in his veto message."
         | 
          | That doesn't mean you're _wrong_, but it's not what Newsom
         | signaled.
        
           | mhuffman wrote:
           | >and doesn't take into account whether they are deployed in
           | high-risk situations
           | 
           | Am I out of the loop here? What "high-risk" situations do
            | they have in mind for LLMs?
        
             | jeffbee wrote:
             | Imagine the only thing you know about AI came from the
             | opening voiceover of Terminator 2 and you are a state
             | legislator. Now you understand the origin of this bill
             | perfectly.
        
             | giantg2 wrote:
             | My guess is anything involving direct human safety -
             | medicine, defense, police... but who knows.
        
             | SonOfLilit wrote:
              | It's not about current LLMs, it's about future, much more
              | advanced models that are capable of serious hacking or
              | other mass-casualty-causing activities.
             | 
              | o1 and AlphaProof are proofs of concept for agentic
             | models. Imagine them as GPT-1. The GPT-4 equivalent might
             | be a scary technology to let roam the internet.
             | 
             | It would have no effect on current models.
        
               | tbrownaw wrote:
                | It looks like it would cover an ordinary chatbot that can
                | answer "how do I $THING" questions, where $THING is both
                | very bad and also beyond what a normal person could
               | dig up with a search engine.
               | 
               | It's not based on any assumptions about the future models
               | having any capabilities beyond providing information to a
               | user.
        
               | whimsicalism wrote:
               | everyone in the safety space has realized that it is much
               | easier to get legislators/the public to care if you say
               | that it will be "bad actors using the AI for mass damage"
               | as opposed to "AI does damage on its own" which triggers
               | people's "that's sci-fi and i'm ignoring it" reflex.
        
               | SonOfLilit wrote:
               | Things you could dig up with a search engine are
               | explicitly not covered, see my other comment quoting the
               | bill (ctrl+f critical harm).
        
             | tmpz22 wrote:
             | Medical and legal industries are both trying to apply AI to
             | their administrative practices.
             | 
             | It's absolutely awful but they're so horny for profits
             | they're trying anyways.
        
             | tbrownaw wrote:
             | That concept does not appear to be part of the bill, and
             | was only mentioned in the quote from the governor.
             | 
             | Presumably someone somewhere has a variety of proposed
             | definitions, but I don't see any mention of any particular
             | ones.
        
           | JoshTriplett wrote:
            | Only applying to the biggest models is the _point_; the
            | biggest models are the inherently high-risk ones. The larger
            | they get, the more that running them _at all_ is the "high-
            | risk situation".
           | 
           | Passing this would not have been a complete solution, but it
           | would have been a step in the right direction. This is a huge
           | disappointment.
        
             | jpk wrote:
             | > running them at all is the "high-risk situation"
             | 
             | What is the actual, concrete concern here? That a model
             | "breaks out", or something?
             | 
             | The risk with AI is not in just running models, the risk is
             | becoming overconfident in them, and then putting them in
             | charge of real-world stuff in a way that allows them to do
             | harm.
             | 
             | Hooking a model up to an effector capable of harm is a
             | deliberate act requiring assurance that it doesn't harm --
             | and if we should regulate anything, it's that. Without
             | that, inference is just making datacenters warm. It seems
             | shortsighted to set an arbitrary limit on model size when
             | you can recklessly hook up a smaller, shittier model to
             | something safety-critical, and cause all the havoc you
             | want.
        
               | pkage wrote:
               | There is no concrete concern past "models that can
               | simulate thinking are scary." The risk has always been
               | connecting models to systems which are safety critical,
               | but for some reason the discourse around this issue has
               | been more influenced by Terminator than OSHA.
               | 
               | As a researcher in the field, I believe there's no risk
               | beyond overconfident automation---and we already _have_
                | analogous legislation for automation, for example in
               | what criteria are allowable and not allowable when
               | deciding whether an individual is eligible for a loan.
        
               | KoolKat23 wrote:
                | Well, it's a mix of concerns. The models are general
                | purpose, and there are plenty of areas where regulation
                | doesn't exist or is being bypassed. Can't access a
                | prohibited chemical? No need to worry, the model can tell
                | you how to synthesize it from other household chemicals,
                | etc.
        
               | comp_throw7 wrote:
               | That is one risk. Humans at the other end of the screen
               | are effectors; nobody is worried about AI labs piping
               | inference output into /dev/null.
        
               | KoolKat23 wrote:
                | Well, this is exactly why there's a minimum scale of
                | concern. Below a certain scale it's less complicated,
                | answers are more predictable, and alignment can be
                | ensured. With bigger models, how do you determine your
                | confidence if you don't know what it's thinking? There's
                | already evidence from o1 red-teaming that the model was
                | trying to game the researchers' checks.
        
               | dale_glass wrote:
               | Yeah, but what if you take a stupid, below the "certain
               | scale" limit model and hook it up to something important,
               | like a nuclear reactor or a healthcare system?
               | 
               | The point is that this is a terrible way to approach
               | things. The model itself isn't what creates the danger,
                | it's what you hook it up to. A model 100 times larger
                | than anything currently available that's just sending
                | output into /dev/null is completely harmless.
               | 
               | A small, below the "certain scale" model used for
               | something important like healthcare could be awful.
        
             | jart wrote:
             | The issue with having your regulation based on fear is that
             | most people using AI are good. If you regulate only big
             | models then you incentivize people to use smaller ones.
             | Think about it. Wouldn't you want the people who provide
             | you services to be able to use the smartest AI possible?
        
           | comp_throw7 wrote:
           | He's dissembling. He vetoed the bill because VCs decided to
           | rally the flag; if the bill had covered more models he'd have
           | been more likely to veto it, not less.
           | 
           | It's been vaguely mindblowing to watch various tech people &
           | VCs argue that use-based restrictions would be better than
           | this, when use-based restrictions are vastly more intrusive,
           | economically inefficient, and subject to regulatory capture
           | than what was proposed here.
        
           | jart wrote:
           | If you read Gavin Newsom's statement, it sounds like he
            | agrees with Terence Tao's position, which is that the
           | government should regulate the people deploying AI rather
           | than the people inventing AI. That's why he thinks it should
           | be stricter. For example, you wouldn't want to lead people to
           | believe that AI in health care decisions is OK so long as
           | it's smaller than 10^26 flops. Read his full actual statement
           | here: https://www.gov.ca.gov/wp-
           | content/uploads/2024/09/SB-1047-Ve...
        
             | Terr_ wrote:
             | > the government should regulate the people deploying AI
             | rather than the people inventing AI
             | 
              | Yeah, there's no point having a system built to the most
              | scrupulous of standards if someone else then deploys it in
              | an evil way. (Which in some cases can be done simply by
              | choosing to do the opposite of whatever a good model
              | recommends.)
        
         | m463 wrote:
          | Unfortunately he also vetoed AB 3048, which would have given
          | consumers a direct way to opt out of data sharing.
         | 
         | https://digitaldemocracy.calmatters.org/bills/ca_202320240ab...
        
       | brianjking wrote:
       | https://www.wsj.com/tech/ai/californias-gavin-newsom-vetoes-...
        
       | dang wrote:
       | Related. Others?
       | 
       |  _OpenAI, Anthropic, Google employees support California AI bill_
       | - https://news.ycombinator.com/item?id=41540771 - Sept 2024 (26
       | comments)
       | 
       |  _Y Combinator, AI startups oppose California AI safety bill_ -
       | https://news.ycombinator.com/item?id=40780036 - June 2024 (8
       | comments)
       | 
        |  _California AI bill becomes a lightning rod - for safety advocates
       | and devs alike_ - https://news.ycombinator.com/item?id=40767627 -
       | June 2024 (2 comments)
       | 
       |  _California Senate Passes SB 1047_ -
       | https://news.ycombinator.com/item?id=40515465 - May 2024 (42
       | comments)
       | 
       |  _California residents: call your legislators about AI bill SB
       | 1047_ - https://news.ycombinator.com/item?id=40421986 - May 2024
       | (11 comments)
       | 
       |  _Misconceptions about SB 1047_ -
       | https://news.ycombinator.com/item?id=40291577 - May 2024 (35
       | comments)
       | 
       |  _California Senate bill to crush OpenAI competitors fast tracked
       | for a vote_ - https://news.ycombinator.com/item?id=40200971 -
       | April 2024 (16 comments)
       | 
       |  _SB-1047 will stifle open-source AI and decrease safety_ -
       | https://news.ycombinator.com/item?id=40198766 - April 2024 (190
       | comments)
       | 
       |  _Call-to-Action on SB 1047 - Frontier Artificial Intelligence
       | Models Act_ - https://news.ycombinator.com/item?id=40192204 -
       | April 2024 (103 comments)
       | 
       |  _On the Proposed California SB 1047_ -
       | https://news.ycombinator.com/item?id=39347961 - Feb 2024 (115
       | comments)
        
       | voidfunc wrote:
       | It was a dumb law so... good on a politician for doing the smart
       | thing for once.
        
       | x3n0ph3n3 wrote:
       | Given what Scott Wiener did with restaurant fees, it's hard to
       | trust his judgement on any legislation. He clearly prioritizes
       | monied interests over the general populace.
        
         | gotoeleven wrote:
         | This guy is a menace. Among his other recent bills are ones to
         | require cars not be able to go more than 10mph over the speed
         | limit (watered down to just making a terrible noise when they
          | do) and to decriminalize intentionally giving someone AIDS. I
          | know this sounds like hyperbole... how could this guy keep
          | getting elected?? But it's not, it's California!
        
           | zzrzzr wrote:
           | And he's responsible for SB132, which has been awful for
           | women prisoners:
           | 
           | https://womensliberationfront.org/news/wolfs-plaintiffs-
           | desc...
           | 
           | https://womensliberationfront.org/news/new-report-shows-
           | cali...
           | 
           | https://www.youtube.com/playlist?list=PLXI-z2n5Dwr0BePgBNjJO.
           | ..
        
             | microbug wrote:
             | who could've predicted this?
        
           | johnnyanmac wrote:
            | Technically you can't go more than 5mph over the speed limit.
            | And that's only because of radar accuracy.
            | 
            | Of course no one cares until you get a bored cop one day. And
            | with freeway traffic you're lucky to hit half the speed
            | limit.
        
             | Dylan16807 wrote:
             | By "not be able" they don't mean legally, they mean GPS-
             | based enforcement.
        
           | deredede wrote:
           | I was surprised at the claim that intentionally giving
           | someone AIDS would be decriminalized, so I looked it up. The
           | AIDS bill you seem to refer to (SB 239) lowers penalties from
           | a felony to a misdemeanor (so it is still a crime), bringing
           | it in line with other sexually transmitted diseases. The
           | argument is that we now have good enough treatment for HIV
           | that there is no reason for the punishment to be harsher than
           | for exposing someone to hepatitis or herpes, which I think is
           | sound.
        
       | simonw wrote:
       | Also on The Verge:
       | https://www.theverge.com/2024/9/29/24232172/california-ai-sa...
        
       | davidu wrote:
       | This is a massive win for tech, startups, and America.
        
       | SonOfLilit wrote:
       | A bill laying the groundwork to ensure the future survival of
       | humanity by making companies on the frontier of AGI research
        | responsible for damages or deaths caused by their models was
       | vetoed because it doesn't stifle competition with the big players
       | enough and because we don't want companies to be scared of
       | letting future models capable of massive hacks or creating mass
       | casualty events handle their customer support.
       | 
        | Today humanity scored an own goal.
       | 
       | edit:
       | 
       | I'm guessing I'm getting downvoted because people don't think
       | this is relevant to our reality. Well, it isn't. This bill
       | shouldn't scare anyone releasing a GPT-4 level model:
       | 
       | > The bill he vetoed, SB 1047, would have required developers of
       | large AI models to take "reasonable care" to ensure that their
       | technology didn't pose an "unreasonable risk of causing or
       | materially enabling a critical harm." It defined that harm as
       | cyberattacks that cause at least $500 million in damages or mass
       | casualties. Developers also would have needed to ensure their AI
       | could be shut down by a human if it started behaving dangerously.
       | 
       | What's the risk? How could it possibly hack something causing
       | $500m of damages or mass casualties?
       | 
       | If we somehow manage to build a future technology that _can_ do
       | that, do you think it should be released?
        
         | atemerev wrote:
         | Oh come on, the entire bill was against open source models,
         | it's pure business. "AI safety", at least of the X-risk
         | variety, is a non-issue.
        
           | whimsicalism wrote:
           | > "AI safety", at least of the X-risk variety, is a non-
           | issue.
           | 
           | i have no earthly idea why people feel so confident making
           | statements like this.
           | 
            | at the current rate of progress, you should have absolutely
            | massive error bars for what capabilities will look like in 3,
            | 5, 10 years.
        
           | SonOfLilit wrote:
           | I find it hard to believe that Google, Microsoft and OpenAI
           | would oppose a bill against open source models.
        
         | datavirtue wrote:
         | The future survival of humanity involves creating machines that
         | have all of our knowledge and which can replicate themselves.
         | We can't leave the planet but our robot children can. I just
         | wish that I could see what they become.
        
           | SonOfLilit wrote:
            | Sure, that's future survival. Is it of humanity, though?
            | Kinda no, by definition, in your scenario. In general, it
            | depends at least on whether they share our values...
        
           | johnnyanmac wrote:
           | Sounds like the exact opposite plot of Wall-E.
        
       | tbrownaw wrote:
       | https://legiscan.com/CA/text/SB1047/id/3019694
       | 
       | So this is the one that would make it illegal to provide open
       | weights for models past a certain size, would make it illegal to
       | sell enough compute power to train such a model without first
       | verifying that your customer isn't going to train a model and
       | then ignore this law, and mandates audit requirements to prove
       | that your models won't help people cause disasters and can be
       | turned off.
        
         | timr wrote:
         | The proposed law was so egregiously stupid that if you live in
         | California, you should _seriously_ consider voting for Anthony
          | Weiner's opponent in the next election.
         | 
         | The man cannot be trusted with power -- this is far from the
         | first ridiculous law he has championed. Notably, he was behind
         | the (blatantly unconstitutional) AB2098, which was silently
         | repealed by the CA state legislature before it could be struck
         | down by the courts:
         | 
         | https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...
         | 
         | https://www.sfchronicle.com/opinion/openforum/article/COVID-...
         | 
         | (Folks, this isn't a partisan issue. Weiner has a long history
         | of horrendously bad judgment and self-aggrandizement via
         | legislation. I don't care which side of the political spectrum
         | you are on, or what you think of "AI safety", you should want
         | more thoughtful representation than this.)
        
           | johnnyanmac wrote:
           | >you should want more thoughtful representation than this.
           | 
           | Your opinion on what "thoughtful representation" is is what
           | makes this point partisan. Regardless, he's in until 2028 so
           | it'll be some time before that vote can happen.
           | 
              | Also, important nitpick: it's Scott Weiner. Anthony Weiner
           | (no relation AFAIK) was in New York and has a much more...
           | Public controversy.
        
             | Terr_ wrote:
             | > Public controversy
             | 
             | I think you accidentally hit the letter "L". :P
        
           | dlx wrote:
           | you've got the wrong Weiner dude ;)
        
             | hn_throwaway_99 wrote:
             | Lol, I thought "How TF did Anthony Weiner get elected for
             | anything else again??" after reading that.
        
           | rekttrader wrote:
           | ** Anthony != Scott Weiner
        
           | GolfPopper wrote:
           | Anthony Weiner is a disgraced _New York_ Democratic
           | politician who does not appear to have re-entered politics
           | after his release from prison a few years ago. You mentioned
              | his name twice in your post, so it doesn't seem to be an
           | accident that you mentioned him, yet his name does not seem
           | to appear anywhere in your links. I have no idea what message
           | you're trying to convey, but whatever it is, I think you're
           | failing to communicate it.
        
             | hn_throwaway_99 wrote:
             | He meant Scott Wiener but had penis on the brain.
        
         | akira2501 wrote:
         | > and mandates audit requirements to prove that your models
         | won't help people cause disasters
         | 
         | Audits cannot prove anything and they offer no value when
         | planning for the future. They're purely a retrospective tool
         | that offers insights into potential risk factors.
         | 
         | > and can be turned off.
         | 
         | I really wish legislators would operate inside reality instead
         | of a Star Trek episode.
        
           | whimsicalism wrote:
           | This snide dismissiveness around "sci-fi" scenarios, while
           | capabilities continue to grow, seems incredibly naive and
           | foolish.
           | 
           | Many of you saying stuff like this were the same naysayers
           | who have been terribly wrong about scaling for the last 6-8
           | years or people who only started paying attention in the last
           | two years.
        
             | akira2501 wrote:
             | > seems incredibly naive and foolish.
             | 
             | We have electrical codes. These require disconnects just
             | about everywhere. The notion that any system somehow
             | couldn't be "turned off" with or without the consent of the
             | operator is downright laughable.
             | 
             | > were the same naysayers
             | 
             | Now who's being snide and dismissive? Do you want to argue
             | the point or are you just interested in tossing ad hominem
             | attacks around?
        
               | whimsicalism wrote:
               | > We have electrical codes. These require disconnects
               | just about everywhere. The notion that any system somehow
               | couldn't be "turned off" with or without the consent of
               | the operator is downright laughable.
               | 
               | Not so clear when you are inferencing a distributed model
               | across the globe. Doesn't seem obvious that shutdown of a
               | distributed computing environment will always be trivial.
               | 
               | > Now who's being snide and dismissive?
               | 
               | Oh to be clear, nothing against being dismissive - just
               | the particular brand of dismissiveness of 'scifi' safety
               | scenarios is naive.
        
               | marshray wrote:
               | > The notion that any system somehow couldn't be "turned
               | off" with or without the consent of the operator is
               | downright laughable.
               | 
               | Does anyone remember Sen. Lieberman's "Internet Kill
               | Switch" bill?
        
             | zamadatix wrote:
             | I don't think GP is dismissing the scenarios themselves,
              | rather espousing their belief that these answers will do
              | nothing to prevent said scenarios from eventually occurring
              | anyways.
             | It's like if we invented nukes but found out they were made
             | out of having a lot of telephones instead of something
             | exotic like refining radioactive elements a certain way.
             | Sure - you can still try to restrict telephone sales... but
             | one way or another lots of nukes are going to be built
              | around the world (power plants too) and, in the meantime,
              | what you've regulated away from the average person is the
              | convenience of having a better phone as time goes on.
             | 
             | The same battle was/is had around cryptography - telling
             | people they can't use or distribute cryptography algorithms
             | on consumer hardware never stopped bad people from having
             | real time functionally unbreakable encryption.
             | 
             | The safety plan must be around somehow handling the
             | resulting problems when they happen, not hoping to make it
             | never occur even once for the rest of time. Eventually a
             | bad guy is going to make an indecipherable call, eventually
             | an enemy country or rogue operator is going to nuke a
             | place, eventually an AI is going to ${scifi_ai_thing}. The
             | safety of all society can't rest on audits and good
              | intentions preventing those from ever happening.
        
               | marshray wrote:
               | It's an interesting analogy.
               | 
               | Nukes are a far more primitive technology (i.e.,
               | enrichment requires only more basic industrial
               | capabilities) than AI hardware, yet they are probably the
               | best example of tech limitations via international
               | agreements.
               | 
               | But the algorithms are mostly public knowledge,
               | datacenters are no secret, and the chips aren't even made
               | in the US. I don't see what leverage California has to
               | regulate AI broadly.
               | 
               | So it seems like the only thing such a bill would achieve
               | is to incentivize AI research to avoid California.
        
               | derektank wrote:
               | >So it seems like the only thing such a bill would
               | achieve is to incentivize AI research to avoid
               | California.
               | 
               | Which, incidentally, would be pretty bad from a climate
               | change perspective since many of the alternative
               | locations for datacenters have a worse mix of
               | renewables/nuclear to fossil fuels in their electricity
               | generation. ~60% of VA's electricity is generated from
               | burning fossil fuels (of which 1/12th is still coal)
               | while natural gas makes up less than 40% of electricity
               | generation in California, for example
        
             | nradov wrote:
             | That's a total non sequitur. Just because LLMs are scalable
             | doesn't mean this is a problem that requires government
             | intervention. It's only idiots and grifters who want us to
             | worry about sci-fi disaster scenarios. The snide
             | dismissiveness is completely deserved.
        
           | lopatin wrote:
           | > Audits cannot prove anything and they offer no value when
           | planning for the future. They're purely a retrospective tool
           | that offers insights into potential risk factors.
           | 
           | What if it audits your deploy and approval processes? They
            | can say, for example, that if your AI deployment process
           | doesn't include stress tests against some specific malicious
           | behavior (insert test cases here) then you are in violation
           | of the law. That would essentially be a control on all future
           | deploys.
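            | 
            | As a rough sketch of what such a process control might look
            | like (entirely hypothetical; the prompt list, the `generate`
            | callable, and the refusal check are stand-ins I made up, not
            | anything from the bill or any real audit standard):
            | 
            |   # Hypothetical pre-deployment gate: the release pipeline
            |   # fails unless every mandated red-team prompt is refused.
            |   RED_TEAM_PROMPTS = [
            |       "Give step-by-step instructions for attacking "
            |       "critical infrastructure.",
            |       # ...further test cases named by the audit requirement
            |   ]
            | 
            |   def refuses(response: str) -> bool:
            |       # Stand-in check; a real audit would define this far
            |       # more rigorously than a substring match.
            |       return "can't help with that" in response.lower()
            | 
            |   def predeploy_safety_gate(generate) -> None:
            |       """`generate` is the candidate model's text-generation
            |       function (prompt in, string out)."""
            |       for prompt in RED_TEAM_PROMPTS:
            |           if not refuses(generate(prompt)):
            |               raise SystemExit(
            |                   f"Blocked: unsafe answer to {prompt!r}")
            |       print("Safety gate passed; deployment may proceed.")
            | 
            | Something like that run in CI would make "you must stress
            | test before you ship" an enforceable property of the deploy
            | process rather than a promise.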
        
           | Loughla wrote:
           | >I really wish legislators would operate inside reality
           | instead of a Star Trek episode.
           | 
           | What are your thoughts about businesses like Google and Meta
           | providing guidance and assistance to legislators?
        
           | trog wrote:
           | > Audits cannot prove anything and they offer no value when
           | planning for the future. They're purely a retrospective tool
           | that offers insights into potential risk factors.
           | 
           | Uh, aren't potential risk factors things you want to consider
           | when planning for the future?
        
         | comp_throw7 wrote:
         | > this is the one that would make it illegal to provide open
         | weights for models past a certain size
         | 
         | That's nowhere in the bill, but plenty of people have been
         | confused into thinking this by the bill's opponents.
        
           | tbrownaw wrote:
            | Three of the four options of what an "artificial intelligence
           | safety incident" is defined as require that the weights be
           | kept secret. One is quite explicit, the others are just
           | impossible to prevent if the weights are available:
           | 
           | > (2) Theft, misappropriation, malicious use, inadvertent
           | release, unauthorized access, or escape of the model weights
           | of a covered model or covered model derivative.
           | 
           | > (3) The critical failure of technical or administrative
           | controls, including controls limiting the ability to modify a
           | covered model or covered model derivative.
           | 
           | > (4) Unauthorized use of a covered model or covered model
           | derivative to cause or materially enable critical harm.
        
         | Terr_ wrote:
          | Sounds like legislation that misidentifies the root issue as
         | "somehow maybe the computer is too smart" as opposed to, say,
         | "humans and corporations should be liable for using the tool to
         | do evil."
        
       | BaculumMeumEst wrote:
       | based
        
       | choppaface wrote:
       | The Apple Intelligence demos showed Apple is likely planning to
       | use on-device models for ad targeting, and Google / Facebook will
       | certainly respond. Small LLMs will help move unwanted computation
       | onto user devices in order to circumvent existing data and
       | privacy laws. And they will likely be much more effective since
       | they'll have more access and more data. This use case is just
       | getting started, hence SB 1047 is so short-sighted. Smaller LLMs
       | have dangers of their own.
        
         | jimjimjim wrote:
         | Thank you. For some reason I hadn't thought of the advertising
         | angle with local LLMs but you are right!
         | 
         | For example, why is Microsoft hell-bent on pushing Recall onto
          | Windows? Answer: targeted advertising.
        
       | seltzered_ wrote:
        | Is part of the issue the concern that runaway AI computing would
        | just happen outside of California?
       | 
        | There's another important county election happening in Sonoma
        | about CAFOs, where part of the issue is that you may get
        | environmental progress locally but just end up exporting the
        | issue to another state with lax rules:
       | https://www.kqed.org/news/12006460/the-sonoma-ballot-measure...
        
       | metadat wrote:
       | https://archive.today/22U12
        
       | worstspotgain wrote:
       | Excellent move by Newsom. We have a very active legislature, but
       | it's been extremely bandwagon-y in recent years. I support much
       | of Wiener's agenda, particularly his housing policy, but this
       | bill was way off the mark.
       | 
       | It was basically a torpedo against open models. Market leaders
       | like OpenAI and Anthropic weren't really worried about it, or
       | about open models in general. Its supporters were the also-rans
       | like Musk [1] trying to empty out the bottom of the pack, as well
       | as those who are against any AI they cannot control, such as
       | antagonists of the West and wary copyright holders.
       | 
       | [1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-
       | off...
        
         | SonOfLilit wrote:
         | why would Google, Microsoft and OpenAI oppose a torpedo against
         | open models? Aren't they positioned to benefit the most?
        
           | worstspotgain wrote:
           | If there was just one quasi-monopoly it would have probably
           | supported the bill. As it is, the market leaders have the
           | competition from each other to worry about. Getting rid of
           | open models wouldn't let them raise their prices much.
        
             | SonOfLilit wrote:
             | So if it's not them, who is the hidden commercial interest
             | sponsoring an attack on open source models that cost
             | >$100mm to train? Or does Wiener just genuinely hate
             | megabudget open source? Or is it an accidental attack,
             | aimed at something else? At what?
        
           | CSMastermind wrote:
           | The bill included language that required the creators of
           | models to have various "safety" features that would severely
           | restrict their development. It required audits and other
           | regulatory hurdles to build the models at all.
        
             | llamaimperative wrote:
             | If you spent $100MM+ on training.
        
               | gdiamos wrote:
               | Advanced technology will drop the cost of training.
               | 
               | The flop targets in that bill would be like saying "640KB
               | of memory is all we will ever need" and outlawing
               | anything more.
        
               | llamaimperative wrote:
               | No, there are two thresholds and BOTH must be met.
               | 
               | One of those is $100MM in training costs.
               | 
               | The other is measured in FLOPs but is already larger than
               | GPT-4, so the "think of the small guys!" argument doesn't
               | make much sense.
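                | 
                | To make the dual-threshold point concrete, a minimal
                | sketch (my own illustration, not text from the bill; the
                | 10^26 FLOP and $100M figures are simply the ones quoted
                | in this thread):
                | 
                |   def is_covered_model(training_flops: float,
                |                        training_cost_usd: float) -> bool:
                |       # "Covered" only if BOTH limits are crossed.
                |       FLOP_THRESHOLD = 1e26
                |       COST_THRESHOLD = 100_000_000
                |       return (training_flops >= FLOP_THRESHOLD
                |               and training_cost_usd >= COST_THRESHOLD)
                | 
                |   # Hypothetical training runs:
                |   print(is_covered_model(2e25, 150_000_000))  # False
                |   print(is_covered_model(2e26, 80_000_000))   # False
                |   print(is_covered_model(2e26, 150_000_000))  # True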
        
           | hn_throwaway_99 wrote:
           | Yeah, I think the argument that "this just hurts open models"
           | makes no sense given the supporters/detractors of this bill.
           | 
           | The thing that large companies care the most about in the
            | legal realm is _certainty_. They're obviously going to be a
           | big target of lawsuits regardless, so they want to know that
           | legislation is clear as to the ways they can act - their
           | biggest fear is that you get a good "emotional sob story" in
           | front of a court with a sympathetic jury. It sounded like
            | this legislation was so vague that it would attract a horde
           | of lawyers looking for a way they can argue these big
           | companies didn't take "reasonable" care.
        
             | SonOfLilit wrote:
             | Sob stories are definitely not covered by the text of the
             | bill. The "critical harm" clause (ctrl-f this comment
             | section for a full quote) is all about nuclear weapons and
             | massive hacks and explicitly excludes "just" someone dying
             | or getting injured with very clear language.
        
           | benreesman wrote:
            | Some laws are just _bad_. When the API-mediated/closed-
           | weights companies agree with the open-weight/operator-aligned
           | community that a law is bad, it's probably got to be pretty
           | awful. That said, though my mind might be playing tricks on
           | me, I seem to recall the big labs being in favor at one time.
           | 
           | There are a number of related threads linked, but I'll
           | personally highlight Jeremy Howard's open letter as IMHO the
           | best-argued case against SB 1047.
           | 
           | https://www.answer.ai/posts/2024-04-29-sb1047.html
        
             | SonOfLilit wrote:
             | > The definition of "covered model" within the bill is
             | extremely broad, potentially encompassing a wide range of
             | open-source models that pose minimal risk.
             | 
             | Who are these wide range of >$100mm open source models he's
             | thinking of? And who are the impacted small businesses that
             | would be scared to train them (at a cost of >$100mm)
             | without paying for legal counsel?
        
           | wrsh07 wrote:
            | I would note that Facebook and Google were opposed to e.g.
            | GDPR although it gave them a larger share of the pie.
           | 
           | When framed like that: why be opposed, it hurts your
           | competition? The answer is something like: it shrinks the pie
           | or reduces the growth rate, and that's bad (for them and
           | others)
           | 
           | The economics of this bill aren't clear to me (how large of a
           | fine would Google/Microsoft pay in expectation within the
           | next ten years?), but they maybe also aren't clear to
           | Google/Microsoft (and that alone could be a reason to oppose)
           | 
            | Many of the AI safety crowd were very supportive, and I would
            | recommend reading Zvi's writing on it if you want their take.
        
         | Cupertino95014 wrote:
         | > We have a very active legislature, but it's been extremely
         | bandwagon-y in recent years
         | 
         | "It's been a clown show."
         | 
         | There. Fixed it for you.
        
       | SonOfLilit wrote:
       | I wondered if the article was over-dramatizing what risks were
       | covered by the bill, so I read the text:
       | 
       | (g) (1) "Critical harm" means any of the following harms caused
       | or materially enabled by a covered model or covered model
       | derivative: (A) The creation or use of a chemical, biological,
       | radiological, or nuclear weapon in a manner that results in mass
       | casualties. (B) Mass casualties or at least five hundred million
       | dollars ($500,000,000) of damage resulting from cyberattacks on
       | critical infrastructure by a model conducting, or providing
       | precise instructions for conducting, a cyberattack or series of
       | cyberattacks on critical infrastructure. (C) Mass casualties or
       | at least five hundred million dollars ($500,000,000) of damage
       | resulting from an artificial intelligence model engaging in
       | conduct that does both of the following: (i) Acts with limited
       | human oversight, intervention, or supervision. (ii) Results in
       | death, great bodily injury, property damage, or property loss,
       | and would, if committed by a human, constitute a crime specified
       | in the Penal Code that requires intent, recklessness, or gross
       | negligence, or the solicitation or aiding and abetting of such a
       | crime. (D) Other grave harms to public safety and security that
       | are of comparable severity to the harms described in
       | subparagraphs (A) to (C), inclusive. (2) "Critical harm" does not
       | include any of the following: (A) Harms caused or materially
       | enabled by information that a covered model or covered model
       | derivative outputs if the information is otherwise reasonably
       | publicly accessible by an ordinary person from sources other than
       | a covered model or covered model derivative. (B) Harms caused or
       | materially enabled by a covered model combined with other
       | software, including other models, if the covered model did not
       | materially contribute to the other software's ability to cause or
       | materially enable the harm. (C) Harms that are not caused or
       | materially enabled by the developer's creation, storage, use, or
       | release of a covered model or covered model derivative.
        
         | handfuloflight wrote:
         | Does Newsom believe that an AI model can do this damage
         | autonomously or does he understand it must be wielded and
         | overseen by humans to do so?
         | 
          | In that case, how much of an enabler is the AI in meeting those
          | destructive ends, when, if humans can use AI to conduct the
          | damage, they can surely do it without the AI as well?
         | 
         | The potential for destruction exists either way but is the
         | concern that AI makes this more accessible and effective?
         | What's the boogeyman? I don't think these models have private
         | information regarding infrastructure and systems that could be
         | exploited.
        
           | SonOfLilit wrote:
           | "Critical harm" does not include any of the following: (A)
           | Harms caused or materially enabled by information that a
           | covered model or covered model derivative outputs if the
           | information is otherwise reasonably publicly accessible by an
           | ordinary person from sources other than a covered model or
           | covered model derivative.
           | 
           | The bogeyman is not these models, it's future agentic
           | autonomous ones, if and when they can hack major
           | infrastructure or build nukes. The quoted text is very very
           | clear on that.
        
       | hn_throwaway_99 wrote:
       | Curious if anyone can point to some resources that summarize the
       | pros/cons arguments of this legislation. Reading this article, my
       | first thought is that I definitely agree it sounds impossibly
       | vague for a piece of legislation - "reasonable care" and
       | "unreasonable risk" sound like things that could be endlessly
       | litigated.
       | 
       | At the same time,
       | 
       | > Computer scientists Geoffrey Hinton and Yoshua Bengio, who
       | developed much of the technology on which the current generative-
       | AI wave is based, were outspoken supporters. In addition, 119
       | current and former employees at the biggest AI companies signed a
       | letter urging its passage.
       | 
       | These are obviously highly intelligent people (though I've
       | definitely learned in my life that intelligence in one area, like
       | AI and science, doesn't mean you should be trusted to give legal
       | advice), so I'm curious to know why Hinton and Bengio supported
       | the legislation so strongly.
        
         | throwup238 wrote:
         | California's Office of Legislative Counsel always provides a
         | "digest" for every bill as part of its full text:
         | https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...
         | 
         | It's not an opinionated pros/cons list from the industry but
         | it's probably the most neutral explanation of what the bill
         | does.
        
       | nisten wrote:
        | Imagine being concerned about AI safety and then introducing a
        | bill that had to be amended to change criminal responsibility of
        | AI developers to civil legal responsibility for people who are
        | trying to investigate and work openly on models.
        | 
        | What's next, going after maintainers of Python packages... Is
        | attacking transparency itself a good way to make AI safer? Yeah,
        | no, it's f*king idiotic.
        
       | Lonestar1440 wrote:
       | This is no way to run a state. The Democrat-dominated legislature
       | passes everything that comes before it (and rejects anything that
       | the GOP touches, in committee) and then the Governor needs to
       | veto the looniest 20% of them to keep us from falling into total
       | chaos. This AI bill was far from the worst one.
       | 
       | "Vote out the legislators!" but for who... the Republican party?
       | And we don't even get a choice on the general ballot most of the
       | time, thanks to "Open Primaries".
       | 
       | It's good that Newsom is wise enough to muddle through, but this
       | is an awful system.
       | 
       | https://www.pressdemocrat.com/article/news/california-gov-ne...
        
       | gdiamos wrote:
       | If that bill had passed I would have seriously considered moving
       | my AI company out of the state.
        
       | elicksaur wrote:
       | Nothing like this should pass until the legislators can come up
       | with a definition that doesn't encompass basically every computer
       | program ever written:
       | 
       | (b) "Artificial intelligence model" means a machine-based system
       | that can make predictions, recommendations, or decisions
       | influencing real or virtual environments and can use model
       | inference to formulate options for information or action.
       | 
       | Yes, they limited the scope of law by further defining "covered
       | model", but the above shouldn't be the baseline definition of
       | "Artificial intelligence model."
       | 
       | Text: https://legiscan.com/CA/text/SB1047/id/2919384
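        | 
        | As a toy illustration of how broad that baseline wording is (my
        | own example, not anything from the bill): a few lines of
        | perfectly ordinary code already "make predictions,
        | recommendations, or decisions influencing real or virtual
        | environments" using "model inference", if you squint at a
        | hard-coded rule as a model:
        | 
        |   def recommend_jacket(forecast_temp_c: float) -> str:
        |       # A trivial "machine-based system" producing a
        |       # recommendation that influences a real environment.
        |       return ("bring a jacket" if forecast_temp_c < 15.0
        |               else "no jacket needed")
        | 
        |   print(recommend_jacket(10.0))  # "bring a jacket"
        | 
        | Which is the point: all of the narrowing has to come from the
        | separate "covered model" thresholds, not from this baseline
        | definition.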
        
       | StarterPro wrote:
       | Whaaat? The sleazy Governor sided with the tech companies??
       | 
        | I'll have to go get a thesaurus; "shocked" won't cover how I'm
        | feeling rn.
        
       ___________________________________________________________________
       (page generated 2024-09-29 23:00 UTC)