[HN Gopher] Call-to-Action on SB 1047 - Frontier Artificial Inte...
       ___________________________________________________________________
        
       Call-to-Action on SB 1047 - Frontier Artificial Intelligence Models
       Act
        
       Author : jph00
       Score  : 88 points
       Date   : 2024-04-28 21:34 UTC (1 hour ago)
        
 (HTM) web link (www.affuture.org)
 (TXT) w3m dump (www.affuture.org)
        
       | andy99 wrote:
       | I agree with this (the call to action, not the act) and will try
       | to respond and share it. But it's a lobby group, right ("Alliance
       | for the Future")? I'd like to know who is funding it and a bit
       | more about it.
        
         | stefan_ wrote:
         | It seems to be run by... a Twitter troll?
         | 
         | https://twitter.com/psychosort
        
           | artninja1988 wrote:
           | Why do you say he's a troll?
        
           | jph00 wrote:
           | What's with the ad-hominem? I can't see where you're getting
           | that from at all. The folks involved in this lobby group are
           | listed here:
           | 
           | https://www.affuture.org/about/
        
       | s1k3s wrote:
       | The article suggests that this act will effectively destroy any
       | open source AI initiative in California. After reading the act,
       | this seems to be the correct assumption. But, is Open Source AI
       | even a thing at this point?
       | 
       | By the way, this is how the EU does things and that's why we're
       | always behind on anything tech :)
        
         | OKRainbowKid wrote:
         | This is also why we have actually meaningful consumer
         | protections in place.
        
           | s1k3s wrote:
           | Can you give me an example of effective consumer protection?
        
             | noodlesUK wrote:
             | Yes, consumer rights in Europe are great for buying various
             | goods and services!
             | 
             | For example, the EU regs for flight delays are a great
             | example of consumer protection that is actually beneficial.
             | You get paid cash compensation (which often exceeds the
             | face value of the ticket) if you're delayed more than a
             | certain amount.
        
               | karaterobot wrote:
               | FYI, you are entitled to a refund in the U.S. if your
               | flight is delayed significantly. As of last week, it's 3
               | hours for domestic flights, or 6 hours for international.
               | The refund is automatic. It's been true for a long time
               | that people could get refunds for significant delays, but
               | how long counted as "significant" was never defined.
        
         | AnthonyMouse wrote:
         | > is Open Source AI even a thing at this point?
         | 
         | What do you mean? You can download llama.cpp or Stable
         | Diffusion and run it on your ordinary PC right now. People make
         | variants using LoRA adapters and things with relatively modest
         | resources. Even creating small specialized models from scratch
         | is not impossibly expensive and they often outperform larger
         | generalized models in the domain they're specialized for.
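         |
         | For a rough sense of what "relatively modest resources" means,
         | here's a minimal sketch of a LoRA fine-tune using the Hugging
         | Face peft library (the checkpoint name and hyperparameters are
         | just illustrative assumptions, not a recommendation):
         |
         |   # Only the small adapter matrices get trained, so this fits
         |   # on consumer hardware.
         |   from transformers import AutoModelForCausalLM
         |   from peft import LoraConfig, get_peft_model
         |
         |   base = AutoModelForCausalLM.from_pretrained(
         |       "meta-llama/Llama-2-7b-hf")  # illustrative base model
         |   config = LoraConfig(
         |       r=8, lora_alpha=16,
         |       target_modules=["q_proj", "v_proj"])
         |   model = get_peft_model(base, config)
         |   model.print_trainable_parameters()  # typically <1%
         |   # ...train with a standard Trainer loop, then:
         |   model.save_pretrained("my-adapter")  # adapter is a few MB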
         | 
         | Creating a large model like llama or grok takes a lot of
         | resources, but then it's entities with a lot of resources that
         | create them. Both of those models have open weights.
        
           | s1k3s wrote:
           | > (j) (1) "Developer" means a person that creates, owns, or
           | otherwise has responsibility for an artificial intelligence
           | model.
           | 
           | As long as you don't distribute the model and only use it
           | yourself, you don't fall under this definition (if I
           | understand correctly).
        
             | AnthonyMouse wrote:
             | Open source implies that you _are_ distributing it.
        
               | s1k3s wrote:
               | Your comment implies you're not distributing it; you're
               | using something that was distributed to you [in this case
               | by Meta].
               |
               | If you distributed a modified model, you would fall under
               | the restrictions of this bill. Which is why I asked the
               | original question: are people even doing this?
        
               | AnthonyMouse wrote:
               | But then Meta is distributing it. And if you modify it in
               | a way that others may find useful, you might also like to
               | distribute your modifications.
        
       | johnea wrote:
       | Thanks for the heads up!
       | 
       | I'll take this opportunity to write my California state senator
       | and the bill's author, to support the bill.
       | 
       | With the extreme centralization of compute resources into massive
       | multi-billion $ companies, caused by both "cloud" and the tech
       | erroneously referred to as some kind of intelligence, I'm in
       | favor of choke-chaining both of them.
        
         | choilive wrote:
         | I can't tell if this is satire. If so, good one. If not, you
         | should at least think about what the second-order effects
         | would be here.
        
       | pcthrowaway wrote:
       | This bill sounds unbelievably stupid. If passed, it will just
       | result in a migration of AI projects out of California, save a
       | few which are already tied to the EA movement.
       | 
       | I'm not under the impression that the EA movement is better
       | suited to steward AI development than other groups, but even
       | assuming they were, there is no chance for an initiative like
       | this to work unless every country agrees to it and follows it.
        
         | echelon wrote:
         | Honestly it would be good for AI if it left California.
         | 
         | California has too much regulatory burden and taxation.
        
           | kbenson wrote:
           | My initial interpretation of this is along the lines of "this
           | very contentious thing that many people are afraid will cause
           | lots of problems if not handled carefully with checks and
           | balances should move out of the place where it's currently
           | being done, which cares a lot about checks and balances and
           | puts them in place, because it has too many."
           | 
           | Am I jumping to conclusions and is there a different
           | interpretation you think I should be coming away with?
        
             | AnthonyMouse wrote:
             | That interpretation isn't necessarily unmeritorious.
             | Suppose you have a place with Level 7 checks and balances
             | and people are content to live under them. If you dial it
             | up to 9, then they move to a place at Level 2, which
             | otherwise wouldn't have been worth it because of other
             | trade-offs. So the new rules don't take you from Level 7 to
             | Level 9, they take you from Level 7 to Level 2.
             | 
             | But there is also another interpretation, which is that the
             | new thing is going to happen in whatever place has the
             | least stringent rules anyway, so more stringent rules don't
             | improve safety, they just deprive your jurisdiction of any
             | potential rewards from keeping the activity local, and
             | provide people in other jurisdictions the benefit of the
             | influx of people you're inducing to leave.
        
         | mquander wrote:
         | The bill doesn't give special treatment to "EA" models, so what
         | does it matter whether projects are tied to EA or whether EAs
         | are good stewards? Either it's a good law or it isn't.
         | 
         | At a glance it looks like it's not going to affect AI projects
         | that are basically consumers of existing models, which is most
         | projects.
        
       | _heimdall wrote:
       | Anyone have a link to a less biased explanation of the bill? I
       | can't take this one too seriously when it baselessly claims
       | people will be charged with thought crimes.
        
         | s1k3s wrote:
         | Why not read the bill itself?
         | https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...
         | 
         | It's not that big
        
           | thorum wrote:
           | The bill only applies to new models which meet these
           | criteria:
           | 
           | (1) The artificial intelligence model was trained using a
           | quantity of computing power greater than 10^26 integer or
           | floating-point operations.
           | 
           | (2) The artificial intelligence model was trained using a
           | quantity of computing power sufficiently large that it could
           | reasonably be expected to have similar or greater performance
           | as an artificial intelligence model trained using a quantity
           | of computing power greater than 10^26 integer or floating-
           | point operations in 2024 as assessed using benchmarks
           | commonly used to quantify the general performance of state-
           | of-the-art foundation models.
           | 
           | ...and have the following:
           | 
           | "Hazardous capability" means the capability of a covered
           | model to be used to enable any of the following harms in a
           | way that would be significantly more difficult to cause
           | without access to a covered model:
           | 
           | (A) The creation or use of a chemical, biological,
           | radiological, or nuclear weapon in a manner that results in
           | mass casualties.
           | 
           | (B) At least five hundred million dollars ($500,000,000) of
           | damage through cyberattacks on critical infrastructure via a
           | single incident or multiple related incidents.
           | 
           | (C) At least five hundred million dollars ($500,000,000) of
           | damage by an artificial intelligence model that autonomously
           | engages in conduct that would violate the Penal Code if
           | undertaken by a human.
           | 
           | (D) Other threats to public safety and security that are of
           | comparable severity to the harms described in paragraphs (A)
           | to (C), inclusive.
           | 
           | ...In which case the organization creating the model must
           | apply for one of these:
           | 
           | "Limited duty exemption" means an exemption, pursuant to
           | subdivision (a) or (c) of Section 22603, with respect to a
           | covered model that is not a derivative model that a developer
           | can reasonably exclude the possibility that a covered model
           | has a hazardous capability or may come close to possessing a
           | hazardous capability when accounting for a reasonable margin
           | for safety and the possibility of posttraining modifications.
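           |
           | For a rough sense of scale on the 10^26 threshold in (1): a
           | common back-of-the-envelope estimate of training compute for
           | dense transformers is ~6 * parameters * tokens. The constant
           | and the example numbers below are illustrative assumptions,
           | not figures from the bill.
           |
           |   # Rough FLOP estimate vs. the bill's 10^26 threshold.
           |   THRESHOLD = 1e26
           |
           |   def training_flops(params, tokens):
           |       # ~6 FLOPs per parameter per training token
           |       return 6 * params * tokens
           |
           |   # e.g. a 70B-parameter model trained on 15T tokens
           |   flops = training_flops(70e9, 15e12)
           |   print(f"{flops:.1e} covered={flops > THRESHOLD}")
           |   # -> 6.3e+24 covered=False (well under 10^26)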
        
             | purlane wrote:
             | This sounds entirely reasonable!
        
             | yellow_postit wrote:
             | Very similar to what the White House put out [1] in terms
             | of applicability being based on dual use & size. It is hard
             | not to see this as a push for regulatory capture,
             | specifically trying to chill open source development in
             | favor of some well-funded industry closed-source groups
             | which can adhere to these regulations.
             |
             | A harms-based approach, regardless of the model used, seems
             | more workable in practice.
             | 
             | [1] https://www.whitehouse.gov/briefing-room/presidential-
             | action...
        
               | s1k3s wrote:
               | This is exactly what it does. Small players get pushed
               | out because they can't afford the legal implications, and
               | big players get bigger because this means nothing to them.
        
             | jph00 wrote:
             | Pretty much all models, including today's models, already
             | fall foul of the "Hazardous capability" clause. These
             | models can be used to craft persuasive emails or blog
             | posts, analyse code for security problems, and so forth.
             | Whether such a thing is done as part of a process that
             | leads to lots of damage depends on the context, not on the
             | model.
             | 
             | So in practice, only the FLOPs criterion matters. Which
             | means only giant companies with well-funded legal
             | departments, or large states, can build these models,
             | increasing centralization and control, and making full
             | model access a scarce resource worth fighting over.
        
           | polski-g wrote:
           | Looks like it's trying to literally regulate speech. This
           | would be struck down per Bernstein v DOJ.
           | 
           | There is freedom of speech regardless of whether it's written
           | in English or C.
        
         | gedy wrote:
         | Briefly:
         | 
         | - Developers must assess whether their AI models have hazardous
         | capabilities before training them. They must also be capable of
         | promptly shutting down the model if safety concerns arise.
         | 
         | - Developers must annually certify compliance with safety
         | requirements. They must report any AI safety incidents to a
         | newly created Frontier Model Division within the Department of
         | Technology.
         | 
         | - Cluster Operation Regulation: Operators of computing
         | clusters must have policies to assess whether customers intend
         | to use the cluster for deploying AI models. Violations may
         | lead to civil penalties.
         | 
         | - A new division within the Department of Technology will
         | review developer certifications, release summarized findings,
         | and may assess related fees.
         | 
         | - The Department of Technology will establish a public cloud
         | computing cluster named CalCompute, focusing on safe and secure
         | deployment of large-scale AI models and promoting equitable
         | innovation.
        
       | baggy_trough wrote:
       | Wiener of course. He's literally a demon in human form.
        
       | throwing_away wrote:
       | Slow down there, California.
       | 
       | Florida is growing too fast as it is.
        
       | synapsomorphy wrote:
       | I don't think this bill would be that effective, but I do feel
       | that if we as a species don't do something drastic soon, we won't
       | be around for a whole lot longer.
       | 
       | And I'm not sure it's even possible to do something drastic
       | enough at this point - regulating datacenters would just push
       | companies to other countries, much as this bill would probably
       | push companies out of CA.
        
         | squigz wrote:
         | > we won't be around for a whole lot longer.
         | 
         | Why do you think that?
         | 
         | > regulating datacenters would just make companies move to
         | other countries
         | 
         | To say nothing of the potential issues for a free society
         | that going down this route will yield - and has arguably
         | already yielded.
        
       | Imnimo wrote:
       | >(2) "Hazardous capability" includes a capability described in
       | paragraph (1) even if the hazardous capability would not manifest
       | but for fine tuning and posttraining modifications performed by
       | third-party experts intending to demonstrate those abilities.
       | 
       | So if I hand-write instructions to make a chemical weapon, and
       | aggressively "fine-tune" Llama 7B to output those instructions
       | verbatim regardless of input, Meta is liable for releasing a
       | model with hazardous capabilities?
        
         | thorum wrote:
         | The text says "in a way that would be significantly more
         | difficult to cause without access to a covered model" and in
         | another place mentions "damage by an artificial intelligence
         | model that autonomously engages in conduct that would violate
         | the Penal Code if undertaken by a human" so that _probably_
         | doesn't count. Though it might be open to future
         | misinterpretation.
        
           | Imnimo wrote:
           | I don't agree with that reading. As long as my custom
           | chemical weapon instructions are not publicly available
           | otherwise, then it is surely more difficult to build the
           | weapon without access to the instructions.
           | 
           | The line about autonomous actions is only item C in the list
           | of possible harms. It is separate from item A which covers
           | chemical weapons and other similar acts.
        
             | simonh wrote:
             | If someone is so keen on doing something illegal that they
             | go to all that trouble specifically to get in trouble with
             | the law, maybe that's on them.
        
               | jph00 wrote:
               | You're missing the point. Liability here would also fall
               | on the open source developer who created a general
               | purpose model, which someone else then went on to fine-
               | tune and prompt to do something harmful.
        
       | nonplus wrote:
       | I guess I think we should hold models used for non-academic
       | reasons to a higher standard, and there should be oversight.
       | 
       | I don't know if all the language in this bill does what we need,
       | but I'm against letting large corporations like a META or X live
       | test whatever they want on their end users.
       | 
       | Calling out that derivative models are exempt sounds good; only
       | new training runs have to be subjected to this. I think there
       | should be an academic limited duty exemption; models that can't
       | be commercialized likely don't need the rigor of this law.
       | 
       | I guess I don't agree with affuture.org and think we need
       | legislation like this in place.
        
       | Animats wrote:
       | I just sent in some comments.
       | 
       | It's too late to stop "deep fakes". That technology is already in
       | Photoshop and even built into some cameras. Also, regulate that
       | and Hollywood special effects shops may have to move out of
       | state.
       | 
       | As for LLMs making it easier for people to build destructive
       | devices, Google can provide info about that. Or just read some
       | "prepper" books and magazines. That ship sailed long ago.
       | 
       | Real threats are mostly about how much decision power companies
       | delegate to AIs. Systems terminating accounts with no appeal are
       | already a serious problem. An EU-type requirement for appeals, a
       | requirement for warning notices, and the right to take such
       | disputes to court would help there. It's not the technology.
        
         | andy99 wrote:
         | > Systems terminating accounts with no appeal are already a
         | serious problem.
         | 
         | Right, there is no issue with how "smart" ML models will get or
         | whatever ignorant framing about intelligence and existential
         | risk gets made up by people who don't understand the
         | technology.
         | 
         | The real concern is dumb use of algorithmic decision-making
         | without recourse, which is just as valid whether it's an if
         | statement or a trillion-parameter LLM.
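         |
         | As a toy illustration (entirely hypothetical code, not
         | anything from the bill): the harm pattern is the missing
         | recourse, not how sophisticated the decision logic is.
         |
         |   # A no-appeal termination flow is the problem whether
         |   # decide() is a hand-written rule or a model call.
         |   def decide(account):
         |       # could just as easily be model.predict(account)
         |       return account["chargebacks"] > 2
         |
         |   def enforce(account):
         |       if decide(account):
         |           # no notice, no appeal path, no human review
         |           print(f"terminated {account['id']}")
         |
         |   enforce({"id": "u123", "chargebacks": 3})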
        
       | cscurmudgeon wrote:
       | > A developer of a covered model that provides commercial access
       | to that covered model shall provide a transparent, uniform,
       | publicly available price schedule for the purchase of access to
       | that covered model
       | 
       | Interesting: we don't have a transparent, uniform, publicly
       | available price schedule for healthcare and other basic needs
       | (electricity, e.g. see PG&E).
       | 
       | Something is fishy here.
        
       | protocolture wrote:
       | Question: what happens if I write a piece of software that is
       | harmful but doesn't have the AI label?
       |
       | It seems dumb to have a separate classification for harms caused
       | by trained AI models. The training aspect doesn't seem to limit
       | liability at all. A judge might rule differently, but that's why
       | the justice system is built the way it is: to make intelligent
       | decisions based on the specific facts of a case.
       |
       | I am betting that software that causes some significant harm is
       | already outlawed. So this whole thing is just a waste of time.
        
       | jph00 wrote:
       | I've written a submission to the authors of this bill, and made
       | it publicly available here:
       | 
       | https://www.answer.ai/posts/2024-04-29-sb1047.html
       | 
       | The EFF have also prepared a submission:
       | 
       | https://www.context.fund/policy/2024-03-26SB1047EFFSIA.pdf
       | 
       | A key issue with the bill is that it criminalises creating a
       | model that someone else uses to cause harm. But of course, it's
       | impossible to control what someone else does with your model --
       | regardless of how you train it, it can be fine-tuned, prompted,
       | etc by users for their own purposes. Even then, you can't really
       | know _why_ a model is doing something -- for instance, AI
       | security researchers Arvind Narayanan and Sayash Kapoor point
       | out:
       | 
       | > _Consider the concern that LLMs can help hackers generate and
       | send phishing emails to a large number of potential victims. It's
       | true -- in our own small-scale tests, we've found that LLMs can
       | generate persuasive phishing emails tailored to a particular
       | individual based on publicly available information about them.
       | But here's the problem: phishing emails are just regular emails!
       | There is nothing intrinsically malicious about them. A phishing
       | email might tell the recipient that there is an urgent deadline
       | for a project they are working on, and that they need to click on
       | a link or open an attachment to complete some action. What is
       | malicious is the content of the webpage or the attachment. But
       | the model that's being asked to generate the phishing email is
       | not given access to the content that is potentially malicious. So
       | the only way to make a model refuse to generate phishing emails
       | is to make it refuse to generate emails._
       | 
       | Nearly a year ago I warned that bills of this kind could hurt,
       | rather than help, safety, and could actually tear down the
       | foundations of the Enlightenment:
       | 
       | https://www.fast.ai/posts/2023-11-07-dislightenment.html
        
       | interroboink wrote:
       | I feel like the legal definition of "AI Model" is pretty
       | slippery.
       | 
       | From this document, they define:
       |
       | "Artificial intelligence model" means an engineered or
       | machine-based system that, for explicit or implicit objectives,
       | infers, from the input it receives, how to generate outputs that
       | can influence physical or virtual environments and that may
       | operate with varying levels of autonomy.
       | 
       | That's pretty dang broad. Doesn't it cover basically all
       | software? I'm not a lawyer, and I realize it's ultimately up to
       | judges to interpret, but it seems almost limitless. Seems like it
       | could cover a kitchen hand mixer too, as far as I can tell.
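       |
       | To make that concrete, here's a deliberately mundane sketch
       | (hypothetical code, not anything from the bill) that, read
       | literally, "infers, from the input it receives, how to generate
       | outputs that can influence physical or virtual environments":
       |
       |   # An ordinary thermostat rule arguably matches the quoted
       |   # definition word for word, which is the breadth concern.
       |   def thermostat(temp_c, target_c=21.0):
       |       if temp_c < target_c - 0.5:
       |           return "HEAT_ON"   # output affecting the physical world
       |       if temp_c > target_c + 0.5:
       |           return "HEAT_OFF"
       |       return "HOLD"
       |
       |   print(thermostat(18.0))  # -> HEAT_ON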
        
       ___________________________________________________________________
       (page generated 2024-04-28 23:00 UTC)