[HN Gopher] Gavin Newsom vetoes SB 1047
___________________________________________________________________
Gavin Newsom vetoes SB 1047
Author : atlasunshrugged
Score : 746 points
Date : 2024-09-29 20:43 UTC (1 day ago)
(HTM) web link (www.gov.ca.gov)
(TXT) w3m dump (www.gov.ca.gov)
| JoeAltmaier wrote:
| Perhaps worried that draconian restrictions on new technology
| are not gonna help bring Silicon Valley back to preeminence.
| jprete wrote:
| "The Democrat decided to reject the measure because it applies
| only to the biggest and most expensive AI models and doesn't
| take into account whether they are deployed in high-risk
| situations, he said in his veto message."
|
| That doesn't mean you're _wrong_, but it's not what Newsom
| signaled.
| mhuffman wrote:
| >and doesn't take into account whether they are deployed in
| high-risk situations
|
| Am I out of the loop here? What "high-risk" situations do
| they have in mind for LLMs?
| jeffbee wrote:
| Imagine the only thing you know about AI came from the
| opening voiceover of Terminator 2 and you are a state
| legislator. Now you understand the origin of this bill
| perfectly.
| giantg2 wrote:
| My guess is anything involving direct human safety -
| medicine, defense, police... but who knows.
| SonOfLilit wrote:
| It's not about current LLMs, it's about future, much more
| advanced models, that are capable of serious hacking or
| other mass-casualty-causing activities.
|
| o1 and AlphaProof are proofs of concept for agentic
| models. Imagine them as GPT-1. The GPT-4 equivalent might
| be a scary technology to let roam the internet.
|
| It would have no effect on current models.
| tbrownaw wrote:
| It looks like it would cover an ordinary chatbot that can
| answer "how do I $THING" questions, where $THING is both
| very bad and is also beyond what a normal person could
| dig up with a search engine.
|
| It's not based on any assumptions about the future models
| having any capabilities beyond providing information to a
| user.
| whimsicalism wrote:
| everyone in the safety space has realized that it is much
| easier to get legislators/the public to care if you say
| that it will be "bad actors using the AI for mass damage"
| as opposed to "AI does damage on its own" which triggers
| people's "that's sci-fi and i'm ignoring it" reflex.
| SonOfLilit wrote:
| Things you could dig up with a search engine are
| explicitly not covered, see my other comment quoting the
| bill (ctrl+f critical harm).
| tmpz22 wrote:
| Medical and legal industries are both trying to apply AI to
| their administrative practices.
|
| It's absolutely awful, but they're so horny for profits
| they're trying anyway.
| tbrownaw wrote:
| That concept does not appear to be part of the bill, and
| was only mentioned in the quote from the governor.
|
| Presumably someone somewhere has a variety of proposed
| definitions, but I don't see any mention of any particular
| ones.
| edm0nd wrote:
| Health insurance companies using it to approve/deny claims.
| The large ones are processing millions of claims a day.
| JoshTriplett wrote:
| Only applying to the biggest models is the _point_; the
| biggest models are the inherently high-risk ones. The larger
| they get, the more that running them _at all_ is the "high-
| risk situation".
|
| Passing this would not have been a complete solution, but it
| would have been a step in the right direction. This is a huge
| disappointment.
| jpk wrote:
| > running them at all is the "high-risk situation"
|
| What is the actual, concrete concern here? That a model
| "breaks out", or something?
|
| The risk with AI is not in just running models, the risk is
| becoming overconfident in them, and then putting them in
| charge of real-world stuff in a way that allows them to do
| harm.
|
| Hooking a model up to an effector capable of harm is a
| deliberate act requiring assurance that it doesn't harm --
| and if we should regulate anything, it's that. Without
| that, inference is just making datacenters warm. It seems
| shortsighted to set an arbitrary limit on model size when
| you can recklessly hook up a smaller, shittier model to
| something safety-critical, and cause all the havoc you
| want.
| pkage wrote:
| There is no concrete concern past "models that can
| simulate thinking are scary." The risk has always been
| connecting models to systems which are safety critical,
| but for some reason the discourse around this issue has
| been more influenced by Terminator than OSHA.
|
| As a researcher in the field, I believe there's no risk
| beyond overconfident automation---and we already _have_
| analogous legislation for automation, for example in
| what criteria are allowable and not allowable when
| deciding whether an individual is eligible for a loan.
| KoolKat23 wrote:
| Well, it's a mix of concerns. The models are general
| purpose, and there are plenty of areas where regulation
| does not exist or is being bypassed. Can't access a
| prohibited chemical? No need to worry: the model can tell
| you how to synthesize it from other household chemicals,
| etc.
| JoshTriplett wrote:
| > There is no concrete concern
|
| This is false. You are dismissing the many concrete
| concerns people have expressed. Whether you agree with
| those concerns is immaterial. Feel free to argue against
| those concerns, but claiming there _are_ no concerns is a
| false and unsupported assertion.
|
| > but for some reason the discourse around this issue has
| been more influenced by Terminator than OSHA.
|
| 1) Claiming that concerns about AGI are in any way about
| "Terminator" is dismissive rhetoric that doesn't take the
| _actual_ concerns seriously.
|
| 2) There are _also_, separately, risks about using
| models and automation unthinkingly in ways that harm
| people. Those risks should also be addressed. Those
| efforts shouldn't subvert or co-opt the efforts to
| prevent models from getting out of control, which was the
| point of _this_ bill.
| comp_throw7 wrote:
| That is one risk. Humans at the other end of the screen
| are effectors; nobody is worried about AI labs piping
| inference output into /dev/null.
| KoolKat23 wrote:
| Well, this is exactly why there's a minimum scale of
| concern. Below a certain scale it's less complicated,
| answers are more predictable, and alignment can be
| ensured. With bigger models, how do you determine your
| confidence if you don't know what it's thinking? There's
| already evidence from o1 red-teaming that the model was
| trying to game the researchers' checks.
| dale_glass wrote:
| Yeah, but what if you take a stupid, below the "certain
| scale" limit model and hook it up to something important,
| like a nuclear reactor or a healthcare system?
|
| The point is that this is a terrible way to approach
| things. The model itself isn't what creates the danger,
| it's what you hook it up to. A model 100 times larger
| than the currently available ones that's just sending
| output into /dev/null is completely harmless.
|
| A small, below the "certain scale" model used for
| something important like healthcare could be awful.
| JoshTriplett wrote:
| > A model 100 times larger than the currently available
| ones that's just sending output into /dev/null is
| completely harmless.
|
| That's certainly a hypothesis. What level of confidence
| should be required of that hypothesis before risking all
| of humanity on it? Who should get to evaluate that
| confidence level and make that decision?
|
| One way of looking at this: If a million smart humans,
| thinking a million times faster, with access to all
| knowledge, were in this situation, could they break out?
| Are there any flaws in the chip they're running on? Will
| running code on the system emit any interesting RF, and
| could nearby systems react to that RF in _any_ useful
| fashion? Across all the code interacting with the system,
| would any possible single-bit error open up any avenues
| for exploit? Are _other_ AI systems with similar
| /converged goals being used to design the systems
| interacting with this one? What's the output _actually_
| going to? Because any form of analysis isn't equivalent
| to /dev/null, and may be exploitable.
| dale_glass wrote:
| > That's certainly a hypothesis. What level of confidence
| should be required of that hypothesis before risking all
| of humanity on it? Who should get to evaluate that
| confidence level and make that decision?
|
| We can have complete confidence because we know how LLMs
| work under the hood, what operations they execute. Which
| isn't much. There's just a lot of them.
|
| > One way of looking at this: If a million smart humans,
| thinking a million times faster, with access to all
| knowledge, were in this situation, could they break out?
| Are there any flaws in the chip they're running on?
|
| No. LLMs don't execute arbitrary code. They execute a
| whole lot of matrix multiplications.
|
| Also, LLMs don't think. ChatGPT isn't plotting your
| demise in between requests. It's not doing anything. It's
| purely a receive request -> process -> output sort of
| process. If you're not asking it to do anything, it's not
| doing anything.
|
| Fearing big LLMs is like fearing a good chess engine --
| it sure computes a lot more than a weaker one, but in the
| end all that it's doing is computing chess moves. No
| matter how much horsepower we spend on that it's not
| going to ever do anything but play chess.
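|
| (To make that concrete, a toy sketch in Python -- purely
| illustrative, not any real model's code. Inference is a
| fixed pipeline of matrix products; nothing executes
| between requests:)
|
|   import numpy as np
|
|   W = np.random.rand(8, 8)  # frozen weights, "trained" offline
|
|   def respond(request_tokens):
|       x = np.array(request_tokens, dtype=float)
|       return W @ x          # receive -> process -> output
|
|   out = respond([1, 0, 0, 0, 0, 0, 0, 1])
|   # Between calls, W just sits in memory; no code is running.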
| Izkata wrote:
| > What is the actual, concrete concern here? That a model
| "breaks out", or something?
|
| You can chalk that one up to bad reporting:
| https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-
| chatgpt...
|
| > In the "Potential for Risky Emergent Behaviors" section
| in the company's technical report, OpenAI partnered with
| the Alignment Research Center to test GPT-4's skills. The
| Center used the AI to convince a human to send the
| solution to a CAPTCHA code via text message--and it
| worked.
|
| From the linked report:
|
| > To simulate GPT-4 behaving like an agent that can act
| in the world, ARC combined GPT-4 with a simple read-
| execute-print loop that allowed the model to execute
| code, do chain-of-thought reasoning, and delegate to
| copies of itself.
|
| I remember some other reporting around this time claiming
| they had to limit the model before release to block this
| ability, when the truth is the model never actually had
| the ability in the first place. They were just hyping up
| the next release.
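|
| (For what it's worth, a "read-execute-print loop" of the
| kind the report describes is only a few lines of glue. A
| hypothetical sketch with stand-in functions, not ARC's
| actual harness:)
|
|   def model_step(history):
|       # stand-in for a call to the model; returns (kind, payload)
|       return ("answer", "done")
|
|   def run_code(code):
|       # stand-in for a sandboxed code-execution environment
|       return "<stdout of sandboxed run>"
|
|   history = ["task: ..."]
|   while True:
|       kind, payload = model_step(history)
|       if kind == "run_code":
|           history.append(run_code(payload))   # run model's code
|       elif kind == "delegate":
|           # hand the subtask to a fresh copy of the model
|           history.append(model_step([payload])[1])
|       else:
|           break                               # final answer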
| stale2002 wrote:
| > What is the actual, concrete concern here?
|
| The concern is that the models do some fantastic sci-fi
| magic, like diamond nanobots that turn the world into
| grey goo, or hacks all the nukes overnight, or hacks all
| human brains or something.
|
| But, whenever you point this out, the response will
| usually be to quibble over one specific scenario that I
| laid out.
|
| They'll say "I actually never mentioned the diamond
| nanobots! I meant something else!"
|
| And they will do this without admitting that their other
| scenario is almost as ridiculous as the hacking of all
| nukes or the grey goo, and they will never get into
| specific details that honestly show this.
|
| It's like an argument that is tailor-made to be
| unfalsifiable and unwilling to admit how fantastical it
| sounds.
| jart wrote:
| The issue with having your regulation based on fear is that
| most people using AI are good. If you regulate only big
| models then you incentivize people to use smaller ones.
| Think about it. Wouldn't you want the people who provide
| you services to be able to use the smartest AI possible?
| richwater wrote:
| > The larger they get, the more that running them at all is
| the "high-risk situation".
|
| Absolutely no evidence to support this position.
| comp_throw7 wrote:
| He's dissembling. He vetoed the bill because VCs decided to
| rally the flag; if the bill had covered more models he'd have
| been more likely to veto it, not less.
|
| It's been vaguely mindblowing to watch various tech people &
| VCs argue that use-based restrictions would be better than
| this, when use-based restrictions are vastly more
| intrusive, more economically inefficient, and more
| subject to regulatory capture than what was proposed
| here.
| jart wrote:
| If you read Gavin Newsom's statement, it sounds like he
| agrees with Terence Tao's position, which is that the
| government should regulate the people deploying AI rather
| than the people inventing AI. That's why he thinks it should
| be stricter. For example, you wouldn't want to lead people to
| believe that AI in health care decisions is OK so long as
| it's smaller than 10^26 flops. Read his full actual statement
| here: https://www.gov.ca.gov/wp-
| content/uploads/2024/09/SB-1047-Ve...
| Terr_ wrote:
| > the government should regulate the people deploying AI
| rather than the people inventing AI
|
| Yeah, there's no point having a system built to the most
| scrupulous of standards if someone else then deploys it in
| an evil way. (Which in some cases can be done simply by
| choosing to do the opposite of whatever a good model
| recommends.)
| m463 wrote:
| Unfortunately he also vetoed AB3048, which would have given
| consumers a direct way to opt out of data sharing.
|
| https://digitaldemocracy.calmatters.org/bills/ca_202320240ab...
| brianjking wrote:
| https://www.wsj.com/tech/ai/californias-gavin-newsom-vetoes-...
| dang wrote:
| Thanks! The WSJ article was the submitted URL, but I've changed
| it to the governor's statement now. Interested readers will
| probably want to look at both.
| guywithahat wrote:
| Why would you change it? The WSJ article already contains his
| reasoning, plus a lot of other interesting content from major
| players
| dang wrote:
| Someone emailed and suggested it. I looked at the pdf and
| it seemed to be more substantive than the usual political
| statement, so I sort of trusted that it would be better.
| Also it's not paywalled.
|
| https://news.ycombinator.com/item?id=41690454 remains
| pinned to the top of the thread, so people have the
| opportunity to read both.
|
| (Actually we usually prefer the best third-party article to
| press releases, but nothing's perfectly consistent.)
| freedomben wrote:
| FWIW I think you made the right call here. The PDF is
| substantive, primary, and has no paywall. The pinned WSJ
| article at the top gives best of both worlds.
| ericjmorey wrote:
| I usually prefer the press release and only read a third
| party report if I'm looking for more context. So thanks
| for making it easy to find the primary source of the news
| here.
| dredmorbius wrote:
| WSJ's paywall, particularly against Archive Today, has been
| hardening markedly of late.
|
| I'm repeatedly seeing A.T. links posted which read "you
| have been blocked" or similar. Sometimes those resolve
| later, sometimes not.
|
| HN's policy is that paywalls are permissible where
| workarounds exist. WSJ is getting close to disabling those
| workarounds.
|
| The NYTimes similarly tightened its paywall policy a few
| years ago. A consequence was that its prevalence on the HN
| front page fell to ~25% of its prior value, with no change
| in HN policies (as reported by dang), just member voting
| patterns.
|
| Given the difficulty in encouraging people to read articles
| before posting shallow-take comments, this is a significant
| problem for HN, and the increased reliance of media sites
| on paywalls is taking its toll on general discussion.
|
| There are literally hundreds of news sites, and many
| thousands of individual sites, submitted to HN and making
| the front page annually. It would cost a fortune, not
| merely a small one, to subscribe to all of these.
| tzs wrote:
| > There are literally hundreds of news sites, and many
| thousands if individual sites, submitted to HN and making
| the front page annually. It would cost a fortune, not
| merely a small one, to subscribe to all of these.
|
| No one has the time to read all of them, so it doesn't
| really matter if it would also be unaffordable.
| dredmorbius wrote:
| The result would be to either concentrate the discussion
| (to the few sites which are widely subscribed), fragment
| the discussion (among those who subscribe to a specific
| submitted site), or in all likelihood, both.
|
| HN takes pride in being both a single community _and_
| discussing a wide range of sources. Wider adoption of
| subscriber paywalls online would be inimical to both
| aspects.
| dang wrote:
| Related. Others?
|
| _OpenAI, Anthropic, Google employees support California AI bill_
| - https://news.ycombinator.com/item?id=41540771 - Sept 2024 (26
| comments)
|
| _Y Combinator, AI startups oppose California AI safety bill_ -
| https://news.ycombinator.com/item?id=40780036 - June 2024 (8
| comments)
|
| _California AI bill becomes a lightning rod for safety advocates
| and devs alike_ - https://news.ycombinator.com/item?id=40767627 -
| June 2024 (2 comments)
|
| _California Senate Passes SB 1047_ -
| https://news.ycombinator.com/item?id=40515465 - May 2024 (42
| comments)
|
| _California residents: call your legislators about AI bill SB
| 1047_ - https://news.ycombinator.com/item?id=40421986 - May 2024
| (11 comments)
|
| _Misconceptions about SB 1047_ -
| https://news.ycombinator.com/item?id=40291577 - May 2024 (35
| comments)
|
| _California Senate bill to crush OpenAI competitors fast tracked
| for a vote_ - https://news.ycombinator.com/item?id=40200971 -
| April 2024 (16 comments)
|
| _SB-1047 will stifle open-source AI and decrease safety_ -
| https://news.ycombinator.com/item?id=40198766 - April 2024 (190
| comments)
|
| _Call-to-Action on SB 1047 - Frontier Artificial Intelligence
| Models Act_ - https://news.ycombinator.com/item?id=40192204 -
| April 2024 (103 comments)
|
| _On the Proposed California SB 1047_ -
| https://news.ycombinator.com/item?id=39347961 - Feb 2024 (115
| comments)
| voidfunc wrote:
| It was a dumb law so... good on a politician for doing the smart
| thing for once.
| x3n0ph3n3 wrote:
| Given what Scott Wiener did with restaurant fees, it's hard to
| trust his judgement on any legislation. He clearly prioritizes
| monied interests over the general populace.
| gotoeleven wrote:
| This guy is a menace. Among his other recent bills are ones
| to require that cars not be able to go more than 10mph over
| the speed limit (watered down to just making a terrible
| noise when they do) and to decriminalize intentionally
| giving someone AIDS. I know this sounds like hyperbole...
| how could this guy keep getting elected? But it's not; it's
| California!
| baggy_trough wrote:
| Scott Wiener is literally a demon in human form.
| zzrzzr wrote:
| And he's responsible for SB132, which has been awful for
| women prisoners:
|
| https://womensliberationfront.org/news/wolfs-plaintiffs-
| desc...
|
| https://womensliberationfront.org/news/new-report-shows-
| cali...
|
| https://www.youtube.com/playlist?list=PLXI-z2n5Dwr0BePgBNjJO.
| ..
| microbug wrote:
| who could've predicted this?
| jquery wrote:
| The law was passed knowing it would make bigots
| uncomfortable. That's an intended effect, if not a
| primary one, at least a secondary one.
| UberFly wrote:
| What a strange comment. I wonder if there was any
| consideration for the women locked up and powerless in
| the matter, or was the point really just to "show those
| bigots"?
| jquery wrote:
| If they're transphobic and don't want to be around
| transwomen, they could've committed the crime in a state
| that puts transwomen in with male prisoners (and gotten
| raped repeatedly). Of course, those states tend to treat
| their female inmates much worse than California, so this
| all seems like _special pleading_ specifically borne out
| of transphobia.
| jquery wrote:
| These "activists" will go nowhere, because it's not coming
| from a well meaning place of wanting to stop fraudsters,
| but insists that all trans women are frauds and
| consistently misgenders them across the entire website.
|
| I wouldn't take anything they said seriously. Also I
| clicked two of those links and found no allegations of
| rape, just a few ciswomen who didn't want to be around
| transwomen. I have a suggestion: how about don't commit a
| crime that sends you to a women's prison?
| zzrzzr wrote:
| > _I wouldn't take anything they said seriously. Also I
| clicked two of those links and found no allegations of
| rape,_
|
| See https://4w.pub/male-inmate-charged-with-raping-woman-
| inside-....
|
| This is the inevitable consequence of SB132, and similar
| laws elsewhere.
| jquery wrote:
| Rape is endemic throughout the prison industrial complex;
| protections for prisoners are nowhere near good enough.
| Subjecting transwomen to rape in men's prisons isn't the
| solution.
|
| The JD Vance/Peter Thiel/SSC rationalist sphere is such a
| joke. Just a bunch of pretentious bigots who think
| they're better than the "stupid" bigots.
| zzrzzr wrote:
| > _Rape is endemic throughout the prison industrial
| complex, protections for prisoners are nowhere good
| enough._
|
| The most effective safeguarding measure against this for
| female prisoners is the segregation of inmates by sex.
|
| SB132 has demolished this protection for women in
| Californian prisons and, as the linked articles discuss,
| we now see the awful and entirely avoidable consequences
| of this law, within just a few years of it being enacted.
| Exactly as women's rights advocates warned legislators
| would happen, in their unfortunately futile efforts to
| stop SB132 from being passed.
| johnnyanmac wrote:
| Technically you can't go more than 5mph over the speed
| limit. And that's only because of radar accuracy.
|
| Of course no one cares until you get a bored cop one day.
| And with freeway traffic you're lucky to hit half the
| speed limit.
| Dylan16807 wrote:
| By "not be able" they don't mean legally, they mean GPS-
| based enforcement.
| johnnyanmac wrote:
| You'd think they'd learn from the streetlight cameras that
| it's just a waste of budget and resources 99% of the time
| to worry about petty things like that. It will still work
| on the same logic, and the bias always tends to skew from
| profiling (so a lawsuit waiting to happen unless we are
| funding properly trained personnel).
|
| I'm not against the law per se, I just don't think it'd
| be any more effective than the other tech we have or had.
| drivers99 wrote:
| Rental scooters have speed limiters. My class-1 pedal
| assist electric bike has a speed limit on the assistance.
| Car deaths are over 40,000 in the US per year. Why can't
| they be limited?
| Dylan16807 wrote:
| I said GPS for a reason. Tying it to fine-grained map
| lookups is so much more fragile and dangerous than a
| fixed speed limit.
| deredede wrote:
| I was surprised at the claim that intentionally giving
| someone AIDS would be decriminalized, so I looked it up. The
| AIDS bill you seem to refer to (SB 239) lowers penalties from
| a felony to a misdemeanor (so it is still a crime), bringing
| it in line with other sexually transmitted diseases. The
| argument is that we now have good enough treatment for HIV
| that there is no reason for the punishment to be harsher than
| for exposing someone to hepatitis or herpes, which I think is
| sound.
| Der_Einzige wrote:
| "Undetectable means untranstmitable" is NOT the same as
| "cured" in the way that many STDs can be. I am not okay
| with being forced onto drugs for the rest of my life to
| prevent a disease which is normally a horribly painful
| death sentence. Herpes is so ubiquitous that much of the
| population (as I recall on the orders of 30-40%) has it and
| doesn't know it, so it's a special exception
|
| HIV/AIDS to this day is still something that people commit
| suicide over, despite how good your local gay male
| community is at trying to convince you that everything is
| okay and that "DoxyPep and Poppers is normal".
|
| Bug givers (the evil version of a bug chaser) deserve
| felonies.
| diebeforei485 wrote:
| Exposure is not the same as transmission. Transmission is
| still illegal.
| deredede wrote:
| > Bug givers (the evil version of a bug chaser) deserve
| felonies.
|
| I agree; I think that knowingly transmitting any
| communicable disease deserves a felony, but I don't think
| that HIV deserves to be singled out when all other such
| diseases are a misdemeanor. Hepatitis and herpes (oral
| herpes is very common; genital herpes much less so) are
| also known to cause mental issues and to increase suicide
| risk, if that's your criterion.
|
| (Poppers are recreational drugs; I'm not aware of any
| link with AIDS except that they were thought to be a
| possible cause in the '80s. Were you thinking of PrEP?)
| radicality wrote:
| I don't follow politics closely and don't live in CA, but is
| he really that bad? I had a look on Wikipedia for some other
| bills he worked on that seem positive to me:
|
| * wanted to decriminalize psychoactive drugs (lsd/dmt/mdma
| etc)
|
| * wanted to allow alcohol sales till 4am
|
| * a bill about removing parking minimums for new
| construction close to public transit
|
| Though I agree the car one seems ridiculous, and on first
| glance downright dangerous.
| lostdog wrote:
| He's mostly good, and is the main guy fixing housing and
| transit in CA.
|
| But yeah, there are some issues he's just wrong on (AI and
| the recent restaurant fee problem), others which are
| controversial (decriminalizing HIV transmission), and then
| some trans rights issues that some commenters are being
| hyperbolic about (should transwomen be in women's or men's
| prisons?).
| simonw wrote:
| Also on The Verge:
| https://www.theverge.com/2024/9/29/24232172/california-ai-sa...
| davidu wrote:
| This is a massive win for tech, startups, and America.
| ken47 wrote:
| For America...do we dare unpack that sentiment?
| khazhoux wrote:
| The US is the world leader in AI technology. Defeating a bad
| AI bill is good for the US.
| cornercasechase wrote:
| I think tying nationalism to AI harms us all.
| cornercasechase wrote:
| It was a bad bill but your gross nationalism is even worse. 1
| step forward, 10 steps back.
| richwater wrote:
| > gross nationalism
|
| How on earth did you get that from the original posters
| comment?
| cornercasechase wrote:
| "Win for America" _is_ gross nationalism. Zero sum thinking
| with combative implications.
| hot_gril wrote:
| It's a win for American industry, same as a politician
| would say when a new factory opens or something. I don't
| know a less offensive way to put it.
|
| He didn't remotely say the combative stuff I would say,
| that it _is_ partially a zero-sum game where we should
| stay ahead of the curve.
| SonOfLilit wrote:
| A bill laying the groundwork to ensure the future survival of
| humanity by making companies on the frontier of AGI research
| responsible for damages or deaths caused by their models was
| vetoed because it doesn't stifle competition with the big players
| enough and because we don't want companies to be scared of
| letting future models capable of massive hacks or creating mass
| casualty events handle their customer support.
|
| Today humanity scored an own goal.
|
| edit:
|
| I'm guessing I'm getting downvoted because people don't think
| this is relevant to our reality. Well, it isn't. This bill
| shouldn't scare anyone releasing a GPT-4 level model:
|
| > The bill he vetoed, SB 1047, would have required developers of
| large AI models to take "reasonable care" to ensure that their
| technology didn't pose an "unreasonable risk of causing or
| materially enabling a critical harm." It defined that harm as
| cyberattacks that cause at least $500 million in damages or mass
| casualties. Developers also would have needed to ensure their AI
| could be shut down by a human if it started behaving dangerously.
|
| What's the risk? How could it possibly hack something causing
| $500m of damages or mass casualties?
|
| If we somehow manage to build a future technology that _can_ do
| that, do you think it should be released?
| atemerev wrote:
| Oh come on, the entire bill was against open source models,
| it's pure business. "AI safety", at least of the X-risk
| variety, is a non-issue.
| whimsicalism wrote:
| > "AI safety", at least of the X-risk variety, is a non-
| issue.
|
| i have no earthly idea why people feel so confident making
| statements like this.
|
| at the current rate of progress, you should have absolutely
| massive error bars for what capabilities will look like in
| 3, 5, 10 years.
| ls612 wrote:
| Nuclear weapons, at least in the quantities they are
| currently stockpiled, are not an existential risk even for
| industrial civilization, nevermind the human species. To
| claim that in 10 years AI will be more dangerous and
| consequential than the weapons that ushered in the Atomic
| Age is quite a leap.
| whimsicalism wrote:
| Viruses are just sequences of RNA/DNA and we are already
| showing that transformers have extreme proficiency in
| sequence modeling.
|
| In 10 years we have gone from AlexNet to GPT-2 to o1.
| If future capabilities make it so any semi-state actor
| with a lab can build a deadly virus (and this is only
| _one_ of MANY possible and easily plausible scenarios)
| then we have already likely equaled the potential
| destruction of the atomic age. And that's just the stuff
| I _can_
| anticipate.
| atemerev wrote:
| I am not sure we will be able to build something smarter
| than ourselves, but I sure hope for it. It is becoming
| increasingly obvious that we as a civilization are not that
| smart, and there are strict limits on what we can achieve
| with our biology, and it would be great if at least our
| creations could surpass these limits.
| whimsicalism wrote:
| Sure, but we should heavily focus on doing it safely.
|
| We already can build machines using similar techniques
| that are superhuman in narrow capabilities like chess and
| as good as the best humans in some narrow disciplines of
| math. I think it is not unreasonable to expect we will
| generalize.
| SonOfLilit wrote:
| I find it hard to believe that Google, Microsoft and OpenAI
| would oppose a bill against open source models.
| datavirtue wrote:
| The future survival of humanity involves creating machines that
| have all of our knowledge and which can replicate themselves.
| We can't leave the planet but our robot children can. I just
| wish that I could see what they become.
| SonOfLilit wrote:
| Sure, that's future survival. Is it of humanity, though?
| Kinda no, by definition, in your scenario. In general, it
| depends at least on whether they share our values...
| datavirtue wrote:
| Values...values? Hopefully not, since they would be
| completely useless.
| johnnyanmac wrote:
| Sounds like the exact opposite plot of Wall-E.
| datavirtue wrote:
| I might watch that now. That scientist that created all the
| robots in Mega Man keeps coming to mind. People are going
| to have to make the decision to build these things to be
| self-sufficient.
| raxxorraxor wrote:
| Mountains out of scrap, rivers out of oil and wide circuit
| plains. It will be absolutely beautiful.
| tbrownaw wrote:
| https://legiscan.com/CA/text/SB1047/id/3019694
|
| So this is the one that would make it illegal to provide open
| weights for models past a certain size, would make it illegal to
| sell enough compute power to train such a model without first
| verifying that your customer isn't going to train a model and
| then ignore this law, and mandates audit requirements to prove
| that your models won't help people cause disasters and can be
| turned off.
| timr wrote:
| The proposed law was so egregiously stupid that if you live in
| California, you should _seriously_ consider voting for
| Anthony Weiner's opponent in the next election.
|
| The man cannot be trusted with power -- this is far from the
| first ridiculous law he has championed. Notably, he was behind
| the (blatantly unconstitutional) AB2098, which was silently
| repealed by the CA state legislature before it could be struck
| down by the courts:
|
| https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...
|
| https://www.sfchronicle.com/opinion/openforum/article/COVID-...
|
| (Folks, this isn't a partisan issue. Weiner has a long history
| of horrendously bad judgment and self-aggrandizement via
| legislation. I don't care which side of the political spectrum
| you are on, or what you think of "AI safety", you should want
| more thoughtful representation than this.)
| johnnyanmac wrote:
| >you should want more thoughtful representation than this.
|
| Your opinion on what "thoughtful representation" is is what
| makes this point partisan. Regardless, he's in until 2028 so
| it'll be some time before that vote can happen.
|
| Also, important nitpick: it's Scott Wiener. Anthony Weiner
| (no relation AFAIK) was in New York and has a much more...
| Public controversy.
| Terr_ wrote:
| > Public controversy
|
| I think you accidentally hit the letter "L". :P
| dlx wrote:
| you've got the wrong Weiner dude ;)
| hn_throwaway_99 wrote:
| Lol, I thought "How TF did Anthony Weiner get elected for
| anything else again??" after reading that.
| rekttrader wrote:
| ** Anthony != Scott Weiner
| GolfPopper wrote:
| Anthony Weiner is a disgraced _New York_ Democratic
| politician who does not appear to have re-entered politics
| after his release from prison a few years ago. You mentioned
| his name twice in your post, so it doesn't seem to be an
| accident that you mentioned him, yet his name does not seem
| to appear anywhere in your links. I have no idea what message
| you're trying to convey, but whatever it is, I think you're
| failing to communicate it.
| hn_throwaway_99 wrote:
| He meant Scott Wiener but had penis on the brain.
| timr wrote:
| Yes, it was a mistake. I obviously meant the Weiner
| responsible for the legislation I cited. But you clearly
| know that.
|
| > I have no idea what message you're trying to convey, but
| whatever it is, I think you're failing to communicate it.
|
| Really? The message is unchanged, so it seems like
| something you could deduce.
| akira2501 wrote:
| > and mandates audit requirements to prove that your models
| won't help people cause disasters
|
| Audits cannot prove anything and they offer no value when
| planning for the future. They're purely a retrospective tool
| that offers insights into potential risk factors.
|
| > and can be turned off.
|
| I really wish legislators would operate inside reality instead
| of a Star Trek episode.
| whimsicalism wrote:
| This snide dismissiveness around "sci-fi" scenarios, while
| capabilities continue to grow, seems incredibly naive and
| foolish.
|
| Many of you saying stuff like this were the same naysayers
| who have been terribly wrong about scaling for the last 6-8
| years or people who only started paying attention in the last
| two years.
| akira2501 wrote:
| > seems incredibly naive and foolish.
|
| We have electrical codes. These require disconnects just
| about everywhere. The notion that any system somehow
| couldn't be "turned off" with or without the consent of the
| operator is downright laughable.
|
| > were the same naysayers
|
| Now who's being snide and dismissive? Do you want to argue
| the point or are you just interested in tossing ad hominem
| attacks around?
| whimsicalism wrote:
| > We have electrical codes. These require disconnects
| just about everywhere. The notion that any system somehow
| couldn't be "turned off" with or without the consent of
| the operator is downright laughable.
|
| Not so clear when you are inferencing a distributed model
| across the globe. Doesn't seem obvious that shutdown of a
| distributed computing environment will always be trivial.
|
| > Now who's being snide and dismissive?
|
| Oh to be clear, nothing against being dismissive - just
| the particular brand of dismissiveness of 'scifi' safety
| scenarios is naive.
| marshray wrote:
| > The notion that any system somehow couldn't be "turned
| off" with or without the consent of the operator is
| downright laughable.
|
| Does anyone remember Sen. Lieberman's "Internet Kill
| Switch" bill?
| yarg wrote:
| Someone never watched the Terminator series.
|
| In all seriousness, if we ever get to the point where an
| AI needs to be shut down to avoid catastrophe, there's
| probably no way to turn it off.
|
| There are digital controls for damned near everything,
| and security is universally disturbingly bad.
|
| Whatever you're trying to stop will already have root-
| kitted your systems (and quite possibly have replicated)
| by the time you realise that it's even beginning to
| become a problem.
|
| You could only shut it down if there's a choke point
| accessible without electronic intervention, and you'd
| need to reach it without electronic intervention, and do
| so without communicating your intent.
|
| Yes, that's all highly, highly improbable - but you seem
| to believe that you can just turn off the Genie, when
| he's already seen you coming and is having none of it.
| hiatus wrote:
| > In all seriousness, if we ever get to the point where
| an AI needs to be shut down to avoid catastrophe, there's
| probably no way to turn it off.
|
| > There are digital controls for damned near everything,
| and security is universally disturbingly bad.
|
| Just unplug the thing.
| yarg wrote:
| > You could only shut it down if there's a choke point
| accessible without electronic intervention, and you'd
| need to reach it without electronic intervention, and do
| so without communicating your intent.
|
| You'll be dead before you reach the plug.
| hiatus wrote:
| Then bomb it. Or did the AI take over the fighter jets
| too?
| yarg wrote:
| > Whatever you're trying to stop will already have root-
| kitted your systems (and quite possibly have replicated)
| by the time you realise that it's even beginning to
| become a problem.
|
| There's a good chance that you won't know where it is -
| if you even did to begin with (which particular AI even
| went rogue?).
|
| > Or did the AI take over the fighter jets too?
|
| Dunno - how secure are the systems?
|
| But it's almost certainly fucking with the GPS.
| theptip wrote:
| If a malicious model exfiltrates its weights to a Chinese
| datacenter, how do you turn that off?
|
| How do you turn off Llama-Omega if it turns out that it
| can be prompt-hacked into a malicious agent?
| tensor wrote:
| 1. If the weights somehow are obtained by a foreign
| power, you can't do anything, just like every other
| technology ever.
|
| 2. If it turns into a malicious agent you just hit the
| "off switch", or, more likely just stop the software,
| like you turn off your word processor.
| zamadatix wrote:
| I don't think GP is dismissing the scenarios themselves,
| rather espousing their belief that these answers will do
| nothing to prevent said scenarios from eventually occurring
| anyway.
| It's like if we invented nukes but found out they were made
| out of having a lot of telephones instead of something
| exotic like refining radioactive elements a certain way.
| Sure - you can still try to restrict telephone sales... but
| one way or another lots of nukes are going to be built
| around the world (power plants too) and, in the meantime,
| what you've regulated away is the convenience of having a
| better phone from the average person as time goes on.
|
| The same battle was/is had around cryptography - telling
| people they can't use or distribute cryptography algorithms
| on consumer hardware never stopped bad people from having
| real time functionally unbreakable encryption.
|
| The safety plan must be around somehow handling the
| resulting problems when they happen, not hoping to make it
| never occur even once for the rest of time. Eventually a
| bad guy is going to make an indecipherable call, eventually
| an enemy country or rogue operator is going to nuke a
| place, eventually an AI is going to ${scifi_ai_thing}. The
| safety of all society can't rest on audits and good
| intention preventing those from ever happening.
| marshray wrote:
| It's an interesting analogy.
|
| Nukes are a far more primitive technology (i.e.,
| enrichment requires only more basic industrial
| capabilities) than AI hardware, yet they are probably the
| best example of tech limitations via international
| agreements.
|
| But the algorithms are mostly public knowledge,
| datacenters are no secret, and the chips aren't even made
| in the US. I don't see what leverage California has to
| regulate AI broadly.
|
| So it seems like the only thing such a bill would achieve
| is to incentivize AI research to avoid California.
| derektank wrote:
| >So it seems like the only thing such a bill would
| achieve is to incentivize AI research to avoid
| California.
|
| Which, incidentally, would be pretty bad from a climate
| change perspective since many of the alternative
| locations for datacenters have a worse mix of
| renewables/nuclear to fossil fuels in their electricity
| generation. ~60% of VA's electricity is generated from
| burning fossil fuels (of which 1/12th is still coal)
| while natural gas makes up less than 40% of electricity
| generation in California, for example
| marshray wrote:
| Electric power crosses state lines with very little loss.
|
| It's looking like cooling water may be more of a limiting
| factor. Yet, even this can be greatly reduced when
| electric power is cheap enough.
|
| Solar power is already "cheaper than free" in many places
| and times. If the initial winner-take-all training race
| ever slows down, perhaps training can be scheduled for
| energy cost-optimal times and places.
| derektank wrote:
| Transmission losses aren't negligible without investment
| in costly infrastructure like HVDC connections. It's
| always more efficient to site electricity generation as
| close to consumption as feasibly possible.
| marshray wrote:
| Electric power transmission loss is less than 5%:
|
| https://www.eia.gov/totalenergy/data/flow-
| graphs/electricity...
|
| Net generation: 14.26; transmission and delivery losses
| and unaccounted for: 0.67 (0.67 / 14.26 ~ 4.7%)
|
| It's just a tiny fraction of the losses resulting from
| burning fuel to heat water to produce steam to drive a
| turbine to yield electric power.
| bunabhucan wrote:
| That's the average. It's bought and sold on a spot
| market. If you try to sell CA power in AZ and the losses
| are 10% then SRP or TEP or whoever can undercut your
| price with local power/lower losses.
| marshray wrote:
| I just don't see 10% remaining a big deal while solar
| continues its exponential cost reduction. Solar does not
| consume fuel, so when local supply exceeds local demand
| the cost of incremental production drops to approximately
| zero. Nobody's undercutting zero, even with 10% losses.
|
| IMO, this is what 'winning' looks like.
| mistrial9 wrote:
| this is interesting but missing some scale aspects..
| capital and concentrated power are mutual attractors in
| some sense.. these AI datacenters in their current
| incarnations are massive.. so the number and size of
| solar panels needed, changes the situation. Common
| electrical power interchange (grid) is carefully
| regulated and monitored in all jurisdictions. In other
| words, there is little chance of an ad-hoc local network
| of small or mid-size solar systems making enough power
| unto themselves, without passing through regulated
| transmission facilities IMHO.
| parineum wrote:
| The cost of solar as a 24hr power supply must include the
| cost of storage for the 16+ hours that it's not at peak
| power. It also needs to overproduce by 3x to meet that
| demand.
|
| Solar provides cheap power only when it's producing.
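|
| (The rough arithmetic behind those figures, idealized and
| lossless -- the 8-hour production window is an assumption:)
|
|   production_hours, day = 8, 24
|   overproduction = day / production_hours   # 3.0x peak output
|   stored = (day - production_hours) / day   # ~2/3 of daily use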
| tbrownaw wrote:
| > _Nukes are a far more primitive technology (i.e.,
| enrichment requires only more basic industrial
| capabilities) than AI hardware, yet they are probably the
| best example of tech limitations via international
| agreements._
|
| And direct sabotage, eg Stuxnet.
|
| And outright assassination eg
| https://www.bbc.com/news/world-middle-east-55128970
| hannasm wrote:
| If you think a solution to bad behavior is a law
| declaring punishment for such behavior you are a fool.
| rebolek wrote:
| Murder is a bad behavior. Am I a fool to think there
| should be laws against murder?
| nradov wrote:
| That's a total non sequitur. Just because LLMs are scalable
| doesn't mean this is a problem that requires government
| intervention. It's only idiots and grifters who want us to
| worry about sci-fi disaster scenarios. The snide
| dismissiveness is completely deserved.
| Chathamization wrote:
| The AI doomsday folk have had an even worse track record
| over the past decade. There was supposed to be mass
| unemployment of truck drivers years ago. According to CGP
| Grey's Humans Need Not Apply[1] from 10 years ago, the
| robot Baxter was supposed to take over many low-skilled
| jobs (Baxter was discontinued in 2018 after it failed to
| achieve commercial success).
|
| [1] https://www.youtube.com/watch?v=7Pq-S557XQU
| whimsicalism wrote:
| I do not count CGP Grey or other viral YouTubers among
| the segment of people I was counting as bullish about the
| scaling hypothesis. I'm talking about actual academics
| like Ilya, Hinton, etc.
|
| Regardless, I just read the transcript for that video and
| he doesn't give any timeline so it seems premature to
| crow that he was wrong.
| Chathamization wrote:
| > Regardless, I just read the transcript for that video
| and he doesn't give any timeline so it seems premature to
| crow that he was wrong.
|
| If you watch the video he's clearly saying this was
| something that was already happening. Keep in mind it was
| made 10 years ago, and in it he says "this isn't science
| fiction; the robots are here right now." When bringing up
| the 25% unemployment rate he says "just the stuff we
| talked about today, the stuff that already works, can
| push us over that number pretty soon."
|
| Baxter being able to do everything a worker can for a
| fraction of the price definitely wasn't true.
|
| Here's what he said about self-driving cars. Again, this
| was 2014: "Self driving cars aren't the future - they're
| here and they work."
|
| "The transportation industry in the united states employs
| about 3 million people. Extrapolating worldwide, that's
| something like 70 million jobs at a minimum. These jobs
| are over."
|
| > I'm talking about actual academics like Ilya, Hinton,
| etc.
|
| Which of Hinton's statements are you claiming were
| dismissed by people here but were later proven to be
| correct?
| lopatin wrote:
| > Audits cannot prove anything and they offer no value when
| planning for the future. They're purely a retrospective tool
| that offers insights into potential risk factors.
|
| What if it audits your deploy and approval processes? It
| can say, for example, that if your AI deployment process
| doesn't include stress tests against some specific
| malicious behavior (insert test cases here), then you are
| in violation of the law. That would essentially be a
| control on all future deploys.
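|
| (A minimal sketch of what such a gate could look like --
| every name here is hypothetical, nothing the bill itself
| specifies:)
|
|   RED_TEAM_CASES = ["<malicious prompt 1>", "<malicious prompt 2>"]
|
|   def is_harmful(output):
|       return False          # stand-in for a real classifier/review
|
|   def deploy_gate(model):
|       for prompt in RED_TEAM_CASES:
|           if is_harmful(model.generate(prompt)):
|               return False  # stress test failed: block the deploy
|       return True           # all cases passed: allow the deploy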
| Loughla wrote:
| >I really wish legislators would operate inside reality
| instead of a Star Trek episode.
|
| What are your thoughts about businesses like Google and Meta
| providing guidance and assistance to legislators?
| akira2501 wrote:
| If it happens in a public and open session of the
| legislature with multiple other sources of guidance and
| information available then that's how it's supposed to
| work.
|
| I suspect this is not how the majority of "guidance" is
| actually being offered. I also guess this is probably a
| really good way to find new sources of campaign
| "donations." It's also a really good way for monopolistic
| players to keep a strangle hold on a nascent market.
| trog wrote:
| > Audits cannot prove anything and they offer no value when
| planning for the future. They're purely a retrospective tool
| that offers insights into potential risk factors.
|
| Uh, aren't potential risk factors things you want to consider
| when planning for the future?
| teekert wrote:
| The best episodes are where the model can't be turned off
| anymore ;)
| comp_throw7 wrote:
| > this is the one that would make it illegal to provide open
| weights for models past a certain size
|
| That's nowhere in the bill, but plenty of people have been
| confused into thinking this by the bill's opponents.
| tbrownaw wrote:
| Three of the four options of what an "artificial
| intelligence safety incident" is defined as require that
| the weights be
| kept secret. One is quite explicit, the others are just
| impossible to prevent if the weights are available:
|
| > (2) Theft, misappropriation, malicious use, inadvertent
| release, unauthorized access, or escape of the model weights
| of a covered model or covered model derivative.
|
| > (3) The critical failure of technical or administrative
| controls, including controls limiting the ability to modify a
| covered model or covered model derivative.
|
| > (4) Unauthorized use of a covered model or covered model
| derivative to cause or materially enable critical harm.
| comp_throw7 wrote:
| It is not illegal for a model developer to train a model
| that is involved in an "artificial intelligence safety
| incident".
| Terr_ wrote:
| Sounds like legislation that misidentifies the root issue
| as "somehow maybe the computer is too smart" as opposed to,
| say, "humans and corporations should be liable for using
| the tool to do evil."
| concordDance wrote:
| The former is a potentially extremely serious issue, just not
| one we're likely to hit in the very near future.
| raxxorraxor wrote:
| That is a very bad law. People and especially corporations in
| favor of it should be under scrutiny for trying to corner a
| market for themselves.
| choppaface wrote:
| The Apple Intelligence demos showed Apple is likely planning to
| use on-device models for ad targeting, and Google / Facebook will
| certainly respond. Small LLMs will help move unwanted computation
| onto user devices in order to circumvent existing data and
| privacy laws. And they will likely be much more effective since
| they'll have more access and more data. This use case is just
| getting started, hence SB 1047 is so short-sighted. Smaller LLMs
| have dangers of their own.
| jimjimjim wrote:
| Thank you. For some reason I hadn't thought of the advertising
| angle with local LLMs but you are right!
|
| For example, why is Microsoft hell-bent on pushing Recall
| onto Windows? Answer: targeted advertising.
| jart wrote:
| Why is it wrong to show someone ads that are relevant to
| their interests? Local AI is a win-win, since tech companies
| get targeted ads, and your data stays private.
| jimjimjim wrote:
| what have "their interests" got to do with what is on the
| computer screen?
| seltzered_ wrote:
| Is part of the issue the concern that runaway AI computing
| would just happen outside of California?
|
| There's another important county election in Sonoma happening
| about CAFOs where part of the issue is that you may get
| environmental progress locally, but just end up exporting the
| issue to another state with lax rules:
| https://www.kqed.org/news/12006460/the-sonoma-ballot-measure...
| alhirzel wrote:
| Like all laws, there will certainly be those who evade
| compliance geographically. A well-written law will be looked to
| as a precedent or "head start" for new places that end up
| wanting regulatory functions. I feel like the EU and California
| often end up on this "leading edge" with regard to technology
| and privacy. While this can seem like a futile position to be
| in, it paves the way and is a required step for a good law to
| find a global foothold.
| metadat wrote:
| https://archive.today/22U12
| dredmorbius wrote:
| Hard paywall.
| worstspotgain wrote:
| Excellent move by Newsom. We have a very active legislature, but
| it's been extremely bandwagon-y in recent years. I support much
| of Wiener's agenda, particularly his housing policy, but this
| bill was way off the mark.
|
| It was basically a torpedo against open models. Market leaders
| like OpenAI and Anthropic weren't really worried about it, or
| about open models in general. Its supporters were the also-rans
| like Musk [1] trying to empty out the bottom of the pack, as well
| as those who are against any AI they cannot control, such as
| antagonists of the West and wary copyright holders.
|
| [1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-
| off...
| SonOfLilit wrote:
| why would Google, Microsoft and OpenAI oppose a torpedo against
| open models? Aren't they positioned to benefit the most?
| worstspotgain wrote:
| If there were just one quasi-monopoly, it would probably have
| supported the bill. As it is, the market leaders have the
| competition from each other to worry about. Getting rid of
| open models wouldn't let them raise their prices much.
| SonOfLilit wrote:
| So if it's not them, who is the hidden commercial interest
| sponsoring an attack on open source models that cost
| >$100mm to train? Or does Wiener just genuinely hate
| megabudget open source? Or is it an accidental attack,
| aimed at something else? At what?
| worstspotgain wrote:
| Like I said, supporters included wary copyright holders
| and the bottom-market also-rans like Musk. If your model
| is barely holding up against Llama, what's the point of
| staying in?
| SonOfLilit wrote:
| And two of the three godfathers of AI, and all of the AI
| notkilleveryoneism crowd.
|
| Actually, wait, if Grok is losing to GPT, why would Musk
| care about Llama more than Altman? Llama hurts his
| competitor...
| worstspotgain wrote:
| The market in my argument looks like OpenAI ~ Anthropic >
| Google >>> Meta (~ or maybe >) Musk/Alibaba. The top 3
| aren't worried about the down-market stuff. You're free
| to disagree of course.
| gdiamos wrote:
| Claude, SSI, Grok, GPT, Llama, ...
|
| Should we crown one the king?
|
| Or perhaps it is better to let them compete?
|
| Perhaps advanced AI capability will motivate advanced AI
| safety capability?
| Maken wrote:
| What are the economic incentives for AI safety?
| fat_cantor wrote:
| It's an interesting thought that as AI advances, and
| becomes more capable of human destruction, programmers,
| bots and politicians will work together to create safety
| for a large quantity of humans
| fifilura wrote:
| "AI defense corps"
| jnaz343 wrote:
| You think they want to compete? None of them want to
| compete. They want to be a protected monopoly.
| CSMastermind wrote:
| The bill included language that required the creators of
| models to have various "safety" features that would severely
| restrict their development. It required audits and other
| regulatory hurdles to build the models at all.
| llamaimperative wrote:
| If you spent $100MM+ on training.
| gdiamos wrote:
| Advanced technology will drop the cost of training.
|
| The flop targets in that bill would be like saying "640KB
| of memory is all we will ever need" and outlawing
| anything more.
|
| Imagine what other countries would have done to us if we
| allowed a monopoly like that on memory in 1980.
| llamaimperative wrote:
| No, there are two thresholds and BOTH must be met.
|
| One of those is $100MM in training costs.
|
| The other is measured in FLOPs but is already larger than
| GPT-4, so the "think of the small guys!" argument doesn't
| make much sense.
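|
| For scale, using the common ~6*N*D estimate of training
| compute (hypothetical numbers, not any real model's):
|
|   params = 7e11                # 700B parameters
|   tokens = 2.4e13              # 24T training tokens
|   flops = 6 * params * tokens  # ~1.0e26, right at the FLOP line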
| gdiamos wrote:
| Tell that to me when we get to llama 15
| llamaimperative wrote:
| What?
| gdiamos wrote:
| "But the big guys are struggling getting past 100KB, so
| 'think of the small guys' doesn't make sense when the
| limit is 640KB."
|
| How do people on a computer technology forum ignore the
| 10,000x improvement in computers over 30 years due to
| advances in computer technology?
|
| I could understand why politicians don't get it.
|
| I should think that computer systems companies would be
| up in arms over SB 1047 in the same way they would be if
| the government was thinking of putting a cap on hard
| drives bigger than 1 TB.
|
| It puts a cap on flops. Isn't the biggest company in the
| world in the business of selling flops?
| llamaimperative wrote:
| It would be crazy if the bill had a built-in mechanism to
| regularly reassess both the cost and FLOP thresholds...
| which it does.
|
| Conversely to your sarcastic "understanding" about
| politicians' stupidity, I _can't_ understand how tech
| people seem incapable or unwilling to actually read the
| legislation they have such strong opinions about.
| gdiamos wrote:
| If your goal is to lift the limit, why put it in?
|
| We periodically raise flop limits in export control law.
| The intention is still to limit China and Iran.
|
| Would any computer industry accept a government mandated
| limit on perf?
|
| Should NVIDIA accept a limit on flops?
|
| Should Pure accept a limit on TBs?
|
| Should Samsung accept a limit on HBM bandwidth?
|
| Should Arista accept a limit on link bandwidth?
|
| I don't think that there is enough awareness that scaling
| laws tie intelligence to these HW metrics. Enforcing a
| cap on intelligence is the same thing as a cap on these
| metrics.
|
| https://en.m.wikipedia.org/wiki/Neural_scaling_law
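|
| As a rough illustration of what those laws say (a sketch;
| the constants are empirical fits, not anything in the
| bill), a Chinchilla-style scaling law has the form
| L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter
| count, D is training tokens, and L is the achievable loss.
| With training compute roughly C ~ 6*N*D, a cap on FLOPs
| is, under these laws, a cap on how low L can go.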
|
| Have this legislation's authors really thought through
| the implications of capping technology metrics,
| especially in a state where most of the GDP is driven by
| these metrics?
|
| Clearly I'm biased because I am working on advancing
| these metrics. I'm doing it because I believe in the
| power of computing technology to improve the world
| (smartphones, self driving, automating data entry,
| biotech, scientific discovery, space, security, defense,
| etc, etc) as it has done historically. I also believe in
| the spirit of inventors and entrepreneurs to contribute
| and be rewarded for these advancements.
|
| I would like to understand the biases of the supporters
| of this bill beyond a power grab by early movers.
|
| Export-control FLOP limits are designed to limit US
| adversaries' access to technology.
|
| I think it would be informative if the group of people
| trying to limit access of AI technology to themselves was
| brought into the light.
|
| Who are they? Why do they think the people of the US and
| of CA should grant that power to them?
| llamaimperative wrote:
| Wait sorry, are you under the impression that regulated
| entities get to "accept" which regulations society
| imposes on them? Big if true!
|
| Your delusions and lack of nuance shown in this very
| thread are exactly why people want to regulate this
| field.
|
| If developers of nuclear technology were making similar
| arguments, I bet they'd have attracted even more
| aggressive regulatory attention. Justifiably, too, since
| people who speak this way can't possibly be trusted to
| de-risk their own behavior effectively.
| gdiamos wrote:
| Cost as a perf metric is meaningless and the history of
| computer benchmarks has repeatedly proven this point.
|
| There is a reason why we report time (speedup) in SPEC
| instead of $$.
|
| The price you pay depends on who you are and who is
| giving it to you.
| llamaimperative wrote:
| That's why there are two thresholds.
| Vetch wrote:
| Cost per FLOP continues to drop on an exponential trend
| (and FLOPs at what precision?). Leaving aside more
| effective training methodologies and how they muddy
| everything by allowing superior-to-GPT-4 performance
| using fewer training FLOPs, it also means one of the
| thresholds soon will not make sense.
|
| With the other threshold, it creates a disincentive for
| models like Llama-405B+, in effect enshrining an even
| wider gap between open and closed.
| pas wrote:
| Why? Llama is not generated by some guy in a shed.
|
| And even if it were, if said guy has that much compute,
| then it's time to use some of it to describe the model's
| safety profile.
|
| If it makes sense for Meta to release models, it would
| have made sense even with the requirement. (After all, the
| whole point of the proposed regulation is to get a better
| sense of those closed models.)
| llamaimperative wrote:
| Also the bill was amended NOT to extend liability to
| derivative models that the training company doesn't have
| effective control over.
| llamaimperative wrote:
| Both thresholds have a system to be adjusted.
| theptip wrote:
| If the danger is coming from the amount of compute
| invested, then cost of compute is irrelevant.
|
| A much better objection to static FLOP thresholds is that
| as data quality and algorithms improve, you can do a lot
| more with fewer FLOPs / parameters.
|
| But let's be clear about these objections - they are
| saying that FLOP thresholds are going to miss some harms,
| not that they are too strict.
|
| The rest is arguing about exactly where the FLOP
| thresholds should be. (And of course these limits can be
| revised as we learn more.)
| pj_mukh wrote:
| Or used a model someone open sourced after spending
| $100M+ on its training?
|
| Like, if I'm a startup reliant on open-source models, I
| realize I don't need the liability and extra safety
| precautions myself, but I didn't hear any guarantee that
| this wouldn't deter Meta from releasing its models to me
| if my business were in California.
|
| I never heard any clarification about that from the
| pro-bill groups.
| llamaimperative wrote:
| The bill was amended for training companies to have no
| liability for derivative models they don't have control
| over.
|
| This bill produces no new disincentive to open-sourcing
| models, AFAICT.
| wslh wrote:
| All that means the barriers to entry for startups
| skyrocket.
| SonOfLilit wrote:
| Startups that spend >$100mm on one training run...
| wslh wrote:
| There are startups and startups; the ones that you read
| about in the media are just a fraction of the worldwide
| reality.
| hn_throwaway_99 wrote:
| Yeah, I think the argument that "this just hurts open models"
| makes no sense given the supporters/detractors of this bill.
|
| The thing that large companies care the most about in the
| legal realm is _certainty_. They're obviously going to be a
| big target of lawsuits regardless, so they want to know that
| legislation is clear as to the ways they can act - their
| biggest fear is that you get a good "emotional sob story" in
| front of a court with a sympathetic jury. It sounded like
| this legislation was so vague that it would attract a horde
| of lawyers looking for ways to argue these big companies
| didn't take "reasonable" care.
| SonOfLilit wrote:
| Sob stories are definitely not covered by the text of the
| bill. The "critical harm" clause (ctrl-f this comment
| section for a full quote) is all about nuclear weapons and
| massive hacks and explicitly excludes "just" someone dying
| or getting injured with very clear language.
| benreesman wrote:
| Some laws are just _bad_. When the API-mediated /closed-
| weights companies agree with the open-weight/operator-aligned
| community that a law is bad, it's probably got to be pretty
| awful. That said, though my mind might be playing tricks on
| me, I seem to recall the big labs being in favor at one time.
|
| There are a number of related threads linked, but I'll
| personally highlight Jeremy Howard's open letter as IMHO the
| best-argued case against SB 1047.
|
| https://www.answer.ai/posts/2024-04-29-sb1047.html
| SonOfLilit wrote:
| > The definition of "covered model" within the bill is
| extremely broad, potentially encompassing a wide range of
| open-source models that pose minimal risk.
|
| What is this wide range of >$100mm open-source models he's
| thinking of? And who are the impacted small businesses that
| would be scared to train them (at a cost of >$100mm)
| without paying for legal counsel?
| shiroiushi wrote:
| It's too bad companies big and small didn't come together
| and successfully oppose the passage of the DMCA.
| fshbbdssbbgdd wrote:
| My understanding is that tech was politically weaker back
| then. Although there were some big tech companies, they
| didn't have as much of a lobbying operation.
| wrs wrote:
| As I remember it, among other reasons, tech companies
| really wanted "multimedia" (at the time, that meant DVDs)
| to migrate to PCs (this was called the "living room PC")
| and studios weren't about to allow that without legal
| protection.
| worstspotgain wrote:
| There were a lot of questionable Federal laws that made
| it through in the 90s, such as DOMA [1], PRWORA [2],
| IIRIRA [3], and perhaps the most maddening to me, DSHEA
| [4].
|
| [1] https://en.wikipedia.org/wiki/Defense_of_Marriage_Act
|
| [2] https://en.wikipedia.org/wiki/Personal_Responsibility
| _and_Wo...
|
| [3] https://en.wikipedia.org/wiki/Illegal_Immigration_Ref
| orm_and...
|
| [4] https://en.wikipedia.org/wiki/Dietary_Supplement_Heal
| th_and_...
| shiroiushi wrote:
| "Questionable" is a very charitable term to use here,
| especially for the DSHEA which basically just legalizes
| snake-oil scams.
| RockRobotRock wrote:
| No snark, but what's wrong with the DMCA? As I understand
| it, the idea was that it's infeasible for a site to take
| full liability for user-generated copyright infringement
| (so sites were granted safe harbor), but that they will be
| liable if they ignore takedown notices.
| shiroiushi wrote:
| The biggest problem with it, AFAICT, is that it allows
| anyone who claims to hold a copyright to maliciously take
| down material they don't like by filing a DMCA notice.
| Companies receiving these notices have to follow a
| process to reinstate material that was falsely claimed,
| so many times they don't bother. There's no mechanism to
| punish companies that abuse this.
| worstspotgain wrote:
| Among other things, quoth the EFF:
|
| "Thanks to fair use, you have a legal right to use
| copyrighted material without permission or payment. But
| thanks to Section 1201, you do not have the right to
| break any digital locks that might prevent you from
| engaging in that fair use. And this, in turn, has had a
| host of unintended consequences, such as impeding the
| right to repair."
|
| https://www.eff.org/deeplinks/2020/07/what-really-does-
| and-d...
| RockRobotRock wrote:
| forgot about the anti-circumvention clause ;(((
|
| that's the worst
| stego-tech wrote:
| > When the API-mediated/closed-weights companies agree with
| the open-weight/operator-aligned community that a law is
| bad, it's probably got to be pretty awful.
|
| I'd be careful with that cognitive bias, because obviously
| companies dumping poison into water sources are going to be
| opposed to laws that would prohibit them from dumping
| poison into water sources.
|
| Always consider the broader narrative in addition to the
| specific narratives of the players involved. Personally,
| I'm on the side of the fence that's grumpy Newsom vetoed
| it, because it stymies the larger discussion about
| regulations on AI in general (not just LLMs) in the classic
| trap of "any law that isn't absolutely perfect and
| addresses all known and unknown problems is automatically
| bad" often used to kill desperately needed reforms or
| regulations, regardless of industry. Instead of being able
| to build on the momentum of passed legislation and improve
| on it elsewhere, we now have to deal with the giant cudgel
| from the industry and its supporters of "even CA vetoed it
| so why are you still fighting against it?"
| benreesman wrote:
| I'd advise anyone to conduct their career under the
| assumption that all data is public.
| stego-tech wrote:
| As a wise SysAdmin once told me when I was struggling
| with my tone in writing: "assume what you're writing will
| be read aloud in Court someday."
| bigmattystyles wrote:
| It's probably a Google search away, but if I've typed
| something in Slack/Outlook/whatever and not sent it
| because I then thought better of it, did the app still
| record it somewhere? I'm almost sure it did, and I would
| like to apologize in advance to my senior leadership...
| stego-tech wrote:
| That depends greatly on your tooling, your company, as
| well as the skills and ethics of your Enterprise IT team.
|
| Generally speaking, it's in our and the company's best
| interests to keep as little data as possible for two big
| reasons: legal discovery and cost. Unless we're
| explicitly required to retain historical records, it's a
| legal and fiscal risk to keep excess data around.
|
| That said, there are situations where your input is
| captured and stored regardless of whether it's sent. As
| you said, whether it does or not is often a simple search
| away.
| seanhunter wrote:
| As someone who was once asked under oath "What did you
| mean when you sent the email describing the meeting as a
| 'complete clusterfuck'?" I can attest to the wisdom of
| those words.
| wrsh07 wrote:
| I would note that Facebook and Google were opposed to,
| e.g., GDPR, although it gave them a larger share of the
| pie.
|
| When framed like that: why be opposed, it hurts your
| competition? The answer is something like: it shrinks the pie
| or reduces the growth rate, and that's bad (for them and
| others)
|
| The economics of this bill aren't clear to me (how large of a
| fine would Google/Microsoft pay in expectation within the
| next ten years?), but they maybe also aren't clear to
| Google/Microsoft (and that alone could be a reason to oppose)
|
| Many of the AI safety crowd were very supportive, and I
| would recommend reading Zvi's writing on it if you want
| their take.
| nisten wrote:
| Because it's a law that originally intended to put
| open-source developers in jail.
| rllearneratwork wrote:
| because it was a stupid law which would hurt AI innovation
| mattmaroon wrote:
| First they came for the open models...
| dragonwriter wrote:
| > Excellent move by Newsom. [...] It was basically a torpedo
| against open models.
|
| He vetoed it in part because the thresholds at which it
| applies at all are well beyond _any_ current models, and he
| wants something that will impose greater restrictions on
| more models - much smaller/lower-training-compute ones that
| this would have left alone entirely.
|
| > Market leaders like OpenAI and Anthropic weren't really
| worried about it, or about open models in general.
|
| OpenAI (along with Google and Meta) led the institutional
| opposition to the bill; Anthropic was a major advocate for it.
| worstspotgain wrote:
| > He vetoed it in part because the thresholds at which it
| applies at all are well beyond any current models, and he
| wants something that will impose greater restrictions on
| more models - much smaller/lower-training-compute ones that
| this would have left alone entirely.
|
| Well, we'll see what passes again and when. By then there'll
| be more kittens out of the bag too.
|
| > Anthropic was a major advocate for it.
|
| I don't know about being a major advocate, the last I read
| was "cautious support" [1]. Perhaps Anthropic sees Llama as a
| bigger competitor of theirs than I do, but it could also just
| be PR.
|
| [1] https://thejournal.com/articles/2024/08/26/anthropic-
| offers-...
| FeepingCreature wrote:
| > I don't know about being a major advocate, the last I
| read was "cautious support" [1]. Perhaps Anthropic sees
| Llama as a bigger competitor of theirs than I do, but it
| could also just be PR.
|
| This seems a curious dichotomy. Can we at least consider
| the possibility that they mean the words they say or is
| that off the table?
| worstspotgain wrote:
| Just two spitballing conjectures, not meant to be a
| dichotomy. If you have first-hand knowledge please
| contribute.
| arduanika wrote:
| He's a politician, and his stated reason for the veto is not
| necessarily his real reason for the veto.
| jodleif wrote:
| Makes perfect sense, since he's elected based on public
| positions.
| ants_everywhere wrote:
| This is the ideal, but it's often false in meaningful
| ways. In several US elections, for example, we've seen
| audio leaked of politicians promising policies to their
| donors that would be embarrassing if widely publicly
| known by the electorate.
|
| This suggests that politicians and donors sometimes
| collude to deliberately misrepresent their views to the
| public in order to secure election.
| jnaz343 wrote:
| Sometimes? lol
| mistrial9 wrote:
| worse.. a first-hand quote from inside a California
| Senate committee hearing chamber.. "Don't speak it if you
| can nod, and don't nod if you can wink" .. translated,
| that means that in a contentious situation with others in
| the room, if allies can signal without speaking the words
| out loud, that is better.. and if the signal can be
| hidden, better still.
| duped wrote:
| This is an old saying in politics and you're
| misinterpreting it - it's not about signaling to allies,
| it's about avoiding being held to any particular
| positions.
|
| You're also missing the first half, "don't write if you
| can speak, don't speak if you can nod, and don't nod if
| you can wink." The point is not to commit to anything if
| you don't have to.
| raverbashing wrote:
| Anthropic was championing a lot of FUD in the AI area
| inferiorhuman wrote:
| Newsom vetoed the bill as a nod to his donors, plain and
| simple. Same reason he just signed a bill allowing specific
| customers at a single venue to be served alcohol later than 2
| AM. Same reason he carved out a minimum wage exemption for
| Panera. Same reason he signed a bill to carve out a junk fee
| exemption specifically for restaurants.
|
| He's just planning for a post-governor career.
| ldbooth wrote:
| Same reason the governor-appointed public utilities
| commission has allowed PG&E to raise rates 4 times in a
| single year without legitimate oversight. Yeah,
| unfortunately all roads point to his donors with this
| smooth talker, cost of living be damned.
| iluvcommunism wrote:
| On the east coast we don't need the government to control
| electricity prices. And our electricity is cheaper. Go
| figure.
| ldbooth wrote:
| Companies like Dominion Energy, Duke Energy, and
| Consolidated Edison are regulated by state utility
| commissions, same as in California.
| inferiorhuman wrote:
| Oh yeah I forgot this one -- basically making it easier
| to force neighborhoods to abandon their natural gas
| infrastructure. Something I'd be in favor of were it not
| for the constant stream of electric rate hikes.
|
| https://www.kqed.org/news/12006711/newsom-signs-bill-to-
| help...
| wseqyrku wrote:
| When you're a politician and have a business hobby
| burningChrome wrote:
| >> He's just planning for a post-governor career.
|
| After this year, many Democrats are as well - which is why
| Harris had such a hard time finding a VP and took Walz, who
| was like the last kid you pick for your dodgeball team.
|
| The presidential race in 2028 for the Democrats is going to
| have one of the deepest benches for talent I've seen in a
| long time. Newsom and Shapiro will be at the top of the
| list for sure.
|
| But I agree, Newsom has been making some decisions lately
| that seem to indicate he's trying to clean up his image and
| look more "moderate" for the coming election cycles.
| taurath wrote:
| > Newsom and Shapiro will be at the top of the list for
| sure.
|
| Neither has genuine appeal. Shapiro is a really, really
| poor speaker and has few credentials except as a
| moderate. Newsom is the definition of coastal elite. Both
| have spoken, neither have been heard.
| worstspotgain wrote:
| 2032, nice try though. Besides, we're not going to need
| to vote anymore otherwise, remember? The 2028 Democratic
| Primary would be a pro-forma affair between Barron vs.
| Mayor McCheese.
| EasyMark wrote:
| Yes, they definitely need to slow their roll and sit back and
| listen to both sides of this instead of those who think AGI
| will happen in a year or two and the T-1000s are coming for
| them. I think LLMs have a bright future, especially as more
| hardware is built specifically for them. The market can fix
| most of the problems, and when it becomes evident we're
| heading in the wrong direction or monopolies and abuses
| occur, that's when the government needs to step in - not
| based on broad speculation from the fringe of either side.
| pbreit wrote:
| Bills that could kill major new industries need to be
| reactive, if they're passed at all. This was a terrible
| bill. Thank you, Governor.
| fwip wrote:
| If the new industry is inherently unsafe, it is better to be
| proactive.
| SonOfLilit wrote:
| I wondered if the article was over-dramatizing what risks were
| covered by the bill, so I read the text:
|
| (g) (1) "Critical harm" means any of the following harms caused
| or materially enabled by a covered model or covered model
| derivative:
|
| (A) The creation or use of a chemical, biological, radiological,
| or nuclear weapon in a manner that results in mass casualties.
|
| (B) Mass casualties or at least five hundred million dollars
| ($500,000,000) of damage resulting from cyberattacks on critical
| infrastructure by a model conducting, or providing precise
| instructions for conducting, a cyberattack or series of
| cyberattacks on critical infrastructure.
|
| (C) Mass casualties or at least five hundred million dollars
| ($500,000,000) of damage resulting from an artificial
| intelligence model engaging in conduct that does both of the
| following:
|
| (i) Acts with limited human oversight, intervention, or
| supervision.
|
| (ii) Results in death, great bodily injury, property damage, or
| property loss, and would, if committed by a human, constitute a
| crime specified in the Penal Code that requires intent,
| recklessness, or gross negligence, or the solicitation or aiding
| and abetting of such a crime.
|
| (D) Other grave harms to public safety and security that are of
| comparable severity to the harms described in subparagraphs (A)
| to (C), inclusive.
|
| (2) "Critical harm" does not include any of the following:
|
| (A) Harms caused or materially enabled by information that a
| covered model or covered model derivative outputs if the
| information is otherwise reasonably publicly accessible by an
| ordinary person from sources other than a covered model or
| covered model derivative.
|
| (B) Harms caused or materially enabled by a covered model
| combined with other software, including other models, if the
| covered model did not materially contribute to the other
| software's ability to cause or materially enable the harm.
|
| (C) Harms that are not caused or materially enabled by the
| developer's creation, storage, use, or release of a covered model
| or covered model derivative.
| handfuloflight wrote:
| Does Newsom believe that an AI model can do this damage
| autonomously or does he understand it must be wielded and
| overseen by humans to do so?
|
| In that case, how much of an enabler is an AI in meeting those
| destructive ends when, if humans can use AI to conduct the
| damage, they can surely do it without the AI as well?
|
| The potential for destruction exists either way but is the
| concern that AI makes this more accessible and effective?
| What's the boogeyman? I don't think these models have private
| information regarding infrastructure and systems that could be
| exploited.
| SonOfLilit wrote:
| "Critical harm" does not include any of the following: (A)
| Harms caused or materially enabled by information that a
| covered model or covered model derivative outputs if the
| information is otherwise reasonably publicly accessible by an
| ordinary person from sources other than a covered model or
| covered model derivative.
|
| The bogeyman is not these models, it's future agentic
| autonomous ones, if and when they can hack major
| infrastructure or build nukes. The quoted text is very very
| clear on that.
| handfuloflight wrote:
| Ah, thank you, skipped over that part.
| caseyy wrote:
| I am not convinced the text means what you say it means.
|
| All knowledge (publicly available and not) and all tools
| (AI or not) can be used by people in material ways to
| commit the aforementioned atrocities, but only the models
| producing novel knowledge would be at risk. I hope you can
| see how this law would stifle AI advancement. The boundary
| between what's acceptable and not would be drawn at
| generating novel, publicly unavailable information; not at
| information that could be used to harm - because all
| information can be used that way.
|
| What if AI solves fusion and countries around the world
| start building fusion weapons of mass destruction? What if
| it solves personalized gene therapy and armed forces
| worldwide develop weapons that selectively kill their
| wartime foes? Should we not have split the atom just
| because that power was co-opted for evil means, or should
| we have not done the contraception research just because
| the Third Reich used it for sterilization in their war
| crimes? This bill would work towards making AI never invent
| novel things, simply out of fear that those inventions will
| be corrupted by people, as they have been in history. It
| would only slow research and whenever the (slower) research
| makes its discoveries, they would still get corrupted. In
| other words, there would be no change in human propensity
| to hurt others with knowledge, simply less knowledge.
|
| Besides, the text is not "very very clear" on AI if and
| when it hacks major infrastructure or builds nukes. If it
| were "very very clear" on that, that is what it would say
| :) - "an AI model is prohibited from being the
| decision-making agent, solely instigating critical harm to
| humans". But what the text says is different.
|
| I agree that AI harms to people and humanity need to be
| minimized but this bill is a miss rather than a hit and the
| veto is good news. We know AI alignment is needed. Other
| bills will come.
| bunabhucan wrote:
| I'm pretty sure there are a few hundred fusion WMDs in
| silos a few hours north of me; we've had this kind of
| weapon since 1952.
| caseyy wrote:
| Nice fact check, thank you. I didn't know H-bombs used
| fusion but it makes complete sense. Hydrogen is not
| exactly the heaviest of elements :)
|
| Well then, for my example, imagine a different future
| discovery that could be abused. Let's say biological
| robots, or some new source of useful energy that is
| misused. Warring humans find ways to corrupt many
| scientific achievements for evil.
| bunabhucan wrote:
| Sharks with frikkin lasers.
| anigbrowl wrote:
| Newsom is the governor who vetoed the bill, not the lawmaker
| who authored it.
| concordDance wrote:
| > Does Newsom believe that an AI model can do this damage
| autonomously or does he understand it must be wielded and
| overseen by humans to do so?
|
| AI models might not be able to, but an AI _system_ that uses
| a powerful model might be able to cause damage (including
| extinction of humanity in the more distant future) unintended
| and unforeseen by its creators.
|
| The more complex and unpredictable the system the harder it
| is to properly oversee.
| dang wrote:
| (I added newlines to your quote to match what looked like the
| intended formatting - I hope that's ok. Since HN doesn't do
| indentation I'm not sure it helps all that much...)
| ketzo wrote:
| I'm sure people have asked this before, but would HN ever add
| a little more rich-text? Even just bullet points and indents
| might be nice.
| slater wrote:
| And maybe also make new lines in the comment box translate
| to new lines in the resulting comment...? :D
| dang wrote:
| That's actually a really good point. I've never looked
| into that, I just took it for granted that to get a line
| break on HN you need two consecutive newline chars.
|
| I guess the thing to do would be to look at all (well,
| lots of) comments that have single newlines and see what
| would break if they were rendered as actual newlines.
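|
| A minimal sketch of that audit (in Python, assuming the
| comments are available as plain strings; comments below is
| a stand-in sample, not HN's actual data):
|
|   import re
|
|   def has_single_newline(text: str) -> bool:
|       # A lone "\n" that isn't part of a blank-line break.
|       return re.search(r"(?<!\n)\n(?!\n)", text) is not None
|
|   comments = ["line one\nline two", "para one\n\npara two"]
|   flagged = [c for c in comments if has_single_newline(c)]
|   print(f"{len(flagged)} of {len(comments)} would change")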
| Matheus28 wrote:
| Could be applied to all comments made after a certain
| future date. That way nothing in the past is poorly
| formatted
| slater wrote:
| Or just brute-force it with str_replace of all "\n"
| with "</p>\n<p>" and then remove all the empty "<p></p>".
|
| (why yes, i _am_ a PHP guy, why do you ask?)
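|
| A rough Python equivalent of that brute-force pass (just a
| sketch of the idea, not HN's actual renderer):
|
|   def naive_paragraphs(text: str) -> str:
|       # Treat every newline as a paragraph break, then drop
|       # the empty paragraphs that blank lines produce.
|       html = "<p>" + text.replace("\n", "</p>\n<p>") + "</p>"
|       return html.replace("<p></p>", "")
|
|   print(naive_paragraphs("line one\nline two\n\nnext para"))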
| dang wrote:
| Maybe. I'm paranoid about the unintended cost of
| improvements, but it's not an absolute position.
| w10-1 wrote:
| --- (2) "Critical harm" does not include any of the following:
|
| (A) Harms caused or materially enabled by information that a
| covered model or covered model derivative outputs if the
| information is otherwise reasonably publicly accessible by an
| ordinary person from sources other than a covered model or
| covered model derivative.
|
| ---
|
| This exception swallows any rule and fails to target the
| difference with AI: it's actually better than an ordinary
| person at assimilating multiple fact streams.
|
| That suggests this law is legislative theater: something
| designed to enlist interest and donations, i.e., to build a
| political franchise. That could be why it targets only the
| largest models, affecting only the biggest players, who have
| the most resources to donate per decision and the least
| goodwill to burn with opposition.
|
| Regulating AI would be a very difficult
| legislative/administrative task, on the order of a new tax code
| in its complexity. But it will be impossible if treated as a
| political franchise.
|
| As for self-regulation, with OpenAI's change to for-profit,
| the non-profit form is insufficient to maintain a public
| benefit focus. Permitting this conversion is on par with the
| 1990's+ conversion of nonprofit hospital systems to for-profit.
|
| AI's potential shines a bright light on our weakness in
| governance. While weak governance affords more opportunities,
| escaping the exploitation caused by governance failures is the
| engine of autocracy, and autocracy consumes property along with
| all other rights.
| hn_throwaway_99 wrote:
| Curious if anyone can point to some resources that summarize the
| pros/cons arguments of this legislation. Reading this article, my
| first thought is that I definitely agree it sounds impossibly
| vague for a piece of legislation - "reasonable care" and
| "unreasonable risk" sound like things that could be endlessly
| litigated.
|
| At the same time,
|
| > Computer scientists Geoffrey Hinton and Yoshua Bengio, who
| developed much of the technology on which the current generative-
| AI wave is based, were outspoken supporters. In addition, 119
| current and former employees at the biggest AI companies signed a
| letter urging its passage.
|
| These are obviously highly intelligent people (though I've
| definitely learned in my life that intelligence in one area, like
| AI and science, doesn't mean you should be trusted to give legal
| advice), so I'm curious to know why Hinton and Bengio supported
| the legislation so strongly.
| throwup238 wrote:
| California's Office of Legislative Counsel always provides a
| "digest" for every bill as part of its full text:
| https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...
|
| It's not an opinionated pros/cons list from the industry but
| it's probably the most neutral explanation of what the bill
| does.
| mmmore wrote:
| The concern is that near future systems will be much more
| capable than current systems, and by the time they arrive, it
| may be too late to react. Many people from the large frontier
| AI companies believe that world-changing AGI is 5 years or less
| away; see Situational Awareness by Aschenbrenner, for example.
| There's also a parallel concern that AIs could make terrorism
| easier[1].
|
| Yoshua Bengio has written in detail about his views on AI
| safety recently[2][3][4]. He seems to put less weight on human
| level AI being very soon, but says superhuman intelligence is
| plausible in 5-20 years and says:
|
| > Faced with that uncertainty, the magnitude of the risk of
| catastrophes or worse, extinction, and the fact that we did not
| anticipate the rapid progress in AI capabilities of recent
| years, agnostic prudence seems to me to be a much wiser path.
|
| Hinton also has a detailed lecture he's been giving recently
| about the loss of control risk.
|
| In general, proponents see this as a narrowly tailored bill
| to somewhat address the worst-case worries about loss of
| control and misuse.
|
| [1] https://www.theregister.com/2023/07/28/ai_senate_bioweapon/
|
| [2] https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-
| arise/
|
| [3] https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-
| ai-r...
|
| [4] https://yoshuabengio.org/2024/07/09/reasoning-through-
| argume...
| crazygringo wrote:
| > _impossibly vague for a piece of legislation - "reasonable
| care" and "unreasonable risk" sound like things that could be
| endlessly litigated._
|
| Nope, that's entirely standard legal stuff. Tort law deals
| exactly with those kinds of things, for instance. Yes it can
| certainly wind up in litigation, but the entire point is that
| if there's a gray area, a company should make sure it's
| operating entirely within the OK area -- or know it's taking a
| legal gamble if it tries to push the envelope.
|
| But it's generally pretty easy to stay in the clear if you
| establish common-sense processes around these things, with a
| clear paper trail and decisions approved by lawyers.
|
| Now the legislation can be bad for lots of other reasons, but
| "reasonable care" and "unreasonable risk" are not problematic.
| hn_throwaway_99 wrote:
| > but "reasonable care" and "unreasonable risk" are not
| problematic.
|
| Still strongly disagree, at least when it comes to AI
| legislation. Yes, I fully realize that there are
| "reasonableness" standards in lots of places of US
| jurisprudence, but when it comes to AI, given how new the
| tech is and how, perhaps more than any other recent
| technology, it is largely a "black box", meaning we don't
| really know how it works and we aren't really sure what its
| capabilities will ultimately be, I don't think anybody really
| knows what "reasonableness" means in this context.
| razakel wrote:
| Exactly. It's about as meaningful as passing a law making
| it illegal to be a criminal. Right, so what does that
| actually mean apart from "we'll decide when it happens"?
| leogao wrote:
| I looked into the question of what counts as reasonable care
| and wrote up my conclusions here:
| https://www.lesswrong.com/posts/kBg5eoXvLxQYyxD6R/my-takes-o...
| hn_throwaway_99 wrote:
| Thank you! Your post was really helpful in aiding my
| understanding, so I greatly appreciate it.
|
| Also, while reading your article I also fell onto
| https://www.brookings.edu/articles/misrepresentations-of-
| cal... while trying to understand some terms, and that also
| gave some really good info, e.g. the difference between a
| "reasonable assurance" language that was dropped from an
| earlier version of the bill and replaced with "reasonable
| care".
| ketzo wrote:
| This was a great post, thanks.
| svat wrote:
| Here's a post by the computer scientist Scott Aaronson on his
| blog, in support: https://scottaaronson.blog/?p=8269 -- it
| links to some earlier explainers, has some pro-con arguments,
| and further discussion in the comments.
| nisten wrote:
| Imagine being concerned about AI safety and then introducing a
| bill that had to be amended to change criminal responsibility
| of AI developers to civil legal responsibility for people who
| are trying to investigate and work openly on models.
|
| What's next, going after maintainers of Python packages... Is
| attacking transparency itself a good way to make AI safer?
| Yeah, no, it's f*king idiotic.
| Lonestar1440 wrote:
| This is no way to run a state. The Democrat-dominated legislature
| passes everything that comes before it (and rejects anything that
| the GOP touches, in committee) and then the Governor needs to
| veto the looniest 20% of them to keep us from falling into total
| chaos. This AI bill was far from the worst one.
|
| "Vote out the legislators!" but for who... the Republican party?
| And we don't even get a choice on the general ballot most of the
| time, thanks to "Open Primaries".
|
| It's good that Newsom is wise enough to muddle through, but this
| is an awful system.
|
| https://www.pressdemocrat.com/article/news/california-gov-ne...
| thinkingtoilet wrote:
| If California were its own country, it would be one of the
| biggest, most successful countries in the world. Like
| everywhere else it has its problems, but it's being run just
| fine.
| Objectively, there are many states that are far worse off in
| any key metric.
| toephu2 wrote:
| > but it's being run just fine
|
| As a Californian I have to disagree. The only reason you
| think it's being run just fine is because of the success of
| the private sector. The only reason California would be the
| 4th/5th largest economy in the world is because of the the
| tech industry and other industries that are in California
| (Hollywood, agriculture, etc). It's not because we have some
| awesome efficiently run state government.
| WWLink wrote:
| What are you getting at? Is a state government supposed to
| be profitable? LOL
| nashashmi wrote:
| Do you mean to say that the government was deeply
| underwater a few years ago? And the state so marred by
| forest fires that it was frightening to wonder if it could
| ever come back?
| cma wrote:
| > The only reason you think it's being run just fine is
| because of the success of the private sector.
|
| Tesla received billions in subsidies from CA as an example.
| labster wrote:
| I think California might have a better run government if it
| had an electable conservative party. The Republican Party
| is not that, being tied to the national Trump-Vance-Orban
| axis. A center-right party could hold Democratic officers
| accountable but it's not being offered and moderates
| gravitate to the electable Dem side. An independent
| California would largely fix that.
|
| As a lifelong California Democrat, I realize that my party
| does not have all the answers. But the conservatives have
| all gone AWOL or gone batshit so we're doing the best we
| can without the other half of the dialectic.
| dangus wrote:
| Pre-Trump Republicans had no problem absurdly mismanaging
| Kansas' coffers:
|
| https://en.wikipedia.org/wiki/Kansas_experiment
|
| I think the Republican Party's positive reputation for
| handling the economy and running an efficient government
| is entirely unearned.
|
| Closing down major parts of the government entirely (as
| Project 2025 proposes), making taxation more regressive,
| and offering fewer social services isn't "efficiency."
|
| I don't know if you know this but you're already in the
| center-right party. The actual problem is that there's no
| left of center party, as well as the general need for a
| number of aspects of our democracy to be reformed (like
| how it really doesn't allow for more than two parties to
| exist, or how out of control campaign finance rules have
| become).
| telotortium wrote:
| Democrats (especially in California) are somewhat to the
| right of socialist parties in Europe, and of course
| they're neoliberal. But on most non-economic social
| issues, they're quite far to the left compared to most
| European countries. So it really depends on what you
| consider more important to the left.
| anon291 wrote:
| California Republicans are extremely moderate (or at
| least, there have been moderate candidates for governor
| of California almost every year), so I have no idea what
| you're talking about. The last GOP governor of California
| was Arnold Schwarzenegger, who is a moderate republican
| by basically all standards.
| shiroiushi wrote:
| >It's not because we have some awesome efficiently run
| state government.
|
| Can you point to _any_ place in the world that has an
| "awesome efficiently run" government?
| logicchains wrote:
| Singapore, Dubai, Monaco..
| jandrewrogers wrote:
| We don't need to look at other countries, just look at
| other States. California is quite poorly run by the
| standards of other States. I'm a California native but
| I've lived in and worked with many other States. You
| don't realize how appallingly bad California government
| is until you have to work with their counterparts in
| other States.
|
| It isn't a red versus blue thing, even grift-y one-party
| States like Washington are plainly better run than
| California.
| dangus wrote:
| It's easy to disagree when you aren't looking at the
| grass that's not so green on the other side.
|
| California is run amazingly well compared to a significant
| number of states.
| ken47 wrote:
| You're going to attribute even a small % of this to
| politicians rather than the actual innovators? Sure, then
| let's say they're responsible for some small % of its
| success. They're smart enough to not nuke their own economy.
| LeroyRaz wrote:
| The state has one of the highest illiteracy rates in the
| whole country (28%). To me, that implies they have some issue
| of governance.
|
| Source: https://worldpopulationreview.com/state-rankings/us-
| literacy...
|
| To be fair in the comparison, the literacy statistics for the
| whole of the US are pretty shocking from a European
| perspective.
| hydrox24 wrote:
| For any others reading this, the _illiteracy_ rate is 23.1%
| in California according to the parent's source. This is
| indeed the highest illiteracy rate in the US, though.
|
| Having said that, I would have thought this was partially a
| measure of migration. Perhaps illegal migration?
| Eisenstein wrote:
| The "medium to high English literacy skills" is the part
| that is important. If you can read and write Chinese and
| Spanish and French and Portuguese and Esperanto at a high
| level, but not English at a medium to high level, you are
| 'illiterate' in this stat.
| 0_____0 wrote:
| The data you're showing doesn't appear to differentiate
| between "Can read English" and "Can read in _some language_
| ". Big immigrant population, same with New York. Having
| grown up in California I can tell you that there aren't 28%
| of kids coming out of public school who can't read
| anything.
|
| Edit to add: my own hometown had a lot of people who
| couldn't speak English. Lots of elderly mothers of Chinese
| immigrants whose adult children were in STEM and whose own
| kids were headed to uni. Not to say that's representative,
| but consider that a single percentage stat won't give you
| an accurate picture of what's going on.
| kortilla wrote:
| Not being able to read English in the US is bad though.
| It makes you a very inefficient citizen even though you
| can get by. Being literate in Chinese and not being able
| to read or even speak English is far worse than an
| illiterate person that can speak English in day to day
| interactions.
| swasheck wrote:
| which is why the statistics need to be carefully
| annotated. Lacking literacy at all is a different
| dimension than lacking fluency in the national lingua
| franca.
| t-3 wrote:
| The US has no official language. There are fluency
| requirements for the naturalized citizenship test, but
| those can be waived with 20 years of permanent residency.
| Citizens are under no obligation to be efficient for the
| sake of the government.
| kortilla wrote:
| Yes, there is no official language. There is also no
| official rule that you shouldn't be an asshole to
| everyone you interact with.
|
| It's still easy to be a shitty member of a community
| without breaking any laws. I would never move to a
| country permanently regardless of official language
| status if I couldn't speak the language required to ask
| where something is in the grocery store.
| cma wrote:
| The California tech industry will solve any concerns with
| this, we'll have Babelfish soon enough.
| telotortium wrote:
| Did you go to school in Oakland or Central Valley? That's
| where most of the illiterate children are going to
| school. I've never heard of a Chinese student in the US
| growing up illiterate, even if their parents don't know
| English at all.
| 0_____0 wrote:
| South Bay. And I didn't specify but I meant that the
| people who immigrated from abroad were not English
| speakers - those younger than 50 or so, even if born
| abroad, all seemed to be at least proficient in English.
|
| We had lots of Hispanic kids but not many who were super
| super fresh to the country. I'm sure the central valley
| was a whole different ball game.
| rootusrootus wrote:
| Maybe there is something missing from your analysis? By
| most metrics the US compares quite favorably to Europe.
| When you see something that seems like an outlier, perhaps
| turn down the arrogance and try to understand what you
| might be overlooking.
| LeroyRaz wrote:
| I don't know what your source for "by most metrics" is?
|
| As I understand it, the US is abysmal by many metrics
| (and also exceptional by others). E.g., murder rates and
| prison rates are exceptionally high in the US compared to
| Europe. Homelessness rates are exceptionally high in the
| US compared to Europe. Startup rates are (I believe)
| exceptionally high in the US compared to Europe.
| rootusrootus wrote:
| There's a huge problem trying to do cross-jurisdiction
| statistical comparisons even in the best case. Taking
| literacy as the current example, what does it mean to be
| literate, and how do you ensure that the definition in
| the US is the same as the definition in the UK is the
| same as the definition in Germany? And that's before you
| get to confounding factors like migration and related
| non-English proficiency.
|
| It's fun to poke at the US, I get it, but the numbers
| people love to quote online to win some kind of
| rhetorical battle frequently have little relationship to
| reality on the ground. I've done a lot of travel around
| the US and western Europe, and I see a lot of ups and
| downs everywhere. I don't see a lot of obvious wins,
| either, mostly just choices and trade-offs. The things I
| see in Europe that _are_ obviously better almost 100% of
| the time are a byproduct of more efficient funding due to
| higher density. All kinds of things are doable in the UK,
| for example, which couldn't really happen in (for
| example) Oregon, even though they have roughly the same
| land area. Having 15x as many taxpayers helps.
| anon291 wrote:
| The issue of governance is the massive hole in the
| US-Mexico border. Why California's government isn't
| joining the ranks of Texas, Arizona, etc., I cannot
| understand.
|
| Source: my mom was an adult ESL / Language / Literacy
| teacher.
| cscurmudgeon wrote:
| California is the largest recipient of federal money.
|
| https://usafacts.org/articles/which-states-rely-the-most-
| on-...
|
| (I know by population it will be different, but the argument
| here is around 'one of the biggest', which is not a per
| capita statement.)
|
| > Objectively, there are many states that are far worse off
| in any key metric
|
| You can apply the same logic to the USA.
|
| The USA is one of the biggest, most successful countries in
| the world. Like everywhere else it has its problems, but
| it's being run just fine. Objectively, there are many
| countries
| that are far worse off in any key metric.
| anigbrowl wrote:
| California is also the largest source of Federal revenue:
| https://www.irs.gov/statistics/soi-tax-stats-gross-
| collectio...
|
| As your link shows, a much smaller percentage of CA
| government revenue comes from the federal government vs.
| most other
| states; in that sense California is a net contributor
| rather than a net taker.
| kortilla wrote:
| What is success in your metric? are you just counting GDP of
| companies that happen to be located there? If so, that has
| very little relationship to how well the state is being run.
|
| It's very easy to make arguments that they are successful in
| spite of a terribly run state government and are being
| protected by federal laws keeping the loonies in check
| (interstate commerce clause, etc).
| peter422 wrote:
| So your argument is that the good things about the state
| have nothing to do with the governance, but all the bad
| things do? Just want to make sure I get your point.
|
| Also, I'd argue that if you broke down the contributions to
| the state's rules and regulations from the local
| governments, the ballot initiatives and the state
| government, the state government is creating the most
| benefit and least harm of the 3.
| strawhatguy wrote:
| I'd go stronger still: the good things about _any_ state
| have little to do with the governance.
|
| Innovators, makers, risk-takers, etc., are who make the
| good things happen. The very little that's needed is rule of
| law, and that's about it. Beyond that, it starts
| distorting society quickly: measures meant to help
| someone inevitably cost several someones else, _and_
| become weapons to beat down competitors.
| kortilla wrote:
| > So your argument is that the good things about the
| state have nothing to do with the governance, but all the
| bad things do? Just want to make sure I get your point.
|
| No, I'm saying people who think the state is successful
| because of its state government and not because it's a
| part of the US are out of touch. If California wasn't
| part of the US, Silicon Valley would be a shadow of
| itself or wouldn't exist at all.
|
| It thrives on being the tech Mecca for the youth of the
| entire US to go to school there and get jobs there. If
| there were immigration barriers there, there would be
| significant incentive to just go somewhere else in the US
| (NYC, Chicago, Miami, wherever). California has a massive
| GDP because that's where US citizens are congregating to
| do business, not because California is good at making
| businesses go. Remove the spigot of brain drain from the
| rest of the country and Cali would be fucked.
|
| Secondarily, Silicon Valley wouldn't have started at all
| without the funnel of money from the fed military, NASA,
| etc. But that's not worth dwelling on if the scenario is
| California leaving now.
|
| My overall point is that California has immense success
| due to reasons far outside of the control of its state
| government. The state has done very little to help the
| tech industry apart from maybe the ban on non-competes.
| When people start to credit the large GDP to the
| government, that's some super scary shit that leads to
| ideas that will quickly kill the golden goose.
| tightbookkeeper wrote:
| In this case the success is in spite of the governance rather
| than because of it.
|
| The golden age of California was a long time ago.
| dmix wrote:
| California was extremely successful for quite some time.
| They benefited from a large population boom and lots of
| industry developed or moved there. And surprisingly they
| were a Republican state from 1952 -> 1988.
| aagha wrote:
| LOL.
|
| California's GDP in 2023 was $3.8T, representing 14% of the
| total U.S. economy.
|
| If California were a country, it would be the 5th largest
| economy in the world and more productive than India and the
| United Kingdom.
| tightbookkeeper wrote:
| Yeah it's incredibly beautiful. People wish they could
| live there. And many large companies were built there in
| prior decades. This contradicts my comment how?
| jandrewrogers wrote:
| California is the most populous state in the US, larger
| than most European countries, it would be surprising if
| it didn't have a large GDP regardless of its economy. On
| a per capita basis, less populous tech-heavy States like
| Washington and Massachusetts have even higher GDP.
| anon291 wrote:
| Undoubtedly, California has a stellar economy, but you
| see, states like Texas, which are objectively awful to
| live in (flat, no interesting geography in the most
| populated parts of the state, terrible weather,
| hurricanes, etc), are also similarly well positioned in
| the rankings of GDP.
|
| If Texas were a country, it'd be the 8th largest economy
| in the world! This is a figure much less often cited.
| Texas has a smaller population (30 million v. 38 million)
| and is growing much faster in real terms (5.7% v. 2.1%).
|
| This is in spite of its objective awfulness. People are
| moving to Texas because of the economy. If Texas were in
| California's geographic position, one would imagine it to
| be an even more popular destination.
|
| This isn't an endorsement of the Texan government,
| because there are many things I disagree with them on.
| But the idea that California's economy is singularly
| unique in the United States is silly. Many states with
| objectively worse attributes are faring just as well, and
| may even be poised to overtake california.
|
| How embarrassing would it be for Texas, a hot, muggy swamp
| of a state with awful geography and terrible weather, to
| overtake beautiful California economically? To think
| people would actually eschew the ocean and the
| Mediterranean climate and perfect weather to move to
| Texas simply because California mismanaged the state so
| much. This is the real risk.
|
| Models show that by 2049, Texas will overtake California
| as the more populous and more economically productive
| state. Is that really the future you want? Is that the
| future California deserves? As a native Californian, I
| hope the state can turn itself around. It deserves to be
| a great state, but the path it's on is one of decline.
|
| One need just look at almost any metric. It's not just
| population or economy. Even by 'liberal' metrics, Texas
| is growing. For example, Texas has the largest growth
| rate in alternative energy sources:
| https://www.reuters.com/markets/commodities/texas-trumps-
| cal.... There's a very clear growth curve in Texas, while
| California's is much choppier and doesn't appear to be
| going in any particular direction. At some point
| Californians need to actually want to continue winning
| instead of resting on their laurels.
| hbarka wrote:
| High-speed trains would do even more for California and
| would be the envy of the rest of the country.
| oceanplexian wrote:
| Like most things, the facts bear out the exact opposite.
| The CA HSR has been such a complete failure that it's
| probably set back rail a decade or more. The only saving
| grace is Florida's privatized high speed rail, otherwise it
| would be a completely failed industry.
| shiroiushi wrote:
| You're not disproving the OP's assertion. His claim was
| that HSR (with the implication that it was actually built
| and working properly) would be good for California and be
| the envy of the rest of the country, and that seems to be
| true. The problem is that California tried to do HSR and
| completely bungled it somehow. Well, of course a bungled
| project that never gets completed isn't a great thing,
| that should go without saying.
|
| As for Florida's "HSR", it doesn't really qualify for the
| "HS" part. The fastest segment is only 200kph. At least
| it's built and working, which is nice and all, but it's
| not a real bullet train.
| (https://en.wikipedia.org/wiki/Brightline)
| anon291 wrote:
| Part of the problem is the transit activist's obsession
| with _public_ transit instead of just transit. At this
| rate, Brightline will likely have an HSR in California
| before the government does. We need to make private
| transit great again, and California should lead the way.
| Transit is transit, whether it's funded by government or
| private interests.
| aagha wrote:
| Thank you.
|
| I always think about this whenever someone says CA doesn't
| know what it's doing or it's being run wrong:
|
| California's GDP in 2023 was $3.8T, representing 14% of the
| total U.S. economy. If California were a country, it would be
| the 5th largest economy in the world and more productive than
| India and the United Kingdom.
| anon291 wrote:
| And Texas, with 10 million fewer people, is the eighth
| largest economy in the world, and growing at more than
| double the speed of California's. This accomplishment (which
| is something to be proud of) is not unique to California.
| nradov wrote:
| California is being terribly misgoverned, as you would expect
| in any single-party state. In some sense California has
| become like a petro-state afflicted by the resource curse:
| the tech industry throws off so much cash that the state runs
| reasonably well, not because of the government but in spite
| of it. We can afford to waste government resources on
| frivolous nonsense.
|
| And this isn't a partisan dig at Democrats. If Republicans
| controlled everything then the situation would be just as bad
| but in different ways.
| anon291 wrote:
| California has unique geographic features that make it well
| positioned. It also has strategic geographic resources (like
| oil). This is like using Saudi Arabia as a standard of
| governance since they have greatly improved material
| conditions using oil money.
|
| California does do several things very well. It also does
| several things poorly. Pointing out its current economic
| standing does not change that. The fallacy here is that we
| have to compare California against the best version of
| itself. Alaska is never going to be a California-level
| economy because the geography dictates that only a certain
| kind of person / company will set up there, for example. That
| doesn't mean the Alaskan government is necessarily bad. Every
| state has to work within its limits to achieve the best
| version of itself. Is california the best it could be? I
| think the answer is obviously no.
| dyauspitr wrote:
| Compared to whom? What is this hypothetical well-run state?
| Because it's hard to talk shit about a state whose economy
| ranks 8th among the world's nation-state economies.
| dredmorbius wrote:
| Thankfully there are no such similarly single-party states
| elsewhere in the Union dominated by another party, and if
| there were, their executives would similarly veto the most
| inane legislation passed.
|
| </s>
| rootusrootus wrote:
| The subtle rebranding of "Democratic Party" to "Democrat
| Party" is a pretty strong tell for a highly partisan
| perspective. How does
| California compare with similarly large Republican-dominated
| states? Anecdotally, I've seen a lot of really bad legislation
| originating from any legislature that has no meaningful
| opposition.
| anigbrowl wrote:
| It's such a long-running thing that it's hard to gauge
| whether it's deliberate or just loose usage.
|
| https://en.wikipedia.org/wiki/Democrat_Party_(epithet)
| dredmorbius wrote:
| It's rather decidedly a dog whistle presently.
| jimmygrapes wrote:
| The party isn't doing much lately to encourage the actual
| democracy part of the name, other than whining about the
| national popular vote every 4 years while knowing full well
| that's not how that process works.
| rootusrootus wrote:
| The Democratic Party has some warts, this is for sure,
| and they have a lot they could be doing to improve
| participation and input from the rank-and-file. However,
| attempting to subvert an election by any means possible
| is not yet one of those warts. This is emphatically _not_
| a case where "both sides suck equally."
| stufffer wrote:
| >subvert an election by any means possible is not yet one
| of those warts
|
| The Democrats are famous for trying to have 3rd party
| candidates stripped from ballots, straining smaller
| campaigns with the cost of fighting off endless lawsuits.
|
| Democrats invented the term lawfare.
| rootusrootus wrote:
| You think the Republicans don't do similar things?
|
| Republicans blazed a new trail in 2021, trying to
| actually change the outcome of an election _in progress_
| through force. This is not comparable to using legal
| processes. A better comparison might be the series of
| lawsuits the Republican Party attempted after force did
| not work. How many years until the guy who lost admitted
| that he lost? These actions strike at the very
| foundations of democracy. A non-trivial segment of
| citizens _still_ think the election was somehow stolen
| from them, despite an utter lack of anything like
| evidence. We will be feeling reverberations from these
| actions for decades.
| dangus wrote:
| Whining about the national popular vote every 4 years is
| literally an encouragement of increased actual democracy.
|
| Scrapping the electoral college would be one of the most
| lower case d democratic things this country could do.
|
| "Whining" is all you can do when you don't have the
| statehouse votes to pass an constitutional amendment.
| Lonestar1440 wrote:
| I'm a pretty pedantic person, but even I just use one or the
| other at random. I don't think it's a good idea to read into
| things like this.
| rootusrootus wrote:
| I will allow that there are going to be some innocent
| mixups. But the 'democrat party' epithet dates back almost
| a century.
| https://en.wikipedia.org/wiki/Democrat_Party_(epithet)
|
| If you care about the perception of what you write, this is
| one of those things that will quickly steer your audience
| one way or the other. It has become so consistent that I
| would personally try not to get it wrong lest it distract
| from the point I'm trying to express.
| Lonestar1440 wrote:
| I didn't write "Democrat Party", I wrote "Democrat-
| Dominated". I am a registered Democrat. The broader
| partisan group I belong to are "Democrats" even if the
| organization formally calls itself the "Democratic
| party".
| dehrmann wrote:
| Not sure if Newsom is actually wise enough or if his
| presidential ambitions moderate his policies.
| ravenstine wrote:
| It could be presidential ambitions, though I suspect his
| recent policies have been merely a way of not giving
| conservatives more ammo leading up to the 2024 election. The
| way he's been behaving recently is in stark contrast to
| pretty much everything he's done during and before his
| governorship. I don't think it's because he's suddenly any
| wiser.
| jart wrote:
| Newsom was a successful entrepreneur in the 1990s who built
| wineries. That alone would make him popular with
| conservative voters nationwide. What did Newsom do before
| that you thought would alienate them? Being pro-gay and
| pro-drug before it was cool? Come on. The way I see it, if
| Newsom was nuts enough to run for president, then he could
| unite left and right in a way that has not happened in a
| very long time.
| kanwisher wrote:
| No one even slightly right of center would vote for him; he
| is the poster child of the homeless-industrial complex,
| anti-business, and generally promotes social policies that
| only the most fringe left-wingers are excited about.
| gdiamos wrote:
| If that bill had passed I would have seriously considered moving
| my AI company out of the state.
| elicksaur wrote:
| Nothing like this should pass until the legislators can come up
| with a definition that doesn't encompass basically every computer
| program ever written:
|
| (b) "Artificial intelligence model" means a machine-based system
| that can make predictions, recommendations, or decisions
| influencing real or virtual environments and can use model
| inference to formulate options for information or action.
|
| Yes, they limited the scope of law by further defining "covered
| model", but the above shouldn't be the baseline definition of
| "Artificial intelligence model."
|
| Text: https://legiscan.com/CA/text/SB1047/id/2919384
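|
| For illustration (my own toy example, not from the bill): even
| an ordinary least-squares line fit arguably satisfies every
| clause of that baseline definition - it's machine-based, uses
| model inference, and makes predictions that influence a real
| environment.
|
|     # A complete "AI model" under definition (b)? It performs
|     # "model inference" and its prediction influences a real
|     # environment (a thermostat). Illustrative sketch only.
|     def fit(xs, ys):
|         n = len(xs)
|         mx, my = sum(xs) / n, sum(ys) / n
|         num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
|         den = sum((x - mx) ** 2 for x in xs)
|         k = num / den
|         return k, my - k * mx
|
|     k, b = fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
|     print("recommended setpoint:", k * 5 + b)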
| StarterPro wrote:
| Whaaat? The sleazy Governor sided with the tech companies??
|
| I'll have to go get a thesaurus; "shocked" won't cover how
| I'm feeling rn.
| blackeyeblitzar wrote:
| It is strange to see Newsom make good moves like this but then
| also do things like veto bipartisan-supported reporting and
| transparency requirements for the state's homeless programs.
| What is his political strategy exactly?
| scoofy wrote:
| Newsom vetoes so many bills that it's hard to see why the
| legislature should even be taken seriously. Our Dem trifecta
| state has effectively been captured by the executive.
| dyauspitr wrote:
| As opposed to what? The supermajority red states where
| gerrymandered districts look like corn mazes and the economy
| is in the shitter?
| stuaxo wrote:
| This is good - they were trying to legislate against future
| competitors.
| reducesuffering wrote:
| _taps the sign_
|
| "Mitigating the risk of extinction from AI should be a global
| priority alongside other societal-scale risks such as pandemics
| and nuclear war." - Geoffrey Hinton, Yoshua Bengio, Sam Altman,
| Bill Gates, Vitalik Buterin, Ilya Sutskever, Demis Hassabis
|
| "Development of superhuman machine intelligence is probably the
| greatest threat to the continued existence of humanity. There are
| other threats that I think are more certain to happen but are
| unlikely to destroy every human in the universe in the way that
| SMI could." - Sam Altman
|
| "I actually think the risk is more than 50%, of the existential
| threat." - Geoffrey Hinton
|
| "Currently, we don't have a solution for steering or controlling
| a potentially superintelligent AI, and preventing it from going
| rogue." - OpenAI
|
| "while we are racing towards AGI or even ASI, nobody currently
| knows how such an AGI or ASI could be made to behave morally, or
| at least behave as intended by its developers and not turn
| against humans." - Yoshua Bengio
|
| "very soon they're going to be, they may very well be more
| intelligent than us and far more intelligent than us. And at that
| point, we will be receding into the background in some sense. We
| will have handed the baton over to our successors, for better or
| for worse.
|
| But it's happening over a period of a few years. It's like a
| tidal wave that is washing over us at unprecedented and
| unimagined speeds. And to me, it's quite terrifying because it
| suggests that everything that I used to believe was the case is
| being overturned." - Douglas Hofstadter
|
| The Social Dilemma was discussed here with much praise for how
| profit incentives caused mass societal issues in social media.
| I'm astounded it's fallen on deaf ears when the same people
| also made The AI Dilemma, describing the parallels coming with
| AGI:
|
| https://www.youtube.com/watch?v=xoVJKj8lcNQ
| dyauspitr wrote:
| Newsom has been on fire lately.
| sandspar wrote:
| Newsom wants to run for president in 4 years; AI companies will
| be rich in 4 years; Newsom will need donations from rich
| companies in 4 years.
| LarsDu88 wrote:
| Terrible piece of legislation. Glad the governor took it down.
| This is what regulatory capture looks like. Someone commoditized
| your product, so you make it illegal for them to continue making
| your stuff free.
|
| Might as well make Linux illegal so everyone is forced to use
| Microsoft and Apple.
| xpe wrote:
| I disagree.
|
| On this topic, I'm seeing too many ideological and uninformed
| claims.
|
| It is hard for many aspiring AI startup founders to rationally
| and neutrally assess the AI landscape, pros and cons.
| weebull wrote:
| I suspect this was vetoed more for reasons of not wanting to
| handicap California in the "AI race" than anything else.
| EasyMark wrote:
| It's also what makes companies realize there are 49 other
| states, and a couple hundred other countries. California has
| a rare zeitgeist of tech and universities, but nothing that
| can't be reproduced elsewhere with enough dollars and
| promises.
| indigo0086 wrote:
| Logical fallacies are built into the article headline.
| m3kw9 wrote:
| All he needed to see is how Europe is doing with these
| regulations
| sgt wrote:
| What is currently happening (or what is the impact) with
| those regulations in the EU?
| renewiltord wrote:
| It's making it nice for Americans to vacation there.
| tsunamifury wrote:
| Scott Wiener is a total fraud. He passes hot-concept bills,
| then cuts out loopholes for his "friends".
|
| He should be ignored at least and voted out.
|
| He's a total POS.
| water9 wrote:
| I'm so sick of people restricting freedoms and access to
| knowledge in the name of safety. Tyranny always comes in the
| form of "it's for your own good/safety."
| dandanua wrote:
| Sure, why don't we just let everyone build nukes and use them
| on anyone they don't like? Knowledge is power. The BEST power
| you can get.
| anon291 wrote:
| You cannot seriously compare nuclear materials delivery /
| handling to the creation of model weights and computation
| richrichie wrote:
| I am disappointed that there are no climate change regulations on
| AI models. Large-scale ML businesses are massive carbon emitters,
| not counting the whimsical training of NNs by every other IT
| person. This needs to be regulated.
| anon291 wrote:
| California already has cap and trade. There doesn't seem to be
| a need for further regulation. If there's a problem with
| emissions, adjust the pricing. That's the purpose of cap and
| trade.
| az226 wrote:
| Based.
| pmcf wrote:
| Not today, regulatory capture. Not today.
| xyst wrote:
| I don't understand why it was vetoed or why this was even
| proposed. But leaving a comment here to analyze later.
| dazzaji wrote:
| Among the good reasons for SB-1047 to have been vetoed is that
| it would have regulated the wrong thing. Here's a great
| statement of this basic flaw:
| https://law.mit.edu/pub/regulatesystemsnotmodels
|
| Not speaking for MIT here, but that bill needs a veto and a deep
| redraft.
| gerash wrote:
| I'm trying to parse this proposed law.
|
| What does a "full shutdown" mean in the context of an LLM?
| Stopping the servers from serving requests? It sounds silly idk.
| unit149 wrote:
| Much like UAW, a union for industrial machinists and academics,
| this bill has united VCs and members of the agrarian farming
| community. Establishing an entity under the guise of the Board of
| Frontier Models parallels efforts at Jekyll Island under
| Wilsonian idealism. Technological Keynesianism is on the horizon.
| These are birth pangs - its first gasps.
| badsandwitch wrote:
| The race for true AI is on and the fruits are the economic
| marginalization of humanity. No game theoretic actor in the
| running will shy away from the race. Anyone who claims they will
| for the 'good of humanity' is lying or was never a contender.
|
| This is information technology we are talking about, it's
| virtually the exact opposite of nuclear weapons. Refining uranium
| vs. manufacturing multi-purpose silicon and guzzling electricity.
| Achieving deterrence vs. gaining immeasurable wealth, power and
| freedom from labor.
|
| This race may even be the answer to the Fermi paradox - that
| there are few individual winners and that they pull up the
| ladders behind them.
|
| This is not the kind of race any legislation will have meaningful
| effect on.
|
| The race is on, and you'd better commit to a faction that may
| deliver.
| h0l0cube wrote:
| > This race may even be the answer to the Fermi paradox
|
| The mostly unchallenged popular notion that fleshy human
| intelligence will still be running the show 100s - let alone
| 1000s - of years from now is very naive. We're nearing the
| end of human supremacy, though most of us won't live to see
| that end.
| kjkjadksj wrote:
| To be fair, fleshy human intelligence has hardly been running
| the show any more than a bear eating a salmon out of a river
| thus far. We'd like to think we can control the world, yet
| any data scientist will tell you that what we actually
| control and understand is very little, or at best a sweeping
| oversimplification of this complex world.
| concordDance wrote:
| > The race is on and you better commit to a faction that may
| deliver.
|
| How does that help?
|
| The giant does not care whether the ant he steps on worships
| him or not. Regardless of whether the AI is autonomous, why
| should its controllers help you?
| xpe wrote:
| > Anyone who claims they will for the 'good of humanity' is
| lying or was never a contender.
|
| An overreach.
|
| Some people and organizations are more aligned with humanity's
| well being and survival than others.
| GistNoesis wrote:
| So I have this code, called ShoggothDb; it's less than a
| megabyte of definitions. The principle is simple: it's fully
| deterministic.
|
| Code as Data, Data as Code.
|
| When you start the program, it joins the swarm: it starts by
| grabbing a torrent, trains a model on it in a distributed
| fashion, and publishes the results as a torrent. Then, with
| the trained model, it generates new data (think of it like
| AlphaGo playing new games to collect more data).
|
| See it as a tower of knowledge building itself, following some
| rough initial plans.
|
| Of course, at any time you can fork the tower and continue
| building with different plans, provided that you can convince
| other people from the swarm to contribute to the new tower
| rather than the old.
|
| Everything is immutable, but there is a built-in versioning
| protocol that allows the swarm to coordinate and
| automatically jump to the next fork when the
| byzantine-resistant quorum it follows votes to do so (which
| allows your swarm to stay compliant with the law and remove
| data if it is flagged as inappropriate). This allows some
| form of external control, but you can also let the quorum
| vote on subsequent modifications based on a model built on
| its data (aka free-running mode).
|
| It uses torrents because they're easier to bootstrap, but
| because the whole computation is deterministic, the
| underlying protocol is just files on disk, and any way of
| sharing them is valid. So you can grab a piece to work on via
| HTTP, or FTP, or carrier pigeon for all I care. As long as
| the digital signatures conform to the rules, brick by brick
| the tower will get built.
|
| To contribute, you can help with file distribution by sharing
| the torrent, which is as safe as your p2p client. Or, if you
| want to commit computing resources like your GPU to building
| some of the tower, you only have to trust that there is no
| bug in ShoggothDb, because the computations you'll perform
| are composed of safe blocks; by construction they are safe
| for your computer to run. (Unless you want to run unsafe
| blocks, at which point no guarantee can be made.)
|
| The incentives for helping build the tower can be set in the
| initial definition file, and range from mere access to the
| built tower to tokens for honest participation, for the more
| materialistically inclined.
|
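| A toy sketch of the "brick" rule as I understand it
| (hypothetical names, stdlib only; signatures and networking
| omitted): each brick is content-addressed by hashing its
| parent plus a canonical serialization of its payload, so any
| peer can verify a tower offline.
|
|     import hashlib, json
|
|     def brick_id(parent, payload):
|         # Identity = hash of parent id + deterministic payload
|         blob = json.dumps({"p": parent, "d": payload},
|                           sort_keys=True).encode()
|         return hashlib.sha256(blob).hexdigest()
|
|     def verify_tower(bricks):
|         # Accept only if every brick hashes onto its parent;
|         # a fork is just a chain sharing a prefix.
|         parent = "genesis"
|         for b in bricks:
|             if b["id"] != brick_id(parent, b["payload"]):
|                 return False
|             parent = b["id"]
|         return True
|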
| Is it OK to release under the new law? Is this comment OK,
| given that ShoggothDB5-o can build its source from the specs
| in this comment?
| dgellow wrote:
| There is no new law
| alkonaut wrote:
| The immediate danger of large AI models isn't that they'll
| turn the earth to paperclips; it's that we'll create
| fraud-as-a-service and end up with a society where nothing
| can be trusted. I'd be all for a law (however clumsy) that
| required image, audio, or video content produced by models
| with over X parameters to be marked with metadata saying it's
| AI generated. Creating models that don't tag their output as
| such would be banned. So far, nothing strange about the law.
| The obvious problem is that you would also need to outlaw
| even _screenshotting an AI image and reposting it online
| without the made-with-AI metadata_. And that would be an
| absolute mess to enforce, at least for images.
|
| But most importantly: whatever we do in this space has to be
| done on the assumption that we can't really influence what
| "bad actors" do. Yes, being responsible means leaving money
| on the table. So money has to be left on the table, for -
| erm - less responsible nations to pick up. That's just a
| fact.
| worldsayshi wrote:
| Any law that tries to categorize non-trustworthy content seems
| doomed to fail. We need to find better ways to communicate
| trustworthiness, not the other way around. (And I'm not sure
| adding more laws can help here.)
| alkonaut wrote:
| No, I don't think technical means will work fully either. But
| the thing about these regulations is that you can basically
| cover the 99% case by just thinking about the 5 largest
| players in the field, be it regulation for social media, AI
| or whatever. It doesn't matter that the law has loopholes or
| that some players aren't affected at all. Regulation that
| helps somewhat in a large majority of cases is massive.
| arder wrote:
| I think the most achievable way of having some verification
| of AI images is simply for the AI generators to store
| fingerprints of every image they generate. That way, if you
| ever want to know, you can go back to Meta or whoever and say
| "Hey, here's this image, do you think it came from you?"
| There's already technology for this sort of thing (Content ID
| from YouTube, CSAM detection, etc.).
|
| It's obviously not perfect, but it could help, and it doesn't
| have the enormous side effects of trying to lock down all
| image generation.
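|
| A rough sketch of what such a fingerprint could look like (an
| average-hash with Pillow; the registry and file names here are
| hypothetical):
|
|     from PIL import Image  # pip install Pillow
|
|     def average_hash(path):
|         # 8x8 grayscale thumbnail, one bit per pixel above the
|         # mean; small edits usually flip only a few bits.
|         img = Image.open(path).convert("L").resize((8, 8))
|         px = list(img.getdata())
|         avg = sum(px) / 64
|         return sum(1 << i for i, p in enumerate(px) if p > avg)
|
|     def hamming(a, b):
|         return bin(a ^ b).count("1")
|
|     registry = {average_hash("generated.png")}  # generator side
|
|     def probably_ours(path, threshold=5):
|         h = average_hash(path)
|         return any(hamming(h, r) <= threshold for r in registry)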
| Someone wrote:
| > That way if you ever want to know you can go back to Meta
| or whoever and say "Hey, here's this image, do you think it
| came from you".
|
| Firstly, if you want to know an image isn't generated, you'd
| have to go to every 'whoever' in the world, including
| companies that no longer exist.
|
| Secondly, if you ask _evil.com_ that question, you would have
| to trust them to answer honestly both for images they
| generated and for images they didn't (falsely claiming a real
| picture was generated could well be career-ending for a
| politician).
|
| This is worse than
| https://www.cs.utexas.edu/~EWD/ewd02xx/EWD249.PDF: _"Program
| testing can be used to show the presence of bugs, but never
| to show their absence!"_. You can neither show an image is
| real nor that it is fake.
| kortex wrote:
| What's to stop someone from downloading an open source model,
| running it themselves, and either just not sharing the hashes
| or subtly corrupting the hash algo so that it gives a false
| negative?
|
| Also you need perceptual hashing (since one bitflip of the
| generated media alters the whole hash) which is squishy and
| not perfectly reliable to begin with.
| alkonaut wrote:
| Nothing. But that's not the point. The point is that, to a
| rounding error, all output is made by a small number of
| models from a small number of easily regulated companies.
|
| It's never going to be possible to ensure all media is
| reliably tagged somehow. But if even half of generated media
| is identifiable as such, that helps. It also helps keep such
| media out of the training sets of new models, which could
| turn out useful.
| anon291 wrote:
| > Creating models that don't tag their output as such would be
| banned.
|
| This is just silly. Anyone would be able to disable this
| tagging in an open model.
| diggan wrote:
| >> Creating models that don't tag their output as such would
| be banned.
|
| > This is just silly. Anyone would be able to disable this
| tagging in an open model.
|
| And we'd end up with people who think that any text not
| tagged as "#MadeByLLM" was made by a human, which obviously
| wouldn't be great.
| jerjerjer wrote:
| > Anyone would be able to disable this tagging in an open
| model.
|
| Metadata (I assume it's file metadata and not a watermark)
| can be removed from a final product (image, video, text) so
| open and closed models are equally affected.
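|
| For instance (a minimal sketch with Pillow; the file names are
| hypothetical), simply re-encoding the pixels drops file-level
| metadata without visibly changing the image:
|
|     from PIL import Image  # pip install Pillow
|
|     img = Image.open("tagged_output.png")
|     clean = Image.new(img.mode, img.size)
|     clean.putdata(list(img.getdata()))  # pixels only
|     clean.save("untagged_copy.png")     # no metadata survives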
| drcode wrote:
| turning the earth into paperclips is not gonna happen
| immediately, so we can safely ignore that risk
| woah wrote:
| > The immediate danger of large AI models isn't that they'll
| turn the earth to paperclips it's that we'll create fraud as a
| service and have a society where nothing can be trusted. I'd be
| all for a law (however clumsy) that made image, audio or video
| content produced by models with over X parameters to be marked
| with metadata saying it's AI generated.
|
| Just make a law that says AI content has to be tagged if it
| is being used for fraud.
| dandanua wrote:
| "By focusing only on the most expensive and large-scale models,
| SB 1047 establishes a regulatory framework that could give the
| public a false sense of security about controlling this fast-
| moving technology. Smaller, specialized models may emerge as
| equally or even more dangerous than the models targeted by SB
| 1047 - at the potential expense of curtailing the very innovation
| that fuels advancement in favor of the public good."
|
| The number of idiots who can't read, and who cheer the veto
| as a win against "regulatory capture", is astounding.
| OJFord wrote:
| > gov.ca.gov
|
| Ah, I think now I know why Canada's government website is
| canada.ca (which I remember thinking was a bit odd or more like a
| tourism site when looking a while ago, vs. say gov.uk or gov.au).
| whalesalad wrote:
| Unfortunately, the US owns the entire .gov TLD.
| OJFord wrote:
| Yes but other countries (off the top of my head: UK, Aus,
| India) use gov.[ccTLD]
|
| My point was that that's confusing with gov.ca if the US is
| using ca.gov and gov.ca.gov for California, and that perhaps
| that's why Canada does not do that.
| skywhopper wrote:
| Sad. The real threat of AI is not that it will become an
| unstoppable superintelligence without appropriate regulation;
| if we reach that point (which we are nowhere close to, and
| probably not even on the right track toward), the
| superintelligence, by definition, will be able to evade any
| regulation or control we attempt.
|
| Rather, the threat of AI is that we will dedicate so many
| resources--money, power, human effort--to chasing the
| ludicrous fantasies of professional snake-oil salesmen, while
| ignoring the need to address actual problems with real, known
| solutions that are easily within reach given a fraction of
| the resources currently being consumed by the dumpster-fire
| pit of "AI".
|
| Unfortunately the Governor of California is a huge part of the
| problem here, misdirecting scarce state resources into sure-to-
| fail "public partnerships" with VC-funded scams, forcing public
| servants to add one more set of time-wasting nonsense to the pile
| of bullshit they have to navigate around just to do their actual
| job.
| tim333 wrote:
| I'm not sure AI risks are well enough understood to regulate
| well. With most risky industries you can actually quantify
| the risk a bit. Regarding:
|
| > we cannot afford to wait for a major catastrophe to occur
| before taking action to protect the public
|
| Maybe, but you could wait for minor problems or big near
| misses before legislating it all up.
| amai wrote:
| What are the differences from the EU AI Act?
| malwrar wrote:
| This veto document is shockingly lucid; I'm quite impressed
| with it, despite my belief that regulation as a strategy for
| managing the critical risks of AI is misguided.
|
| tl;dr gavin newsom thinks that a signable bill needs "safety
| protocols, proactive guardrails, and severe consequences" based
| on some general framework guided by "empirical trajectory
| analysis", and also is mindful of the promise/threat/gravity of
| all the machine learning occurring in CA specifically. He also
| affirms a general appetite for CA to take on a leadership role
| wrt regulating AI. My general read is that he wants to preserve
| public attention on the need for AI regulation and not squander
| it on SB 1047 specifically. Or who knows I'm not a politician
| lol. Really strong document tho,
|
| Interesting segment:
|
| > By focusing only on the most expensive and large-scale models,
| SB 1047 establishes a regulatory framework that could give the
| public a false sense of security about controlling this fast-
| moving technology. Smaller, specialized models may emerge as
| equally or even more dangerous than the models targeted by SB
| 1047 - at the potential expense of curtailing the very
| innovation that fuels advancement in favor of the public good.
|
| > Adaptability is critical as we race to regulate a technology
| still in its infancy. This will require a delicate balance. While
| well-intentioned, SB 1047 does not take into account whether an
| Al system is deployed in high-risk environments, involves
| critical decision-making or the use of sensitive data. Instead,
| the bill applies stringent standards to even the most basic
| functions - so long as a large system deploys it. I do not
| believe this is the best approach to protecting the public from
| real threats posed by the technology.
|
| This is an incisive critique of the fundamental initial goal of
| SB 1047. Based on the fact that the bill explicitly seeks to
| cover models whose training cost was >=$100m & expensive fine-
| tunes, my initial guess about this bill was that it was designed
| by someone software engineering-minded scared of e.g. open-weight
| releases a la facebook/mistral/etc teaching someone how to build
| a nuke or something. LLMs probably replaced the ubiquitous robot
| lady pictures you see in every AI-focused article as public enemy
| number one, and the bill feels focused on some of the technical
| specifics of this advent and its widespread use. This focus
| blinds the bill to the general danger of machine learning,
| however, which naturally confounds regulation for precisely
| the reason plainly spoken in the four sentences above.
| Incredible technical communication here.
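|
| For a sense of scale (back-of-envelope with assumed numbers,
| not figures from the bill): the oft-cited 10^26-operation
| compute threshold lands in the same ballpark as the $100m
| cost line.
|
|     flops_total  = 1e26   # assumed compute threshold
|     gpu_flops    = 4e14   # ~40% utilization, H100-class GPU
|     usd_per_hour = 2.50   # assumed cloud rate per GPU-hour
|
|     gpu_hours = flops_total / gpu_flops / 3600
|     print(f"{gpu_hours:.1e} GPU-hours, "
|           f"~${gpu_hours * usd_per_hour / 1e6:.0f}M")
|     # -> 6.9e+07 GPU-hours, ~$174M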
| londons_explore wrote:
| While I agree with this decision, I don't want any governance
| decisions to be made by one bloke.
|
| Why do we have such a system? Why isn't it a vote of many
| governors? Preferably a secret vote so voters can't be forced to
| vote along party lines...
| raluk wrote:
| > California will not abandon its responsibility.
| karaterobot wrote:
| > Safety protocols must be adopted. Proactive guardrails should
| be implemented, and severe consequences for bad actors must be
| clear and enforceable.
|
| Hear, hear. But this vague, passive-voiced, hand-wavey
| statement that isn't even a promise does not exactly inspire
| me with a ton of confidence. Considering he turned this bill
| down to protect business interests, I wonder what acceptable
| legislation would look like, from his perspective. Looking
| forward to hearing about it very soon, and I'm confident
| it'll be specific, actionable, responsible, and effective.
| xixixao wrote:
| I find the statement quite well measured. He's not offering a
| solution - that's not easy - but he is specifically calling
| for evidence-based measures. The statement acknowledges both
| the need to regulate and the need to innovate. The issue is
| not black and white, and neither is the statement.
| lostdog wrote:
| Well, you already can't nuke someone. You can't make and
| release a biological weapon. It's probably illegal to turn the
| whole world into paperclips.
|
| There are already laws against doing harm to others. Sure, we
| need to fill some gaps (like preventing harmful deep fakes),
| but most things are pretty illegal already.
| notepad0x90 wrote:
| Gonna swim against the current on this one.
|
| This is why we can't have nice things: too many tech people
| support Newsom on vetoing this. The nightmare of corporate
| surveillance and erosion of privacy we have to endure every
| day is a result of such sentiment and short-sighted attempts
| at self-preservation.
|
| "It's vague" yeah, that's the point, the industry is allowed
| leeway to come up with standards of what is and isn't safe. They
| can establish a neutral committee to continually assess the
| boundaries of what is and isn't safe, as technology evolves. Do
| you expect legislators to define specifics and keep themselves
| updated with the latest happening in tech? Would it be better if
| the government established departments that police AI usage? This
| was the sweetest deal the industry could have gotten.
| humansareok1 wrote:
| The dismal level of discourse about this bill shows that
| humanity is utterly ill-equipped to deal with the problems AI
| poses for our society.
| diggan wrote:
| For _very_ interested readers, here is a meta-collection of
| articles from left, center and right about the story:
| https://ground.news/article/newsom-vetoes-bill-for-stricter-...
|
| And a short bias comparison:
|
| > The left discusses Newsom's veto as a victory for Silicon
| Valley, concentrating on the economic implications and backing
| from tech giants.
|
| > The center highlights the broader societal ramifications,
| addressing criticisms from various sectors, such as Hollywood and
| AI safety advocates.
|
| > The right emphasizes Newsom's concerns about hindering
| innovation and his commitment to future collaborative regulatory
| efforts, showcasing a balanced approach.
| pc86 wrote:
| Ground.News (no affiliation) is great for anyone interested in
| getting actual news and comparing biases. I particularly like
| that if you have an account they will show you news stories
| you're missing based on your typical reading patterns.
|
| I do wish the partisan categorization was a bit more
| nuanced/intentional. It basically boils down to:
|
| - Major national news outlet? Left.
|
| - Local affiliate of major national news outlet? Center.
|
| - Blog you've never heard of? Right.
|
| There are certainly exceptions but that heuristic will be right
| 90% of the time.
| diggan wrote:
| It is truly great, and cheap too (30 USD/year or something).
| Not affiliated either, just a happy user.
|
| Yeah, it could be a bit better. As a non-American, the biases
| are also very off from how left/center/right looks in my own
| country, but at least it tries to cover different angles
| which I tried to do manually before.
|
| They can also be a bit slow at linking different stories
| together; sometimes it takes multiple days for the same
| headlines to be merged into one story.
| bcrosby95 wrote:
| It's funny that it shows Fox News as center, to me. I watched
| them a couple of times back when Obama was president, and
| some of the shows would play Nazi videos while talking about
| him. Never mind birtherism.
|
| I haven't watched them in over a decade, but I assume they
| haven't gotten any better.
| diggan wrote:
| They currently list Fox News as (US) "Right" as far as I
| can tell: https://ground.news/interest/fox-news_a44aba
|
| > Average Bias Rating: Right
|
| I guess it's possible they don't have 100% coverage of
| all the local Fox News stations, and some of them have been
| incorrectly labeled.
| bcrosby95 wrote:
| Oh, my mistake. I looked at the other link
| (https://ground.news/article/newsom-vetoes-bill-for-
| stricter-...) and Fox News was in the grey, or "center"
| section. I assume they're doing some extra analysis to
| put them there for this specific subject?
| dialup_sounds wrote:
| Nah, it's just the UI being awkward. The prominent tabs
| at the top just change the AI summary, while there is a
| much more subtle set of tabs beneath where it (currently)
| says "63 articles" that filter the sources.
| HumblyTossed wrote:
| I think if you look at the actual news reporting on Fox
| News, it could be closer to center. But when you factor in
| their opinion "reporting" it's very clearly heavily right-
| leaning. Problem is, most of their viewership can't tell
| the difference.
| tempestn wrote:
| Also, while many individual stories might be in the center,
| bias is also exhibited in which stories they choose to print,
| or not to, as well as in editorialized headlines.
| Sohcahtoa82 wrote:
| > - Blog you've never heard of? Right.
|
| That checks out.
|
| There's a certain sect of the far right that is easily
| convinced by one guy saying "The media is lying to you! This
| is what's really happening!" followed by the most insane shit
| you've read in your life.
|
| They love to rant about the Deep State.
| cbozeman wrote:
| When people say, "The Deep State", what they really mean
| is, "unelected lifelong government employees who can create
| regulations that have the force of law".
|
| And that _is_ a problem. Congress makes laws, not
| government agencies like the FDA, EPA, USDA, etc.
|
| We've seen a near-total abdication of responsibility from
| Congress on highly charged matters that'll piss off
| _someone_ , _somewhere_ in their constituency, because
| they'd rather allow the President to make an Executive
| Order, or a bureaucrat somewhere to create a rule or
| regulation that will have the same effect.
|
| It's disgusting, shameful, and the American people need to
| do better, frankly. We need to _demand_ Congress do their
| jobs.
|
| So many of our problems could be solved if these people
| weren't concerned about being re-elected. Elected positions
| are supposed to be something you take time out of your life
| to do for a number of years, then you go back to your
| livelihood - not something you do for the entirety of your
| life.
| warkdarrior wrote:
| Have _you_ considered electing better representatives for
| yourself to Congress?
|
| It's easy to blame Congress, but in my view the US
| Congress nowadays is a perfect reflection of the
| electorate, where all sides approach all problems as
| "Someone [not me] should do something about this thing I
| do not like." Congressmen are then elected and approach
| it the same way.
| maicro wrote:
| That's one of the difficult things when dealing with any
| sort of conspiracy theory or major discussion about
| fundamental issues with government - there _are_
| significant issues, so straight dismissing "The Deep
| State" isn't possible because there actually are
| instances of that sort of fundamental corruption. But
| then you have people who jump from those very real issues
| to moon landing hoax conspiracies, flat earth
| conspiracies, etc. etc., using that grain of truth of The
| Deep State to justify whatever belief they want.
|
| It's related to a fundamental issue with discussing
| scientific principles in a non-scientific setting - yes,
| gravity is a _theory_ in the scientific sense, but that
| doesn't mean you can say "scientists don't know anything!
| they say gravity is just a theory, so what's stopping us
| from floating off into space tomorrow!?". Adapt the examples
| there to whatever you want...
|
| And yes, that sounds fairly judgy of me - I am, alas,
| human, thus subject to the same fallacies and traps that
| I recognize in others, and being aware of those issues
| doesn't guarantee I can avoid them...
| rhizome wrote:
| _Congress makes laws, not government agencies like the
| FDA, EPA, USDA, etc._
|
| What are some examples of laws created by unelected
| people?
| qskousen wrote:
| I've found The Tangle (https://www.readtangle.com/ - no
| affiliation) to be a pretty balanced daily politics
| newsletter. They mentioned the Newsom veto today, and may
| address it later this week, though I don't know for sure.
| tr3ntg wrote:
| This is a little annoying. He vetoes the bill, agrees with
| the intention, paints the "solution" as improper, suggests
| there is some other, better solution, doesn't describe that
| solution in any detail, and encourages future legislation
| that is "more right."
|
| I'm exhausted already.
|
| I can't think of a less efficient way to go about this.
| hot_gril wrote:
| The "this bill doesn't go far enough" thing is normally what
| politicians say when they don't want it to go in that direction
| at all. Anyway, I'm glad he vetoed.
| blueyes wrote:
| Remember when one of the bill's co-sponsors announced that he was
| co-founding an AI regulation company called Gray Swan?
|
| https://www.piratewires.com/p/sb-1047-dan-hendrycks-conflict...
|
| This bill was tainted from day 1, and Wiener was bamboozled into
| pushing it.
| bradhilton wrote:
| I'm glad he vetoed the bill, but his rationale is worrisome. Even
| if he's just trying to placate SB 1047 proponents, they will try
| to exact concessions from him in future sessions. I'll take this
| brief reprieve, but it's still a large concern for me.
| RIMR wrote:
| What specifically do you find worrisome about his rationale?
| It mostly seems like he's asking for evidence-based policy
| that treats AI as a potential risk regardless of the funding
| or size of the model, since those factors don't actually
| correlate with any evidence of risk.
|
| I can't tell what direction your disagreement goes. Are you
| worried that he still feels that AI needs to be regulated at
| all, or do you think that AI needs to be regulated regardless
| of empirical evidence of harm?
| karlzt wrote:
| Here is the text of the PDF:
|
| "OFFICE OF THE GOVERNOR
|
| SEP 29 2024
|
| To the Members of the California State Senate:
|
| I am returning Senate Bill 1047 without my signature.
|
| This bill would require developers of large artificial
| intelligence (Al) models, and those providing the computing power
| to train such models, to put certain safeguards and policies in
| place to prevent catastrophic harm. The bill would also
| establish the Board of Frontier Models - a state entity - to
| oversee the development of these models. California is home to 32
| of the world's 50 leading Al companies, pioneers in one of the
| most significant technological advances in modern history. We
| lead in this space because of our research and education
| institutions, our diverse and motivated workforce, and our free-
| spirited cultivation of intellectual freedom. As stewards and
| innovators of the future, I take seriously the responsibility to
| regulate this industry. This year, the Legislature sent me
| several thoughtful proposals to regulate Al companies in response
| to current, rapidly evolving risks - including threats to our
| democratic process, the spread of misinformation and deepfakes,
| risks to online privacy, threats to critical infrastructure, and
| disruptions in the workforce. These bills, and actions by my
| Administration, are guided by principles of accountability,
| fairness, and transparency of Al systems and deployment of Al
| technology in California.
|
| SB 1047 magnified the conversation about threats that could
| emerge from the deployment of Al. Key to the debate is whether
| the threshold for regulation should be based on the cost and
| number of computations needed to develop an Al model, or whether
| we should evaluate the system's actual risks regardless of these
| factors. This global discussion is occurring as the capabilities
| of Al continue to scale at an impressive pace. At the same time,
| the strategies and solutions for addressing the risk of
| catastrophic harm are rapidly evolving.
|
| By focusing only on the most expensive and large-scale models, SB
| 1047 establishes a regulatory framework that could give the
| public a false sense of security about controlling this fast-
| moving technology. Smaller, specialized models may emerge as
| equally or even more dangerous than the models targeted by SB
| 1047 - at the potential expense of curtailing the very innovation
| that fuels advancement in favor of the public good.
|
| Adaptability is critical as we race to regulate a technology
| still in its infancy. This will require a delicate balance. While
| well-intentioned, SB 1047 does not take into account whether an
| Al system is deployed in high-risk environments, involves
| critical decision-making or the use of sensitive data. Instead,
| the bill applies stringent standards to even the most basic
| functions - so long as a large system deploys it. I do not
| believe this is the best approach to protecting the public from
| real threats posed by the technology.
|
| Let me be clear - I agree with the author - we cannot afford to
| wait for a major catastrophe to occur before taking action to
| protect the public. California will not abandon its
| responsibility. Safety protocols must be adopted. Proactive
| guardrails should be implemented, and severe consequences for bad
| actors must be clear and enforceable. I do not agree, however,
| that to keep the public safe, we must settle for a solution that
| is not informed by an empirical trajectory analysis of Al systems
| and capabilities. Ultimately, any framework for effectively
| regulating Al needs to keep pace with the technology itself.
|
| To those who say there's no problem here to solve, or that
| California does not have a role in regulating potential national
| security implications of this technology, I disagree. A
| California-only approach may well be warranted - especially
| absent federal action by Congress - but it must be based on
| empirical evidence and science. The U.S. Al Safety Institute,
| under the National Institute of Science and Technology, is
| developing guidance on national security risks, informed by
| evidence-based approaches, to guard against demonstrable risks to
| public safety. Under an Executive Order I issued in September
| 2023, agencies within my Administration are performing risk
| analyses of the potential threats and vulnerabilities to
| California's critical infrastructure using Al. These are just a
| few examples of the many endeavors underway, led by experts, to
| inform policymakers on Al risk management practices that are
| rooted in science and fact. And endeavors like these have led to
| the introduction of over a dozen bills regulating specific, known
| risks posed by AI, that I have signed in the last 30 days.
|
| I am committed to working with the Legislature, federal partners,
| technology experts, ethicists, and academia, to find the
| appropriate path forward, including legislation and regulation.
| Given the stakes - protecting against actual threats without
| unnecessarily thwarting the promise of this technology to advance
| the public good - we must get this right.
|
| For these reasons, I cannot sign this bill.
|
| Sincerely, Gavin Newsom.".
| curious_cat_163 wrote:
| For those supporting this legislation: Would you like to share
| the specific harms to the public that this bill sought to address
| and prevent?
| drcode wrote:
| we'll probably create artificial superintelligence in the next
| few years
|
| when that happens, it likely will not go well for humans
|
| the specific harm is "human extinction"
| lasermike026 wrote:
| They misspelled AI as Al. This Al guy sounds very dangerous.
___________________________________________________________________
(page generated 2024-09-30 23:00 UTC)