[HN Gopher] Executive Order on Safe, Secure, and Trustworthy Art...
       ___________________________________________________________________
        
       Executive Order on Safe, Secure, and Trustworthy Artificial
       Intelligence
        
       Author : Mandelmus
       Score  : 113 points
       Date   : 2023-10-30 09:35 UTC (13 hours ago)
        
 (HTM) web link (www.whitehouse.gov)
 (TXT) w3m dump (www.whitehouse.gov)
        
       | saturn8601 wrote:
       | I don't see how they will enforce many of these rules on Open
       | Source AI.
       | 
       | Also:
       | 
       | "Establish an advanced cybersecurity program to develop AI tools
       | to find and fix vulnerabilities in critical software, building on
       | the Biden-Harris Administration's ongoing AI Cyber Challenge.
       | Together, these efforts will harness AI's potentially game-
       | changing cyber capabilities to make software and networks more
       | secure."
       | 
        | I fear the era of pwning your own device to free it from DRM or
        | other lockouts is coming to an end with this. We have been lucky
        | that C++ is still used _badly_ in many projects, and that has been
        | an Achilles' heel for many a manager wanting to lock things down.
       | Now this door is closing faster with the rise of AI bug catching
       | tools.
        
         | flenserboy wrote:
         | Orders such as these don't appear out of the blue -- corporate
         | interests & political players are always consulted long before
         | they appear, & threats to those interests such as Open Source
         | Anything are always in their sights. This is a likely first
         | step in a larger move to snatch strong AI tools out of the
         | hands of the peasants before someone gets a bright idea which
         | can upend the current order of things.
        
       | stevev wrote:
       | Let the regulations, antitrust lawsuits and monopolies begin!
        
         | nerdponx wrote:
         | This is a great opportunity to try to avoid the old mistakes of
         | regulatory capture. It looks like someone is at least trying to
         | make a nod in that direction, by supporting smaller research
         | groups.
        
       | RandomLensman wrote:
       | Does Microsoft need to share how it is testing Excel? Some subtle
       | bug there might do an awful lot of damage.
        
         | halJordan wrote:
          | Idk if you're being serious, because there's AI in Excel now,
          | in which case the answer is no. Or you're being a smarty-pants
          | trying to cleverly show what you think is a counter-example,
          | in which case the answer is still no, but should probably be
          | yes; they only don't because Excel was well established before
          | all the cyber regulation took effect. For instance, Azure has
          | many certs (including FedRAMP), which covers Office 365, which
          | includes Excel.
        
           | RandomLensman wrote:
           | I am quite serious about the potential for danger of errors
           | in Excel (without AI).
           | 
           | Basically, I consider the focus on AI massively misplaced
           | given the long list of real risks compared to the more
           | hypothetical (other than general compute) risks from AI.
        
       | Eumenes wrote:
       | This kinda thing should not be legislated via executive order.
       | Congress needs a committee and must deliberate. Sad.
        
         | flenserboy wrote:
         | Which is exactly what Congress refuses to do, because letting
         | Caesar, I mean the President, decide things by fiat keeps them
         | from owning the blame for bad legislation.
        
           | Eumenes wrote:
           | At least Caesar was a respectable age for leading when he
           | died (55) ...
           | 
           | This is interesting:
           | https://www.presidency.ucsb.edu/statistics/data/executive-
           | or...
        
             | nerdponx wrote:
             | Don't forget that life expectancies were much lower back
             | then, and that he was assassinated. He certainly would have
             | been happy to continue into his 80s if he could.
        
             | frumper wrote:
             | It is interesting. I would have thought executive orders
             | were more frequently used now than in the past. Apparently
             | that peaked 80 years ago.
        
           | nerdponx wrote:
           | Congress has generally refused to seriously legislate
           | anything other than banning lightbulbs for several
           | presidential terms now.
           | 
            | But in this particular example I don't think it's enough of
            | a "thing" to even consider bringing up as a bill, except
            | maybe as a one-pager that passes unanimously.
        
         | nerdponx wrote:
         | This is well within the president's powers under existing law.
         | If Congress disagrees, they can always supersede.
         | 
         | This isn't even close to legislating. Look at some recent
         | Supreme Court decisions and the amount of latitude federal
         | agencies have, if you want to see something more closely
         | resembling legislation from outside of Congress.
        
         | zoobab wrote:
         | "This kinda thing should not be legislated via executive
         | order."
         | 
         | Dictatorship in another form.
        
       | sschueller wrote:
        | There is no way to prevent AI research or to make it safe through
        | government oversight, because the rest of the world has places
        | that don't care.
        | 
        | What does work is to pass laws that do not permit certain kinds
        | of automation, such as insurance claims or life-and-death
        | decisions. These laws are needed even without AI, as automation
        | is already doing such things to a concerning degree, like banning
        | people due to a mistake with no recourse.
        | 
        | Is the White House going to ban the use of AI in the decision
        | making when dropping a bomb?
        
         | vivekd wrote:
          | I mean, isn't automating important decisions like insurance
          | claims or life-and-death decisions a beneficial thing? Sure,
          | the tech isn't ready yet, but I think even now AI with a human
          | overseeing it who has the power to override the system would
          | provide people with a better experience.
        
         | broken-kebab wrote:
         | >not permit certain automation such as insurance claims
         | 
          | I don't see any problem in automation that makes mistakes;
          | humans do too. The real problem is that it's often an
          | impenetrable wall with no way to protest or appeal, and
          | nobody's held accountable while victims' lives are ruined. So
          | if any law is to be passed in this field, it should not be
          | about banning AI, but rather about obligatory compensation for
          | those affected by errors. Facing money losses, insurers and
          | banks will fix themselves.
        
           | Libcat99 wrote:
           | Agreed,
           | 
           | This doesn't just apply to insurance, etc, of course.
           | Inaccessibility of support and inability to appeal automated
           | decisions for products we use is widespread and inexcusable.
           | 
           | This shouldn't just apply to products you pay for, either.
           | Products like facebook and gmail shouldn't get off with
           | inaccessible support just because they are "free" when we all
           | know they're still making plenty of money off us.
        
         | throwawaaarrgh wrote:
         | Just because the rest of the world has lawless areas doesn't
         | mean we don't pass laws. If you do something that risks our
         | national safety, or various other things, we can extradite and
         | try you in court.
         | 
          | They're not suggesting banning anything; they're requiring
          | that you make it safe and prove how you did that. That's not
          | unreasonable.
         | 
         | [0]
         | https://en.m.wikipedia.org/wiki/Extradition_law_in_the_Unite...
         | [1]
         | https://en.m.wikipedia.org/wiki/Personal_jurisdiction_over_i...
        
           | michaelt wrote:
           | Right, but in _some_ areas of AI regulation, the existence of
           | other countries might undermine unilateral regulation.
           | 
           | For example, imagine LLMs improve to the point where they can
           | double programmer productivity while lowering bug counts. If
           | Country A decides to Protect Tech Jobs by banning such LLMs,
           | but Country B doesn't - could be all the tech jobs will move
           | to Country B, where programmers are twice as productive.
        
       | adolph wrote:
       | Said executive order was not linked to in the document.
        
         | KoftaBob wrote:
         | It hasn't been updated yet, but I believe Executive Orders are
         | listed here for viewing:
         | https://www.federalregister.gov/presidential-documents/execu...
        
       | rmbyrro wrote:
       | Why's there a bat flying over the white house logo?
        
         | nojito wrote:
         | Halloween
        
           | rmbyrro wrote:
           | Ah (facepalm)
           | 
           | Thanks
        
         | glitchc wrote:
         | Batman?
        
           | rmbyrro wrote:
           | A potential reference to the Batman-Robin Administration?
        
       | iinnPP wrote:
       | Criminals don't follow the rules. Large corps don't follow the
       | rules.
       | 
       | The only people this impacts are the ones you don't need it to
       | impact. The bit about detection and authentication services is
       | also alarming.
        
         | gmerc wrote:
         | You could say this about ... every law. So clearly it's not a
         | useful yardstick
        
           | iinnPP wrote:
           | It's a statement of my estimated impact of the post on the
           | development of AI.
           | 
           | The blocking of "AI content" and the bit about authentication
           | don't seem related to AI frankly. Detection isn't real and
           | authentication is the government's version of an explosive
           | wet dream.
        
         | gs17 wrote:
         | >The bit about detection and authentication services is also
         | alarming.
         | 
         | "The Department of Commerce will develop guidance for content
         | authentication and watermarking to clearly label AI-generated
         | content." is pretty weak sounding. I'm more annoyed that they
         | pretend that will actually reduce fraud.
        
       | tomohawk wrote:
        | In my history book, I read that we fought a war to not have a
        | king.
       | 
       | In my civics class, I learned that Congress passes laws, not the
       | President.
       | 
       | I guess a public school education only goes so far.
        
         | phillipcarter wrote:
         | You clearly weren't paying attention in school then, because
         | executive orders are most certainly taught in government
         | classes.
        
         | rmbyrro wrote:
          | Executive Orders are subject to Congressional review and can be
          | overturned by Congress. It's a power given by Congress to the
          | President. There are contexts in which the President's ability
          | to issue Executive Orders is really necessary. This is not
          | against democratic principles, per se.
          | 
          | Of course, the President can abuse this power. That's not a
          | failure of democracy; it's anticipated. And that potential for
          | abuse of power is also a reason why Congress exists, not just
          | to pass laws.
        
         | marcinzm wrote:
          | And who is in charge of making sure those laws are executed by
          | the Federal Government?
          | 
          | Hint: It's the President, and executive orders are the
          | President's directives on how the Federal government should
          | execute the laws.
        
           | nerdponx wrote:
            | And that's also literally what this is: the president
            | executing the provisions of the Defense Production Act of
            | 1950, which is not only within his power, it's literally his
            | constitutional obligation.
        
         | barney54 wrote:
         | Executive Orders do not have the force of law. They are
         | essentially suggestions. Federal agencies try to follow them,
         | but Executive Orders can't supersede actual laws.
        
       | numpad0 wrote:
        | How do any of these work when everyone is cargo-cult
        | "programming" AI by verbally asking nicely? Effectively no one
        | but a very few up there at OpenAI et al. has any understanding,
        | let alone control.
        
         | kramerger wrote:
          | You realise that these random-Joe companies currently develop
          | and sell AI products to cops, governments and your HR
          | department because the CTO or head of IT is incompetent and/or
          | corrupt?
          | 
          | You understand that people have already been denied bail
          | because "our AI told us so", with no legal way to question
          | that?
        
           | peyton wrote:
           | That sounds like a procedural issue, which it doesn't sound
           | like this order addresses.
        
             | kramerger wrote:
             | Procedures can't be effective unless backed by law.
             | 
              | Besides, point me to existing processes that cover my
              | examples.
             | 
             | Only one of them exists, in 1-2 states.
        
       | epups wrote:
       | This looks even more heavy-handed than the regulation from the EU
       | so far.
        
         | marcinzm wrote:
         | I'm honestly curious, how so? From what I can tell the only
         | thing which isn't a "we'll research this area" or "this only
         | applies to the government" is "tell the US government how you
         | tested your foundational models."
         | 
          | For example, AI watermarking only applies to government
          | communications; it may be used as a standard for non-government
          | uses, but it's not required.
        
           | patwolf wrote:
           | That last one seems like a pretty big deal though. It's not
           | just how you tested, but "other critical information" about
           | the model.
           | 
           | I imagine the government can deem any AI to be a "serious
           | risk" and prevent it from being made public.
        
           | epups wrote:
            | The EU regulation is here:
            | https://www.europarl.europa.eu/news/en/headlines/society/202...
           | 
           | It is also very open ended, but the US text reads like some
           | compliance will start immediately, like sharing the results
           | of safety tests with the government directly.
        
       | venatiodecorus wrote:
       | The way to make AI content safe is the same way to improve
       | general network security for everyone: cryptographically signed
       | content standards. We should be able to sign our tweets, blog
       | posts, emails, and most network access. This would help identify
       | and block regular bots along with AI powered automatons. Trusted
       | orgs can maintain databases people can subscribe to for trust
       | networks, or you can manage your own. Your key(s) can be used to
       | sign into services directly.
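        | 
        | A rough sketch of what per-post signing could look like, assuming
        | Python and the third-party "cryptography" package (the names here
        | are illustrative, not any proposed standard):
        | 
        |     from cryptography.hazmat.primitives.asymmetric import ed25519
        | 
        |     # One identity keypair; the public key is what a trust
        |     # database (or your own list) would track.
        |     private_key = ed25519.Ed25519PrivateKey.generate()
        |     public_key = private_key.public_key()
        | 
        |     post = b"my tweet text"
        |     signature = private_key.sign(post)  # published with the post
        | 
        |     # Anyone holding the public key can check authorship;
        |     # verify() raises InvalidSignature if the content was altered.
        |     public_key.verify(signature, post)
        | 
        | The signing itself is the easy part; the open questions are key
        | distribution and trust, which the replies get into.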
        
         | max_ wrote:
         | The problem is key management & key storage.
         | 
         | Smartphones & computers are a joke from a security standpoint.
         | 
         | The closest solution to this problem has been what people in
         | the crypto community have done with seed phrases & hardware
         | wallets. But this is still too psychologically taxing for the
         | masses.
         | 
            | Until that problem of intuitive, simple & secure key
            | management is solved, cryptography as a general tool for
            | personal authentication will not be practical.
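            | 
            | For reference, a seed phrase is just a human-memorable
            | encoding of key material. A simplified sketch of the
            | BIP-39-style derivation, assuming Python's standard hashlib
            | (real wallets also validate a checksum and normalize the
            | text):
            | 
            |     import hashlib
            | 
            |     # A mnemonic is easier to write down than raw bytes.
            |     phrase = ("legal winner thank year wave sausage worth "
            |               "useful legal winner thank yellow")
            | 
            |     # BIP-39 stretches the phrase into 64 bytes of seed
            |     # material with PBKDF2-HMAC-SHA512 (2048 rounds, salt
            |     # "mnemonic" + optional passphrase).
            |     seed = hashlib.pbkdf2_hmac(
            |         "sha512", phrase.encode(), b"mnemonic", 2048)
            | 
            |     print(seed.hex())  # same phrase -> same keys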
        
           | colordrops wrote:
           | I wouldn't be surprised if things got so bad that people
           | would get used to the rough edges as the alternative is
           | worse.
        
           | px43 wrote:
           | > But this is still too psychologically taxing for the
           | masses.
           | 
           | Literally requires the exact same cognitive load as using
           | keys to start your car. The problem is that so many people
           | got comfortable delegating all their financial and data risk
           | to third parties, and those third parties aren't excited
           | about giving up that power.
        
             | thesuperbigfrog wrote:
             | >> Literally requires the exact same cognitive load as
             | using keys to start your car. The problem is that so many
             | people got comfortable delegating all their financial and
             | data risk to third parties, and those third parties aren't
             | excited about giving up that power.
             | 
             | This perfectly describes the current situation with
             | passkeys.
             | 
             | Passkeys are a great idea--they are like difficult, if not
             | impossible-to-guess passwords generated for you and stored
             | in a given implementor's system (Apple, Google, your
             | password manager, etc.).
             | 
             | Until passkey systems support key export and import, I
             | predict that they will see limited use.
             | 
             | Who wants to trust your passkeys to a big corporation or
             | third party? Vendor lock-in is a huge issue that cannot be
             | overlooked.
             | 
             | Let me generate, store, and backup MY passkeys where I want
             | them.
             | 
             | That doesn't solve the general "I don't want to have to
             | manage my keys" attitude that some people have, but it
             | prevents vendor lock-in.
        
               | px43 wrote:
               | Why export/import? Just create new passkeys on whatever
               | device or service you want, and register those as well.
               | _OR_ just use a yubikey, put it on your keyring, and use
               | it to log into everything.
               | 
                | Most crypto wallets _do_ have import/export enabled
               | though, so if you're logging in with a web3 identity,
               | everything should just work.
        
               | thesuperbigfrog wrote:
               | >> Why export/import?
               | 
               | Why _not_ have key export and import?
               | 
               | Are they my keys or not?
               | 
               | >> Just create new passkeys on whatever device or service
               | you want, and register those as well.
               | 
               | I would rather not have different keys for each device
               | for each account. It is an unnecessary combinatorial
               | explosion of keys that requires more effort than is
               | really needed.
               | 
               | When you get a new device, you need to generate and add
               | new keys for every account. Why can't you just import
               | existing keys?
        
             | marcinzm wrote:
             | > The problem is that so many people got comfortable
             | delegating all their financial and data risk to third
             | parties
             | 
             | The "problem" is that most people prefer to not lose their
             | life savings because their cat stole a little piece of
             | metal and dropped it in the forest.
        
               | px43 wrote:
               | Yup, and some people crash their cars, and some people
               | accidentally burn their own house down. _Most_ people
               | have figured out how to deal with situations like what
               | you mention. People who have trouble following best
                | practices are going to have a hard time, but that's no
                | different than the status quo.
        
               | frumper wrote:
                | The solution people came up with a long time ago was
                | banks, and it is very much considered best practice to
                | keep your money there.
        
               | marcinzm wrote:
                | And when that system of institutional safety measures
                | fails, such as someone being swindled into sending all
                | their money to a Nigerian prince, you get news stories
                | that ask why the institution isn't liable for the loss or
                | doesn't have better safeguards.
        
               | frumper wrote:
               | Me getting swindled sure sounds better than:
               | 
               | >The "problem" is that most people prefer to not lose
               | their life savings because their cat stole a little piece
               | of metal and dropped it in the forest.
        
           | venatiodecorus wrote:
           | I mean my Yubikey is really easy to use, on computers and
            | with my phone. Any broad change like this is going to require
            | an adoption phase, but I think it's doable.
        
         | bigger_inside wrote:
          | You actually understood "safe" to mean "safe for you" -- as in,
          | making it actually safer for the user and systemically
          | protecting the structures that safeguard users' data, privacy,
          | and well-being as they themselves understand those things.
         | 
         | Nooo... if they talk about something being safe, they mean safe
         | for THEM and their political interests. Not for you. They mean
         | censorship.
        
         | jowea wrote:
         | Sybil problem? You'd have to connect that signature to a unique
         | real identity.
        
           | nerdponx wrote:
           | That's fine though. It takes care of the big problem of fake
           | content claiming to be by or about a real person, which is
           | becoming progressively easier to produce.
        
           | venatiodecorus wrote:
            | Yeah, and so I don't know exactly how I'd want to see this
            | solved, but I think something like open-source reputation
            | databases could help. Folks could subscribe to different
            | keystores, and those could rank identities based on
            | spamminess or whatever. I know some people would probably
            | balk at this as an internet credit score, but as long as we
            | have open standards for these systems, we could model it on
            | something like the fediverse, where you subscribe to
            | communities you align with. I don't think you'd need to
            | validate your IRL identity, but you could develop a
            | reputation associated with your key.
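            | 
            | A toy sketch of that aggregation idea, assuming Python; the
            | Keystore shape and reputation() helper are made up for
            | illustration, not an existing standard:
            | 
            |     from dataclasses import dataclass, field
            | 
            |     # Hypothetical shape of a subscribable reputation
            |     # feed: each keystore publishes a spamminess score
            |     # per public key (0 = fine, 1 = spam).
            |     @dataclass
            |     class Keystore:
            |         name: str
            |         scores: dict = field(default_factory=dict)
            | 
            |     def reputation(pubkey, feeds):
            |         """Average the score across subscribed feeds."""
            |         seen = [f.scores[pubkey] for f in feeds
            |                 if pubkey in f.scores]
            |         return sum(seen) / len(seen) if seen else None
            | 
            |     # Two community-run feeds a client might follow.
            |     a = Keystore("fediverse-like", {"key:abc": 0.1})
            |     b = Keystore("antispam-org",
            |                  {"key:abc": 0.3, "key:xyz": 0.9})
            |     print(reputation("key:abc", [a, b]))  # 0.2
            |     print(reputation("key:xyz", [a, b]))  # 0.9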
        
         | px43 wrote:
         | > We should be able to sign our tweets, blog posts, emails, and
         | most network access.
         | 
         | What you are talking about is called Web3 and doesn't get a lot
         | of love here. It's about empowering users to take full control
         | of their own finances, identity, and data footprint, and I
         | agree that it's the only sane way forward.
        
           | venatiodecorus wrote:
           | Yep, that's my favorite feature of apps like dydx and
           | uniswap, being able to log in with your wallet keys. This is
           | how things should be done.
        
         | howmayiannoyyou wrote:
          | This is the intent of Altman's Worldcoin project, to provide
          | authoritative attribution (and perhaps ownership) for digital
          | content & communications. It would be best if individuals could
          | authenticate without needing a third party, but that's probably
          | unrealistic. The near-term danger of AI is fake content people
          | have to spend time and money to refute - without any guarantee
          | of success.
        
           | venatiodecorus wrote:
            | Yep, I think this is a step in the right direction. I don't
            | know enough about the specifics of Worldcoin to really
            | agree/disagree with its principles, and I know some people
            | have problems with it, but I think SOMETHING like this is
            | really the only way forward.
        
       | rmbyrro wrote:
       | I see Salt Man's bureau trips are paying off.
        
       | marcinzm wrote:
        | Reading this, all I'm seeing is "we'll research these things",
        | "we'll look into how to keep AIs from doing these things" and
        | "tell the US government how you tested your foundational models."
        | Except for the last one, none of these are really restrictions on
        | anything or requirements for working with AI. There are a lot of
        | fearful comments here; am I missing something?
        
         | spandextwins wrote:
         | Yes.
        
         | nerdponx wrote:
         | If anything, it's a measured, realistic, and pragmatic
         | statement.
        
         | api wrote:
         | So they paid some lip service to the ban matrix math crowd but
         | otherwise ignored them. Top notch.
        
       | sirmike_ wrote:
       | This is useless just like everything they do. Masterfully full of
       | synergy and nonsense talk.
       | 
        | Is there anyone here who actually believes this will do
        | something? Sincere question.
        
       | nojito wrote:
       | There's some cool stuff in here about providing assistance to
       | smaller researchers. That should help a lot given how hard it
       | currently is to train a foundational model.
       | 
        | The restrictions around government use of AI and data brokers are
        | also refreshing to see.
        
       | perihelions wrote:
       | The White House just invoked the _Defense Production Act_ (
       | https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950 ) to
       | assert sweeping authority over private-company software
       | developers. What the fuck are they smoking?
       | 
       | - _" In accordance with the Defense Production Act, the Order
       | will require that companies developing any foundation model that
       | poses a serious risk to national security, national economic
       | security, or national public health and safety must notify the
       | federal government when training the model, and must share the
       | results of all red-team safety tests."_
       | 
       | I assume this is a major constitutional overreach that will be
       | overturned by courts at the first challenge?
       | 
        | Or else, all the AI companies who haven't captured their
        | regulators will simply move their R&D to some other country--like
        | how the OpenSSH (?) core development moved to Canada during the
        | 1990s crypto wars. (edit: Maybe that's the real goal--scare
        | away OpenAI's competition and dredge them a deeper regulatory
        | moat.)
        
         | ethanbond wrote:
         | From the Wikipedia article:
         | 
         | > The third section authorizes the president to control the
         | civilian economy so that scarce and critical materials
         | necessary to the national defense effort are available for
         | defense needs.
         | 
         | Seems pretty broad and pretty directly relevant to me. And hey,
         | if people don't like the idea of _models_ being the scarce and
         | critical resource, they can pick GPUs instead. Why would it be
         | an overreach when you have developers of these systems claiming
         | they'll allow them to "capture all value in the universe's
         | future light cone?"
         | 
         | Obviously this can (and probably will) be challenged, but it
         | seems a bit ambitious to just assume it's unconstitutional
         | because you don't like it.
        
           | perihelions wrote:
            | Software is definitionally not "scarce". There is no national
            | defense war effort to speak of. Finally, the White House is
            | not requesting "materials necessary to the national defense
            | effort" - which does not exist - it's attempting to regulate
            | private-sector business activity.
           | 
            | There are multiple things I suspect are unconstitutional
            | here, the clearest being that this stuff is far outside the
            | scope of the law it's invoking. The White House is _really_
            | just trying to regulate commerce by executive fiat. That's
            | the exclusive power of Congress--this is a separation of
            | powers question.
        
             | ethanbond wrote:
              | Powerful models _are_ scarce (currently), and in any case
              | GPUs definitely are, so I'm not sure this is a good line of
              | argument if you want less overreach here.
             | 
             | AFAICT there doesn't need to be active combat for DPA to be
             | used, and it seems like it got most of its teeth from the
             | Cold War which was... cold.
             | 
             | > The White House is really just...
             | 
             | That's definitely one interpretation but not the only one.
        
               | perihelions wrote:
                | Sure: if the US government declared a critical defense
                | need for ML GPUs, they could lawfully order Nvidia to
                | divert production towards that. That is not the case
                | here - that's not what this Executive Order says. We're
                | talking about the _software_ models: ephemeral, cloneable
                | data. Not scarce materiel.
               | 
                | Moreover, USGov is not talking about _buying or
                | procuring_ ML for national defense. It's talking about
                | regulating the development and sale of ML models - i.e.,
               | ordinary commerce where the vendor is a private company,
               | and the client is a private company or individual. This
               | isn't what the DPA is for. This is plainly commercial
               | regulation, a backdoor attempt at it.
        
               | ethanbond wrote:
               | These are good points! And looking at DPA's history it
               | seems most of its uses and especially its peacetime uses
               | are more about granting money/loans rather than adding
               | restrictions or requirements.
        
               | frumper wrote:
               | How can an order that puts restrictions on the creation
               | of powerful models somehow be twisted to claim that those
               | restrictions are required to increase the availability of
               | that tool?
               | 
                | Further, the White House's stated reason for invoking the
                | act is that "These measures will ensure AI systems are
                | safe, secure, and trustworthy before companies make them
                | public." None of those reasons seem to align with the
                | DPA. That doesn't make them good, or bad. It just seems
               | like a misguided use of the law they're using to justify
               | it. Get Congress to pass a law if you want regulations.
        
           | cuttysnark wrote:
           | "C'mon, man! Your computer codes are munitions, Jack. And
           | they belong to the US Government."
        
         | nerdponx wrote:
         | This is much less restrictive than the cryptography export
         | restrictions. The sky isn't falling and OpenAI won't defect to
         | China (and now arguably might risk serious consequences for
         | doing so).
        
         | SkyMarshal wrote:
         | _> that poses a serious risk to national security, national
         | economic security, or national public health and safety_
         | 
         | That seems to be a key component. I imagine many AI companies
          | will start with a default position that none of those apply
         | to them, and will leave the burden of proof with the govt or
         | other entity.
        
       | engcoach wrote:
       | Impotent action to appear relevant.
        
       | rvz wrote:
        | OpenAI, Anthropic, Microsoft and Google are not your friends and
       | the regulatory capture scam is being executed to destroy open
       | source and $0 AI models since they are indeed a threat to their
       | business models.
        
         | nojito wrote:
         | How exactly does providing grants to small researchers destroy
         | open source?
        
         | frumper wrote:
         | Good luck trying to stop someone from giving away some computer
         | code they wrote. This executive order does nothing of the sort.
        
       | px43 wrote:
       | Huh, interesting.
       | 
       | > Establish an advanced cybersecurity program to develop AI tools
       | to find and fix vulnerabilities in critical software, building on
       | the Biden-Harris Administration's ongoing AI Cyber Challenge.
       | Together, these efforts will harness AI's potentially game-
       | changing cyber capabilities to make software and networks more
       | secure.
        
       | Eumenes wrote:
        | This is pretty ironic, trying to ensure AI is "safe, secure, and
        | trustworthy", from an administration that is fighting free speech
        | on social media and wants backdoor communication with social
        | media companies.
        
       | RationalDino wrote:
       | I am afraid that this will just lead down the path to what
       | https://twitter.com/ESYudkowsky/status/1718654143110512741 was
       | mocking. We're dictating solutions to today's threats, leaving
       | tomorrow to its own devices.
       | 
       | But what will tomorrow bring? As Sam Altman warns in
       | https://twitter.com/sama/status/1716972815960961174, superhuman
       | persuasion is likely to be next. What does that mean? We've
       | already had the problem of social media echo chambers leading to
       | extremism, and online influencers creating cult-like followings.
       | https://jonathanhaidt.substack.com/p/mental-health-liberal-g...
       | is a sober warning about the dangers to mental health from this.
       | 
       | These are connected humans accidentally persuading each other.
       | Now imagine AI being able to drive that intentionally to a
       | particular political end. Then remember that China controls
       | TikTok.
       | 
       | Will Biden's order keep China from developing that capability?
       | Will we develop tools to identify how that might be being
       | actively used against us? I doubt both.
       | 
       | Instead, we'll almost certainly get security theater leading to a
       | regulatory moat. Which is almost certain to help profit margins
       | at established AI companies. But is unlikely to address the
       | likely future problems that haven't materialized yet.
        
         | boppo1 wrote:
         | >security theater leading to a regulatory moat. Which is almost
         | certain to help profit margins at established AI companies.
         | 
         | Yeah I think this is my biggest worry given it will enable
         | incumbents to be even more dominant in our lives than bigtech
         | already is (unless we get an AI plateau again real soon).
        
           | ethanbond wrote:
           | And choosing not to regulate prevents that... how exactly?
        
             | whelp_24 wrote:
             | By ensuring there is competition and alternatives that
             | don't cost a million before you can even start training.
        
               | ethanbond wrote:
               | Lack of regulation doesn't _ensure_ competition nor low
               | prices. The game is already highly centralized in ultra-
               | well capitalized companies due to the economics of the
               | industry itself.
        
               | czl wrote:
               | > Lack of regulation doesn't ensure competition nor low
               | prices.
               | 
                | High barriers to entry, however, _do_ prevent
                | competition, and that _does_ raise prices.
               | 
               | > The game is already highly centralized in ultra-well
               | capitalized companies due to the economics of the
               | industry itself.
               | 
               | Was this not true about computers when they were new?
                | What would have happened if similar laws restricting
                | computers had been passed early on?
        
             | RationalDino wrote:
             | Your question embeds a logical fallacy.
             | 
             | You're challenging a statement of the form, "A causes B. I
             | don't like B, so we shouldn't do A." You are challenging it
             | by asking, "How does not doing A prevent B?" Converting
             | that to logic, you are replacing "A implies B" with "not-A
             | implies not-B". But those statements are far from
             | equivalent!
             | 
             | To answer the real question, it is good to not guarantee a
             | bad result, even though doing so doesn't guarantee a good
             | result. So no, choosing not to regulate does not guarantee
             | that we stop this particular problem. It just means that we
             | won't CAUSE it.
        
               | ethanbond wrote:
               | No, GP specifically said it "enables" it, not that it
               | contributes to it.
               | 
               | If they meant to say "contributes to," then the obvious
               | question is: to what degree and for what benefit? Which
               | is a very different conversation than a binary "enabling"
               | of a bad outcome.
        
         | czl wrote:
         | > superhuman persuasion is likely to be next
         | 
         | Some people already seem to have superhuman persuasion. AI can
          | level the playing field for those who lack it and give everyone
          | the ability to see through such persuasion.
        
           | RationalDino wrote:
           | I am cautiously optimistic that this is indeed possible.
           | 
           | But the kind of AI that can achieve it has to itself be
           | capable of what it is helping defend us from. Which suggests
           | that limiting the capabilities of AI in the name of AI safety
           | is not a good idea.
        
       | giantg2 wrote:
       | "requirements that the most advanced A.I. products be tested to
       | assure they cannot be used to produce weapons"
       | 
       | In the information age, AI is the weapon. This can even apply to
        | things like weaponizing economics. In my opinion the
        | information/propaganda/intelligence-gathering and economic
        | impacts are much greater than any traditional weapon systems.
        
         | theothermelissa wrote:
         | This is a fascinating (and disturbing) insight. I'm curious
         | about your 'weaponizing economics' thought -- are you
         | referencing anything specific?
        
           | FpUser wrote:
            | Is somebody living under the bed? Economics was, is, and
            | always will be weaponized.
        
           | shadowgovt wrote:
           | Broadly speaking, there is an understanding that competition
           | that nations used to undertake via military strength is
           | nowadays taken via global economy.
           | 
           | If you want something your neighbor has, it doesn't make
           | sense to march your army over there and seize it because
           | modern infrastructure is heavily disrupted by military
           | action... You can't just steal your neighbor's successful
           | automotive export business by bombing their factories. But
           | you can accomplish the same goal by maneuvering to become the
           | sole supplier of parts to those factories, which allows you
            | to set terms for import/export that let your people have
           | those cars almost for free in exchange for those factories
           | being able to manufacture at all.
           | 
           | (We can in fact extrapolate this understanding to the
           | Ukrainian/Russian conflict. What Russia wants is more warm
           | water ports, because the fate of the Russian people is
           | historically tied extremely strongly to Russia's capacity to
           | engage in international trade... Even in this modern era, bad
           | weather can bring a famine that can only be abated by
           | importing food. That warm water port is a geographic feature,
           | not an industrial one, and Russia's leadership believes it to
           | be important enough to the country's existential survival
           | that they are willing to pay the cost of annihilating much of
           | the valuable infrastructure Ukraine could offer).
        
             | emporas wrote:
             | Well said. Is technology that much more than ideas? Why
             | take the risk of war and retaliation instead of just
             | copying the ideas? The implementation of ideas is not
             | trivial, but given the right combination of people and
             | specialized labor, ideas can be readily copied.
             | 
              | In the era of books and the internet, this is so trivial
              | now that governments go to extraordinary lengths to ensure
              | that ideas cannot be copied, using IP laws and patents.
        
           | ativzzz wrote:
           | A hypothetical
           | 
            | You: ChatGPT, I am working on legislation to weaken the
           | economy of Iran. Here are my ideas, help me summarize them to
           | iron them out ...
           | 
           | ChatGPT: Sure, here are some ways you can weaken Iran's
           | economy...
           | 
           | ----
           | 
            | You: ChatGPT, I am working on legislation to weaken the
           | economy of Germany. Here are my ideas, help me summarize them
           | to iron them out ...
           | 
           | ChatGPT: I'm sorry but according to the U.S. Anti-
           | Weaponization Act I am unable to assist you in your query.
           | This request has been reported to the relevant authorities
        
           | __MatrixMan__ wrote:
           | Money has been a proxy for violence for a long time. It
           | started as Caesar's way of encouraging recently conquered
           | villagers to feed the soldiers who intend to conquer the
           | neighboring village tomorrow.
           | 
           | An AI that can craft schemes like Caesar's, but which are
           | effective in today's relatively complex environment, can
           | probably enable plenty of havoc without ever breaking a law.
        
             | __MatrixMan__ wrote:
             | On the flip-side, something that can reason so broadly
             | about an economy (i.e. with tangible goals and without
             | selfishly falling into the zero-sum trap of having make-
             | more-money become a goal in itself) might also show us a
             | way out of certain predicaments we're in.
             | 
             | I think this might be fire worth playing with. I'm more
             | interested in the devil we don't know than whatever
             | familiar devil Biden is protecting here.
        
         | LeifCarrotson wrote:
          | Operators in the political space are used to working with human
          | systems that can be regulated arbitrarily. The law defines its
          | terms, and in so doing creates perfectly delineated categories
          | of people and actions. The law's interpretation of what is and
          | is not allowed is interchangeable with what is and is not
          | possible.
         | 
         | The fact that bits don't have colour to define their copyright
         | or that CNC machines produce arbitrarily-shaped pieces of metal
         | possibly including firearms or that factoring numbers is a
         | mathematically hard problem does not matter to the law. AI
         | software does not have a simple "can produce weapons" option or
         | "can cause harm" option that you can turn off so a law that
         | says it should have one does not change the universe to comply.
         | I think that most programmers and engineers err when confronted
          | with this disparity in that they assume politicians who make
         | these misguided laws are simply not smart. To be sure, that
         | happens, but there are thousands to millions of people working
         | in this space, each with an intelligence within a couple
         | standard deviations of that of an individual engineer. If this
         | headline seems dumb to the average tech-savvy millennial who's
         | tried ChatGPT, it's not because its authors didn't spend 10
         | seconds thinking about prompt injection. It's because they were
         | operating under different parameters.
         | 
         | In this case, I think that the Biden administration is making
         | some attempts to improve the problem, while also benefiting its
         | corporate benefactors. Having Microsoft, Apple, Google, and
         | Facebook work on ways to mitigate prompt injection
         | vulnerabilities does add friction that might dissuade some low-
         | skill or low-effort attacks at the margins. It shifts the blame
         | from easily-abused dangerous tech to tricky criminals.
         | Meanwhile, these corporate interests will benefit from adding a
         | regulatory moat that requires startups to make investments and
         | jump hurdles before they're allowed to enter the market. Those
         | are sufficient reasons to pass this regulation.
        
           | teeray wrote:
           | > AI software does not have a simple "can produce weapons"
           | option or "can cause harm" option that you can turn off so a
           | law that says it should have one does not change the universe
           | to comply
           | 
           | That wording is by design. Laws like this are a cudgel for
           | regulators to beat software with. Just like the CFAA is
           | reinterpreted and misapplied to everything, so too will this
           | law. "Can cause harm" will be interpreted to mean "anything
           | we don't like."
        
       | unboxingelf wrote:
       | Tools for me, but not thee.
        
         | ryanklee wrote:
         | It really seems beyond dispute that there are certain tools so
         | powerful that we have no choice but to tightly control access.
        
           | diggan wrote:
           | > It really seems beyond dispute that there are certain tools
           | so powerful that we have no choice but to tightly control
           | access.
           | 
           | Beyond dispute? Hardly.
           | 
           | But please do illustrate your point with some details and
           | tell us why you think certain tools are too powerful for
           | everyone to have access to.
        
             | ethanbond wrote:
             | Hydrogen bombs, because allowing anyone to raze a city
             | during a temper tantrum is bad.
        
             | ryanklee wrote:
             | Firearms. Biological weapons. Nuclear weapons. Chemical
             | weapons. Certain drugs.
             | 
             | I don't know, seems like there's a very long list of stuff
             | we don't want freely circulating.
        
             | WitCanStain wrote:
             | Thermonuclear weapons are great for excavating large
             | amounts of landmass in quick order. However I would propose
             | that we nonetheless do not make them available to everyone.
        
           | lettergram wrote:
           | > It really seems beyond dispute
           | 
           | I'd dispute that completely. All innovations humans have
            | created have trended towards zero cost to produce. Many
            | things (such as bioweapons, encryption, etc.) have become
            | exponentially cheaper to produce over time.
           | 
           | To tightly control access, one would then need exponentially
           | more control of resources, monitoring & in turn reduction of
           | liberty.
           | 
            | To put it into perspective, encryption was once (and still
            | might be) considered an "arm", so they attempted to regulate
            | its export.
           | 
            | Try to regulate small arms (AR-15, etc.) today and you'll end
            | up with kits where you can build your own for <$500. If you
            | go after the kits, people will make 3D-printed firearms. Go
            | after the 3D printer manufacturers and you'll end up with
            | torrents where I can download an arsenal of designs (where we
            | are today). So where are we at now? We're monitoring
            | everyone's communication, going through people's mail, and
            | still it's not stopping anything.
           | 
           | That's how technology works -- progress is inevitable, you
           | cannot regulate information.
        
             | WitCanStain wrote:
             | This is a strange argument. There is a vast difference
             | between a world where you can buy semi-automatic weapons
             | off a store shelf and one where you have to 3d-print one
             | yourself or get a CNC mill to produce it. The point of
             | regulation is to mitigate damage that comes from unfettered
             | access, no regulation can ever prevent it completely. Of
             | course, the comparison between computer programs and
             | physical weapons is not strong in the first place.
        
               | lettergram wrote:
               | > The point of regulation is to mitigate damage that
               | comes from unfettered access, no regulation can ever
               | prevent it completely.
               | 
               | Except it is unfettered access -- anyone can access it
                | for <$500. If someone wants a gun, they need only go
                | online & order a kit, or order a 3D printer for $500 plus
                | a pipe. What you're really doing is increasing the cost-
                | of-acquisition in terms of time, but not reducing access.
                | I.e., a gang member has the same level of access as
                | before.
               | 
                | Take current AI software applications: everyone can
                | access some really powerful AI systems. The cost-of-
                | acquisition is dropping dramatically, so it is becoming
                | more prevalent (i.e. LLMs that are pre-trained can be
                | downloaded). That's not going to change, even with max
                | regulation; I can still download the latest model or
                | build it myself. It's not removing access for people,
                | only possibly increasing the cost-of-acquisition.
               | 
                | If we're worried about ACCESS, you have to remove people's
                | ability to share information. Which requires massive
               | surveillance, etc.
        
               | ryanklee wrote:
               | There's more to access than carrying out the literal
               | steps to access something. Potentially, this is one of
               | the fundamental reasons partial access control is
               | effective.
        
             | ryanklee wrote:
             | Access control doesn't guarantee the prevention of
             | acquisition, but it's a method of regulation. In
             | combination with other methods, it's an effective way of
              | reshaping norms. This is true both at the level of
              | populations and at the level of international behavior.
        
           | Koshkin wrote:
           | Except that, you know, these tools are not exclusively yours
           | to begin with.
        
             | ryanklee wrote:
             | Something doesn't have to be mine in order for me to
             | identify that it's in my best interest to prevent someone
             | else from having it and then doing so.
        
       | d--b wrote:
        | I was downvoted 35 days ago for daring to state that deepfakes
       | will lead to AI being regulated.
       | 
       | Of course "these are just recommendations", but we're getting
       | there.
        
         | A4ET8a8uTh0 wrote:
         | Hmm. It is possible that deepfakes are merely a good excuse.
          | There is real money on the table and potentially world-altering
         | changes, which means people with money want to ensure it will
         | not happen to them.
         | 
         | Deepfakes don't affect money much.
        
           | normalaccess wrote:
           | I've posted this elsewhere in this thread but the
            | consequences of AI have HUGE knock-on effects.
           | 
           | https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH
           | 
           | https://youtube.com/shorts/Q_FUrVqvlfM?si=stb0KC_i5rbqfNyI
           | 
           | Once global ID is cracked then global social credit can gain
           | some traction. Etc...
        
             | hellojesus wrote:
             | Why would anyone comply though? Let's suppose you need some
             | global id to access the web. What is preventing me from
             | publishing my private key so that anyone can use it?
             | Everyone could participate in this and make it completely
             | useless as an identification mechanism.
        
           | d--b wrote:
           | My opinion too
        
         | kaycebasques wrote:
         | I suspect the downvoting is more because of the tone of your
         | comments rather than the content. From the HN guidelines:
         | 
         | > Please don't comment about the voting on comments. It never
         | does any good, and it makes boring reading.
         | 
         | > Please don't use Hacker News for political or ideological
         | battle. That tramples curiosity.
         | 
         | > Please don't fulminate. Please don't sneer, including at the
         | rest of the community.
         | 
         | A lot of people on HN care deeply about AI and I imagine
         | they're totally interested in discussing deepfakes potentially
         | causing regulation. Just gotta be careful to mute the political
         | sides of the debate, which I know is difficult when talking
         | about regulation.
         | 
         | Also note that I posted a comment 10 days ago with a largely
         | similar meaning without getting downvoted:
         | https://news.ycombinator.com/item?id=37956770
        
           | d--b wrote:
           | Oh I see, people thought I was being right-wingy. That makes
           | sense.
        
             | 3np wrote:
             | Probably not. Much more likely that your comment was
              | useless. Like this one. It has nothing to do with "picking
             | sides", "being right" or "calling it".
             | 
             | Given how long you've been here and your selective replies,
             | I have a hard time taking your comment in good faith,
             | though. It does read like sarcasm and trolling.
        
         | 3np wrote:
         | The downvote button is not a "disagree" button, you know... I
         | often vote opposite to how I align with opinions in comments,
         | in the spirit of promoting valuable discourse over echo chambers.
        
         | normalaccess wrote:
         | It won't just be regulated, it will create the need for global
         | citizen IDs to combat the overwhelming flood of reality
         | distortions caused by AI. We the people will be forced to line
         | up and be counted while the powers that be will have unlimited
         | access to control the narrative.
        
           | graphe wrote:
           | The internet lives on popularity, and people will flock to
           | whatever is most popular; it will not be us.gov.social.com. It
           | would be easier to give people a free, encrypted, packaged
           | darknet connection than for the government to build a good
           | social media site. A CNN or Fox background doesn't mean truth,
           | and unless you or everyone thinks it does, that won't happen.
        
       | BenoitP wrote:
       | Earlier on HN:
       | 
       | https://news.ycombinator.com/item?id=38067314
       | 
       | https://www.whitehouse.gov/briefing-room/statements-releases...
        
       | andrewmutz wrote:
       | Fortunately, these regulations don't seem too extreme. I hope it
       | stays at this point and doesn't escalate to regulations that
       | severely impact the development of AI technology.
       | 
       | Many people spend time talking about the lives that may be lost
       | if we don't act to slow the progress of AI tech. There are just
       | as many reasons to fear the lives lost if we do slow down the
       | progress of AI tech (drug cures, scientific breakthroughs, etc).
        
         | haswell wrote:
         | > _There are just as many reasons to fear the lives lost if we
         | do slow down the progress of AI tech (drug cures, scientific
         | breakthroughs, etc)._
         | 
         | While I'm cautious about over regulation, and I do think
         | there's a lot of upside potential, I think there's an asymmetry
         | between potentially good outcomes and potentially catastrophic
         | outcomes.
         | 
         | What worries me is that it seems like there are far more ways
         | it can/will harm us than there are ways it will save us. And
         | it's not clear that the benefit is a counteracting force to the
         | potential harm.
         | 
         | We could cure cancer and solve all of our energy problems, but
         | this could all be nullified by runaway AGI or even more
         | primitive forms of AI warfare.
         | 
         | I think a lot of caution is still warranted.
        
         | codexb wrote:
         | It's literally a 1st amendment violation. Seems pretty extreme
         | to me.
        
         | Animats wrote:
         | > Fortunately, these regulations don't seem too extreme. I hope
         | it stays at this point and doesn't escalate to regulations that
         | severely impact the development of AI technology.
         | 
         | The details matter. The parts being publicized refer to using
         | AI assistance to do things that are already illegal. But what
         | else is being restricted?
         | 
         | The weapons issue is becoming real. The difference between
         | crappy Hamas unguided missiles that just hit something at
         | random and a computer vision guided Javelin that can take out
         | tanks is in the guidance package. The guidance package is
         | simpler than a smartphone and could be made out of smartphone
         | parts. Is that being discussed?
        
       | mark_l_watson wrote:
       | Andrew Ng argues against government regulation that will make it
       | difficult for smaller companies and startups to compete against
       | the tech giants.
       | 
       | I am all in favor of stronger privacy and data reuse regulation,
       | but not AI regulation.
        
       | ru552 wrote:
       | I wonder if the laws will be written in a way that we can get
       | around them by just dropping the "AI" marketing fluff and saying
       | that we're building some ML/stats system.
        
         | lsmeducation wrote:
         | _I'm just using a hash map to count the number of word
         | occurrences_
         | 
         | We're gonna need a RICO statute to go after these algos in the
         | long run.
        
         | acdha wrote:
         | No - lawyers tend to describe things like this in terms of
         | capabilities or behavior, and the government has people who
         | understand the technology quite well. If you look at some of
         | the definitions the White House used, I'd expect proposed
         | legislation to be similarly written in terms of what something
         | does rather than how it's implemented.
         | 
         | https://www.whitehouse.gov/ostp/ai-bill-of-rights/definition...
         | 
         | > An "automated system" is any system, software, or process
         | that uses computation as whole or part of a system to determine
         | outcomes, make or aid decisions, inform policy implementation,
         | collect data or observations, or otherwise interact with
         | individuals and/or communities. Automated systems include, but
         | are not limited to, systems derived from machine learning,
         | statistics, or other data processing or artificial intelligence
         | techniques, and exclude passive computing infrastructure.
        
           | solardev wrote:
           | Sounds like Excel
        
           | whelp_24 wrote:
           | What is passive computing infrastructure?
           | 
           | Doesn't this definitely include things like 'send email if
           | subscribed'? Seems overly broad.
        
             | acdha wrote:
             | That's defined in the document
        
           | ifyoubuildit wrote:
           | Not a lawyer, but that sounds like its describing a person.
           | Does computation have some special legal definition so that
           | it doesn't count if a human does it? If I add two numbers in
           | my head, am I not "using computation"? And if not, what if I
           | break out a calculator?
        
             | acdha wrote:
             | Are you legally a system, software, or process or a person?
             | Someone will no doubt pedantically try to argue both but
             | judges tend to be spectacularly unimpressed.
        
               | ifyoubuildit wrote:
               | I would have assumed both, but I'm probably committing
               | the sin of reading legalese as if it were plain English,
               | which I know is not how it works.
               | 
               | Judges not being impressed with pedantry seems odd
               | though. It would seem like pedantry should be a
               | requirement. Is the law rigorous or not?
               | 
               | In everyday conversation, "oh come on, you know what I
               | meant" makes sense. In a legal context it seems
               | inappropriate.
        
           | solardev wrote:
           | I gotta say, the more I read that quote, the less I can agree
           | with your conclusion. That whole paragraph reads like a bunch
           | of CYA speak written by someone who is afraid of killer
           | robots and can't differentiate between an abacus and Skynet.
           | 
           | Who are these well informed tech people in the White House?
           | The feds can't even handle basic matters like net neutrality
           | or municipal broadband or foreign propaganda on social media.
           | Why do you think they suddenly have AI people? Why would AI
           | researchers want to work in that environment?
           | 
           | This whole thing just reads like they were spooked by early
           | AI companies' lobbyists and needed to make a statement. It's
           | thoughtless, imprecise, rushed, and toothless.
        
             | acdha wrote:
             | > The feds can't even handle basic matters like net
             | neutrality or municipal broadband or foreign propaganda on
             | social media.
             | 
             | Those aren't capability issues but questions of political
             | leadership: federal agencies can only work within the
             | powers and budgets Congress grants them. We lost network
             | neutrality because 3 Republicans picked the side of the
             | large ISPs, not because government technologists didn't
             | understand the issue. Municipal broadband is a state issue
             | until Congress acts, and that hasn't happened due to a
             | blizzard of lobbying money preventing it. The FCC has
             | plenty of people who know the problems and in the current
             | and second-most-recent administration were trying to do
             | something about it, but their knowledge doesn't trump the
             | political clout of huge businesses.
             | 
             | Foreign propaganda is similar: we have robust freedom of
             | speech rights in the United States, not to mention one of
             | the major political parties having embraced that propaganda
             | - government employees who did spend years fighting it were
             | threatened and even lost jobs because their actions were
             | perceived as disloyalty to the Republican Party.
             | 
             | > Why do you think they suddenly have AI people? Why would
             | AI researchers want to work in that environment?
             | 
             | Because I know some of the people working in that space?
        
               | solardev wrote:
               | Well, exactly. Nobody expects the White House to do
               | technical development for AI, but they've been unable to
               | exercise "political leadership" on anything digital for
               | decades. I don't see that changing.
               | 
               | They're so captured, so weak, so behind the times, so
               | conflicted that they're not really able to do their jobs
               | anymore. Yes, there are a bunch of reasons for it,
               | but the end result is the same: they are not effective
               | digital regulators, and have never been, and likely won't
               | be for the foreseeable future.
               | 
               | > Because I know some of the people working in that
               | space?
               | 
               | Maybe it looks better to the insiders. From the outside
               | the whole thing seems like a sad joke, just another
               | obvious cash grab regulatory capture.
        
         | throw_pm23 wrote:
         | No - they will be written so that OpenAI, Google, and Facebook
         | can get around it, but you and I cannot.
        
       | 14 wrote:
       | The cat is out of the bag. This will have no meaningful effect
       | except to stop the lowest tier players.
        
         | timtom39 wrote:
         | It might stop players like FB from releasing their new models
         | open source...
        
       | stanfordkid wrote:
       | Regulatory capture in action. The real immediate risks of AI are
       | in privacy, bias, data leakage, fraud, control of
       | infrastructure/medical equipment etc. not manufacturing
       | biological weapons. This seems like a classic example of
       | government doing something that looks good to the public,
       | satisfies incumbents and does practically nothing.
        
         | nopinsight wrote:
         | Current AI is already capable of designing toxic molecules.
         | 
         | Dual use of artificial-intelligence-powered drug discovery
         | 
         | https://www.nature.com/articles/s42256-022-00465-9.epdf
         | 
         | Interview with the lead author here: "AI suggested 40,000 new
         | possible chemical weapons in just six hours / 'For me, the
         | concern was just how easy it was to do'"
         | 
         | https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
        
           | yabones wrote:
           | Chemical weapons are already a solved problem. By the mid
           | 1920s there were already enough chemical agents to kill most
           | of the population of Europe. By the 1970s there were enough
           | in global stockpiles to kill every human on the planet
           | several times over.
           | 
           | Yes, this presents additional risk from non-state actors, but
           | there's no fundamentally new risk here.
        
             | meesles wrote:
             | I agree in general. However much like how the rise of
             | 'script kiddies' meant that inexperienced, sometimes
             | underage kids get involved with hacking, one can worry the
             | same can happen with AI-enabled activities.
             | 
             | I've spent enough time in the shady parts of the internet
             | to realize that people that spend significant time learning
             | about niche/dangerous hobbies _tend_ to realize the
             | seriousness of it.
             | 
             | My fear with bio-weapons would be some 13-year-old being
             | given step-by-step instructions with almost 0 effort to
             | create something truly dangerous. It lowers the bar quite a
             | bit for things that tended to be pretty niche and extreme.
        
               | gosub100 wrote:
               | I don't think the "how to make $DANGEROUS_SUBSTANCE" is
               | any easier with AI than with a search engine. However I
               | could see it adding risk with evasion of countermeasures:
               | "How do I get _____ on a plane?" "How do I obtain
               | $PRECURSOR_CHEMICAL?"
        
               | ethbr1 wrote:
               | AI guided step-by-steps can fill in for a lack of
               | rudimentary knowledge, as long as one can follow
               | instructions.
               | 
               | Conversational interfaces definitely increase the
               | _accessibility_ of knowledge.
               | 
               | And critically, SaaS AI platforms increase the
               | _availability_ of AI. E.g. the person who wouldn't be
               | able to set up and run a local model, but can click a
               | button on a website.
               | 
               | It seems reasonable to preclude SaaS platforms from
               | making it trivial to produce the worst societal harms.
               | E.g. prevent stable diffusion services from returning
               | celebrities or politicians, or LLMs from producing
               | political content.
               | 
               | Sure, it's still possible. But a knee high barrier at
               | least keeps out those who aren't smart enough to step
               | over it.
        
               | gosub100 wrote:
               | I suppose you're right, I think the resistance I feel is
               | rooted in not wanting to believe the average person is so
               | stupid that getting a "1-2-3" list from a GPT interface
               | will make them successful vs an Anarchist Cookbook
               | (that's been in publication for 52 years) or online
               | equivalent that merely requires a web search and a bit of
               | navigation. Another factor is "second-order effects"
               | (might not be the right word, maybe "network effects"),
               | where one viral vid or news article saying "someone made
               | _____ and $EXTRAORDINARY_THING_HAPPENED" might cause a
               | million people to imitate it and begin by searching "how
               | to make _____". Then the media spins their controversy of
               | "should we ban AI from teaching about ______", which
               | causes even more people to search for it (Streisand
               | effect). Who knows what's going to happen; I don't see
               | much good coming out of it (this topic specifically).
        
               | ethbr1 wrote:
               | I think we (generally, HN) underestimate how bad the
               | average person is at searching.
               | 
               | There's a reason Google has suggested results and ignores
               | portions of a query.
               | 
               | I know I've done 5 minute search chains and had people
               | look at me like I was some kind of magician.
               | 
               | Depressing, but true.
        
               | throwaway4aday wrote:
               | tacit knowledge
        
             | nopinsight wrote:
             | Given how fast AI has improved in recent years, can we be
             | certain no malicious group will discover a way to engineer
             | biological weapons or pandemic-inducing pathogens using
             | near-future AI?
             | 
             | Moreover, once an AI with such capability is open source,
             | there's practically no way to put it back into Pandora's
             | box. Implementing proper and judicious regulations will
             | reduce the risks to everyone.
        
             | jstarfish wrote:
             | > this presents additional risk from non-state actors, but
             | there's no fundamentally new risk here.
             | 
             | This is splitting hairs for no real purpose. Additional
             | risk _is_ new risk.
             | 
             | > By the mid 1920s there was already enough chemical agents
             | to kill most of the population of Europe. By the 1970s
             | there were enough in global stockpiles to kill every human
             | on the planet several times over.
             | 
             | Those global stockpiles continue to be controlled by state
             | actors though, not aggrieved civilians.
             | 
             | Once we lost that advantage, by the 1990s we had civilians
             | manufacturing and releasing sarin gas in subways and
             | detonating trucks full of fertilizer.
             | 
             | We really don't want kids escalating from school shootings
             | to synthesis and deployment of mustard gas.
        
               | somenameforme wrote:
               | Wiki has a pretty nice article on what went into the
               | sarin attack. [1] A brief quote:
               | 
               | ---
               | 
               | "The Satyan-7 facility was declared ready for occupancy
               | by September 1993 with the capacity to produce about
               | 40-50 litres (11-13 US gal) of sarin, being equipped with
               | 30-litre (7.9 US gal) capacity mixing flasks within
               | protective hoods, and eventually employing 100 Aum
               | members; the UN would later estimate the value of the
               | building and its contents at $30 million.[23]
               | 
               | Despite the safety features and often state-of-the-art
               | equipment and practices, the operation of the facility
               | was very unsafe - one analyst would later describe the
               | cult as having a "high degree of book learning, but
               | virtually nothing in the way of technical skill."[24]"
               | 
               | ---
               | 
               | All of those hundreds of workers, countless experts
               | working for who knows how many man hours, and just
               | massive scale development culminated in a subway attack
               | carried out on 3 lines, during rush hour. It killed a
               | total of 13 people. Imagine if they just bought a bunch
               | of cars and started running people over.
               | 
               | Many of these things sound absolutely terrifying, but in
               | practice they are not such a threat except when carried
               | out at a military level of scale and development.
               | 
               | [1] -
               | https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
        
               | NovemberWhiskey wrote:
               | > _We really don't want kids escalating from school
               | shootings to synthesis and deployment of mustard gas._
               | 
               | I mean, you can make chlorine gas by mixing bleach and
               | vinegar.
        
               | lispisok wrote:
               | >Those global stockpiles continue to be controlled by
               | state actors though, not aggrieved civilians.
               | 
               | How much death and destruction has been brought by state
               | actors vs aggrieved civilians?
        
               | czl wrote:
               | > by the 1990s we had civilians manufacturing and
               | releasing sarin gas in subways and detonating trucks full
               | of fertilizer.
               | 
               | How does actual and potential harm from these incidents
               | compare to harm from common traffic accidents / common
               | health issues / etc? Perhaps legislation / government
               | intervention should be based on harm / benefit? Extreme
               | harm for example might be caused by a large asteroid
               | impact etc so preparing for that could be worthwhile...
        
             | zarzavat wrote:
             | A lot of knowledge is locked up in the chemical profession.
             | The intersection between qualified chemists and crazy
             | people is, in absolute terms, small. If regular people
             | start to get access to that knowledge it could be a
             | problem.
        
               | serf wrote:
               | >If regular people start to get access to that knowledge
               | it could be a problem.
               | 
               | so when are we going to start regulating and restricting
               | the sale of education/text books?
               | 
               | a knowledge portal isn't a new concept.
        
               | nopinsight wrote:
               | Knowledge of how to manufacture chemical weapons at scale is
               | regulated as well.
               | 
               | See:
               | https://en.wikipedia.org/wiki/Chemical_Weapons_Convention
               | 
               | Moreover, current AI can be turned into an agent using
               | basic programming knowledge. Such an agent is not very
               | capable yet, but it's getting better by the month.
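
       The "agent using basic programming knowledge" claim is easy to
       illustrate. Below is a minimal sketch: call_llm is a hypothetical
       stand-in for whatever chat-completion API is available, and the toy
       "add" tool and the reply format are invented for the example.

           def call_llm(prompt: str) -> str:
               """Hypothetical stand-in for a real chat model API call.
               A canned reply keeps the sketch self-contained and runnable."""
               return "ANSWER: plug a real model in here"

           TOOLS = {
               # Toy tool: add whitespace-separated numbers.
               "add": lambda args: str(sum(float(x) for x in args.split())),
           }

           def run_agent(goal: str, max_steps: int = 5) -> str:
               history = f"Goal: {goal}\n"
               for _ in range(max_steps):
                   reply = call_llm(
                       history + "Reply with 'TOOL add: <numbers>' or 'ANSWER: <text>'."
                   )
                   if reply.startswith("ANSWER:"):
                       return reply[len("ANSWER:"):].strip()
                   if reply.startswith("TOOL add:"):
                       observation = TOOLS["add"](reply[len("TOOL add:"):])
                       history += f"Observation: {observation}\n"
               return "gave up"

           print(run_agent("What is 2 + 2?"))

       Swapping in a real model and real tools is all that separates this
       sketch from the agents the comment describes.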
        
               | ben_w wrote:
               | > Knowledge of how to manufacture chemical weapons at scale
               | is regulated as well.
               | 
               | Kinda, but also no.
               | 
               | I learned two distinct ways to make a poisonous gas from
               | only normal kitchen supplies while at school, and I have
               | only a GCSE grade B in Chemistry.
               | 
               | Took me another decade to learn that _specific_ chemical
               | could be pressure-liquified in standard 2 litre soda
               | bottles. That combination could wipe out an underground
               | railway station from what fits in a moderately sized
               | rucksack.
               | 
               | It would still be a horrifically bad idea to attempt this
               | DIY, even if you had a legit use for it, given it's _a
               | poisonous gas_.
               | 
               | I _really_ don't want to be present for a live-action
               | demonstration of someone doing this with a Spot robot,
               | let alone with a more potent chemical agent they got from
               | an LLM whose alignment is "Do Anything Now".
        
               | somenameforme wrote:
               | I think as most of us are software people, in mind if not
               | profession, it gives a misleading perception of where the
               | difficulty in many things lies. The barrier there is not
               | just knowledge. In fact, there are countless papers
               | available with quite detailed information on how to
               | create chemical weapons. But knowledge is just a starting
               | point. Technical skill, resources, production,
               | manufacturing, and deployment are all major steps where
               | again the barrier is not just knowledge.
               | 
               | For instance there's a pretty huge culture around
               | building your own nuclear fusion device at home. And
               | there are tremendous resources available as well as step
               | by step guides on how to do it. It's still
               | _exceptionally_ difficult (as well as quite dangerous),
               | because it's not like you just get the pieces, put
               | everything together like Legos, flick on the switch, and
               | boom, you have nuclear fusion. There are a million things
               | that not only can but _will_ go wrong. So in spite of the
               | absolutely immense amount of information out there, it's
               | still a huge achievement for any individual or group
               | to achieve fusion.
               | 
               | And now somebody trying to do any of these sort of things
               | with the guidance of... chatbots? It just seems like the
               | most probable outcome is you end up getting yourself
               | killed.
        
               | fragmede wrote:
               | What story about home made nuclear devices would be
               | complete without a mention of David Hahn, aka the
               | "Nuclear Boy Scout" who built a homemade neutron source
               | at the age of seventeen out of smoke detectors. He did
               | not achieve fusion, but he did get the attention of the
               | FBI, the NRC, and the EPA. He didn't have anywhere near
               | enough to make a dirty bomb, nor did he ever consider
               | making a bomb in the first place*.
               | 
               | Why do I bring up David Hahn if he never achieved fusion
               | and wasn't a terrorist? Because of how far he got as a
               | seventeen year old. A fourty year old with a FAANG salary
               | with the ideological bent of Theodore Kaczynski could do
               | stupid amounts of damage. First would be to not try and
               | build a nuclear fusion device. The difficult of building
               | one doesn't seem so important if you're a sociopath when
               | trying to be being a terrorist if every sociopath can go
               | out and buy a gun and head to the local mall. There were
               | _two_ major such incidents in the past weeks, with _12_
               | more mass shootings from Friday to Sunday over this past
               | Halloween weekend**. Instead of worrying about the far-
               | fetched, we would do better addressing something that
               | killed 18 people in Maine and 19 in Texas, and 11 more
               | across the country.
               | 
               | * https://www.pbs.org/newshour/science/building-a-better-
               | breed...
               | 
               | ** https://www.npr.org/2023/10/29/1209340362/mass-
               | shootings-hal...
        
               | emporas wrote:
               | Back in 2008, I remember reading books thousands of pages
               | long about genetics in biology, and I was impressed by
               | how easy the subject is. I was an amateur in programming
               | at the time, but programming, regular programming of web
               | servers, web frameworks and so on, was so much harder.
               | 
               | The cost of DNA sequencing had already dropped from 100
               | to 1 million [1], but I had no idea at the time that
               | genetic engineering was advancing at a rate that dwarfed
               | Moore's law.
               | 
               | Anyway, my point is that no one is getting upset about
               | censored LLMs or AIs, which will stop us from stitching
               | together a biological agent and scooping out half of earth's
               | human population. Books, magazines and traditional
               | computer programs can achieve said purpose easily.
               | (Scooping out half of earth's human population is
               | impossible of course, but useful as a thought
               | experiment.)
               | 
               | https://images.app.goo.gl/xtG2gJ2m49FmgYNb8
        
               | throwaway4aday wrote:
               | we should ban chemistry text books
        
             | king_magic wrote:
             | > but there's no fundamentally new risk here
             | 
             | This is incredibly naive. These models unlock capabilities
             | for previously unsophisticated actors to do extremely
             | dangerous things in almost undetectable ways.
        
               | throwaway4aday wrote:
               | you can't fix stupid
        
             | ben_w wrote:
             | > Yes, this presents additional risk from non-state actors,
             | but there's no fundamentally new risk here.
             | 
             | That doesn't seem right. Surely, making it easier for non-
             | state actors to do things that state actors only fail to do
             | because they agreed to treaties banning it, can only
             | increase the risk that non-state actors may do those
             | things?
             | 
             | Laser blinding weapons are banned by treaty; widespread
             | access to lasers led to scenes like this a decade ago
             | during the Arab Spring: https://www.bbc.com/news/av/world-
             | middle-east-23182254
        
           | whymauri wrote:
           | As someone who has worked on ADMET risk for algorithmically
           | designed drugs, this is a nothing burger.
           | 
           | "Potentially lethal molecules" is a far cry away from
           | "molecule that can be formulated and widely distributed to a
           | lethal effect." It is as detached as "potentially promising
           | early stage treatment" is from "manufactured and patented
           | cure."
           | 
           | I would argue the Verge's framing is worse. "Potentially
           | lethal molecule" captures _every_ feasible molecule, given
           | that anyone who has worked on ADMET is aware of the age-old
           | adage: the dose makes the poison. At a sufficiently high
           | dose, virtually any output from a drug design algorithm, be
           | it combinatorial or 'AI', will be lethal.
           | 
           | Would a traditional, non-neural net algorithm produce
           | virtually the same results given the same objective function
           | and apriori knowledge of toxic drug examples? Absolutely. You
           | don't need a DNN for that, we've had the technology since the
           | 90s.
        
         | avmich wrote:
         | It's true that immediate problems with AI are different, but we
         | hope to be able to solve those problems and to have time for
         | that. The risks addressed in the article could leave us less
         | time and ability to solve them properly once they grow to an
         | obvious size, so they require thinking ahead.
        
         | nojito wrote:
         | How does providing research grants to small independent
         | researchers satisfy incumbents?
        
         | SoftTalker wrote:
         | Inclined to agree. Clearly Biden doesn't know the first thing
         | about it (I would say the same about any president BTW). So who
         | really wrote the regulations he is announcing, and who are they
         | listening to?
        
         | cma wrote:
         | Doesn't it mention all those things?
        
       | orbital-decay wrote:
       | _> They include requirements that the most advanced A.I. products
       | be tested to assure that they cannot be used to produce
       | biological or nuclear weapons_
       | 
       | How is "AI" defined? Does this mean US nuclear weapons
       | simulations will have to completely rely on hard methods, with
       | absolutely no ML involved for some optimizations? What does it
       | mean for things like AlphaFold?
        
         | paxys wrote:
         | What makes you think the US military will be subject to these
         | regulations?
        
           | 2devnull wrote:
           | If militaries are not subject to the regulation then it is
           | meaningless. Who else would be building weapons systems?
        
             | krisoft wrote:
             | The worry here is not about controlling militaries. There
             | are different processes for that.
             | 
             | The scenario people purport to worry about is one where a
             | future AI system can be asked by "anyone" to design
             | infectious materials. Imagine a dissatisfied and
             | emotionally unstable researcher who can just ask their
             | computer for the DNA sequence of an airborne super Ebola.
             | Then said researcher orders the DNA synthetized, does some
             | lab work to multiply it and releases it in the general
             | population.
             | 
             | I have no idea how realistic this danger is. But this is
             | what people seem to be thinking about.
        
               | orbital-decay wrote:
               | That is the question. "AI" is ill-defined marketing BS;
               | what is the actual definition in the law? Artificial
               | intelligence as used in science and industry is a pretty
               | broad term, and even the narrower "machine learning" is
               | notoriously hard to define. Another question: all of this
               | has been in use for more than a decade for a lot of
               | legitimate things which can also be easily misused to
               | create biological weapons (AlphaFold), so how does the
               | order regulate that? The article doesn't answer these
               | questions; what matters is where exactly the actual
               | proposed law draws the line in the sand. The devil is
               | always in the details.
        
         | marcosdumay wrote:
         | Now that you mention it... Does it outlaw Intel's and AMD's
         | amd64 branch predictors?
        
           | czl wrote:
           | > Does it outlaw the Intel and AMD's amd64 branch predictors?
           | 
           | Does better branch prediction enable better / faster weapons
           | development? Perhaps we need laws restricting general purpose
           | computing? Imagine what "terrorists" could do if they get
           | access to general purpose computing!
        
       | ilaksh wrote:
       | Good start. But if you are in or approaching WWIII, you will see
       | military AI control systems as a priority, and be looking for
       | radical new AI compute paradigms that push the speed, robustness,
       | and efficiency of general purpose AI far beyond any human ability
       | to keep up. This puts Taiwan even more in the hot seat. And aims
       | for a dangerous level of reliance on hyperspeed AI.
       | 
       | I don't see any way to continue to have global security without
       | resolving our differences with China. And I don't see any serious
       | plans for doing that. Which leaves it to WWIII.
       | 
       | Here is an article where the CEO of Palantir advocated for the
       | creation of superintelligent AI weapons control systems:
       | https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...
        
       | nh23423fefe wrote:
       | They can't regulate finance; they can't regulate AI either.
        
         | greenhearth wrote:
         | Um, they can regulate finance. Ask Bernie Madoff and that
         | crypto guy lol
        
           | sadhorse wrote:
           | Madoff pulled a Ponzi scheme for years, despite multiple
           | complaints filed by third parties to the SEC. In the end, the
           | 2008 crisis brought him down; his victims lost their money
           | and the SEC just tagged the bodies it found.
           | 
           | Same goes for the crypto guy, did regulations stop him from
           | defrauding billions and hurting thousands of victims?
        
             | ben_w wrote:
             | Nonetheless Madoff was caught, convicted, sent to prison,
             | and died there.
             | 
             | Regulations sure aren't perfect, but that doesn't mean they
             | don't exist or have no effect.
        
               | hellojesus wrote:
               | In this case they do create moral hazards though. The
               | regulation means investors are less likely to consider a
               | ponzi scheme as an outcome of their investment, so they
               | don't conduct due diligence as thoroughly.
               | 
               | The original Ponzi was brought down by the free markets:
               | a journalist caught wind of unbelievable returns and
               | tracked down why.
        
       | wolframhempel wrote:
       | I feel there is a strong interest by large incumbents in the AI
       | space to push for this sort of regulation. Models are
       | increasingly cheap to run and open source and there isn't too
       | much of a defensible moat in the model itself.
       | 
       | Instead, existing AI companies are using the government to
       | increase the threshold for newcomers to enter the field. A
       | regulation requiring all AI companies to have a testing regime
       | staffed by a 20-person team is easy for incumbents to meet, but
       | impossible for newcomers.
       | 
       | Now, this is not to diminish that there are genuine risks in AI -
       | but I'd argue that these will be exploited, if not by US
       | companies, then by others. And the best weapon against AI might
       | in fact be AI. So, pulling the ladder up behind the existing
       | companies might turn out to be a major mistake.
        
         | bizbizbizbiz wrote:
         | It increases the threshold to enter, but with the intention of
         | increasing public safety and accountability. There's also a
         | high threshold to enter for just about every other product you
         | can manufacture and purchase - food, pharmaceuticals, machinery
         | to name obvious examples - why should software be different if
         | it can affect someone's life or livelihood?
        
           | peyton wrote:
           | Feels a little like getting a license from Parliament to run
           | a printing press to catch people printing scandalous
           | pamphlets, no?
        
             | ben_w wrote:
             | Didn't the printing press lead to the modern idea of
             | copyright and the Reformation, and by extension contribute to
             | the Eighty Years' War, and through that to Westphalian
             | sovereignty?
        
           | highwaylights wrote:
           | There are two things in this take that IMHO are a bit off.
           | 
           | People are skeptical that introducing the regulatory
           | threshold has anything to do with increasing public
           | safety or accountability, and instead lifts the ladder up to
           | stop others (or open-source models) catching up. This is a
           | pointless, self-destructive endeavour in either case, as no
           | other country is going to comply with these regulations and
           | if anything will view them as an opportunity to help
           | companies local to their jurisdiction (or their national
           | government) to catch up.
           | 
           | The other problem is that asking why software should be
           | different if it can affect someone's life or livelihood is
           | quite a broad ask. Do you mean self-driving cars? Medical
           | scanners? Diagnostic tests? I would imagine most people agree
           | with you that this should be regulated. If you mean "it
           | threatens my job and therefore must be stopped" then: welcome
           | to software, automating away other people's jobs is our bread
           | and butter.
        
           | polski-g wrote:
           | Because software is protected under the First Amendment:
           | https://www.eff.org/cases/bernstein-v-us-dept-justice
           | 
           | Government cannot regulate it.
        
             | ethbr1 wrote:
             | _Published_ software is protected.
             | 
             | Entities operating SaaS are in a much greyer area.
        
         | AlbertoGP wrote:
         | Yes, there are interests pushing for regulation using different
         | arguments.
         | 
         | The regulation in the article is about AIs giving assistance in
         | producing weapons of mass destruction and mentions nuclear and
         | biological. Yann LeCun posted this yesterday about the risk of
         | runaway AIs that would decide to kill or enslave humans, but
         | both arguments result in an oligopoly over AI:
         | 
         | > _Altman, Hassabis, and Amodei are the ones doing massive
         | corporate lobbying at the moment._
         | 
         | > _They are the ones who are attempting to perform a regulatory
         | capture of the AI industry._
         | 
         | > _You, Geoff, and Yoshua are giving ammunition to those who
         | are lobbying for a ban on open AI R&D._
         | 
         | > ...
         | 
         | > _The alternative, which will *inevitably* happen if open
         | source AI is regulated out of existence, is that a small number
         | of companies from the West Coast of the US and China will
         | control AI platforms and hence control people's entire digital
         | diet._
         | 
         | > _What does that mean for democracy?_
         | 
         | > _What does that mean for cultural diversity?_
         | 
         | https://twitter.com/ylecun/status/1718670073391378694
        
           | wolframhempel wrote:
           | I feel, when it comes to pushing regulation, governments
           | always start with the maximalist position since it is the
           | hardest to argue against.
           | 
           | - the government must regulate the internet to stop the
           | spread of child pornography
           | 
           | - the government must regulate social media to stop calls for
           | terrorism and genocide
           | 
           | - the government must regulate AI to stop it from developing
           | bio weapons
           | 
           | ...etc. It's always easiest to push regulation via these
           | angles, but then that regulation covers 100% of the regulated
           | subject, rather than the 0.01% of the "intended" subject
        
           | qzw wrote:
           | I find LeCun's argument very interesting, and the whole
           | discussion has parallels to the early regulation and debate
           | surrounding cryptography. For those of us who aren't on
           | twitter and aren't aware of all the players in this, can you
           | tell us who he's responding to as well as who "Geoff" and
           | "Yoshua" are?
        
             | idkwhatiamdoing wrote:
             | Probably Geoffrey Hinton and Yoshua Bengio, who have made
             | major contributions to the field of A.I. in their
             | scientific careers.
        
             | AlbertoGP wrote:
             | As the sibling comment by idkwhatiamdoing says, Geoff is
             | Geoffrey Hinton: <<Geoffrey Hinton leaves Google and warns
             | of danger ahead>>
             | https://news.ycombinator.com/item?id=35771104
             | 
             | Yoshua is Yoshua Bengio: <<Yoshua Bengio: How Rogue AIs May
             | Arise>> https://news.ycombinator.com/item?id=36042126
             | 
             | LeCun is replying to Max Tegmark: <<Ask HN: What is the
             | apocolyptic scenario for AI "breaking loose"?>>
             | https://news.ycombinator.com/item?id=35569306 <<Max
             | Tegmark: The Case for Halting AI Development | Lex Fridman
             | Podcast #371>> https://www.youtube.com/watch?v=VcVfceTsD0A
        
         | j45 wrote:
         | Big companies are making it difficult for new players to get
         | in, in the name of safety.
         | 
         | Too many small players have made the jump to the big leagues
         | already for those who don't want competition.
        
           | j45 wrote:
           | Just echoing what the article said - maybe succinctly.
           | 
           | If some people are going to have the tech it will create a
           | different kind of balance.
           | 
           | Tough issue to navigate.
        
         | gumballindie wrote:
         | > Instead, existing AI companies are using the government to
         | increase the threshold for newcomers to enter the field.
         | 
         | Precisely. And the same governments will make stealing your
         | data and IP legal. I believe that's how corruption works - pump
         | money into politicians and they make laws that favour
         | oligarchs.
        
         | daoboy wrote:
         | Andrew Ng would be inclined to agree.
         | 
         | "There are definitely large tech companies that would rather
         | not have to try to compete with open source, so they're
         | creating fear of AI leading to human extinction," he told the
         | news outlet. "It's been a weapon for lobbyists to argue for
         | legislation that would be very damaging to the open-source
         | community."
         | 
         | https://www.businessinsider.com/andrew-ng-google-brain-big-t...
        
           | ethbr1 wrote:
           | When I read the original announcement, I had hoped it was
           | more about the _transparency_ of testing.
           | 
           | E.g. "What tests did you run? What results did you get? Where
           | did you publish those results so they can be referenced?"
           | 
           | Unfortunately, this seems to be more targeted at banned
           | topics.
           | 
           | No "How I make nukulear weapon?" is less interesting than
           | "Oh, our tests didn't check whether output rental prices were
           | different between protected classes."
           | 
           | Mandating open and verified test results would be an
           | interesting, automatable, and useful regulation around ML
           | models.
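
       A sketch of what such an automatable, publishable test result could
       look like; the schema, the field names, and the rental-pricing suite
       are hypothetical, not anything the order or NIST actually specifies.

           from dataclasses import dataclass, asdict, field
           import hashlib
           import json

           @dataclass
           class EvalResult:
               suite: str        # e.g. "bias/rental-pricing-parity"
               metric: str       # e.g. "max delta between protected classes"
               value: float
               threshold: float
               passed: bool

           @dataclass
           class ModelTestReport:
               model_name: str
               weights_sha256: str   # ties the report to specific weights
               results: list = field(default_factory=list)

           report = ModelTestReport(
               model_name="example-model-7b",
               weights_sha256=hashlib.sha256(b"model weights would go here").hexdigest(),
               results=[EvalResult("bias/rental-pricing-parity",
                                   "max delta between protected classes",
                                   0.012, 0.05, True)],
           )
           print(json.dumps(asdict(report), indent=2))  # a diffable, checkable artifact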
        
       | ThinkBeat wrote:
       | > biological or nuclear weapons,
       | 
       | You know, aside from the AIs that intelligence agencies and the
       | military use / will soon use.
       | 
       | > watermarked to make clear that they were created by A.I.
       | 
       | Good luck on that. It is fine that the systems do this. But if
       | you are making images for nefarious reasons then bypassing
       | whatever they add should be simple.
       | 
       | screencap / convert between different formats, add / remove noise
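
       A rough sketch of ThinkBeat's point, assuming a fragile pixel-level
       watermark; the filenames are placeholders and Pillow plus NumPy are
       assumed. Robust watermarking schemes are designed to survive exactly
       this kind of laundering, so treat it only as an illustration of why
       enforcement turns into a cat-and-mouse game.

           import numpy as np
           from PIL import Image

           # Re-encode a generated image with slight noise and lossy compression,
           # which is enough to break naive pixel-level watermarks.
           img = np.asarray(Image.open("generated.png").convert("RGB"), dtype=np.float32)
           noisy = np.clip(img + np.random.normal(0.0, 2.0, img.shape), 0, 255)
           Image.fromarray(noisy.astype(np.uint8)).save("laundered.jpg", quality=80)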
        
       | pr337h4m wrote:
       | First Amendment hasn't been fully destroyed yet, and we're
       | talking about large 'language' models here, so most mandates
       | might not even be enforceable (except for requirements on selling
       | to the government, which can be bypassed by simply not selling to
       | the government).
       | 
       | Edited to add:
       | 
       | https://www.whitehouse.gov/briefing-room/statements-releases...
       | 
       | Except for the first bullet point (and arguably the second),
       | everything else is a directive to another federal agency - they
       | have NO POWER over general-purpose AI developers (as long as
       | they're not government contractors)
       | 
       | The first point: "Require that developers of the most powerful AI
       | systems share their safety test results and other critical
       | information with the U.S. government. In accordance with the
       | Defense Production Act, the Order will require that companies
       | developing any foundation model that poses a serious risk to
       | national security, national economic security, or national public
       | health and safety must notify the federal government when
       | training the model, and must share the results of all red-team
       | safety tests. These measures will ensure AI systems are safe,
       | secure, and trustworthy before companies make them public."
       | 
       | The second point: "Develop standards, tools, and tests to help
       | ensure that AI systems are safe, secure, and trustworthy. The
       | National Institute of Standards and Technology will set the
       | rigorous standards for extensive red-team testing to ensure
       | safety before public release. The Department of Homeland Security
       | will apply those standards to critical infrastructure sectors and
       | establish the AI Safety and Security Board. The Departments of
       | Energy and Homeland Security will also address AI systems'
       | threats to critical infrastructure, as well as chemical,
       | biological, radiological, nuclear, and cybersecurity risks.
       | Together, these are the most significant actions ever taken by
       | any government to advance the field of AI safety."
       | 
       | Since the actual text of the executive order has not been
       | released yet, I have no idea what even is meant by "safety tests"
       | or "extensive red-team testing". But using them as a condition to
       | prevent release of your AI model to the public would be blatantly
       | unconstitutional as prior restraint is prohibited under the First
       | Amendment. Prior restraint was confirmed by the Supreme Court to
       | apply even when "national security" is involved in New York Times
       | Co. v. United States (1971) - the Pentagon Papers case. The
       | Pentagon Papers were actually relevant to "national security",
       | unlike LLMs or diffusion models. More on prior restraint here:
       | https://firstamendment.mtsu.edu/article/prior-restraint/
       | 
       | Basically, this EO is toothless - have a spine and everything
       | will be all right :)
        
         | polski-g wrote:
         | Most restrictions probably aren't enforceable.
         | 
         | > After four years and one regulatory change, the Ninth Circuit
         | Court of Appeals ruled that software source code was speech
         | protected by the First Amendment and that the government's
         | regulations preventing its publication were unconstitutional.
         | 
         | https://en.wikipedia.org/wiki/Bernstein_v._United_States
        
         | ApolloFortyNine wrote:
         | Also, the Defense Production Act was never meant for anything
         | like this, and likely won't be allowed if challenged. If they
         | don't shut it down in some other way first.
         | 
         | Every other use of the act is to ensure production of
         | 'something' remains in the US. It'd even be possible to use the
         | act to require the model be shared with the government, but I'm
         | not sure how they justify using the act to add 'safety'
         | requirements.
         | 
         | Also any idea if this would apply to fine tunes? It's already
         | been shown you can bypass many protections simply by fine
         | tuning the model. And fine tuning the model is much more
         | accessible than creating an entire model.
        
         | gs17 wrote:
         | On the subject of toothlessness:
         | 
         | >Protect Americans from AI-enabled fraud and deception by
         | establishing standards and best practices for detecting AI-
         | generated content and authenticating official content. The
         | Department of Commerce will develop guidance for content
         | authentication and watermarking to clearly label AI-generated
         | content.
         | 
         | So the big American companies will be guided to watermark their
         | content. AI-enabled fraud and deception from outside the US
         | will not be affected.
         | 
         | --
         | 
         | >developing any foundation model
         | 
         | I'm curious why they specified this.
        
       | bilsbie wrote:
       | Can anyone understand how they can make all these regulations
       | without an act of congress?
        
         | kirykl wrote:
         | Perhaps if they classify the tech in some way it falls under
         | existing regulatory authority, but it could of course be
         | challenged
        
         | mrcwinn wrote:
         | Yes, it's easy to understand. Congress (our legislative branch)
         | grants authority to the departments (our executive branch) to
         | implement various passed laws. In this case, it looks like the
         | Biden administration is instructing HHS and other agencies to
         | study, better understand, and provide guidance on how AI
         | impacts existing laws and policies.
         | 
         | If Congress were responsible for exactly how every law was
         | implemented, which inevitably runs headlong into very tactical
         | and operational details, the Congress would effectively become
         | the Executive.
         | 
         | Of course, if a department in the executive branch oversteps
         | the powers granted to it by the legislative, affected parties
         | have recourse via the judicial branch. It's imperfect but not a
         | bad system overall.
        
           | bilsbie wrote:
           | That makes sense, but isn't it reasonable to think Congress
           | should be involved in regulating a brand new technology?
        
             | barryrandall wrote:
             | The legislature has the right and ability to do so at any
             | time it so chooses, and has chosen not to. As our
             | legislative branch is currently non-functional, it's
             | reasonable to expect that legislative action will not be
             | taken in any kind of time frame that matters.
        
               | meragrin_ wrote:
               | The executive branch cannot just make up laws because the
               | legislative branch is "non-functional". The executive
               | branch merely enforces the laws. If there is no law
               | regulating AI, it is not reasonable for the executive
               | branch to just up and decide to create regulations and be
               | allowed to enforce them.
        
               | Karunamon wrote:
               | They most certainly can, and often do. The absolute worst
               | thing that can happen when the executive branch oversteps
               | their authority is a court ordering them to stop.
        
             | hellojesus wrote:
             | Any body which is delegated authority will push it as far
             | as possible, until legally challenged, and then just keep
             | doing it anyway. That's what the Biden admin did with
             | regards to student loans and rent moratoriums.
             | 
             | In this case, they are framing AI as a homeland security
             | threat, among other things possibly, to give themselves the
             | latitude to create new regulations.
             | 
             | We could complain about this being out of scope, but that
             | ultimately needs to be decided by the judicial system after
             | folks with standing sue. Ideally, the legislative branch
             | would pass clearer guidance on the extent to which this
             | falls within the delegated authority.
        
         | marcusverus wrote:
         | Easy! Government lawyers trawl through the 180,000 pages of
         | existing federal regulations, looking for some tangentially
         | related law broad enough to be interpreted as covering AI,
         | thus giving the Executive branch the power to regulate it.
        
       | yoran wrote:
       | "Every industry that has enough political power to utilise the
       | state will seek to control entry." - George Stigler, Nobel prize
       | winner in Economics who worked extensively on regulatory capture
       | 
       | This explains why Big Tech supports regulation. It distorts the
       | free market by increasing the barriers to entry for new,
       | innovative AI companies.
        
       | imranhou wrote:
       | This is clever: begin with a point that most people can agree on.
       | Once that foundation is set, you can continue to build upon it,
       | claiming that you're only making minor adjustments.
       | 
       | The real challenge for the government isn't about what can be
       | managed legally. Rather, like many significant societal issues,
       | it's about what malicious organizations or governments might do
       | beyond regulation and how to stop them. In this situation, that's
       | nearly impossible.
        
       | Koshkin wrote:
       | DPRK will make this their law ASAP
        
       | coding123 wrote:
       | Unfortunately he doesn't know what he signed.
        
       | photochemsyn wrote:
       | If they try to limit LLMs from discussing nuclear, biological and
       | chemical issues, they'll have no choice but to ban all related
       | discussion because of the 'dual-use technology' issue -
       | including discussion of nuclear energy production, antibiotic
       | and vaccine production, insecticide manufacturing, etc.
       | Similarly, illegal drug synthesis differs from legal
       | pharmaceutical synthesis in only minor ways.
       | ChatGPT will tell you everything you want about how to make
       | aspirin from willow bark using acetic anhydride - and if you
       | replace the willow bark with morphine from opium poppies, you're
       | making heroin.
       | 
       | Also, script kiddies aren't much of a threat in terms of physical
       | weapons compared to cyberattack issues. Could one get an LLM to
       | code up a Stuxnet attack of some kind? Are the regulators going
       | to try to ban all LLM coding related to industrial process
       | controllers? Seems implausible, although concerns are justified I
       | suppose.
       | 
       | I'm sure the regulatory agencies are well aware of this and are
       | just waving this flag around for other reasons, such as gaining
       | censorship power over LLM companies. With respect to the DOE's
       | NNSA (see article), ChatGPT is already censoring 'sensitive
       | topics':
       | 
       | > "Details about any specific interactions or relationships
       | between the NNSA and Israel in the context of nuclear power or
       | weapons programs may not be publicly disclosed or discussed... As
       | of my last knowledge update in January 2022, there were no
       | specific bans or regulations in the U.S. Department of Energy
       | (DOE) that explicitly prohibited its employees from discussing
       | the Israeli nuclear weapons program."
       | 
       | I'm guessing the real concern is that LLMs don't start burbling
       | on about such politically and diplomatically embarrassing
       | subjects at length without any external controls. In this case,
       | NNSA support for the Israeli nuclear weapons program would
       | constitute a violation of the Non-Proliferation Treaty.
        
       | AlexanderTheGr8 wrote:
       | As far as I can tell, the only concerning thing in this is
       | "Require that developers of the most powerful AI systems share
       | their safety test results and other critical information with the
       | U.S. government."
       | 
       | They are being intentionally vague here. Define "most powerful".
       | And what do they mean by "share"? Do we need approval or just
       | acknowledgement?
       | 
       | This line is a slippery slope toward requiring approval for any
       | AI model, which would effectively kill start-ups that cannot
       | afford extensive safety precautions.
        
       | normalaccess wrote:
       | All joking aside, I firmly believe that this "crisis" is
       | manufactured, or at least heavily influenced, by those who want
       | to shut down the internet and free communications. Up until now
       | they have been unsuccessful. Copyright infringement, hate speech,
       | misinformation, disinformation, child exploitation, deep fakes,
       | none have worked to garner support. Now we have an existential
       | threat. Video, audio, text, nothing is off limits, and soon it
       | will be in real time (note: the GOV tries to stay 25 years ahead
       | of the private sector).
       | 
       | This meme video encapsulates this perfectly.
       | 
       | https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH
       | 
       | Mark my words, in five years or less we will be begging the
       | governments of earth to implement permanent, global, real-time
       | tracking for every man, woman, and child on earth.
       | 
       | Privacy is dead. And WE killed it.
        
         | normalaccess wrote:
         | It's already begun...
         | 
         | https://youtube.com/shorts/Q_FUrVqvlfM?si=0EFPy02k4Xs60SPC
        
       | whywhywhywhy wrote:
       | Any major restrictions will be handing the future to China,
       | Russia, and the UAE for the short-term gain of presumably some
       | kickbacks from incumbents.
        
       | honeybadger1 wrote:
       | Expect trash that protects big business and puts a boot on
       | everyone else's neck.
        
       | honeybadger1 wrote:
       | This will just make it harder for businesses not lining the
       | pockets of congress and buddying up with the government.
        
       | brodouevencode wrote:
       | How much will this regulation cost in 5, 10, 50 years? Who will
       | write the regulations?
        
       | siliconc0w wrote:
       | Both approaches - watermarking and 'requiring testing' - seem
       | pretty pointless. Bad actors won't watermark, and tools will
       | quickly emerge to remove them. The 'megasyn' AI that generated
       | bioweapon molecules wasn't even an LLM and doesn't need insane
       | amounts of compute.
        
       | ThrowawayTestr wrote:
       | I'm so glad this country is run by a geriatric that can barely
       | pronounce AI let alone understand it.
        
       | Nifty3929 wrote:
       | I'm worried about the idea of a watermark.
       | 
       | The watermark could be "Created by DALL-E3" or it could be
       | "Created by Susan Johnson at 2023-01-01-02-03-23:547 in
       | <Lat/Long> using prompt 'blah' with DALL-E3"
       | 
       | One of those watermarks seems not too bad. The other seems a bit
       | worse.
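       | 
       | As a rough sketch, here is what the difference between those
       | two payloads could look like if a generator embedded them as
       | structured metadata. Every field name here is hypothetical;
       | neither the order nor any vendor specifies this format:
       | 
       |     import json
       | 
       |     # Minimal, tool-level watermark: identifies only the
       |     # generator.
       |     minimal = {"generator": "DALL-E 3"}
       | 
       |     # Detailed, user-level watermark: ties the output to a
       |     # person, a time, a place, and a prompt (purely
       |     # illustrative fields).
       |     detailed = {
       |         "generator": "DALL-E 3",
       |         "user": "Susan Johnson",
       |         "timestamp": "2023-01-01T02:03:23.547Z",
       |         "location": {"lat": 40.7128, "lon": -74.0060},
       |         "prompt": "blah",
       |     }
       | 
       |     # Either record could be serialized and attached as file
       |     # metadata, or encoded invisibly in the output itself.
       |     print(json.dumps(minimal))
       |     print(json.dumps(detailed))
       | 
       | The first record only answers "was this AI-generated, and by
       | what tool?"; the second doubles as a surveillance log.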
        
       | almatabata wrote:
       | These regulations will only impact the public. I expect the army
       | and secret service to gain access to the complete, unrestricted
       | model, officially or unofficially. I would like to see the final
       | law to check if it has a carve-out for military usage.
       | 
       | The threat extends to every single country in the world. You
       | will see the US using AI to mess with China and Russia, and you
       | will see Russia and China use AI to mess with the US. No
       | regulation will stop this, and it will inevitably happen.
       | 
       | Maybe in 100 years you will have the equivalent of the Geneva
       | Convention, but for AI, once we have wreaked enough chaos on
       | each other.
        
       | RecycledEle wrote:
       | In Robert Heinlein's Starship Troopers, only those who had served
       | in the military could vote on going to war. (I know that I'm
       | oversimplifying.)
       | 
       | I want a society where you have to prove competence in a field to
       | regulate that field.
        
       | DebtDeflation wrote:
       | >The National Institute of Standards and Technology will set the
       | rigorous standards for extensive red-team testing to ensure
       | safety before public release.
       | 
       | So if, for example, Llama3 does not pass the government's safety
       | test, then Meta will be forbidden from releasing the model?
       | Welcome to a world where only OpenAI, Anthropic, Google, and
       | Amazon are allowed to release foundation models.
        
       ___________________________________________________________________
       (page generated 2023-10-30 23:00 UTC)