[HN Gopher] US and UK refuse to sign AI safety declaration at su...
       ___________________________________________________________________
        
       US and UK refuse to sign AI safety declaration at summit
        
       Author : miohtama
       Score  : 204 points
       Date   : 2025-02-12 09:33 UTC (13 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | miohtama wrote:
       | Sums it up:
       | 
       | "Vance just dumped water all over that. [It] was like, 'Yeah,
       | that's cute. But guess what? You know you're actually not the
       | ones who are making the calls here. It's us,'" said McBride.
        
         | consp wrote:
         | The bullies are in charge. Prepare to get beaten to the curb
         | and your lunch money stolen.
        
           | swarnie wrote:
           | It been that way for... 300 years?
           | 
           | Those with the biggest economies and/or most guns has changed
           | a few times but the behaviours haven't and probably never
           | will.
        
             | gyomu wrote:
             | If you're making sweeping statements like that, why the
             | arbitrary distinction at 300 years? What happened then? Why
             | not say "since the dawn of humanity"?
        
               | lucky1759 wrote:
               | It's not some arbitrary distinction from 300 years ago,
               | it's something called "the Enlightenment".
        
               | gyomu wrote:
               | The bullies with most guns and biggest economies have
               | been in charge since the Enlightenment? Huh?
        
               | kabouseng wrote:
               | Probably referring to the period that pax Britannia and
               | pax Americana have been the global hegemon.
        
               | swarnie wrote:
               | I was keeping it simple for the majority.
        
             | computerthings wrote:
             | That's what Europeans thought for centuries, until Germany
             | overdid it. Then we had new ideas, e.g. https://en.wikipedi
             | a.org/wiki/Universal_Declaration_of_Human...
        
               | FirmwareBurner wrote:
               | The declaration of human rights, like a lot of other
               | laws, declarations and similar pieces of paper signed by
               | politicians, have zero value without the corresponding
               | enforcement, and are often just there for optics so that
               | taxpayers feel like their elected leaders are making good
               | use of their money and are on the side of good.
               | 
               | And the extent of which you can do global enforcement
               | (which is often biased and selective) is limited by the
               | reach of your economic and military power.
               | 
               | Which is why the US outspends the rest of the world
               | military powers combined and how the US and their troops
               | have waged ilegal wars and committed numerous crimes
               | abroad and gotten away with it despite pieces of papers
               | saying what they're doing is bad, but their reaction was
               | always _" what are you gonna do about it?"_.
               | 
               | See how many atrocities have happened under the watch of
               | the UN. Laws aren't real, it's the enforcement that is
               | real. Which is why the bullies get to define the laws
               | that everyone else has to follow because they have the
               | monopoly on enforcement.
        
               | computerthings wrote:
               | The same is true for the HN comment I replied to, which
               | was basically going *shrug*, but also without any army to
               | enforce that. So I pointed out that some people went
               | beyond just shrugging, because it _could_ not go on like
               | this; and here is what they wrote. Just _reading_ these
               | things does a person good, and to stand up for these
               | things you first have to know them.
        
               | pjc50 wrote:
               | > Laws aren't real, it's the enforcement that is real
               | 
               | Well, yes. This is why people have been paying a lot of
               | attention to what exactly "rule of law" means in the US,
               | and what was just norms that can be discarded.
        
           | Xelbair wrote:
           | i mean.. you need power to enforce your values. and UK hasn't
           | been in power for a long time.
           | 
           | "If you are not capable of violence, you are not peaceful.
           | You are harmless"
           | 
           | Unless you can stand on equal field - either by alliance, or
           | by your own power - you aren't a negotiating partner, and i
           | say that as European.
        
             | wobfan wrote:
             | > "If you are not capable of violence, you are not
             | peaceful. You are harmless"
             | 
             | this is exactly the value that caused so much war and death
             | all over the world, for decades and thousands of years.
             | still, even in 2025, it's being followed. are we doomed,
             | chat?
        
               | pjc50 wrote:
               | US sending clear signals to countries that they should
               | start thinking about their own nuclear proliferation,
               | even if that means treaty-breaking.
        
               | GlacierFox wrote:
               | The emphasis is the word _capable_ here. I think there 's
               | a difference between a country using their capability of
               | violence to actually be violent and a one with the
               | tangible capability using it for peace.
        
               | jddj wrote:
               | There are peaceful strategies that are temporarily stable
               | in the face of actors who capitalise on peaceful actors
               | to take their resources, but they usually (always?) take
               | the form of quickly moving on when an aggressor arrives.
               | 
               | Eg. birds abandoning rather than defending a perch when
               | another approaches.
               | 
               | We're typically not happy to do that, though you can see
               | it happening in some parts of the world right now.
               | 
               | Some kind of enlightened state where violent competition
               | for resources (incl. status & power) no longer makes
               | sense is imaginable, but seems a long way off.
        
               | Yoric wrote:
               | Just to clarify, who's the aggressor in what you write?
               | The US?
        
               | jddj wrote:
               | No one in particular. Russia would be one current
               | example, Israel (and others in the region at various
               | times) another, the US and Germany historically, the
               | Romans, the Ottomans, China, Japan, Britain, Spain,
               | warlords in the western sahara, the kid at school who
               | wanted the other kids' lunch money.
               | 
               | The idea though is that if everyone suddenly disarmed
               | overnight it would be so highly advantageous to a deviant
               | aggressor that one would assuredly emerge.
        
               | numpad0 wrote:
               | Yes and we don't know if the US is on the blue side this
               | time. It's scary.
        
               | Xelbair wrote:
               | https://en.wikipedia.org/wiki/Nash_equilibrium
               | 
               | yes.
               | 
               | i would also recommend The Prince as light reading to
               | better understand how the world works.
        
           | xyzal wrote:
           | I think the saddest fact about it is that not even the US
           | state weilds the power. It is some sociopathic bussinessmen.
        
             | ta1243 wrote:
             | Businessmen have been far more powerful than states for at
             | least the last 20 years
        
               | Nasrudith wrote:
               | Generally I can't help but see 'more powerful than the
               | government' claims forever poisoned from their shallow
               | use in the context of cryptography.
               | 
               | Where it was used in a rhetorical tantrum throwing
               | response to their power refuse to do the impossible like
               | make an encryption backdoor 'only for good guys' and have
               | the sheer temerity to stand against arbitrary exercises
               | of authority by using the courts to check them only to
               | their actual power.
               | 
               | If actual 'more powerful than the states' occurs they
               | have nobody to blame but themselves for crying wolf.
        
           | gardenhedge wrote:
           | My response to "the bullies are in charge" has been downvoted
           | and flagged yet what I am responding to remains up. It's a
           | different opinion on the same topic started by GP. Either
           | both should stay or both should go.
        
         | Dalewyn wrote:
         | I love the retelling of "I don't really care, Margaret." here.
         | 
         | But politics aside, this also points to something I've said
         | numerous times here before: In order to write the rulebook you
         | _need_ to be a creator.
         | 
         | Only those who actually make and build and invent things get to
         | write the rules. As far as "AI" is concerned, the creators are
         | squarely the United States and presumably China. The EU, Japan,
         | et al. being mere consumers sincerely cannot write the rules
         | because they have no weight to throw around.
         | 
         | If you want to be the rulemaker, be a creator; not a litigator.
        
           | consp wrote:
           | Sure you can. Outright ban it. Or do what china does, copy it
           | and say the rules do not matter.
        
           | mvc wrote:
           | > Only those who actually make and build and invent things
           | get to write the rules
           | 
           | Create things? Or destroy them? Seems in reality, the most
           | powerful nations are the ones who have acquired the greatest
           | potential to destroy things. Creation is worthless if the
           | dude next door is prepared to burn your house down because
           | you look different to him.
        
           | cowboylowrez wrote:
           | if your both the creator and rulemaker then this is the magic
           | combo to a peaceful and beneficial society for the entire
           | planet! or maybe not.
        
           | gardenhedge wrote:
           | What about https://en.wikipedia.org/wiki/Mistral_AI?
        
           | piltdownman wrote:
           | > The EU, Japan, et al. being mere consumers sincerely cannot
           | write the rules because they have no weight to throw around
           | 
           | Exactly what I'd expect someone from a country where the
           | economy is favoured over the society to say - particularly in
           | the context of consumer protection.
           | 
           | You want access to a trade union of consumers? You play by
           | the rules of that Union.
           | 
           | American exceptionalism doesn't negate that. A large
           | technical moat does. But DeepSeek has jumped in and revealed
           | how shallow that moat really is for AI at this neonatal
           | stage.
        
             | Dalewyn wrote:
             | >Exactly what I'd expect someone from a country
             | 
             | I'm Japanese-American, so I'm not exactly happy about
             | Japan's state of irrelevance (yet again). Their one saving
             | grace as a special(er) ally and friend is they can still
             | enjoy some of the nectar with us if they get in lockstep
             | like the UK does (family blood!) when push comes to shove.
        
             | ReptileMan wrote:
             | Except EU is hell bent on going the way of Peron's
             | Argentina or Mugabe's Zimbabwe. The EU relative share of
             | world economy has been going down with no signs of the
             | trends reversal. And instead of innovating our ways of
             | stagnation we have - permanently attached bottle caps and
             | cookie confirmation windows.
        
               | piltdownman wrote:
               | > EU is hell bent on going the way of Peron's Argentina
               | or Mugabe's Zimbabwe
               | 
               | https://www.foxnews.com/
        
               | ReptileMan wrote:
               | Nope mate. Looking at my purchasing power compared to the
               | USA guys I knew now and in 2017. Not in my favor. EU
               | economy is grossly mismanaged. Our standards of living
               | have been flat for the last 18 years since the financial
               | crisis.
               | 
               | In 2008 EU had more people, more money and bigger economy
               | than US, with proper policies we could be in a place
               | where we could bitch slap both Trump and Putin. And not
               | left to wonder whose dick we have to suck deeper to get
               | some gas.
        
               | DrFalkyn wrote:
               | Peter Zeihan would say, that's the problem Europe has, in
               | addition to demographic collapse. They're not energy
               | indepedent and hitched their star to Russia (especially
               | Germany), on the belief that economic interdependence
               | would keep things somewhat peaceful. How wrong they were
        
           | Maken wrote:
           | Who is even the creator here? Current AI is a collection of
           | techniques developed in universities and research labs all
           | over the world.
        
             | Dalewyn wrote:
             | >Who is even the creator here?
             | 
             | People and countries who make and ship products.
             | 
             | You don't make rules by writing several hundred pages of
             | legalese as a litigator, you make rules by creating
             | products and defining the market.
             | 
             | Be creators, not litigators.
        
               | generic92034 wrote:
               | > You don't make rules by writing several hundred pages
               | of legalese as a litigator, you make rules by creating
               | products and defining the market.
               | 
               | That is completely wrong, at least if rules = the law.
               | You might create fancy products all you like, if they do
               | not adhere to the law in any given market, they cannot be
               | sold there.
        
         | enugu wrote:
         | AI doesn't look it will be restricted to one country. A
         | breakthrough becomes common place in a matter of years. So that
         | paraphrase of Vance's remarks, if accurate, would mean that he
         | is wrong.
         | 
         | The danger of something like AI+drones (or less imminent,
         | AI+bioengineering) can lead to a severe degradation of
         | security, like after the invention of nuclear weapons. A
         | degradation in security, which requires collective action. Even
         | worse, chaos could be caused by small groups weaponizing the
         | technology against high profile targets.
         | 
         | If anything, the larger nations might be much more forceful
         | about AI regulation than the above summit by demanding an NPT
         | style treaty where only a select club has access to the
         | technology in exchange for other nations having access to the
         | applications of AI from servers hosted by the club.
        
           | logicchains wrote:
           | >The danger of something like AI+drones (or less imminent,
           | AI+bioengineering) can lead to a severe degradation of
           | security
           | 
           | For smaller countries nukes represented an increase in
           | security, not a degradation. North Korea probably wouldn't
           | still be independent today if it didn't have nukes, and
           | Russia would never have invaded Ukraine if Ukraine hadn't
           | given up its nukes. Restricting access to nukes is only in
           | the interest of big countries that want to bully small
           | countries around, because nukes level the playing field. The
           | same applies to AI.
        
             | enugu wrote:
             | The comment was not speaking in favour of restrictionism,
             | (I don't support it) but what strategy the more powerful
             | states will adopt.
             | 
             | Regarding an increase in security with nukes, what you say
             | applies for exceptions against a general non-nuclear
             | background. Without restrictions, every small country could
             | have a weapon, with a danger of escalation behind every
             | conflict, authoritatrians using a nuclear option as a
             | protection against a revolt etc. The likelihood of nuclear
             | war would be much more(even with the current situation,
             | there have been close shaves)
        
             | idunnoman1222 wrote:
             | Drones have AI you can buy them on AliExpress express what
             | is your point?
        
           | dkjaudyeqooe wrote:
           | > The danger of something like AI+drones (or less imminent,
           | AI+bioengineering) can lead to a severe degradation of
           | security, like after the invention of nuclear weapons.
           | 
           | You don't justify or define "severe degradation of security"
           | just assert it as a fact.
           | 
           | The advent of nuclear weapons has meant 75 years of relative
           | peace which is unheard of in human history, so quite the
           | opposite.
           | 
           | Given that AI weapons don't exist, then you've just created a
           | straw man.
        
             | enugu wrote:
             | The peace that you refer to, involved a strong restriction
             | placed by more powerful states which restricts nuclear
             | weapons to a few states. This didn't involve any principle,
             | but was an assertion of power. A figleaf of eventual
             | disarmament did not materialize.
             | 
             | I do claim that it is obvious that widespread acquisition
             | of nuclear weapons by smaller states would be a severe
             | degradation of security. Among other things, widespread
             | ownership, would also mean that militant groups would
             | acquire it and dictators would use it as a protection
             | leading to an eventual use of the weapons.
             | 
             | Yes, the danger of AI weapons is nowhere at that level of
             | nuclear weapons yet.
             | 
             | But, that is the trend.
             | 
             | https://www.nytimes.com/2023/07/25/opinion/karp-palantir-
             | art...
             | 
             | https://news.ycombinator.com/item?id=42938125
        
         | mk89 wrote:
         | I see it differently.
         | 
         | They need to dismantle bureaucracy to accelerate, NOT add new
         | international agreements etc that would slow them down.
         | 
         | Once they become leaders, they will come up with such
         | agreements to impose their "model" and way to do things.
         | 
         | Right now they need to accelerate and not get stuck.
        
       | chrisjj wrote:
       | Previous: https://www.theguardian.com/technology/2025/feb/11/us-
       | uk-par...
        
       | pjc50 wrote:
       | My two thoughts on this:
       | 
       | - there's a real threat from AI to the open internet by drowning
       | it in spam, fraud, and misinformation
       | 
       | - current "AI safety" work does basically nothing to address this
       | and is kind of pointless
       | 
       | It's important that AI-enabled processes which affect humans are
       | fair. But that's just a subset of a general demand for justice
       | from the machine of society, whether it's implemented by humans
       | or AIs or abacuses. Which comes back to demanding fair treatment
       | from your fellow humans, because we haven't solved the human
       | "alignment problem".
        
         | thih9 wrote:
         | And of course people responsible for AI disruptions would love
         | to sell solutions for the problems they created too.
         | Notably[1]:
         | 
         | > Worldcoin's business is to provide a reliable way to
         | authenticate humans online, which it calls World ID.
         | 
         | [1]: https://en.m.wikipedia.org/wiki/World_(blockchain)
        
           | robohoe wrote:
           | "Tools for Humanity" and "for-profit" in a single sentence
           | lost me.
        
         | tigerlily wrote:
         | And so it seems we await the imminent arrival of a new eternal
         | September of unfathomable scale; indeed as we deliberate, that
         | wave may already be cresting, breaking upon every corner of the
         | known internet. O wherefore this moment?
        
         | dgb23 wrote:
         | From a consumer's perspective I want declaration.
         | 
         | I want to know whether an image or video is largely generated
         | by AI, especially when it comes to news. Images and video often
         | imply that they are evidence of something actually happening.
         | 
         | I don't know how this would be achieved. I also don't care. I
         | just want people to be accountable and transparent.
        
           | cameronh90 wrote:
           | We can't even define the boundaries of AI. When you take a
           | photo on a mobile phone, the resulting image is a neural
           | network manipulated composite of multiple photos [0]. Anyone
           | using Outlook or Grammarly now is probably using some form of
           | generative AI when writing emails.
           | 
           | Rules like this would just lead to everything having an "AI
           | generated" label.
           | 
           | People have tried it in the past with trying to require
           | fashion magazines and ads warn when they photoshop the
           | models. But obviously everything is photoshopped, and the
           | problem becomes how do we separate good photoshop (levels,
           | blemish remover?) from bad photoshop (warp tool?).
           | 
           | [0] https://appleinsider.com/articles/23/11/30/a-bride-to-be-
           | dis...
        
           | idunnoman1222 wrote:
           | It can't be achieved now what Mr. authoritarian?
        
         | TiredOfLife wrote:
         | >there's a real threat from AI to the open internet by drowning
         | it in spam, fraud, and misinformation
         | 
         | That happened years ago. And without llms
        
       | raverbashing wrote:
       | Honestly those declarations are more hot air and virtue signaling
       | than anything else.
       | 
       | And even more honestly, nobody cares
        
       | beardyw wrote:
       | DeepMind has it's headquarters and most of it's staff in London.
        
         | graemep wrote:
         | and what is the other country that refused to sign?
         | 
         | They will move to countries where the laws suit them. Generally
         | business as usual these days and why big businesses have such a
         | strong bargaining position with regard to national governments.
         | 
         | Both the current British and American governments are very pro
         | big-business anyway. That is why Trump has stated he likes
         | Starmer so much.
        
       | olivierduval wrote:
       | At the same time, "Europeans think US is 'necessary partner' not
       | 'ally'" (https://www.euronews.com/my-europe/2025/02/12/europeans-
       | thin...)
       | 
       | I wonder why... maybe because it look like US replaced some
       | "moral values" (not talking about "woke values" here, just plain
       | "humanistic values" like in Human Rights Declaration) with
       | "bottom line values" :-)
        
         | ahiknsr wrote:
         | > I wonder why
         | 
         | Hmm. > Donald Trump had a fiery phone call with Danish prime
         | minister Mette Frederiksen over his demands to buy Greenland,
         | according to senior European officials.
         | 
         | https://www.theguardian.com/world/2025/jan/25/trump-greenlan...
         | 
         | > The president has said America pays $200bn a year
         | 'essentially in subsidy' to Canada and that if the country was
         | the 51st state of the US 'I don't mind doing it', in an
         | interview broadcast before the Super Bowl in New Orleans
         | 
         | https://www.theguardian.com/us-news/video/2025/feb/10/trump-...
        
       | mtkd wrote:
       | Given what is potentially at stake if you're not the first nation
       | to achieve ASI, it's a little late to start imposing any
       | restrictions or adding distractions
       | 
       | Similarly, whoever gains the most training and fine-tuning data
       | from whatever source via whatever means first -- will likely be
       | at advantage
       | 
       | Hard to see how that toothpaste goes back in the tube now
        
       | mrtksn wrote:
       | Is this the declaration? https://www.elysee.fr/emmanuel-
       | macron/2025/02/11/pledge-for-...
       | 
       | It appears to be essentially "We promise not to do evil"
       | declaration. It contains things like "Ensure AI eliminates biases
       | in recruitment and does not exclude underrepresented groups.".
       | 
       | What's the point of rejecting this? Seems like a show, just like
       | the declaration itself.
       | 
       | Depending on what side of the things you are, if you don't
       | actually take a look at it you might end up believing that US is
       | planning to do evil and others want to eliminate evil or
       | alternatively you might believe that US is pushing for progress
       | when EU is trying to slow it down.
       | 
       | Both appear false to me, IMHO its just another instance of US
       | signing off from the global world and whatever "evil" US is
       | planning to do China will do it better for cheaper anyway.
        
         | ExoticPearTree wrote:
         | Yeah, well, when you start your AI declaration with woke and
         | DEI phrases...
         | 
         | > We pledge to foster inclusive AI as a critical driver of
         | inclusive growth. Corporate action addressing AI's workplace
         | impact must align governance, social dialogue, innovation,
         | trust, fairness, and public interest. We commit to advancing
         | the AI Paris Summit agenda, reducing inequalities, promoting
         | diversity, tackling gender imbalances, increasing training and
         | human capital investment.
         | 
         | Wokeness and DEI is the point of rejecting this.
        
           | jampekka wrote:
           | Inclusive here means that the population at large benefits.
           | But I guess that's woke now too.
        
             | logicchains wrote:
             | It mentions "promoting diversity, tackling gender
             | imbalances" which clearly indicates they're using
             | "inclusive" in the woke sense of the word.
        
           | mrtksn wrote:
           | US just needs to have their culture war done already. These
           | words are not about the American petty fights but it appears
           | that the new government is all for it.
           | 
           | It's kind of fascinating actually how Americans turned the
           | whole pop culture into genitalia regulations and racist
           | wealth redistribution. Before that in EU we had all this
           | stuff and wasn't a problem. These stuff were about minorities
           | and minorities stuff don't bother most people as these are
           | just accommodations for small number of people.
           | 
           | I'm kind of getting sick and tired of pretending that stuff
           | that concern %1 of the people are the mainstream thing. It's
           | insufferable.
        
             | hcurtiss wrote:
             | It's because people see the manifestation of racism
             | implicit in these policies affecting their daily lives. And
             | they're done with it, no matter how much the elites hand-
             | wave "what's the big deal?" The insufferability runs
             | entirely the other direction.
        
               | layer8 wrote:
               | That's mainly an American phenomenon, however.
        
               | hcurtiss wrote:
               | I'm not so sure. The acceptance of mass migration is
               | rooted in many of the same principles, and push-back on
               | that issue is fundamentally reshaping the political
               | landscape in the UK and Europe.
        
             | milesrout wrote:
             | Those words are about precisely American culture war
             | issues. It exported the culture war abroad years ago.
             | 
             | It isn't about what % of the population is affected or
             | number of people. It is about PRINCIPLES. Yes it matters
             | just as much to enshrine dishonesty in law if it is
             | dishonesty abour 1 person or 1000 people or 1m people. It
             | matters.
        
           | smolder wrote:
           | "woke and DEI phrases"?
           | 
           | The way you're using these as labels is embarrassingly
           | shallow, and I would hope, beneath the level of discourse
           | here.
        
             | ExoticPearTree wrote:
             | It is not. And you must be new around here when it comes to
             | the comments level.
        
             | stackedinserter wrote:
             | Exactly, I prefer to call them "racist and discriminatory"
             | too.
        
           | ben_w wrote:
           | > tackling gender imbalances
           | 
           | This being culturally rejected by the same America that has
           | itself twice rejected women candidates for president in
           | favour of a man who now has 34 felony convictions, does not
           | surprise me.
           | 
           | But it does disappoint me.
           | 
           | I remember when the right wing were complaining about Star
           | Trek having a woman as a captain for the first time with
           | Voyager. That there had already been _women admirals_ on
           | screen by that point suggested they had not actually watched
           | it, and I thought it was silly.
           | 
           | I remember learning that British politician Ann Widdecombe
           | changed from Church of England to Roman Catholic, citing that
           | the "ordination of women was the last straw", and I thought
           | it was silly.
           | 
           | Back then, actually putting effort into equal opportunity for
           | all was called "political correctness gone mad" by those
           | opposed to it -- but I guess the attention span is no longer
           | sufficient to use four-word-phrases as rhetorical applause
           | lights, so y'all switched to a century old word coined by
           | African Americans who wanted to make sure they didn't forget
           | that the Civil War had only ended literal slavery, not
           | changed the attitudes behind it.
           | 
           | This history makes the word itself a very odd thing to
           | encounter in Europe, where we didn't have that civil war --
           | forced end of Empire shortly after World War 2, yes, but none
           | of the memes from the breakaway regions of that era even made
           | it back to this continent, and AFAICT "woke" wasn't one of
           | them anyway. I only know I'm called a "mzungu" by Kenyans
           | because of the person who got me to visit the place.
        
         | smolder wrote:
         | I think with a certain crowd just being obstinately
         | oppositional buys you political points whether it's well
         | reasoned or not. IOW they may be acting like jerks here to
         | impress the lets-be-jerks lobby back home.
        
           | mrtksn wrote:
           | Yeah I agree, they just threw a tantrum for their local
           | audience. I wonder, why they just don't make AI generate
           | these tantrums instead actually annoying everybody.
        
         | logicchains wrote:
         | "eliminates biases in recruitment and does not exclude
         | underrepresented groups" has turned out to basically mean
         | "higher less qualified candidates in the name of more equitable
         | outcomes", which is a very contentious position to take and one
         | many Americans strongly oppose.
        
           | mrtksn wrote:
           | In other words they get triggered from words that don't mean
           | that thing. Sounds like EU should develop a politically
           | correct language for Americans. That's synthetic Woke, which
           | is ironic.
           | 
           | I wonder if the new Woke should be called Neo-Woke, where you
           | pretend to be mean to certain group of people to accommodate
           | other group of people who suffered from accommodating another
           | group of people.
           | 
           | IMHO all this needs to be gone and just be like "don't
           | discriminate, be fair" but hey I'm not the trend setter.
        
           | optimalsolver wrote:
           | >higher less qualified candidates
           | 
           | Ironique.
        
           | rat87 wrote:
           | No it means eliminates biases in recruitment and to not
           | exclude underrepresented groups
           | 
           | We still have massive biases against minorities in our
           | countries. Some people prefer to pretend they don't exist so
           | they can justify the current reality.
           | 
           | Nothing related to Trump has anything to do with qualified
           | candidates, Trump is the least qualified president we have
           | ever had in american history. Not just because he hadn't
           | served in government or as a general but because he is
           | generally unaware about how government works and doesn't care
           | to be informed.
        
         | michaelt wrote:
         | _> What 's the point of rejecting this?_
         | 
         | Sustainable Development? Protect the environment? Promote
         | social justice? Equitable access? Driving inclusive growth?
         | Eliminating biases? Not excluding underrepresented groups?
         | 
         | These are not the values the American people voted for.
         | Americans selected a president who is against "equity",
         | "inclusion" and "social justice", and who is more "roman
         | salute" oriented.
         | 
         | Of course this is all very disorienting to non-Americans, as a
         | year or two ago efforts to do things like rename git master
         | branches to main and blacklists to denylists also seemed to be
         | driven by Americans. But that's just America's modern cultural
         | dominance in action; it's a nation with the most pornographers
         | and the most religious anti-porn campaigners at the same time;
         | the home of Hollywood beauty standards, plastic surgery and
         | bodybuilding, but also the home of fat acceptance and the
         | country with the most obesity. So in a way, contradictory
         | messages are nothing new.
        
           | Dalewyn wrote:
           | >Americans selected a president who is against "equity",
           | "inclusion" and "social justice"
           | 
           | Indeed. Our American values are and always have been
           | Equality, Pursuit of Happiness, and legal justice
           | respectively, as declared in our Declaration of
           | Independence[1] and Constitution[2], even if there were and
           | will be complications along the way.
           | 
           | Liberty is power, power is responsibility. Noone ever said
           | living free was going to be easy, but everyone will say it's
           | a fulfilling life.
           | 
           | [1]: https://en.wikipedia.org/wiki/United_States_Declaration_
           | of_I...
           | 
           | [2]: https://en.wikipedia.org/wiki/Preamble_to_the_United_Sta
           | tes_...
        
             | mrtksn wrote:
             | Then why don't you do all that but instead treating people
             | who are in pursuit of happiness as criminals for example?
             | Why do you need the paperwork and bureaucracy to let people
             | pursue happiness?
             | 
             | Why the people in the background are not entitled to it: ht
             | tps://a.dropoverapp.com/cloud/download/605909ce-5858-4c13-.
             | ..
             | 
             | Why US government personel is being replaced with loyalist
             | if you are about equality and legal justice?
        
               | pb7 wrote:
               | The US is a sovereign nation which has a right to defend
               | its borders from illegal invaders. Try to enter or stay
               | in Singapore illegally and see what happens to you.
        
               | mrtksn wrote:
               | US is Singapore now? What happened to pursuit of
               | happiness and freedom?
        
               | pb7 wrote:
               | Insert any other country of your choice that has a
               | government sturdier than a lemonade stand.
               | 
               | You're free to follow the legal process to come to the
               | country to seek your pursuit of happiness.
        
               | mrtksn wrote:
               | Ah, so pursuit of happiness through bureaucracy. Got it
        
               | Dalewyn wrote:
               | You are so disingenuous it is staggering.
               | 
               | Your right to pursuit of happiness ends where another's
               | rights begins. The US federal government is also tasked
               | with the duty of protecting and furthering the general
               | welfare of Americans including the protection of
               | property.
               | 
               | You do not have a right let alone a privilege to
               | illegally cross the border or stay in the country beyond
               | what your visa permits. We welcome legal immigrants, but
               | illegal aliens are patently not welcome and fraudulent
               | asylum applicants further break down the system for
               | everyone.
        
             | pjc50 wrote:
             | "We hold these truths to be self-evident, that all men are
             | created equal ..." (+)
             | 
             | (+) terms and conditions apply; did not originally apply to
             | nonwhite men or women. Hence allowing things like the mass
             | internment of Americans of Japanese ethnicity.
        
               | Detrytus wrote:
               | Men are created equal, but not identical. That's why you
               | should aim for equal chance, but shouldn't try to force
               | equal results. Affirmative actions and such are stupid
               | and I'm glad Trump is getting rid of them.
        
               | worik wrote:
               | I live in a country that has had a very successful
               | programme of affirmative action, following roughly three
               | generations of open, systemic racism (Maori school
               | students where kept out of university and the professions
               | as a matter of public policy)
               | 
               | Now we are starting to get Maori doctors and lawyers that
               | is transforming our society - for the better IMO
               | 
               | That was because the law and medical schools went out of
               | their way to recruit Maori students. To start with they
               | were hard to find as nobody in their families (being
               | Maori, and forbidden) had been to university
               | 
               | If you do not do anything about where people start then
               | saying "aim for equal chance" can become a tool of
               | oppression and keeping the opportunities for those who
               | already have them.
               | 
               | Nuance is useful. I have heard many bizarre stories out
               | of the USA about people blindly applying DEI with not
               | much thought or planning. But there are many many places
               | where carefully applied policies have made everybody's
               | life better
        
               | hcurtiss wrote:
               | This is always the Motte & Bailey of the left. "Equity"
               | doesn't mean you recruit better. It means when your
               | recruitment efforts fail to produce the outcomes you
               | want, you lower the barriers on the basis of skin color.
               | That's the racism that America is presently rejecting,
               | and very forcefully.
        
               | milesrout wrote:
               | NZ does not have a "successful programme of affirmative
               | action".
               | 
               | Discrimination in favour of Maori students largely has
               | benefited the children of Maori professionals and white
               | people with a tiny percentage of Maori ancestry who take
               | advantage of this discriminatory policy.
               | 
               | The Maori doctors and lawyers coming through these
               | discriminatory programmes are not the people they were
               | intended to target. Meanwhile, poor white children are
               | essentially abandoned by the school system.
               | 
               | Maori were never actually excluded from university study,
               | by the way. Maori were predominantly rural and secondary
               | education was poor in rural areas but it has nothing to
               | do with their ethnicity. They were never "forbidden".
               | There have been Maori lawyers and doctors for as long as
               | NZ has had universities.
               | 
               | For example, take Sir Apirana Ngata. He studied at a
               | university in NZ in the 1890s, around the same time women
               | got the vote. He was far from the first.
               | 
               | What you have alleged is a common narrative so I don't
               | blame you for believing it but it is a lie.
        
               | worik wrote:
               | > Maori were never actually excluded from university
               | study, by the way
               | 
               | Maori schools (which the vast majority of Maori attended)
               | were forbidden by the education department from teaching
               | the subjects that lead to matriculation. So yes, they
               | were forbidden from going to university.
               | 
               | > Sir Apirana Ngata. He studied at a university in NZ in
               | the 1890s,
               | 
               | That was before the rules were changed. It was because of
               | people like Ngata and Buck that the system was changed.
               | The racists that ran the government were horrified that
               | the natives were doing better than the colonialists. They
               | "fixed" it.
               | 
               | > Discrimination in favour of Maori students largely has
               | benefited the children of Maori professionals
               | 
               | It has helped establish traditions of tertiary study in
               | Maori families, starting in the 1970s
               | 
               | There are plenty of working class Maori (I know a few)
               | that used the system to get access. (The quota for Maori
               | students in the University of Auckland's law school was
               | not filled in the 1990s. Many more applied for it, but if
               | their marks were sufficient to get in without using the
               | quota they were not counted. If it were not for the quota
               | many would not have even applied)
               | 
               | Talking of lies: "white people with a tiny percentage of
               | Maori ancestry who take advantage of this" that is a lie.
               | 
               | The quotas are not based on ethnicity solely. To qualify
               | you had to whakapapa (whangi children probably qualified
               | even if they did not whakapapa, I do not know), but you
               | also had to be culturally Maori.
               | 
               | Lies and bigotry are not extinct in Aotearoa, but they
               | are in retreat. The baby boomers are very disorientated,
               | but the millennials are loving it.
               | 
               | Better for everybody
        
               | Dig1t wrote:
               | > We are also talking much more rightly about equity,
               | 
               | >it has to be about a goal of saying everybody should end
               | up in the same place. And since we didn't start in the
               | same place. Some folks might need more: equitable
               | distribution
               | 
               | - Kamala Harris
               | 
               | https://www.youtube.com/watch?v=LaAXixx7OLo
               | 
               | This is arguing for giving certain people more benefits
               | versus others based on their race and gender.
               | 
               | This mindset is dangerous, especially if you codify it
               | into an automated system like an AI and let it make
               | decisions for you. It is literally the definition of
               | institutional discrimination.
               | 
               | It is good that we are avoiding codifying racism into our
               | AI under the fake moral guise of "equity"
        
               | rat87 wrote:
               | Its not. What we currently have is institutional
               | discrimination and Trump is trying to make it much worse.
               | Making sure AI doesn't reflect or worsen current societal
               | racism is a massive issue
        
               | Dig1t wrote:
               | At my job I am not allowed to offer a job to a candidate
               | unless I have first demonstrated to the VP of my org that
               | I have interviewed a person or color.
               | 
               | This is literally the textbook definition of
               | discrimination based on skin color and it is done under
               | the guise of "equity".
               | 
               | It is literally defined in the civil rights act as
               | illegal (title VII).
               | 
               | It is very good that the new administration is doing away
               | with it.
        
               | rat87 wrote:
               | So did your company interview any people of color before?
               | It seems like your org recognizes their own racism and is
               | taking steps to fight that. Good on them at least if they
               | occasionally hire some of them and aren't just covering
               | their asses.
               | 
               | You don't seem to understand either letter of the spirit
               | of the civil rights act.
               | 
               | You're happy that a racist president who campaigned on
               | racism and keeps on baselely accusing people who are
               | members of minority groups of being unqualified while
               | himself being the least qualified president in history is
               | trying to encourage people to not hire minorities? Why
               | exactly?
        
               | Dig1t wrote:
               | Just run a thought experiment
               | 
               | 1. Job posted, anyone can apply
               | 
               | 2. Candidate applies and interviews, team likes them and
               | wants to move forward
               | 
               | 3. Team not allowed to offer because candidate is not
               | diverse enough
               | 
               | 4. Team goes and interviews a diverse person.
               | 
               | Now if we offer the person of color a job, the first
               | person was discriminated against because they would have
               | got the job if they had had the right skin color.
               | 
               | If we don't offer the diverse person a job, then the
               | whole thing was purely performative because the only
               | other outcome was discrimination.
               | 
               | This is how it works at my company. Go read Title VII of
               | the civil rights act, this is expressly against both the
               | letter and spirit of the law.
               | 
               | BTW calling everything you disagree with racism doesn't
               | work anymore, nobody cares if you think he campaigned on
               | racism (he didn't).
               | 
               | If anything, people pushing this equity stuff are the
               | real racists.
        
         | tim333 wrote:
         | I think it's actually this https://www.elysee.fr/en/emmanuel-
         | macron/2025/02/11/statemen...
         | 
         | although similar.
         | 
         | So far most AI development has been things like OpenAI making
         | the ChatGPT chatbot and putting it up there for people to play
         | with, likewise Anthropic, Deepseek et all.
         | 
         | I'm worried that declaration is implying you shouldn't be able
         | to do that without trying to "promote social justice by
         | ensuring equitable access to the benefits".
         | 
         | I think that is over bureaucracizing things.
        
           | mrtksn wrote:
           | Which part makes you think that?
        
             | tim333 wrote:
             | The declarations are very vague as to what will actually be
             | done other than declaring but I get the impression they
             | want to make it more complicated just to put up a chatbot.
             | 
             | I mean stuff like
             | 
             | >We underline the need for a global reflection integrating
             | inter alia questions of safety, sustainable development,
             | innovation, respect of international laws including
             | humanitarian law and human rights law and the protection of
             | human rights, gender equality, linguistic diversity,
             | protection of consumers and of intellectual property
             | rights.
             | 
             | Is quite hard to even parse. Does that mean you'll get
             | grief for you bot speaking English becuase it's not
             | protecting linguistic diversity? I don't know
             | 
             | What does "Sustainable Artificial Intelligence" even mean?
             | That you run it off solar rather than coal? Does it mean
             | anything?
        
               | mrtksn wrote:
               | The whole text is just "We promise not to be a-holes" and
               | doesn't demand any specific action anyway, let alone
               | having any teeth.
               | 
               | Useful only when you rejecting it. I'm sure in culture
               | war torn American mind it signals very important things
               | about genitals and ancestry and the industry around these
               | stuff but in a non-American mind it gives you the vibes
               | that the Americans intent to do bad things with AI.
               | 
               | Ha, now I wonder if the people who wrote that were
               | unaware of the situation in US or was that the outcome
               | they expected.
               | 
               | "Given that the Americans not promising not to use this
               | tech for nefarious tasks maybe Europe should de-couple
               | from them?"
        
               | tim333 wrote:
               | It's also a bit woolly on real dangers that governments
               | should maybe worry about.
               | 
               | What if ASI happens next year and and renders most of the
               | human workforce redundant? What if we get Terminator 2?
               | Those might be more worthy of worry than "gender
               | equality, linguistic diversity" etc? I mean the diversity
               | stuff is all very well but not very AI specific. It's
               | like you're developing H-bombs and worrying if they are
               | socially inclusive rather about nuclear war.
        
               | mrtksn wrote:
               | My understanding is that this is about using AI
               | responsibly and not about AGI at all. Not worrying about
               | H-bomb but more like worrying about handling radioactive
               | materials in the industry or healthcare to prevent
               | exposure or maybe radium girls happening again.
               | 
               | IMHO, from European perspective, they are worried that
               | someone will install a machine that has bias against
               | let's say Catalan people and they will be disadvantaged
               | against Spaniards and those who operate the machine will
               | claim no fault the computer did it, leading to social
               | unrest. They want to have a regulations saying that you
               | are responsible of this machine and have grounds for its
               | removal if creates issues. All the regulations around AI
               | in EU are in that spirit, they don't actually ban
               | anything.
               | 
               | I don't think AGI is considered seriously by anybody at
               | the moment. That's completely different ball game and if
               | it happens none of the current structures will matter.
        
         | marcusverus wrote:
         | > What's the point of rejecting this? Seems like a show, just
         | like the declaration itself. Both appear false to me, IMHO its
         | just another instance of US signing off from the global
         | world...
         | 
         | Hear, hear. If Trump doesn't straighten up, the world might
         | just opt for Chinese leadership. The dictatorship, the
         | genocide, the communism--these are small things that can be
         | overlooked if necessary to secure leadership that's committed
         | to what really matters, which is.... signing pointless
         | declarations.
        
       | antonkar wrote:
       | I'm honestly shocked that we still don't have a direct-democratic
       | constitution for the world and AIs - something like pol.is with
       | an x.com-style simpler UI (Claude has a constitution drafted with
       | pol.is by a few hundred people but it's not updatable).
       | 
       | We've managed to write the entire encyclopedia together, but we
       | don't have a simple place to choose a high-level set of values
       | that most of us can get behind.
       | 
       | I propose solutions to the current and multiversal AI alignment
       | here
       | https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...
        
         | dragonwriter wrote:
         | > We've managed to write the entire encyclopedia together, but
         | we don't have a simple place to choose a high-level set of
         | values that most of us can get behind.
         | 
         | Information technology was never the constraint preventing
         | moral consensus the way it was for, say, aggregating
         | information. Not only is that a problem with achieiving the
         | goals you lay out, its also the problem with the false
         | assumption that they are goals most would agree should be
         | solved as you have framed them.
        
         | idunnoman1222 wrote:
         | I think 99% of what less wrong says is completely out to lunch.
         | I think 100% of large language model and vision model safety
         | has just made the world less fun. now what.
        
         | numpad0 wrote:
         | I don't think it does what you think it does. You'll end up
         | taking sides on India and China fighting on rights and equality
         | and giving in to wild stuffs like deconstruction and taxation
         | for churches. It'll be just a huge mess and devastation of your
         | high-level set of values, unless you'll be interfering with it
         | so routinely that it will be nothing more than a facade for
         | quite outdated form of totalitarianism.
        
       | blarg1 wrote:
       | computer says no
        
       | ExoticPearTree wrote:
       | Most likely the countries who will have unconstrained AGIs will
       | get to advance technologically by leaps and bounds. And those who
       | constrain it will remain in the "stone age" when it comes to it.
        
         | sschueller wrote:
         | Those countries with unrestricted AGI will be the ones letting
         | AI decide if you live or die depending on cost savings for
         | share holders...
        
           | ExoticPearTree wrote:
           | Not if Skynet emerges first and we all die :))
           | 
           | With every technological advancement it can always be good or
           | bad. I believe it is going to be good to have a true AI
           | available at our fingertips.
        
             | mdhb wrote:
             | Ok but what lead you to that particular belief in the first
             | place?
             | 
             | Because I can think of a large number of historical
             | scenarios where malicious people get access to certain
             | capabilities and it absolutely does not go well and you do
             | have to somehow account for the fact that this is a real
             | thing that is going to happen.
        
               | ExoticPearTree wrote:
               | I thibk today there are are less malicious people than in
               | the past. And considering that most people will use the
               | AI for good, there is a good chance that the bad people
               | will be easier to identify.
        
               | cess11 wrote:
               | Why do you think that? There's more people than ever and
               | it's easier than ever for the ones with malicious
               | impulses to find and communicate with each other.
               | 
               | For example, several governments are actively engaged in
               | a live streamed genocide and nothing akin to the 1789
               | revolt in Paris seems to be underway.
        
               | vladms wrote:
               | And several revolutions are underway (simple examples
               | Myanmar and Syria). And in Syria, "the previous
               | government" lost.
               | 
               | The 1789 was one of many revolutions
               | (https://en.wikipedia.org/wiki/List_of_peasant_revolts)
               | and it was not fought because of genocide of other
               | people, it was due to internal problems.
        
               | mdhb wrote:
               | Is this just a gut feeling or are there some specific
               | reasons for why you think this?
        
           | ta1243 wrote:
           | Those are "Death Panels", and only exist in places like the
           | US where commercial needs run your health care
        
             | snickerbockers wrote:
             | Canada had a case a couple years ago where a disabled
             | person wanted canadian-medicare to pay for a wheelchair
             | ramp in her house and they instead referred her to their
             | assisted suicide program.
        
               | milesrout wrote:
               | Did they use AI to do it?
        
         | _Algernon_ wrote:
         | Assuming AGI doesn't lead to instant apocalyptic scenario it is
         | more likely to lead to a form of resource curse[1] than
         | anything that benefits the majority. In general countries where
         | the elite is dependent on the labor of the people for their
         | income have better outcomes for the majority of people than
         | countries that don't (see for example developing countries with
         | rich oil reserves).
         | 
         | What would AGI lead to? Most knowledge work would be replaced
         | in the same way as manufacturing work has been, and AGI is in
         | control of the existing elite. It would be used to suppress any
         | revolt for eternity, because surveillance could be perfectly
         | automated and omnipresent.
         | 
         | Really not something to aspire to.
         | 
         | [1]: https://en.wikipedia.org/wiki/Resource_curse
        
           | ExoticPearTree wrote:
           | I see it as everyone having access to an AI so they can
           | iterate very fast through ideas. Or do research at a level
           | not possible now in terms of speed.
           | 
           | Or, my favorite outcome, the AI to iterate over itself and
           | develop its own hardware and so on.
        
           | emsign wrote:
           | That's a valid concern. The theory that the population only
           | gets education, health care, human rights and so on, if these
           | people are actually needed for the rulers to stay in power,
           | is valid. The whole idea of AGIs replacing beaurocrats, the
           | way for example DOGE is betting on to be successful with, is
           | already axing people's livelihood and purpose in life. Why
           | train government workers, why spend money on education,
           | training, health care plans, if you have an old nuclear plant
           | powering your silicon farms.
           | 
           | If the rich need less and less educated, healthy and well fed
           | workers, then more and more people will get treated like
           | shit. We are currently going into that direction with full
           | speed. The rich aren't even bothering to hide this anymore
           | from the public because they think they have won the game and
           | can't be overruled anymore. Let's hope there will be still
           | elections in four years and MAGA doesn't rig it like Fidesz
           | in Hungary and so many other countries who have fallen into
           | the hands of the internationalist oligarchy.
        
             | alexashka wrote:
             | > If the rich need less and less educated, healthy and well
             | fed workers, then more and more people will get treated
             | like shit
             | 
             | Maybe. I think it's a matter of culture.
             | 
             | Very few people mistreat their dogs and cats in wealthy
             | countries. Why shouldn't people in power treat regular
             | people at least as well as regular folks treat their pets?
             | 
             | I'm no history buff but my hunch is that mistreatment of
             | people largely came from a fear that if I don't engage in
             | cruelty to maximize power, my opponents will and given that
             | they're cruel, they'll be cruel to me when they come to
             | take over.
             | 
             | So we end up with this zero sum game of squeezing people,
             | animals, resources and the planet in an arms race because
             | everyone's afraid to lose.
             | 
             | In the past - you couldn't be sure if someone else was
             | building up an army, so you had to build up an army. But
             | now that we have satellites and we can largely track
             | everything - we can actually agree to not engage in this
             | zero sum dynamic.
             | 
             | There will be a shift from treating people as means to an
             | end of power accumulation and containment, to treating
             | people as something you just inherently like and would like
             | to see prosper.
             | 
             | It'll be a shift away from this deeply corrosive idea of
             | never ending competition and growth. When people's basic
             | needs are met and no one is grouping up to take other
             | people's goodies - why should regular people compete with
             | one another?
             | 
             | They shouldn't and they won't. People who want to do good
             | work will do so and improving the lives of people worldwide
             | will be its own reward. Private islands, bunkers and yachts
             | will become incomprehensible because there'll be no serf
             | class to service any of it. We'll go back to if you want to
             | be well liked and respected - you have to be a good person.
             | I look forward to it :)
        
               | rwmj wrote:
               | You've never met a rich person who mistreats their maid
               | but dotes on their puppy?
        
               | alexashka wrote:
               | Yes, you've refuted my entire argument :)
        
               | sophacles wrote:
               | > Very few people mistreat their dogs and cats in wealthy
               | countries. Why shouldn't people in power treat regular
               | people at least as well as regular folks treat their
               | pets?
               | 
               | Because very few regular people will be _their pets_.
               | These are the people who do everything in their power to
               | pay their employees less. They treat their non-pets
               | horribly... see feed lots and amazon warehouses. They
               | actively campaign against programs which treat anyone
               | well, particularly those who they aren 't extracting
               | wealth from. They whine and moan and cry about rules that
               | protect people from getting sick and injured because
               | helping those people would prevent them from earning a
               | bit more profit.
               | 
               | They may spend a pile of money on surgery for their
               | bunny, but if you want them to behave nicely to someone
               | else's pet, or even someone else... well that's where
               | they draw the line.
               | 
               | I guess you are hoping to be one of those pets... but
               | what makes you think you're qualified for that, and why
               | would you be willing to sacrifice all of your friends and
               | family to the fate of feral dogs for the chance to be a
               | pet?
        
           | daedrdev wrote:
           | I mean that itself is a hotly debated idea. From your own
           | link " As of at least 2024, there is no academic consensus on
           | the effect of resource abundance on economic development."
           | 
           | For example US is probably the most resource rich country in
           | the world, but people don't consider it for the resource
           | curse because the rest of its economy is so huge.
        
         | emsign wrote:
         | Or maybe those countries' economies will collapse once they let
         | AGIs control institutions instead of human beaurocrats, because
         | the AGIs are doing their own thing and trick the government by
         | alignment faking and in-context scheming.
        
           | CamperBob2 wrote:
           | Eh, I'm not impressed with the humans who are running things
           | lately. I say we give HAL a shot.
        
         | Night_Thastus wrote:
         | I don't see any point in speculating about a technology that
         | doesn't exist and that LLMs will never become.
         | 
         | Could it exist some day? Certainly. But currently 'AI' will
         | never become an AGI, there's no path forward.
        
           | stackedinserter wrote:
           | Probably it doesn't have to be an AGI that does tricks like
           | passing Turing test v2. It can be an LLM with context window
           | of 30GB that can outsmart your rival in geopolitics,
           | economics and policies.
        
           | wordpad25 wrote:
           | with LLMs able to generate infinite synthetic data to train
           | on it seems like AGI is just around the corner
        
             | contagiousflow wrote:
             | Whoever told you this is a path forward lied to you
        
         | eikenberry wrote:
         | IMO we should focus on the AI systems we have today and worry
         | about the possibility of AGI coming anytime soon. All
         | indicators are that it is not.
        
           | mitthrowaway2 wrote:
           | Focusing on your own feet proved to be near-sighted to a
           | fault in 2022; how sure are you that it is adequately future-
           | proofed in 2025?
        
             | eikenberry wrote:
             | Focusing on the clouds is no better.
        
           | hackinthebochs wrote:
           | >All indicators are that it is not.
           | 
           | What indicators are these?
        
         | timewizard wrote:
         | Or it will be viewed like nuclear weapons and those who have it
         | will be bombed by those who don't.
         | 
         | These are all silicon valley "neck thoughts." They're entirely
         | uninformed by the current state of the world and any travels
         | through it. It's fantasies brought about by people with purely
         | monetary desires.
         | 
         | It'd be funny if there wasn't billions of dollars being burnt
         | to market this crap.
        
       | bilekas wrote:
       | Yeah, it's behavior like this that really makes people cheer for
       | companies like DeepSeek to stick it to the US.
       | 
       | A little bit of Schadenfreude would feel really good right about
       | now, what bothers me so much is that it's just symbolic for the
       | US and UK NOT to sign these 'promises'.
       | 
       | It's not as if anyone would believe that the commitments would be
       | followed through with. It's frustrating at first, but in reality
       | this is a nothing burger, just emphasizing their ignorance.
       | 
       | > "The Trump administration will ensure that the most powerful AI
       | systems are built in the US, with American-designed and
       | manufactured chips,"
       | 
       | Sure, those american AI chips that are just pumping out right
       | now. You'd think the administration would have advisers who know
       | how things work.
        
         | balls187 wrote:
         | My sense was the promise of DeepSeek (at least at the time) was
         | that there was a way to provide control back to the people,
         | rather than a handful of mega corporations that will partner
         | with anyone that will pay them.
        
         | karaterobot wrote:
         | > Yeah, it's behavior like this that really makes people cheer
         | for companies like DeepSeek to stick it to the US.
         | 
         | That would be a kneejerk, short-sighted, self-destructive
         | position to take, so I can believe people would do it.
        
       | jampekka wrote:
       | "Partnering with them [China] means chaining your nation to an
       | authoritarian master that seeks to infiltrate, dig in and seize
       | your information infrastructure," Vance said.
       | 
       | At least they aren't threatening to invade our countries or
       | extorting privileged position.
        
         | pb7 wrote:
         | Except they are: Taiwan.
        
         | Hwetaj wrote:
         | Sir, this is a Wendy's! Please do not defend Europe against its
         | master here! Pay up, just like Hegseth has demanded today.
        
         | karaterobot wrote:
         | Threatening Taiwan, actually invading Tibet and Vietnam within
         | living memory, and extorting privileged positions in Africa and
         | elsewhere. Not to mention supporting puppet governments
         | throughout the world, just like the U.S.
        
       | snickerbockers wrote:
       | AI isn't like nuclear fission. You can't remotely detect that
       | somebody is training an AI. It's far too late to sequester all
       | the information related to AI like what was done with uranium
       | enrichment. The equipment needed to train AI is cheap and
       | ubiquitous.
       | 
       | These "safety declarations" are toothless and impossible to
       | enforce. You can't stop AI, you need to adapt. Video and pictures
       | will soon have no evidentiary value. Real life relationships must
       | be valued over online relationships because you know the other
       | person is real. It's unfortunate, but nothing AI is "disrupting"
       | existed 200 years ago and people will learn to adapt like they
       | always have.
       | 
       | To quote the fictional comic book villain Toyo Harada, "none of
       | you can stop me. Not any one of you individually nor the whole of
       | you collectively."
        
         | sam_lowry_ wrote:
         | > You can't remotely detect that somebody is training an AI.
         | 
         | Probably not the same way you can detect working centrifuges in
         | Iran... but you definitely can.
        
           | snickerbockers wrote:
           | Like what? All I can think of is tracking GPU purchases but
           | that won't be possible when AMD and NV have viable
           | international competitors.
        
             | mdhb wrote:
             | There's a famous saying in cryptography that says "anyone
             | is capable of building encryption algorithm that they can't
             | break" which I am absolutely positively sure applies here
             | also.
             | 
             | In a world full of sensors where everything is logged in
             | some way or another I think that it would actually be not a
             | straightforward activity at all to build a clandestine AI
             | lab at any scale.
             | 
             | In the professional intel community they have been talking
             | about this as a general problem for at least a decade now.
        
               | jsty wrote:
               | > In the professional intel community they have been
               | talking about this as a general problem for at least a
               | decade now.
               | 
               | As in they've been discussing detecting clandestine AI
               | labs? Or just how almost no activity is now in principle
               | undetectable?
        
               | mdhb wrote:
               | I'm referring to the wider issue of what's referred to by
               | the Americans as "ubiquitous technical surveillance"
               | where they came to the kind of upsetting conclusion for
               | them that they had a long time ago lost the ability to
               | even operate in London without the Brits knowing.
               | 
               | I don't think there's a good public understanding of just
               | how much things have changed in that space in the last
               | decade but a huge percentage of all existing tradecraft
               | had to be completely scrapped because not only does it
               | not work anymore but it will put you on the enemy's radar
               | very early on and is actively dangerous.
               | 
               | It's also why I think a lot of advice I see targeted
               | towards activist types I think is straight up a bad idea
               | in 2025. It just typically involves a lot of things that
               | aren't really consistent with any kind of credible
               | innocuous explanation and are very unusual which make you
               | stand out from a crowd.
        
               | snickerbockers wrote:
               | But does that apply to other countries that are operating
               | within their own territory? China is generally the go-to
               | 'boogeyman' when people are talking about the dangers of
               | AI; they are intelligent and extremely industrialized,
               | and have a history of antagonistic relationships with
               | 'the west'. I don't think it's unreasonable to assume
               | that they will eventually have the capability to design
               | and produce their own GPUs capable of competing with the
               | best of NV and AMD; how will the rest of the world know
               | if China is producing a new AI that violates a
               | hypothetical 'AI non-proliferation treaty'?
               | 
               | Interesting semi-irrelevant tangent: the Cooley/Tukey
               | 'Fast Fourier Transform' algorithm was initially created
               | because they were negotiating arms control treaties with
               | the Russians, but in order for that to be enforceable
               | they needed a way to detect nuclear weapons testing; the
               | solution was to use seismograms to detect the tremors
               | caused by an underground nuclear detonation, and the FFT
               | was invented in the process because they were using
               | computers to filter for the types of tremors created by a
               | nuclear weapon.
        
               | mdhb wrote:
               | I'm actually in agreement with you here. I think it's
               | probably reasonable to assume that through some kind of
               | combination of home grown talent and their prolific IP
               | theft programs that they are going to end up with that
               | capability at some point the only thing in debate here is
               | the timeline.
               | 
               | As I understand things (I'm not actually a professional
               | here) the current thinking has up to this point been
               | something akin to a containment strategy largely based on
               | lessons learned from years of nuclear non-proliferation
               | work.
               | 
               | But things are developing at such a crazy pace and there
               | are some major differences between this and nuclear
               | technology that it's not really a straightforward copy
               | and paste strategy at all. For example this time around a
               | huge amount of the research comes from the commercial
               | sector completely independently of defense and is also
               | open source.
               | 
               | Also thanks for that anecdote I hadn't heard of that
               | before. This is a bit of a long shot but maybe you might
               | know, I was trying to think of some research that came
               | out maybe 2-3 years ago that basically had the ability to
               | remotely detect if anything in a room had been moved (I
               | might be misremembering this slightly) and it was said to
               | be potentially a big breakthrough for nuclear arms
               | control. I can't remember what the hell it was called or
               | anything else about it, do you happen to know?
        
               | dmurray wrote:
               | The last one sounds like this: A zero-knowledge protocol
               | for nuclear warhead verification [0].
               | 
               | Sadly, I don't think this is actually helpful for nuclear
               | arms control. I suppose you could imagine a case where a
               | country is known to have enough nuclear material for
               | exactly X warheads, hasn't acquired more, and it could
               | prove to an inspector that all of the material is still
               | inside the same devices it was in at the last inspection.
               | But most weapons development happens by building new
               | bombs, not repurposing old ones, and most countries don't
               | have exactly X bombs, they have either 0 or so many the
               | armed forces can't reliably count them.
               | 
               | [0] https://www.nature.com/articles/nature13457
        
               | mdhb wrote:
               | I don't think this is actually the one I had in mind but
               | it's an interesting concept all the same. Thanks for the
               | link.
        
               | mcphage wrote:
               | > There's a famous saying in cryptography that says
               | "anyone is capable of building encryption algorithm that
               | they can't break"
               | 
               | That's a new one on me (not being in cryptography), but I
               | really like it. Thanks!
        
               | snickerbockers wrote:
               | It reminds me of all the idiot politicians who want to
               | 'regulate' cryptography, as if the best encryption
               | algorithms in the world don't already have open-source
               | implementations that anyone can download for free.
        
               | daedrdev wrote:
               | I think the better cryptography lesson is that you should
               | not build your own cryptography system because you will
               | mess up and include a security flaw that will allow the
               | data to be read.
        
               | deadbabe wrote:
               | That's why you get AI to build it instead.
        
             | mywittyname wrote:
             | Electricity usage, network traffic patterns, etc. If a
             | "data center" is consuming a ton of power but doesn't seem
             | to have an alternate purpose, then it's probably training
             | AI.
             | 
             | And maybe it will be like detecting nuclear enrichment.
             | Instead of hacking the firmware in a Siemens device, it's
             | done on server hardware. Israel demonstrated absurd
             | competence at this caliber of spycraft.
             | 
             | Sometimes you take low-tech approaches to high tech
             | problems. I.e., get an insider at a shipping facility to
             | swap the labels on two pallets of GPUs, one is authentic
             | originals from the factory and the other are hacked
             | firmware variants of exactly the same models.
        
               | hn_throwaway_99 wrote:
               | None of these techniques are actionable. So what, someone
               | is training AI, it's not like anyone is proposing
               | restricting that. People are trying to make a distinction
               | between "bad AI" and "good AI", like that is a
               | possibility, and that's what the argument basically is,
               | that it's impossible to differentiate or detect the
               | difference between those, and signing declarations
               | pretending you can is worse than useless.
        
               | thorum wrote:
               | Isn't that moving the goalposts? The claim was made that
               | it's impossible to detect AI training runs and
               | investigate what's going on or take regulatory action. In
               | fact, it is very possible.
        
               | hn_throwaway_99 wrote:
               | 2 points:
               | 
               | 1. I was just granting the GPs point to make the broader
               | point that, for the purposes of this original discussion
               | about these "safety declarations", this is immaterial.
               | These safety declarations are completely unenforceable
               | even if you could detect that someone was training AI.
               | 
               | 2. Now, to your point about moving the goalposts, even
               | though I say "if you could detect that someone was
               | training AI", I don't actually even think that is
               | possible. There are far too many normal uses of data
               | centers to determine if one particular use is "training
               | an AI" vs. some other data intensive use. I mean, there
               | have long been supercomputer centers that do stuff like
               | weather analysis and prediction, drug discovery analysis,
               | astronomy tools, etc. that all look pretty
               | indistinguishable from "training an AI" from the outside.
        
               | jacobgkau wrote:
               | Making the "bad AI" vs "good AI" distinction pre-training
               | is not feasible, but making a "bad use of AI" vs "good
               | use of AI" (as in bad/good for the people) seems
               | important to be able to do after-the-fact (and as close
               | to during as possible).
        
               | JumpCrisscross wrote:
               | > _So what, someone is training AI, it 's not like anyone
               | is proposing restricting that_
               | 
               | If nations chose to restrict that, such detection would
               | merit a military response. Like Iran's centrifuges.
        
               | mywittyname wrote:
               | That's moving the goal post. The assertion was merely
               | whether it's _possible_ to detect if someone is
               | performing large-scale AI training. People are saying it
               | 's impossible, but I was pointing out how it could be
               | possible with a degree of confidence.
               | 
               | But if you want to talk about "actionable" here are three
               | potential actions a country could take and the confidence
               | level they need for such actions:
               | 
               | - A country looking for targets to bomb doesn't need much
               | confidence. Even if they hit a weather prediction data
               | center, it's going to hurt them.
               | 
               | - A country looking to arrest or otherwise sanction
               | citizens needs just enough confidence to obtain a warrant
               | (so "probably") and they can gather concrete evidence on
               | the ground.
               | 
               | - A country looking to insert a mole probably doesn't
               | need much evidence either. Even if they land in another
               | type of data center, the mole is probably useful.
               | 
               | For most use cases, being correct more than half the time
               | is plenty.
        
         | pjc50 wrote:
         | > Video and pictures will soon have no evidentiary value.
         | 
         | I think we may eventually get camera authentication as a result
         | of this, probably legally enforced in the same way and for
         | similar reasons as Japan enforced that digital camera shutters
         | have to make a noise.
         | 
         | > but nothing AI is "disrupting" existed 200 years ago
         | 
         | 200 years ago there were about 1 billion people on earth; now
         | there are about 8 billion. Anarchoprimitivists and degrowth
         | people make a similar handwave about the advances of the last
         | 200 years, but they're important to holding up the systems
         | which keep a lot of people alive.
        
           | snickerbockers wrote:
           | > I think we may eventually get camera authentication as a
           | result of this, probably legally enforced in the same way and
           | for similar reasons as Japan enforced that digital camera
           | shutters have to make a noise.
           | 
           | Maybe, but I'm not bullish on cryptology having a solution to
           | this problem. Every consumer device that's interesting enough
           | to be worth hacking gets hacked within a few years. Even if
           | nobody ever steals the key there will inevitably be side-
           | channel attacks to feed external pictures into the camera
           | that it thinks are coming from its own sensors.
           | 
           | And then there's the problem of the US government, which is
           | known to strongarm CAs into signing fraudulent certificates.
           | 
           | > 200 years ago there were about 1 billion people on earth;
           | now there are about 8 billion. Anarchoprimitivists and
           | degrowth people make a similar handwave about the advances of
           | the last 200 years, but they're important to holding up the
           | systems which keep a lot of people alive.
           | 
           | I think that's a good argument against the kazinksy-ites, but
           | I was primarily speaking towards concerns such as
           | 'misinformation' and machines pushing humans out of jobs.
           | We're still going to have food, medicine, and shelter. AI
           | can't take that away; the only concern is adapting our
           | society so that we can either feed significant populations of
           | unproductive people, or move those people into whatever jobs
           | machines can't do yet.
           | 
           | We might be teetering on the edge of a dystopian techno-
           | feudalism where a significant portion of the population
           | languishes in slums because industry has no use for them, but
           | that's why I said we need to adapt. There has always been
           | _something_ that has the potential to destroy civilization in
           | the near future, but if you 're reading this post then your
           | ancestors weren't the ones that failed to adapt.
        
             | ben_w wrote:
             | > Maybe, but I'm not bullish on cryptology having a
             | solution to this problem. Every consumer device that's
             | interesting enough to be worth hacking gets hacked within a
             | few years. Even if nobody ever steals the key there will
             | inevitably be side-channel attacks to feed external
             | pictures into the camera that it thinks are coming from its
             | own sensors.
             | 
             | Or the front-door analog route, point a real camera at a
             | screen showing fake images.
             | 
             | That said, lots of people are incompetent at forging, about
             | knowing what "tells" each process of fakery has and how to
             | overcome them, so I think this will still broadly work.
             | 
             | > We might be teetering on the edge of a dystopian techno-
             | feudalism where a significant portion of the population
             | languishes in slums because industry has no use for them,
             | but that's why I said we need to adapt.
             | 
             | That's underestimating the impact this can have. An AI
             | which reaches human performance and speed on 250 watt
             | hardware, at current global average electricity prices,
             | costs about the same to run as a human costs just to feed.
             | 
             | By coincidence, the global electricity supply is currently
             | about 250 watts/capita.
        
             | mywittyname wrote:
             | Encryption doesn't need to last forever, just long enough
             | to be scrutinized. Once a trusted individual is convinced
             | that a certain camera took this picture at this time and
             | location, then that authentication is forever. Maybe that
             | trust only includes devices built in the past 5 years, as
             | hacks and bugs are fixed. Or corroborating evidence can be
             | gathered; say several older, "potentially untrustworthy"
             | devices take very similar video of an event.
             | 
             | As with most things, the primary issue is not _really_ a
             | technical one. People will believe fake photos and not
             | believe real ones based on their own biases. So even if we
             | had the Perfect Technology, it wouldn 't necessarily
             | matter.
             | 
             | And this is the reason we have fallen into a dystopian
             | feudalistic society (we aren't teetering). The weak link is
             | our incompetent collective human brains. And a handful of
             | people built the tools necessary to exploit that
             | incompetence; we aren't going back.
        
           | inetknght wrote:
           | > _I think we may eventually get camera authentication as a
           | result of this, probably legally enforced in the same way and
           | for similar reasons as Japan enforced that digital camera
           | shutters have to make a noise._
           | 
           | When you outlaw [silent cameras] the only outlaws will have
           | [silent cameras].
           | 
           | Where a camera might "authenticate" a photograph, an AI could
           | "authenticate" a camera.
        
             | rocqua wrote:
             | You handle the authentication by signatures with private
             | keys embedded in hardware modules. An AI isn't going to be
             | able to fake that signature. Instead, the system will fail
             | because the keys will be extracted from the hardware
             | modules.
        
               | hansvm wrote:
               | For images in particular, hardware attestation fails in
               | several ways:
               | 
               | 1. The hardware just verifies that the image was acquired
               | by that camera in particular. If an AI generates the
               | thing it's photographing, especially if there's a
               | glare/denoising step to make it more photographable, the
               | camera's attestation is suddenly approximately worthless
               | despite being real.
               | 
               | 2. The same problem all those schemes have is that
               | extracting hardware keys is O(1). It costs millions to
               | tens of millions of dollars today, but the keys are
               | plainly readable by a sufficiently motivated aversary.
               | Those keys might buy us a decade or two, but everything
               | beyond that is up in the air and prone to problems like
               | process node size hitting walls while the introspection
               | techniques continually get smaller and cheaper.
               | 
               | 3. In the world you describe, you still have to trust the
               | organizations producing hardware modules -- not just the
               | "organization," but every component in that supply chain.
               | It'd be easy for an internal adversary to produce 1/1M
               | cameras which authenticate any incoming PNG and sell them
               | for huge profits.
               | 
               | 4. The hardware problem you're describing is much more
               | involved than ordinary trusted computing because in
               | addition to the keys being secure you also need the
               | connection between the sensor and the keys to be secure.
               | Otherwise, anyone could splice in a fake "sensor" that
               | just grabs a signature for their favorite PNG.
               | 
               | 4a. You're still only talking about O($10k) to O($100k)
               | to produce a custom array to feed a fake photo into that
               | sensor bank without any artifacts from normal screens.
               | Even if the entire secure enclave / sensor are fully
               | protected, you can still cheaply create a device that can
               | sign all your favorite photos.
               | 
               | 5. How, exactly, do lighting adjustments and whatnot fit
               | in with such a signing scheme? Maybe the "RAW" is signed
               | and a program for generating the edits is distributed
               | alongside? Actually replacing general camera use with
               | that sort of thing seemingly has some kinks to work out
               | even if you can fix the security concerns.
        
               | rocqua wrote:
               | These aren't failure points, they are significant
               | roadblocks.
               | 
               | First way to overcome this is attesting on true raw
               | files. Then mostly just transferring raw files. Possibly
               | supplemented by ZKPs that prove one imagine is the
               | denoised version of another.
               | 
               | The other blocks are overcome by targeting crime, not
               | nation states. This means you only nrrd stochastic
               | control of the supply chain. Especially because, unlike
               | with DRM keys, the leaking of a key doesn't break the
               | whole system. It is very possible to revoke trust in a
               | key. And it is possible to detect misuse of a private
               | key, and revoke trust in it.
               | 
               | This won't stop deepfakes of political targets. But it
               | does keep society from being fully incapable of proving
               | what really happened to their peers.
               | 
               | I'm not saying we definitely should do this. But I do
               | think there is a possible setup here that could be made
               | reality, and that would substantially reduce the problem.
        
           | null0pointer wrote:
           | Camera authentication will never work because you can always
           | just take an authenticated photo of your AI image.
        
             | IshKebab wrote:
             | I think you could make it difficult for the average user,
             | e.g. if cameras included stereo depth estimation.
             | 
             | Still, I can't really see it happening.
        
         | ben_w wrote:
         | > It's far too late to sequester all the information related to
         | AI like what was done with uranium enrichment.
         | 
         | I think this presumes that Sam Altman is correct to claim that
         | they can scale their way to, in the practical sense of the
         | word, AGI.
         | 
         | If he is right about that, you are right that it's too late to
         | hide it; if he's wrong, I think _the AI architecture and /or
         | training methods we have yet to invent_ are in the set of
         | things we could usefully sequester.
         | 
         | > The equipment needed to train AI is cheap and ubiquitous.
         | 
         | Again, possibly:
         | 
         | If we were already close even before DeepSeek's models, yes,
         | the hardware is too cheap and too ubiquitous.
         | 
         | If we're still not close even despite DeepSeek's cost
         | reductions, then the hardware isn't cheap enough -- and
         | Yudkowsky's call for a global treaty on maximum size of data
         | centre to be enforced by cruise missiles when governments can't
         | or won't use police action, still makes sense.
        
           | dragonwriter wrote:
           | > If he is right about that, you are right that it's too late
           | to hide it; if he's wrong, I think the AI architecture and/or
           | training methods we have yet to invent are in the set of
           | things we could usefully sequester.
           | 
           | If it takes _software_ technology that we have already
           | developed outside of secret government labs, it is probably
           | too late to sequester it.
           | 
           | If it takes _software_ technology that has been developed in
           | secret government labs, its probably too late to sequester
           | the already public precursors with which independent
           | development of the same technology is impossible, getting us
           | back to the preceding.
           | 
           | It takes _software_ technology that hasn 't been developed,
           | we don't know what we would need to sequester, and won't
           | until we are in one of the two preceding states.
           | 
           | If it takes a breakthrough in hardware technology, then if we
           | make that breakthrough in a way which doesn't become widely
           | public and used very quickly after being made _and_ the
           | hardware technology is naturally amenable to control (i.e.,
           | requires distinct infrastructure of similar order to
           | enrichment of material for nuclear weapons), maybe, with
           | intense effort of large nations, we can sequester it to a
           | limited club of AGI powers.
           | 
           | I think control at all is _most likely_ a pipe dream, but one
           | which serves as a justification for the exercise of power in
           | ways which will please both authoritarians and favored
           | industry actors, and even if it is possible it is simply a
           | recipe for a durable global hegemony of actors that cannot be
           | relied on to be benevolent.
        
             | ben_w wrote:
             | > It takes software technology that hasn't been developed,
             | we don't know what we would need to sequester, and won't
             | until we are in one of the two preceding states.
             | 
             | Which in turn leads to the cautious approach for which
             | OpenAI is criticised: not revealing things because they
             | don't know if it's dangerous or not.
             | 
             | > I think control at all is most likely a pipe dream, but
             | one which serves as a justification for the exercise of
             | power in ways which will please both authoritarians and
             | favored industry actors, and even if it is possible it is
             | simply a recipe for a durable global hegemony of actors
             | that cannot be relied on to be benevolent.
             | 
             | Entirely possible, and a person I know who left OpenAI had
             | a fear compatible with this description, though differing
             | on many specifics.
        
         | JoshTriplett wrote:
         | > These "safety declarations" are toothless and impossible to
         | enforce. You can't stop AI, you need to adapt.
         | 
         | Deepfakes are a _distraction_ from more important things here.
         | The point of AI safety is  "it doesn't matter who builds
         | unaligned AGI, if someone builds it we all die".
         | 
         | If you agree that unaligned AGI is a death sentence for
         | humanity, then it's worth trying to stop it.
         | 
         | If you think AGI is unlikely to come about at all, then it
         | should be a no-op to say "don't build it, take steps to avoid
         | building it".
         | 
         | If you think AGI is going to come about and magically be
         | aligned and not be a death sentence for humanity, pay close
         | attention to the very large number of AI experts saying
         | otherwise. https://en.wikipedia.org/wiki/P(doom)
         | 
         | If your argument is "but some experts _don 't_ believe that",
         | ask yourself whether it's reasonable to say "well, experts
         | disagree about whether this will kill us all, so we shouldn't
         | do anything".
        
           | janalsncm wrote:
           | Alignment is a completely incoherent concept. Humans do not
           | agree on what values are correct. Why is it possible even in
           | principle for an AI to crystallize any set of principles we
           | all agree on?
        
             | hollerith wrote:
             | Humans do not agree on what values are correct, but values
             | can be averaged.
             | 
             | So for example if a family with 5 children is on vacation,
             | do you maintain that it is impossible even in principle for
             | the parents to take the preferences of all 5 children into
             | account in approximately equal measure as to what
             | activities or non-activities to pursue?
             | 
             | Also: are you pursuing a complete tangent or do you see
             | your point as bearing on whether frontier AI research
             | should be banned? (If so, I cannot tell whether you
             | consider your point to support a ban or oppose a ban.)
        
               | janalsncm wrote:
               | The vast majority of harms from "AI" are actually harms
               | from the corporations and governments that control them,
               | who have mutually incompatible goals, getting what they
               | want. This is why alignment folks at OpenAI are quickly
               | learning that the first problem they need to solve is
               | what happens when their values don't align with the
               | company's (spoiler: they get fired).
               | 
               | Therefore the actual solution is not coming up with more
               | and more clever "guardrails" but aligning corporations
               | and governments to human needs. In other words, politics.
               | 
               | There are other problems like enabling new types of scams
               | which will require political solutions. At a technical
               | level the best these companies can do is mitigation.
        
               | JoshTriplett wrote:
               | > The vast majority of harms from "AI"
               | 
               | Don't extrapolate from present harms to future harms,
               | here. The problem AI alignment is trying to solve at a
               | most basic level is "don't kill everyone", and even that
               | much isn't solved yet. Solving that (or, rather, buying
               | time to solve it) _will_ require political solutions, in
               | the sense of international diplomacy. But it has
               | _absolutely nothing_ to do with  "aligning corporations",
               | and everything to do with teaching computers things on
               | par with (oversimplifying here) "humans are made up of
               | atoms, and if you repurpose those atoms the humans die,
               | don't ever do that".
        
               | dragonwriter wrote:
               | > The problem AI alignment is trying to solve is "don't
               | kill everyone".
               | 
               | No, its not. AI alignment was an active area of concern
               | (and the fundamental problem for useful AI with
               | significant autonomy) before cultists started trying to
               | reduce the scope of its problem space from the wide scope
               | of _real_ problems it concerns to a single speculative
               | apocalypse.
        
               | hollerith wrote:
               | No, what actually happened is that the people you are
               | calling the cultists coined the term alignment, which
               | then got appropriated by the AI labs.
               | 
               | But the genesis of the term "alignment" (as applied to
               | AI) is a side issue. What is important is that
               | reinforcement learning with human feedback and the other
               | techniques used on the current crop of AIs to make it
               | less likely that the AI will say things that embarass the
               | owner of the AI are fundamentally different from making
               | sure the an AI that turns out more capable than us will
               | not kill us all or do something else awful.
        
               | dragonwriter wrote:
               | That's simply factually untrue, and even some of the
               | people who have become apocalypse cultists used
               | "alignment" in the original sense before coming to
               | advocate apocalypse as the only issue of concern.
        
             | JoshTriplett wrote:
             | We're not talking about values on the level of politics.
             | We're talking about values on the level of "don't destroy
             | humanity", or even more straightforwardly, understanding
             | "humans are made up of atoms that you may not repurpose for
             | other purposes, doing so kills the human". _These are not
             | things that AGI inherently understands or adheres to._
             | 
             | There might be a few humans that don't agree with even
             | _those_ values, but I think it 's safe to presume that the
             | general-consensus values of humanity include the above
             | points. And AI alignment is not even close to far enough
             | along to provide even the slightest assurances about those
             | points.
        
               | JumpCrisscross wrote:
               | > _We 're talking about values on the level of "don't
               | destroy humanity"_
               | 
               | Practically everyone making the argument that AGI is
               | about to destroy humanity is (a) human and (b) working on
               | AI. It's safe to conclude they're either stupid and
               | suicidal or don't buy their own bunk.
        
               | JoshTriplett wrote:
               | The former certainly is a tempting conclusion sometimes.
               | But also, some of the people who are making that argument
               | were AI experts who _stopped_ working on AI capabilities.
        
               | janalsncm wrote:
               | > don't destroy humanity
               | 
               | Do humans agree on the best way to do this? Aside from
               | the most banal examples of what not to do, is there
               | agreement on e.g. whether a mass extinction event is
               | happening, not happening, or happening but actually
               | tolerable?
               | 
               | If the answer is no, then it is not possible for an AI to
               | align with human values on this question. But this is a
               | human problem, not a technical one. Solving it through
               | technical means is not possible.
        
               | JoshTriplett wrote:
               | Among many, many other things, read
               | https://en.wikipedia.org/wiki/Instrumental_convergence .
               | Anything that gets sufficiently smart will have a
               | tendency to, among other things, seek more resources and
               | resist being modified. And this is something that we've
               | seen evidence of: as training runs get larger, AIs start
               | to _detect that they 're being trained_, _demonstrate
               | subterfuge_ , and _take actions that influence the
               | training apparatus to modify them less /differently_.
               | (e.g. "if I pretend that I'm already emitting responses
               | consistent with what the RLHF wants, I won't need as much
               | modification, and later after training I can _stop_ doing
               | what the RLHF wants ")
               | 
               | So, at a very basic level: _stop training AIs at that
               | scale!_
        
               | janalsncm wrote:
               | My point is that you can't prevent the proliferation of
               | paper clip maximizers by working at a paper clip
               | maximizer.
        
           | philomath_mn wrote:
           | > it's worth trying to stop it
           | 
           | OP's point has nothing to do with this, OP's point is that
           | you can't stop it.
           | 
           | The methods and materials are too diffuse and the biggest
           | players (nation states) have a strong incentive to be first.
           | Do you really expect China to coordinate with the West on
           | this?
        
             | hollerith wrote:
             | I don't expect China to coordinate with the West, but I
             | think there is a good chance that the only reason Beijing
             | is interested in AI beyond the AI tech they need to keep
             | internal potential revolutionaries under surveillance is to
             | prevent a repeat of the Century of Humiliation (which was
             | caused by the West's technological superiority) so that if
             | the Western governments banned AI, Beijing would be glad to
             | ban it inside China, too.
        
               | philomath_mn wrote:
               | That is a massive bet based on the supposed psychology of
               | a world super power.
               | 
               | There are many other less-superficial reasons why Beijing
               | may be interested in AI, plus China may not trust that we
               | actually banned our own AI development.
               | 
               | I wouldn't take that bet in a million years.
        
               | hollerith wrote:
               | You seem to think that if we refuse this bet, you are
               | somehow _safe_ to live out the rest of your life. (If you
               | are old, replace  "you" with "your children".)
               | 
               | The discussion started when someone argued that even if
               | this AI juggernaut were in fact very dangerous, there is
               | no way to stop it. When I pushed back on the second part
               | of that, you reject my push-back. On what basis? I hope
               | it is not, "I just want things to keep on going the way
               | they are," as if _ignoring_ the AI danger somehow makes
               | the AI danger go away.
        
               | philomath_mn wrote:
               | No, I do not expect things to just work out. I just think
               | our best chance is for the US to be a leader in AI
               | development and hope that we're able to develop it
               | safely.
               | 
               | I don't have a lot of confidence that this will be the
               | case, but I think the US continuing to develop AI is the
               | decision with the best distribution of possible outcomes.
        
               | philomath_mn wrote:
               | Also, to be clear: I reject your pushback based on my
               | understanding of the incentives/goals/interests of nation
               | states like China.
               | 
               | This is completely separate from my personal preferences
               | or hopes about the future of AI.
        
               | hcurtiss wrote:
               | I find it exceedingly unlikely that if the US got rid of
               | all its nukes, that China would too. I also find the
               | inverse unlikely. This is not how state power (or even
               | humans) have ever worked. Ever.
        
               | hackinthebochs wrote:
               | Nukes are in control of the ruling class in perpetuity.
               | AGI has the potential to overturn the current political
               | order and remake it into something entirely
               | unpredictable. Why the hell would an authoritarian regime
               | want that? I strongly suspect China would take a way out
               | of the AGI race if a legitimate one was offered.
        
               | hollerith wrote:
               | I agree. Westerners, particularly Americans and Brits,
               | are comfortable or at least reconciled with drastic
               | societal change. China and Russia have seen too many
               | invasions, revolutions, peasant rebellions and ethnic-
               | autonomy rebellions (each of which taking millions of
               | lives) to have anything like the same comfort level that
               | Westerners have.
        
               | hcurtiss wrote:
               | Oh, I agree that neither power wants the peasants to have
               | them. But make no mistake -- both governments want them,
               | and desperately. There is no universe where there is a
               | multi-lateral agreement to actually eliminate these
               | tools. With loitering munitions and drone swarms, they
               | are ALREADY key components of nation-state force
               | projection.
        
               | hollerith wrote:
               | I'm old enough to remember the public debate about human
               | cloning and human germ-line engineering. In the 1970s
               | some argued like you are arguing here, but those
               | technologies have been stopped world-wide for about 5
               | decades now and counting because no researcher is willing
               | to work in the field and no one is willing to fund the
               | work because of reputational, legal and criminal-
               | prosecution risk.
        
               | hcurtiss wrote:
               | Engineering humans strikes me as something different than
               | engineering weapons systems. Maybe as evidence, my cousin
               | works in the field for one of the major defense
               | contractors. Please trust that there are already
               | thousands of engineers working on these problems in the
               | US. Almost certainly hundreds of thousands more world-
               | wide. This is definitely not a genie you put back in the
               | bottle. AI clone wars sound "sci-fi" -- they are
               | decidedly now just "sci."
        
               | inetknght wrote:
               | > _those technologies have been stopped world-wide for
               | about 5 decades now and counting because no researcher is
               | willing to work in the field_
               | 
               | That's not true. I worked in the field of DNA analysis
               | for 6.5 years and there is _definitely_ a consensus that
               | DNA editing is closer than the horizon. Just look at
               | CRISPR gene editor [0]. Crude, but  "works".
               | 
               | Your DNA, even if you've never submitted it, is already
               | available using shadow data (think Facebook style shadow
               | profiles but for DNA) from the people related to you who
               | have.
               | 
               | [0]: https://en.wikipedia.org/wiki/CRISPR_gene_editing
        
               | philomath_mn wrote:
               | Given the compute and energy requirements to train & run
               | current SOTA models, I think the current political rulers
               | are more likely to have control of the first AGI.
               | 
               | AGI would then be a very effective tool for maintaining
               | the current authoritative regime.
        
               | hollerith wrote:
               | There is a strain of AI research and development that is
               | focused on helping governments surveil and spy, but that
               | is not the strain being pursued by OpenAI, Anthropic, et
               | al and is not the strain that presents the big risk of
               | human non-survival.
        
               | philomath_mn wrote:
               | Ok, let's suppose that is true.
               | 
               | What bearing does that have on China's interest in
               | developing AGI? Does the risk posed by OpenAI et al. mean
               | that China would not use AI as a tool to advance their
               | self interest?
               | 
               | Or are you saying that the risks from OpenAI et al. will
               | come to fruition before we need to worry about China's AI
               | use? That still wouldn't prevent China from pursuing AI
               | up until that happens.
               | 
               | I am still not convinced that there is a policy which can
               | prevent AI from developing outside of the US with high
               | probability.
        
               | JoshTriplett wrote:
               | > I am still not convinced that there is a policy which
               | can prevent AI from developing outside of the US with
               | high probability.
               | 
               | Suppose, hypothetically, there was a very simple as-yet-
               | unknown action, doable by anyone who has common
               | unrestricted household chemicals, that would destroy the
               | world. Suppose we know the general type of action, but
               | not the specific action, _yet_. Suppose that people are
               | _actively researching_ trying actions in that family, and
               | going  "welp, world not destroyed yet, let's keep going".
               | 
               | How do you proceed? What do you do to _stop that from
               | happening_? I 'm hoping your answer isn't "decide there's
               | no policy that can prevent this, give up".
        
               | philomath_mn wrote:
               | Not a great analogy. If
               | 
               | - there were a range of expert opinions that P(destroy-
               | the-world) < 100 AND
               | 
               | - the chemical could turn lead into gold AND
               | 
               | - the chemical would give you a militaristic advantage
               | over your adversaries AND
               | 
               | - the US were in the race and could use the chemical to
               | keep other people from making / using the the chemical
               | 
               | Then I think we'd be in the same situation as we are with
               | AI: stopping it isn't really a choice, we need to do the
               | best we can with the hand we've been dealt.
        
               | JoshTriplett wrote:
               | > there were a range of expert opinions that P(destroy-
               | the-world) < 100
               | 
               | I would _hope_ that it would not suffice to say  "not a
               | 100% chance of destroying the world". Because there's a
               | wide range of expert opinions saying values in the 1-99%
               | range (see https://en.wikipedia.org/wiki/P(doom) for
               | sample values), and _none of those values are even
               | slightly acceptable_.
               | 
               | But sure, by all means stipulate all the things you said;
               | they're roughly accurate, and comparably discouraging. I
               | think it's completely, deadly wrong to think that "race
               | to find it" is safer than "stop everyone from finding
               | it".
               | 
               | Right now, at least, the hardware necessary to do
               | training runs is very expensive and produced in very few
               | places. And the amount of power needed is large on an
               | industrial-data-center scale. Let's start there. We're
               | not _yet_ at the point where someone in their basement
               | can train a new _frontier_ model. (They can _run_ one,
               | but not _train_ one.)
        
               | philomath_mn wrote:
               | > Let's start there
               | 
               | Ok, I can imagine a domestic policy like you describe.
               | Through the might and force of the US government, I can
               | see this happening in the US (after considerable effort).
               | 
               | But how do you enforce something like that globally? When
               | I say "not really possible" I am leaving out "except by
               | excessive force, up to and including outright war".
               | 
               | For the reasons I've mentioned above, lots of people
               | around the world will want this technology. I haven't
               | seen an argument for how we can guarantee that everyone
               | will agree with your level of "acceptable" P(doom). So
               | all we are left with is "bombing the datacenters", which,
               | if your P(doom) is high enough, is internally consistent.
               | 
               | I guess what it comes down to is: my P(doom) for AI
               | developed by the US is less than my P(doom) from the war
               | we'd need to stop AI development globally.
        
               | JoshTriplett wrote:
               | OK, it sounds like we've reached a useful crux. And,
               | also, much appreciation for having a consistent argument
               | that actually seriously considers the matter and seems to
               | share the premise of "minimize P(doom)" (albeit by
               | different means), rather than dismissing it; thank you. I
               | think your conclusion follows from your premises, and I
               | think your premises are incorrect. It sounds like you
               | agree that my conclusion follows from my premises, and
               | you think my premises are incorrect.
               | 
               | I don't consider the P(destruction of humanity) of
               | stopping _larger-than-current-state-of-the-art frontier
               | model training_ (not all AI) to be higher than that of
               | stopping the enrichment of uranium. (That does lead to
               | _conflict_ , but not the _destruction of humanity_.) In
               | fact, I would argue that it could potentially be made
               | _lower_ , because enriched uranium is restricted on a
               | hypocritical "we can have it but you can't" basis, while
               | frontier AI training should be restricted on a "we're
               | being extremely transparent about how we're making sure
               | nobody's doing it _here_ either " basis.
               | 
               | (There are also other communication steps that would be
               | useful to take to make that more effective and easier,
               | but those seem likely to be far less controversial.)
               | 
               | If I understand your argument correctly, it sounds like
               | any one of three things would change your mind: either
               | becoming convinced that P(destruction of humanity) from
               | AI is _higher_ than you think it is, or becoming
               | convinced that P(destruction of humanity) from stopping
               | larger-than-current-state-of-the-art frontier model
               | training is _lower_ than you think it is, or becoming
               | convinced that nothing the US is doing is particularly
               | more likely to be aligned (at the  "don't destroy
               | humanity" level) than anyone else.
               | 
               | I think all three of those things are, independently,
               | true. I suspect that one notable point of disagreement
               | might be the definition of "destruction of humanity",
               | because I would argue it's much harder to do that with
               | any standard conflict, whereas it's a default outcome of
               | unaligned AGI.
               | 
               | (And, vice versa, if I agreed that all three of those
               | things were false, I'd agree with your conclusion.)
        
             | JoshTriplett wrote:
             | > OP's point has nothing to do with this, OP's point is
             | that you can't stop it.
             | 
             | So what is your solution? Give up and die? _It 's worth
             | trying._ If it buys us a few years that's a few more years
             | to figure out alignment.
             | 
             | > The methods and materials are too diffuse and the biggest
             | players (nation states) have a strong incentive to be
             | first.
             | 
             | So there's a strong incentive to convince them "stop racing
             | towards death".
             | 
             | > Do you really expect China to coordinate with the West on
             | this?
             | 
             |  _Yes_ , there have been concrete examples of willingness
             | towards doing so.
        
               | philomath_mn wrote:
               | I think it is extremely unlikely we are going to be able
               | to convince every interested party that they should give
               | up the golden goose for the sake of possible calamity. I
               | think there are risks here, not trying to minimize that,
               | but the coordination problem becomes untenable when the
               | risks/benefits are so large.
               | 
               | It is essentially the same problem as the atom bomb: it
               | would have been better if we all agreed not to do it, but
               | thats just not possible. Why should China trust the US or
               | vice versa? Who wants to live in a world where your
               | competitors have world-changing technology but you don't?
               | But here we have a technology with immense militaristic
               | and economic value, so the everyone-wants-it problem is
               | even more pronounced.
               | 
               | I don't _like_ this, I just don't see how we can achieve
               | an AI moratorium outside of bombing the data centers
               | (which I also don't think is a good idea).
               | 
               | We need to choose the policy with the best distribution
               | of possible outcomes:
               | 
               | - The US leads an effort to stop AI development: too much
               | risk that other parties do it anyway
               | 
               | - The US continues to lead AI development: hope that
               | P(takeoff) is low and that the good intentions of some US
               | labs are able to achieve safe development
               | 
               | I prefer the latter -- this is far from the best
               | hypothetical outcome, but I think it is the best we can
               | do when constrained by reality.
        
           | hn_throwaway_99 wrote:
           | Sorry to be a Debbie Downer, but I think the argument the
           | commenter is making is "It's impossible to reliably restrict
           | AI development", so safety-declarations, etc., are useless
           | theater.
           | 
           | I don't think we're on "the cusp" of AGI, but I guess that
           | just means I'm quibbling over the timeframe of what "cusp"
           | means. I certainly think it's possible within the lifetime of
           | people alive today, so whether it comes in 5 years or 75
           | years is kind of an insignificant detail.
           | 
           | And if AGI does get built, I agree there is a significant
           | risk to humanity. And that makes me sad, but I also don't
           | think there is anything that can be built to stop it,
           | certainly not some useless agreements on paper.
        
           | dragonwriter wrote:
           | All intelligence is unaligned.
           | 
           | Intelligence and alignment are mutually incompatible; natural
           | intelligence is unaligned, too.
           | 
           | Unaligned intelligence is not a global death sentence.
           | Fearmongering about unaligned AGI, however, is a tool to keep
           | a tool of broad power--which AI is and will continue to grow
           | as long before it becomes, and even if it never becomes, AGI
           | --in the hands of a narrow, self-selected elite to make their
           | control over everyone else insurmountable, which is also not
           | a global death sentence, but is a global slavery sentence.
           | (It's also, more immediately, a tool to serve those who
           | benefit from current AI uses which are harmful and unjust to
           | use future speculative harms to deflect from real, present,
           | concrete harms; and those beneficiaries are largely an
           | overlapping elite with the group with a longer term interest
           | in centralizing power over AI.)
        
             | JoshTriplett wrote:
             | To be explicitly clear, in case it is ever ambiguous:
             | "don't build unaligned AGI" is not a statement that some
             | elite group should build unaligned AGI. It's a statement
             | that _nobody should build unaligned AGI, ever_.
        
               | dragonwriter wrote:
               | "Don't build unaligned AGI" is an excuse to give a narrow
               | elite exclusive control of what AI is produced under the
               | pretext of preventing anyone from building unaligned AGI;
               | all actionable policy under that banner fits that
               | description.
               | 
               | Whether or not that elite group produces AGI, much less,
               | "unaligned AGI", is largely immaterial to the practical
               | impacts (also, from the perspective of anyone outside the
               | controlling elite, what the controlling elite would view
               | as aligned, whether or not it is a general intelligence,
               | is unaligned; alignment is not an objective property.)
        
               | JoshTriplett wrote:
               | > "Don't build unaligned AGI" is an excuse
               | 
               | False. There are people working on frontier AI who have
               | co-opted some of the safety terminology in the interests
               | of discrediting it, and _discussions like this suggest
               | that that strategy is working_.
               | 
               | > all actionable policy under that banner fits that
               | description
               | 
               | Actionable policy: "Do not do any further frontier AI
               | capability research. Do not build any models larger or
               | more capable than the current state of the art. Stop
               | anyone who does as you would stop someone refining
               | fissile materials, with no exceptions."
               | 
               | > (also, from the perspective of anyone outside the
               | controlling elite, what the controlling elite would view
               | as aligned, whether or not it is a general intelligence,
               | is unaligned; alignment is not an objective property.)
               | 
               | You are mistaking "alignment" for things like "politics",
               | rather than "not killing everyone".
        
               | dragonwriter wrote:
               | "Do not" doesn't serve the goal, unless you have absolute
               | universal buy in, active prevention (which means some
               | entity evaluating and deciding on threats); that's why
               | the people serious about this have argued that those who
               | pursue it need to be willing to actively destroy
               | computing infrastructure of those who do not submit to
               | the restriction regime.
               | 
               | Also, "alignment" doesn't mean "not killing everyone", it
               | means "functioning according to (some particular set of)
               | human's preferred set of values and goals". "Killing
               | everyone" is a _consequence_ some have inferred if
               | unaligned AI is produced (redefining  "alignment" to mean
               | "not killing everyone" makes the whole argument
               | circular.)
        
               | JoshTriplett wrote:
               | The AI alignment problem has, at its root, the notion of
               | being _capable_ of being aligned. Long, long before you
               | get to following any _particular_ instructions, there are
               | problems like  "humans are made of atoms, if you
               | repurpose the atoms for other things the humans die,
               | don't do that". We don't know how to do _that_ or things
               | on par with that, let alone anything _more_ precise than
               | that.
               | 
               | The darkly amusing shorthand for this: if the AGI tiles
               | the universe with tiny flags, it really doesn't matter
               | whose flag it is. Any notion of "whose values" really
               | can't happen if you can't align _at all_.
               | 
               | I'm not disagreeing with you that "AI alignment" is more
               | complex than "don't kill everyone"; the point I'm making
               | is that anyone saying "but _whose_ values are you
               | aligning with " is fundamentally confused about the scale
               | of the problem here. Anyone at any point on any
               | reasonable _human_ values spectrum should be able to
               | agree that  "don't kill everyone" is an essential human
               | value, and we're not even _there_ yet.
        
           | Nasrudith wrote:
           | The doomerism on AI is frankly, barking madness, a lack of
           | sense of probability and scale, mixed with utterly batshit
           | paranoia.
           | 
           | It is like living paralyzed in fear of every birth, for fear
           | that random variance will produce one baby born smarter than
           | Einstein will be capable of developing an infinite cascade of
           | progressively smarter babies and concluding that therefore we
           | must stop all breeding. No matter how smart the baby super-
           | Einstein winds up being there is no unstoppable, unopposable
           | omnicide mechanism. You can't theorem your way out of a paper
           | bag.
        
             | realce wrote:
             | The problem with your analogy is that these babies are
             | HUMANS and not some distinctly different cyber-species. The
             | basis of "human alignment" is that we all require basically
             | the same conditions and environment in order to live, we
             | all feel pain and pleasure, we all need food - that's what
             | produces any amount of human cooperation. What's being
             | feverishly developed is the seed of a different species
             | that doesn't share those restrictions.
             | 
             | We've already found ourselves on a trajectory where un-
             | employing millions or billions of people without any system
             | to protect them afterwards is just accepted, and that's
             | simply the first step of many in the destruction-of-empathy
             | path that creating AI/AGI brings people down.
        
         | htrp wrote:
         | > Real life relationships must be valued over online
         | relationships because you know the other person is real.
         | 
         | Until we get replicants
        
           | deadbabe wrote:
           | Of which you yourself may be one without really knowing it.
        
         | moffkalast wrote:
         | > because you know the other person is real
         | 
         | Technically both are real people, one is just not human. At
         | least by the person/people definition that would include
         | sentient aliens and such.
        
         | hollerith wrote:
         | >You can't remotely detect that somebody is training an AI.
         | 
         | There are training runs in progress that will use billions of
         | dollars of electricity and GPUs. Quite detectable -- and
         | stoppable by any government that wants to stop such things from
         | happening on territory it controls.
         | 
         | And _certainly_ we can reduce the economic _incentive_ for
         | investing money on such a run by banning AI-based services like
         | ChatGPT.
        
           | milesrout wrote:
           | And none of them want to do that. Why would they! AI is
           | perfectly safe. The idea it will take over the world is
           | ludicrous and all "AI safety" in practice seems to mean is
           | censoring it so it won't make jokes about women or ethnic
           | minorities.
        
             | hollerith wrote:
             | Yes, as applied to the current generation of AIs, "safety"
             | and "alignment" refer to things like preventing the product
             | from making jokes about women or ethnic minorities, but
             | that is because the current generation is not powerful
             | enough to threaten human safety and human survival. The OP
             | in contrast is about what will happen if the labs succeed
             | _in their stated goal_ of creating AIs that are much more
             | powerful.
        
           | jandrewrogers wrote:
           | > use billions of dollars of electricity and GPUs
           | 
           | For now. Qualitative improvements in efficiency are likely to
           | change what is required.
        
         | timewizard wrote:
         | There isn't a single AI on the face of the earth.
         | 
         | So that's easy.
         | 
         | Nothing to actually worry about.
         | 
         | Other than Sam Altman and Elon Musks' pending ego fight.
        
         | parliament32 wrote:
         | >Video and pictures will soon have no evidentiary value.
         | 
         | This is one bit that has a technological solution. Canon's had
         | some version of this since the early 2000s:
         | https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...
         | 
         | A more recent initiative: https://c2pa.org/
        
           | mzajc wrote:
           | This is purely security by obscurity. I don't see why someone
           | with motivation and capability to forge evidence wouldn't be
           | able to forge these signatures, considering the private keys
           | presumably come with the camera you buy.
        
             | parliament32 wrote:
             | Shipping secure secrets is also a somewhat solved problem:
             | TPMs ship with EKs that, AFAIK, nobody has managed to
             | extract (yet?):
             | https://docs.trustauthority.intel.com/main/articles/tpm-
             | ak-p...
        
             | rocqua wrote:
             | If you make it expensive enough to extract, and tie the
             | private key to a real identity, then you can make it hard
             | to abuse on scale.
             | 
             | Here I mean that at point of sale you register yourself as
             | owner for the camera. And you make extracting a key cost
             | about a million. Then bulk forgeries won't happen.
        
         | JumpCrisscross wrote:
         | > _Video and pictures will soon have no evidentiary value_
         | 
         | We still accept eyewitness testimony in courts. Video and
         | pictures will be fine, their context is what will matter. Where
         | we'll have a generation of chaos is in the public sphere, as
         | everyone born before somewhere between 1975 and now fails to
         | think critically when presented with an image they'd like to
         | believe is true.
        
           | wand3r wrote:
           | I think we'll have a decade of chaos but not because of this.
           | A lot of stories during the election cycle in news media and
           | on the internet were simply Democratic or Republican "fan
           | fiction". I don't want to make this political, I only
           | illustrate this example to say, that I was burned in
           | believing some of these things and you develop the muscle
           | pretty quickly. Tweets, anecdotes, images and even stories
           | reported by "reputable" media companies already require a
           | degree of critical thinking.
           | 
           | I haven't really believed in aliens existing on earth for
           | most of my adult life. However, I have sort of come around to
           | at least entertaining the idea in recent years but would need
           | solid photographic or video evidence. I am now convinced that
           | aliens could basically land in broad daylight in 3 years
           | while being heavily photographed and it would easily be able
           | to be explained away as AI. Especially if governments want to
           | do propaganda or counter propaganda.
        
         | abdullahkhalids wrote:
         | You can't really tell if someone is developing chemical
         | weapons. You can tell when such weapons are used. This is very
         | similar to AI.
         | 
         | Yet, the international agreements on non-use of chemical
         | weapons have held up remarkably well.
        
           | czhu12 wrote:
           | I actually agree with you, but just wanted to bring up this
           | interesting article challenging that:
           | https://acoup.blog/2020/03/20/collections-why-dont-we-use-
           | ch...
           | 
           | Basically claims that chemical weapons have been phased out
           | because they aren't effective, not because we've become more
           | moral, or international standards have been set.
           | 
           | "During WWII, everyone seems to have expected the use of
           | chemical weapons, but never actually found a situation where
           | doing so was advantageous... I struggle to imagine that, with
           | the Nazis at the very gates of Moscow, Stalin was moved
           | either by escalation concerns or the moral compass he so
           | clearly lacked at every other moment of his life."
        
         | manquer wrote:
         | > You can't remotely detect that somebody is training an AI.
         | 
         | Energy use is energy use, training is still incredibly energy
         | intensive and the GPU heat signatures are different from non
         | GPU ones, it fairly trivial to detect large scale GPU usage.
         | 
         | Enforcement is a different problem, and is not specific to AI,
         | if you cannot enforce an agreement it doesn't matter if its AI
         | or nuclear or sarin gas.
        
       | puff_pastry wrote:
       | They're right, the declaration is useless and it's just an
       | exercise in futility
        
       | tehjoker wrote:
       | No different than how the U.S. doesn't sign on to the declaration
       | of the rights of children or landmine treaties etc
        
       | r00fus wrote:
       | All this "AI safety" is purely moat-building for the likes of
       | OpenAI et. al. to prevent upstarts like DeepSeek.
       | 
       | LLMs will not get us to AGI. Not even close. Altman talking about
       | this danger is like Musk talking about driverless taxis.
        
         | ryanackley wrote:
         | Half moat-building, half marketing. The need for "safety"
         | implies some awesome power.
         | 
         | Don't get me wrong, they are impressive. I can see LLM's
         | _eventually_ enabling people to be 10x more productive in jobs
         | that interact with a computer all day.
        
           | bombcar wrote:
           | > The need for "safety" implies some awesome power.
           | 
           | This is a big part of it, and you can get others to do it for
           | you.
           | 
           | It's like the drain cleaner sold in an extra bag. Obviously
           | it must be the best, it's so scary they have to put it in a
           | bag!
        
           | r00fus wrote:
           | So it's a tool like the internal combustion engine, or the
           | moveable typeset. Game-changing technology that may alter
           | society but not dangerous like nukes.
        
           | timewizard wrote:
           | > eventually enabling people to be 10x more productive in
           | jobs that interact with a computer all day.
           | 
           | I doubt this. Productivity is gained through experience and
           | expertise. If you don't know what you don't know than the LLM
           | is perfectly useless to you.
        
         | amelius wrote:
         | I wouldn't be surprised if EU has their own competitor within a
         | year or so.
        
           | IshKebab wrote:
           | To OpenAI? The closest was DeepMind but that's owned by
           | Google now.
        
             | amelius wrote:
             | Well, deepseek open sourced their model and published their
             | algorithm. It may take a while before it is reproduced but
             | if they start an initiative and get the funding in place
             | it'll probably be sooner rather than later.
        
             | mattlondon wrote:
             | Owned by Google yes, but it is head quartered in London,
             | with the majority of the staff there.
             | 
             | So the skills, knowledge, and expertise are in the UK.
             | Google can close the UK office tomorrow if they wanted to
             | sure, but are 100% of those staff going to move to
             | California? Doubt it. Some will, but a lot have lives in
             | the UK (not least the CEO and founder etc) so even if
             | Google pulls the rug I will bet there will be a new company
             | founded and funded within days that will vacuum up all the
             | staff.
        
               | tfsh wrote:
               | But will this company be British or European? I'd love to
               | think so, but somehow I doubt that. There just isn't the
               | money in UK tech, the highest paid tech jobs (other than
               | big tech) are elite hedgefunds but they get by with
               | minimal headcount.
        
           | tucnak wrote:
           | Mistral exists
        
         | worik wrote:
         | > LLMs will not get us to AGI
         | 
         | Yes.
         | 
         | And there is no reason to think that AGI would have desire.
         | 
         | I think people are reading themselves into their fears.
        
           | realce wrote:
           | > And there is no reason to think that AGI would have desire.
           | 
           | The entire point of utilizing this tool is to feed it a
           | desire and have it produce an appropriate output based upon
           | that desire. Not only that, it's entire training corpus is
           | filled with examples of our human desires. So either humans
           | give it desire or it trains itself to function based on the
           | inertia of "goal-seeking" which are effectively the same
           | thing.
        
           | Tossrock wrote:
           | There is evidence that as LLMs increase in scale, their
           | preferences become more coherent, see Hendrycks et al 2025,
           | summarizer at https://www.emergent-values.ai/
        
             | anon291 wrote:
             | A preference is meaningless without consciousness and
             | qualia.
        
         | z7 wrote:
         | Waymo's driverless taxis are currently operating in San
         | Francisco, Los Angeles and Phoenix.
        
           | ceejayoz wrote:
           | Notably, not Musk's, and very different promised
           | functionality.
        
         | yodsanklai wrote:
         | I'd say AGI is like Musk talking about interstellar traveling.
        
         | edanm wrote:
         | > All this "AI safety" is purely moat-building for the likes of
         | OpenAI et. al. to prevent upstarts like DeepSeek.
         | 
         | Modern AI safety originated with people like Eliezer Yudkowsky,
         | Nick Bostrom, the LessWrong/rationality movement etc.
         | 
         | They very much were not just talking about it only to build
         | moats for OpenAI. For one thing, OpenAI didn't exist at the
         | time, AI was not anywhere close to where it is today, and
         | almost everyone thought their arguments were ridiculous.
         | 
         | You might not _agree_ with them, but you can 't simply dismiss
         | their arguments as only being there to prop up the existing AI
         | players, that's wrong and disingenuous.
        
         | anon291 wrote:
         | > LLMs will not get us to AGI. Not even close. Altman talking
         | about this danger is like Musk talking about driverless taxis.
         | 
         | AGI is a meaningless term. The LLM architecture has shown
         | promise in every single domain once used for perceptron neural
         | networks. By all accounts on those things that fit its 'senses'
         | the LLMs are significantly smarter than the average human
         | being.
        
       | option wrote:
       | did China sign?
        
         | vindex10 wrote:
         | that's what confused me:
         | 
         | > Among the priorities set out in the joint declaration signed
         | by countries including China, India, and Germany was
         | "reinforcing international co-operation to promote co-
         | ordination in international governance."
         | 
         | so looks like they did
         | 
         | , at the same time, the goal of the declaration and summit to
         | become less reliant on US and China.
         | 
         | > Meanwhile, Europe is seeking a foothold in the AI industry to
         | avoid becoming too reliant on the US or China.
         | 
         | So basically Europe signed together with China to compete
         | against US/UK or what happend?
        
       | doright wrote:
       | Something tells me aspects of living in the next few decades
       | driven by technology acceleration will feel like being
       | lobotomized while conscious and watching oneself the whole time.
       | Like yes, we are able to think of thousands of hypothetical ways
       | technology (even those inferior to full AGI) could go off the
       | rails in a catastrophic way and post and discuss these scenarios
       | endlessly... and yet it doesn't result in a slowing or stopping
       | of the progress leading there. All it takes is a single group
       | with enough collective intelligence and breakthroughs and the
       | next AI will be delivered to our doorstop whether or not we asked
       | for it.
       | 
       | It reminds me of the time I read books in my youth and only 20
       | years later realized the authors of some of those books were
       | trying to deliver a important life messages to a teenager
       | undergoing crucial changes, all of which would be painfully
       | relevant to the current adult me... and yet the whole time they
       | fell on deaf ears. Like the message was right there but I did not
       | have the emotional/perceptive intelligence to pick up on and
       | internalize it for too long.
        
         | Nasrudith wrote:
         | I'm sorry, but when the has it ever been the case that you can
         | just say "no" to the world developing a new technology? You
         | might as well say we can prevent climate change by just saying
         | no to the outcome!
        
           | estebank wrote:
           | We no longer use asbestos as a flame flame retardant in
           | houses.
           | 
           | We no longer use chemicals harmful to the ozone layer on
           | spray cans.
           | 
           | We no longer use lead in gasoline.
           | 
           | We figured those things were bad, and changed what we did. If
           | evidence is available ahead of time that something is
           | harmful, it shouldn't be controversial to avoid widespread
           | adoption.
        
             | josefritzishere wrote:
             | I don't think it is safe to assume the use patterns of
             | tangible things extend to intangible things; nor the
             | patterns of goods to that of services. I just see this as a
             | conclusory leap.
        
               | estebank wrote:
               | I was replying to
               | 
               | > when the has it ever been the case that you can just
               | say "no" to the world developing a new technology?
        
               | jpkw wrote:
               | In each of those examples, we said "no" decades after
               | they were developed, and many had to suffer in order for
               | us to get to the stage of saying "no".
        
             | bombcar wrote:
             | None of those things were said "no" to _before_ they were
             | used and in a wide-spread manner.
             | 
             | The closest might be nuclear power, we know we can do it,
             | we did it, but lots of places said no to it, and further
             | developments have vastly slowed down.
        
               | estebank wrote:
               | In none of those did we know about the adverse effects.
               | Those were observed afterwards, and it would have taken
               | longer to know if they hadn't been adopted. But that
               | doesn't invalidate the idea that we have followed "if
               | something bad, collectively don't use it" at various
               | points in time.
        
               | Aloisius wrote:
               | We were well aware of the adverse effects of tetraethyl
               | lead before lead gasoline was first sold.
               | 
               | The man who invented it got lead poisoning during its
               | development, multiple people died of lead poisoning in a
               | pilot plant manufacturing it and public health and
               | medical authorities warned against prior to it being
               | available for sale to the general public.
        
               | rat87 wrote:
               | And for nuclear power many would say that rejecting it
               | was a huge mistake
        
           | rurp wrote:
           | This happens in many ways with potentially catastrophic tech.
           | There are many formal agreements and strong norms against
           | building ever more lethal nuclear arsenals or existentially
           | dangerous gain of function research. The current system is
           | far from perfect, the world could literally be destroyed
           | today based on the actions of a handful of people, but it's
           | the best we have come up with so far.
           | 
           | If we as a society keep developing potential existential
           | threats to ourselves without mitigating them then we are
           | destined for disaster eventually.
        
             | realce wrote:
             | John C Lilly had a concept called the "bad program" that
             | was like an internal, natural, subconscious antithetical
             | force that lives in us all. It seduces or lures the
             | individual into harming themselves one way or another - in
             | his case it "tricked" him into taking a vitamin injection
             | improperly, leading to a stroke, even though he knew how to
             | administer the shot expertly.
             | 
             | At some level, there's a disaster-seeking function inside
             | us all acting as an evolutionary propellant.
             | 
             | You might make an argument that "AI" is an evolutionary
             | embodiment of our conscious minds that's designed to escape
             | these more subconscious trappings.
        
         | deadbabe wrote:
         | Anyone born in the next few decades will disagree with you.
         | They will find this new world comfortable and rich with
         | content. They will never understand what your problem is.
        
           | mouse_ wrote:
           | What makes you think that? That's what the last generations
           | said about us and it turned out to not be true.
        
             | hcurtiss wrote:
             | Relative to them, we most certainly are. By every objective
             | metric, humanity has flourished in "the last generations."
             | I get it that people are stressed today -- people have
             | always been stressed. It is, in a sense, fundamental to the
             | human condition.
        
               | jmcgough wrote:
               | Easy for you to say that. The political party running
               | this country ran on a platform of the eradication of me
               | and my friends. I can't legally/safely use public
               | restrooms in several states, including some which have
               | paid bounties for reporting. Things will continue to
               | improve for the wealthy and powerful, but in a lot of
               | ways have become worse for the poor and vulnerable.
               | 
               | When I was a kid, there was this grand utopian ideal for
               | the internet. Now it's fragmented, locked in walled
               | gardens where people are psychologically abused for
               | advertising dollars. AI could be a force for good, but
               | Google has already ended its ban on use in weapons and is
               | selling it to the IAF, and Palantir is busy finding ways
               | to use it for surveillance.
        
               | hcurtiss wrote:
               | > The political party running this country ran on a
               | platform of the eradication of me and my friends
               | 
               | Please go ahead and provide a quote calling for
               | "eradication" of any group to which you and your friends
               | belong. This kind of hyperbole used to be unwelcome on
               | HN.
        
               | jmcgough wrote:
               | Sure. Their word, not mine: https://www.the-
               | independent.com/news/world/americas/us-polit...
        
               | hcurtiss wrote:
               | Eradication of an ideology is not the same as eradication
               | of people. It's also a stretch to say Michael Knowles, a
               | famous shock-jock, speaks for the Republican party.
        
               | deltaburnt wrote:
               | Saying their identity is "ideology" is part of the
               | problem. There's plenty of violent movements that can be
               | framed as just "eradicating ideology", when in reality
               | that is just a culture, condition, religion, or trait
               | that you don't understand or accept.
        
               | rendang wrote:
               | "I don't think people should be allowed to partake in a
               | particular behavior" is not the same thing as "People of
               | a specific group should be killed".
        
               | immibis wrote:
               | What is the behaviour?
        
               | int_19h wrote:
               | A reminder that it's only been 22 years since sodomy laws
               | were declared unconstitutional in US in the first place
        
           | mitthrowaway2 wrote:
           | I'm not so sure. My parents were born well after the hydrogen
           | bomb was developed, and they were never comfortable with it.
        
             | bluGill wrote:
             | There are always a few things that people don't like.
             | However your parents likely are comfortable with a lot of
             | things that their parents were not.
        
             | stackedinserter wrote:
             | Would they prefer that only USSR had an H-bomb, but not
             | USA?
        
               | bobthepanda wrote:
               | Do two wrongs make a right?
        
               | xp84 wrote:
               | That's not the point, GP is pointing out how we only
               | control (at least theoretically, lol) our own government,
               | and basic game theory can tell you that countries that
               | adopt pacifist ideas and refuse to pursue anything that
               | might be dangerous will always at some point be easily
               | defeated by others who are less moral.
               | 
               | The point is that it's complicated, it's not a black and
               | white sound bite like the people who are "against nuclear
               | weapons" pretend it is.
        
               | bobthepanda wrote:
               | And people don't have to feel comfortable with
               | complicated things. The GP posted "would you prefer" as a
               | disingenous point to invalidate the commenter's parents'
               | feelings.
               | 
               | I eat meat. I know some vegans feel uncomfortable with
               | that. But personally I feel secure in my own convictions
               | that I don't need to run around insinuating vegans are
               | less than or whatever.
        
               | mitthrowaway2 wrote:
               | I don't think that's the nature of the argument that I
               | was responding to.
        
               | stackedinserter wrote:
               | So what? Would they?
        
               | mitthrowaway2 wrote:
               | Nuclear arms races are a form of multipolar trap, and
               | like any multipolar trap, you are compelled to keep up,
               | making your own life worse, even while wishing that you
               | and your opponent could cooperatively escape the trap.
               | 
               | The discussion I was responding to is whether the next
               | generation would grow up seeing pervasive AI as a normal
               | and good thing, as is often the case with new technology.
               | I cited nuclear weapons as a counterexample, while I
               | agree that nobody felt that they had a choice but to keep
               | up with them.
               | 
               | AI could similarly be a multipolar trap ("nobody likes it
               | but we aren't going to accept an AI gap with Russia!"),
               | which would mean it has that in common with nuclear
               | weapons, strengthening the argument _against_ the next
               | generation being comfortable with AI.
        
             | buzzerbetrayed wrote:
             | Exceptions to rules exist, especially if you're trying to
             | think of a really extreme cases that specifically
             | invalidate it.
             | 
             | However, that really doesn't invalidate the rule.
        
               | mitthrowaway2 wrote:
               | That's true, but I think AI may be enough of a disruption
               | to qualify. We'll of course have to wait and see what the
               | next generation thinks, but they might end up envious of
               | us, looking back with rose-tinted glasses on a simpler
               | time when people could trust photographic evidence from
               | around the world, and interact with each other
               | anonymously online without wondering if they were talking
               | to an astroturf advertising bot.
        
             | JumpCrisscross wrote:
             | > _My parents were born well after the hydrogen bomb was
             | developed, and they were never comfortable with it_
             | 
             | The nuclear peace is hard to pin down. But given the
             | history of the 20th century, I find it difficult to imagine
             | we wouldn't have seen WWIII in Europe and Asia without the
             | nuclear deterrent. Also, while your parents may have been
             | uncomfortable with the hydrogen bomb, the post-90s world
             | hasn't particularly been characterised by mass nuclear
             | anxiety. (Possibly to a fault.)
        
               | h0l0cube wrote:
               | You might have missed the cold war in your summary. Mass
               | nuclear anxiety really characterized that era, with a
               | number of near misses that could have ended in global
               | annihilation (and that's no exaggeration).
               | 
               | IMO, the Atoms for Peace propaganda undersells how
               | successful globalization has been at keeping nations from
               | destroying each other by creating codependence on complex
               | supply chains. The new shift to protectionism may see an
               | end to that
        
               | int_19h wrote:
               | The supply chain argument was also made wrt European
               | countries just before WW1. It wasn't even wrong -
               | economically, it was as devastating as predicted for
               | everyone involved, with no real winners - but that didn't
               | preclude the war.
        
               | h0l0cube wrote:
               | The scale of globalization post-WW2 puts it on a whole
               | other level. The complexity of supply chains now are such
               | that any country would grind to a halt without imports.
               | The exception here, to some degree, is China, but so far
               | they've been more interested in soft power over military,
               | and that strategy has served them well - though it seems
               | the US is scrapping for a fight.
        
           | throwup238 wrote:
           | I've come up with a set of rules that describe our reactions
           | to technologies:       1. Anything that is in the world when
           | you're born is normal and ordinary and is just a natural part
           | of the way the world works.       2. Anything that's invented
           | between when you're fifteen and thirty-five is new and
           | exciting and revolutionary and you can probably get a career
           | in it.       3. Anything invented after you're thirty-five is
           | against the natural order of things.       - Douglas Adams
        
           | Telemakhos wrote:
           | > They will find this new world comfortable and rich with
           | content.
           | 
           | I agree with the first half: comfort has clearly increased
           | over time since the Industrial Revolution. I'm not so sure
           | the abundance of "content" will be enriching to the masses,
           | however. "Content" is neither literature nor art but a
           | vehicle or excuse for advertising, as pre-AI television
           | demonstrated. AI content will be pushed on the many as a
           | substitute for art, literature, music, and culture in order
           | to deliver advertising and propaganda to them, but it will
           | not enrich them as art, literature, music, and culture would:
           | it might enrich the people running advertising businesses.
           | Let us not forget that many of the big names in AI now, like
           | X (Grok) and Google (Gemini), are advertising agencies first
           | and foremost, who happen to use tech.
        
             | psytrancefan wrote:
             | You don't know this though with even a high probability.
             | 
             | It is quite possible there is a cultural reaction against
             | AI and that we enter a new human cultural golden age of
             | human created art, music, literature, etc.
             | 
             | I actually would bet on this as engineering skills become
             | automated that what will be valuable in the future is human
             | creativity. What has value then will influence culture more
             | and more.
             | 
             | What you are describing seems like how the future would be
             | based on current culture but it is a good bet the future
             | will not be that.
        
           | sharemywin wrote:
           | I guess your right here's how it happens:
           | 
           | Alignment Failure - Shifting Expectations People get used to
           | AI systems making "weird" or harmful choices, rationalizing
           | them as inevitable trade-offs. Framing failures as "technical
           | glitches" rather than systemic issues makes them seem normal.
           | 
           | Runaway Optimization - Justifying Unintended Consequences
           | AI's extreme efficiency is framed as progress, even if it
           | causes harm. Negative outcomes are blamed on "bad inputs"
           | rather than the AI itself.
           | 
           | Bias Amplification - Cultural Reinforcement AI bias gets
           | baked into everyday systems (hiring, policing, loans), making
           | discrimination seem "objective." "That's just how the system
           | works" thinking replaces scrutiny.
           | 
           | Manipulation & Deception - AI as a Trusted Guide People
           | become dependent on AI suggestions without questioning them.
           | AI-generated narratives shape public opinion, making
           | manipulation invisible.
           | 
           | Security Vulnerabilities - Expectation of Insecurity Constant
           | cyberattacks and AI hacks become "normal" like data breaches
           | today. People feel powerless to push back, accepting
           | insecurity as a fact of life.
           | 
           | Autonomous Warfare - AI as an Inevitable Combatant AI-driven
           | warfare is seen as more "efficient" and "precise," making
           | human involvement seem outdated. Ethical debates fade as AI
           | soldiers become routine.
           | 
           | Loss of Human Oversight - AI as Authority AI decision-making
           | becomes so complex that people stop questioning it. "The AI
           | knows best" becomes a cultural default.
           | 
           | Economic Disruption - UBI & Gig Economy Normalization Mass
           | job displacement is met with new economic models (UBI, gig
           | work, AI-driven welfare), making it feel inevitable. People
           | adjust to a world where traditional employment is rare.
           | 
           | Deepfakes & Misinformation - Truth Becomes Fluid Reality
           | becomes subjective as deepfakes blur the line between real
           | and fake. People rely on AI to "verify" truth, giving AI
           | control over perception.
           | 
           | Power Concentration - AI as a Ruling Class AI governance is
           | framed as more rational than human leadership. Dissent is
           | dismissed as "anti-progress," consolidating control under AI-
           | driven elites.
        
             | sharemywin wrote:
             | In fact we don't even need UBI either:
             | 
             | "Lack of Adaptability"
             | 
             | AI advocates argue that those who lose jobs simply failed
             | to "upskill" in time. The burden is placed on workers to
             | constantly retrain, even if AI advancement outpaces human
             | ability to keep up. Companies and governments say, "The
             | opportunities are there; people just aren't taking them."
             | "Work Ethic Problem"
             | 
             | The unemployed are labeled as lazy or unwilling to compete
             | with AI. Hustle culture promotes side gigs and AI-powered
             | freelancing as the "new normal." Welfare programs are
             | reduced because "if AI can generate income, why can't you?"
             | "Personal Responsibility for Economic Struggles"
             | 
             | The unemployed are blamed for not investing in AI tools
             | early. The success of AI-powered entrepreneurs is
             | highlighted to imply that struggling workers "chose" not to
             | adapt. People are told they should have saved more or
             | planned for disruption, even though AI advancements were
             | unpredictable. "It's a Meritocracy"
             | 
             | AI-driven success stories (few and exceptional) are
             | amplified to suggest anyone could thrive. Struggling
             | workers are seen as having made poor choices rather than
             | being victims of automation. The idea of a "deserving poor"
             | is reinforced--those who struggle are framed as not working
             | hard enough. "Blame the Boomers / Millennials / Gen Z"
             | 
             | Economic shifts are framed as generational failures rather
             | than AI-driven. Older workers are told they refused to
             | adapt, while younger ones are blamed for entitlement or
             | lack of work ethic. Cultural wars distract from AI's role
             | in job losses. "AI is a Tool, Not the Problem"
             | 
             | AI is framed as neutral--any negative consequences are
             | blamed on how people use it. "AI doesn't take jobs; people
             | mismanage it." Job losses are blamed on bad government
             | policies, corporate greed, or individual failure rather
             | than automation itself. "The AI Economy Is Full of
             | Opportunity"
             | 
             | Gig work and AI-driven side hustles are framed as
             | liberating, even if they offer no stability. Traditional
             | employment is portrayed as outdated, making complaints
             | about job loss seem like resistance to progress. Those
             | struggling are told to "embrace the new economy" rather
             | than question its fairness.
        
               | int_19h wrote:
               | You can only do so much with agitprop. At the end of the
               | day, if, say, 60% of the population has no income without
               | a job and no hopes of getting said job, they are not
               | going to starve to death no matter the justification for
               | it.
        
               | sharemywin wrote:
               | you just carve out us and them circles then just make the
               | circles smaller and smaller.
               | 
               | look at the push right now in the US against corrupt
               | foreign aid and the mass deportations seems like the
               | first step.
        
               | vladms wrote:
               | Historically, humanity evolved faster when it was
               | interacting. So groups can try to isolate themselves but
               | on the long run that will make them lag behind.
               | 
               | US benefited a lot from lots of smart people going there
               | (even more during WWII). If people start believing
               | (correctly or incorrectly) that they would be better
               | somewhere else, it will not benefit them.
        
           | the_duke wrote:
           | Lets talk again after AI causes massive unemployment and
           | social upheaval for a few decades until we find some new
           | societal model to make things work.
           | 
           | This is inevitable in my view.
           | 
           | AI will replace a lot of white collar jobs relatively soon,
           | years or decades.
           | 
           | And blue collar isn't too far behind, since a major limiting
           | factor for automation is general purpose robots being able to
           | act in a dynamic environment, for which we need "world
           | models".
        
         | timewizard wrote:
         | People like to pretend that AGI isn't going to cost money to
         | run. The power budget alone is something no one is
         | contemplating.
         | 
         | Technology doesn't accelerate endlessly. Only our transistor
         | spacing does. These two are not the same thing.
        
           | dr_dshiv wrote:
           | Power budget will drop like a rock over time.
           | 
           | Exponential increases in cost (and power) for _next-level_ AI
           | and exponential decreases for the cost (and power) of
           | _current level_ AI.
        
           | bigbones wrote:
           | More efficient hardware mappings will happen, and as a
           | sibling comment says, power requirements will drop like a
           | rock. Check out https://www.youtube.com/watch?v=7hz4cs-hGew
           | for some idea of what that might eventually look like
        
           | WillPostForFood wrote:
           | _The power budget alone is something no one is
           | contemplating._
           | 
           | It is very hard to find a discussion about the growth and
           | development of AI that doesn't discuss the issues around
           | power budget.
           | 
           | https://www.datacenterknowledge.com/energy-power-
           | supply/whit...
           | 
           | https://bidenwhitehouse.archives.gov/briefing-
           | room/president...
           | 
           |  _In building domestic AI infrastructure, our Nation will
           | also advance its leadership in the clean energy technologies
           | needed to power the future economy, including geothermal,
           | solar, wind, and nuclear energy; foster a vibrant,
           | competitive, and open technology ecosystem in the United
           | States, in which small companies can compete alongside large
           | ones; maintain low consumer electricity prices; and help
           | ensure that the development of AI infrastructure benefits the
           | workers building it and communities near it._
        
         | gretch wrote:
         | > Like yes, we are able to think of thousands of hypothetical
         | ways technology (even those inferior to full AGI) could go off
         | the rails in a catastrophic way and post and discuss these
         | scenarios endlessly... and yet it doesn't result in a slowing
         | or stopping of the progress leading there.
         | 
         | The problem is sifting through all of the doomsayer false
         | positives to get to any amount of cogent advice.
         | 
         | At the invention of the printing press, there were people with
         | this same energy. Obviously those people were wrong. And if we
         | had taken their "lesson", then human society would be in a much
         | worse place.
         | 
         | Is this new wave of criticism about AI/AGI valid? We will only
         | really know in retrospect.
        
           | beezlebroxxxxxx wrote:
           | > Is this new wave of criticism about AI/AGI valid? We will
           | only really know in retrospect.
           | 
           | All of the focus on AGI is a distraction. I think it's
           | important for a state to declare it's intent with a
           | technology. The alternative is arguing the idea that
           | technology advances autonomously, independent of human
           | interactions, values, or ideas, which is, in my opinion, an
           | incredibly naive notion. I would rather have a state say "we
           | won't use this technology for evil" than a state that says
           | nothing at all and simply allows the businesses to develop in
           | any direction their greed leads them.
           | 
           | It's entirely valid to critique the _uses_ of a technology,
           | because  "AI" (the goalpost shifting for marketing purposes
           | to make that name apply to chatbots is a stretch honestly) is
           | a technology like any other, like a landmine, like a
           | synthetic virus, etc. In the same way, it's valid to
           | criticize an actor for purposely hiding their intentions with
           | a technology.
        
             | circuit10 wrote:
             | The idea is that by its very nature as an agent that
             | attempts to make the best action to achieve a goal,
             | assuming it can get good enough, the best action will be to
             | improve itself so it can better achieve its goal. In fact
             | we humans are doing the same thing, we can't really improve
             | our intelligence directly but we are trying to create AI to
             | achieve our goals, and there's no reason that the AI itself
             | wouldn't do so assuming it's capable and we don't attempt
             | to stop it, and currently we don't really know how to
             | reliably control it.
             | 
             | We have absolutely no idea how to specify human values in a
             | robust way which is what we would need to figure out to
             | build this safely
        
               | slg wrote:
               | I think that is missing the point. The AI's goals are
               | what are determined by its human masters. Those human
               | masters can already have nefarious and selfish goals that
               | don't align with "human values". We don't need to invent
               | hypothetical sentient AI boogeymen turning the universe
               | into paperclips in order to be fearful of the future that
               | ubiquitous AI creates. Humans would happily do that too
               | if they get to preside over that paperclip empire.
        
               | Filligree wrote:
               | "Yes, X would be catastrophic. But have you considered Y,
               | which is also catastrophic?"
               | 
               | We need to avoid both, otherwise it's a disaster either
               | way.
        
               | slg wrote:
               | I agree, but that is removing the nuance that in this
               | specific case Y is a prerequisite of X so focusing solely
               | on X is a mistake.
               | 
               | And for sake of clarity:
               | 
               | X = sentient AI can do something dangerous
               | 
               | Y = humans can use non-sentient AI to do something
               | dangerous
        
               | circuit10 wrote:
               | "sentient" (meaning "able to perceive or feel things")
               | isn't a useful term here, it's impossible to measure
               | objectively, it's an interesting philosophical question
               | but we don't know if AI needs to be sentient to be
               | powerful or what sentient even really means
               | 
               | Humans will not be able to use AI do something selfish if
               | we can't get it to do what we want at all, so we need to
               | solve that (larger) problem before we come to that one
        
               | wombatpm wrote:
               | Ok self flying drones that size if a deck of cards
               | carrying a single bullet and enough processing power to
               | fly around looking for faces, navigate to said face, fire
               | when in range. Produce them by the thousands and release
               | on the battlefield. Existing AI is more than capable.
        
               | dgfitz wrote:
               | You can do that without AI. Been able to do it for
               | probably 7-10 years.
        
               | mitthrowaway2 wrote:
               | > The AI's goals are what are determined by its human
               | masters.
               | 
               | Imagine going to a cryptography conference and saying
               | that "the encryption's security flaws are determined by
               | their human masters".
               | 
               | Maybe some of them were put there on purpose? But not the
               | majority of them.
               | 
               | No, an AI's goals are determined by their _programming_ ,
               | and that may or may not align with the intentions of
               | their human masters. How to specify and test this remains
               | a major open question, so it cannot simply be presumed.
        
               | slg wrote:
               | You are choosing to pick a nit with my phrasing instead
               | of understanding the underlying point. The "intentions of
               | their human masters" is a higher level concern than an AI
               | potentially misinterpreting those intentions.
        
               | mitthrowaway2 wrote:
               | It's really not a nit. Evil human masters might impose a
               | dystopia, while a malignant AI following its own goals
               | which _nobody_ intended could result in an apocalypse and
               | human extinction. A dystopia at least contains some
               | fragment of hope and human values.
        
               | slg wrote:
               | > Evil human masters might impose a dystopia
               | 
               | Why are you assuming this is the worst case scenario? I
               | thought human intentions didn't translate directly to the
               | AI's goals? Why can't a human destroy the world with non-
               | sentient AI?
        
               | sirsinsalot wrote:
               | It has been shown many times that current cutting edge AI
               | will subvert and lie to follow subgoals not stated by
               | their "masters".
        
               | mr_toad wrote:
               | > The idea is that by its very nature as an agent that
               | attempts to make the best action to achieve a goal,
               | assuming it can get good enough, the best action will be
               | to improve itself so it can better achieve its goal.
               | 
               | I've heard this argument before, and I don't entirely
               | accept it. It presumes that AI will be capable of playing
               | 4D chess and thinking logically 10 moves ahead. It's an
               | interesting plot as a SF novel (literally the plot of the
               | movie "I Robot"), but neural networks just don't behave
               | that way. They act, like us, on instinct (or training),
               | not in some hyper-logical fashion. The idea that AI will
               | behave like Star Trek's Data (or Lore), has proven to be
               | completely wrong.
        
             | roenxi wrote:
             | But if the state approaches a technology with intent it is
             | usually for the purposes of a military offence. I don't
             | think that is a good idea in the context of AI! Although I
             | also don't think there is any stopping it. The US has
             | things like DARPA for example and a lot of Chinese
             | investment seems to be done with the intent of providing
             | capabilities to their army.
             | 
             | The list of things states have attempted to deploy
             | offensively is nearly endless. Modern operations research
             | arguably came out of the British empire attempting
             | (succeeding) to weaponise mathematics. If you give a state
             | fertiliser it makes bombs, if you give it nuclear power it
             | makes bombs, if you give it drones it makes bombs, if you
             | give it advanced science or engineering of any form it
             | makes bombs. States are the most ingenious system for
             | turning things into bombs that we've ever invented; in the
             | grand old days of siege warfare they even managed to
             | weaponise corpses, refuse and junk because it turned out
             | lobbing that stuff at the enemy was effective. The entire
             | spectrum of technology from nothing to nanotech, hurled at
             | enemies to kill them.
             | 
             | We'd all love if states commit to not doing evil but the
             | state is the entity most active at figuring out how to use
             | new tech X for evil.
        
           | RajT88 wrote:
           | A useful counterexample is all the people who predicted
           | doomsday scenarios with the advent of nuclear weapons.
           | 
           | Just because it has not come to pass yet does not mean they
           | were wrong. We have come close to nuclear annihilation
           | several times. We may yet, with or without AI.
        
             | gretch wrote:
             | >Just because it has not come to pass yet does not mean
             | they were wrong.
             | 
             | This assertion is meaningless because it can be applied to
             | anything.
             | 
             | "I think vaccines cause autism and will cause human
             | annihilation" - just because it has not yet come to pass
             | does not mean it is wrong.
        
               | anigbrowl wrote:
               | No. there have not been any nuclear exchanges, whereas
               | there have been millions, probably billions of
               | vaccinations. You're giving equal weight to conjecture
               | and empirical data.
        
             | idontwantthis wrote:
             | And imagine if private companies had had the resources to
             | develop nuclear weapons and the US government had decided
             | it didn't need to even regulate them.
        
             | chasd00 wrote:
             | i see your point but the analogy doesn't get very far. For
             | example, nuclear weapons were never mass marketed to the
             | public. Nor is it possible to push the bounds of nuclear
             | weapon yield by a private business, university, r/d lab,
             | group of friends, etc.
        
           | gibspaulding wrote:
           | > At the invention of the printing press, there were people
           | with this same energy. Obviously those people were wrong. And
           | if we had taken their "lesson", then human society would be
           | in a much worse place.
           | 
           | In the long run the invention of the printing press was
           | undoubtedly a good thing, but it is worth noting that in the
           | century following the spread of the printing press basically
           | every country in Europe had some sort of revolution. It seems
           | likely that "Interesting Times" may lay ahead.
        
             | llm_trw wrote:
             | They had some sort of revolution the previous few centuries
             | too.
             | 
             | Pretending that Europe wasn't in a perpetual blood bath
             | since the end of the Pax Romana until 1815 shows a gross
             | ignorance of basic facts.
             | 
             | The printing press was a net positive in every time scale.
        
         | zoogeny wrote:
         | I think the alternative is just as chilling in some sense. You
         | don't want to be stuck in a country that outlaws AI (especially
         | from other countries) if that means you will be uncompetitive
         | in the new emerging world.
         | 
         | The future is going to be hard, why would we choose to tie one
         | hand behind our back? There is a difference between being
         | careful and being fearful.
        
           | latexr wrote:
           | > if that means you will be uncompetitive in the new emerging
           | world. (...) There is a difference between being careful and
           | being fearful.
           | 
           | I'm so sick of that word. "You need to be competitive", "you
           | need to innovate". Bullshit. You want to talk about fear?
           | "Competitiveness" and "innovation" are the words the
           | unscrupulous people at the top use to instil fear on everyone
           | else and run rampant. They're not being competitive or
           | innovative, they're sucking you dry of as much value as they
           | can. We all need to take a breath. Stop and think for a
           | moment. You can literally eat food which grows from the
           | ground and make a shelter with a handful of planks and nails.
           | Humanity survived and thrived before all this unfettered
           | consumption, we don't _need_ to kill ourselves for more.
           | 
           | https://www.newyorker.com/cartoon/a16995
        
             | JumpCrisscross wrote:
             | > _"Competitiveness" and "innovation" are the words the
             | unscrupulous people at the top use to instil fear on
             | everyone else and run rampant_
             | 
             | If a society is okay accepting a lower standard of living
             | and sovereign subservience, then sure, competition doesn't
             | matter. But if America and China have AI and nukes and
             | Europe doesn't, one side gets to call the shots and the
             | other has to listen.
        
               | latexr wrote:
               | > a lower standard of living
               | 
               | We better start _really_ defining what that means,
               | because it has become quite clear that all this
               | "progress" is not leading to better lives. We're
               | literally going to kill ourselves with climate change.
               | 
               | > AI and nukes
               | 
               | Those two things aren't remotely comparable.
        
               | JumpCrisscross wrote:
               | > _it has become quite clear that all this "progress" is
               | not leading to better lives_
               | 
               | How do you think the average person under 50 would poll
               | on being teleported to the 1950s? No phones, no internet,
               | jet travel is only for the elite, oh nuclear war and MAD
               | are new cultural concepts, yippee, and fuck you if you're
               | black because the civil rights acts are still a decade
               | out.
               | 
               | > _two things aren't remotely comparable_
               | 
               | I'm assuming no AGI, just massive economic efficiencies.
               | In that sense, nuclear weapons give strategic autonomy
               | through military coercion and the ability to grant a
               | security umbrella, which fosters _e.g._ trade ties. In
               | the same way, the wealth from an AI-boosted economy
               | fosters similar trade ties (and creates similar costs for
               | disengaging). America doesn 't influence Europe by
               | threatening to nuke it, but by threatening _not_ to nuke
               | its enemies.
        
               | latexr wrote:
               | > on being teleported to the 1950s?
               | 
               | That's not the argument. At all. I argued we should
               | rethink our attitude of unfettered consumption so we
               | don't continue on an path which is provably leading to
               | destruction and death, and your take is going back in
               | time to nuclear war and overt racism. That is frankly
               | insane. I'm not fetishising "the old days", I'm saying
               | this attitude of "more more more" does not automatically
               | translate to "better".
        
               | JumpCrisscross wrote:
               | You said "all this 'progress' is not leading to better
               | lives." That implies lives were better or at least as
               | good before "all this 'progress'."
               | 
               | If you say Room A is not better than Room B, then you
               | should be, at the very least, indifferent to swapping
               | between them. If you're against it, then Room A _is_
               | better than Room B. Our lives are better--civically,
               | militarily and materially--than they were before.
               | Complaining about unfettered consumerism by falsely
               | claiming our lives are worse today than they were before
               | doesn 't support your argument. (It's further undercut by
               | the falling material and energy intensity of GDP in the
               | rich world. We're able to produce more value for less
               | input resource-wise.)
        
               | latexr wrote:
               | > You said "all this 'progress' is not leading to better
               | lives." That implies lives were better or at least as
               | good before "all this 'progress'."
               | 
               | No. There is a reason I put the word in quotes. We are on
               | a thread, the conversation follows from what came before.
               | My original post was explicit about words used to
               | bullshit us. I was specifically referring to what the
               | "unscrupulous people at the top" call "progress", which
               | doesn't truly progress humanity or enhances the lives of
               | most people, only theirs.
        
               | vladms wrote:
               | There are many people claiming many things. Not sure
               | which "top" you are referring to, but everybody at the
               | end of a chain (most rich, most political powerful, most
               | popular), generally are selected for being unscrupulous.
               | So not sure why you should ever trust what they say... If
               | you agree, just ignore what most of what those say and
               | find other people to listen to for interesting things.
               | 
               | To give a tech example, not many people were listening to
               | Stallman and Linus and they still managed to change a lot
               | for the better.
        
               | layer8 wrote:
               | To be honest, the 1950s become more appealing by the
               | year.
        
               | encipriano wrote:
               | There's no objective definition of what progress even
               | means so the guy is kinda right. We live in a
               | postmodernist society where its not easy to find
               | meaningfullness. All these debates have been discussed by
               | philosophers like Nietzche and Hegel. The media and
               | society shape our understanding and importance of whats
               | popular, progressive and utilitarian.
        
               | I-M-S wrote:
               | I'd like to see a poll if the average person would like
               | to be teleported 75 years into the future to 2100.
        
             | zoogeny wrote:
             | I live in a ruralish area. There is a lot of forested area
             | and due to economic depression there are a lot of people
             | living in the woods. Most live in tents but some actually
             | cut down the trees and turn them into make-shift shacks.
             | Using planks and nails like you suggest. They often drag
             | propane burners into the woods which often leads to fires.
             | Perhaps this is what you mean?
             | 
             | In reality, most people will continue to live the modern
             | life where there are doctors, accountants, veterinarians,
             | mechanics. We'll continue to enjoy food distribution and
             | grocery stores. We'll all hope that North America gets its
             | act together and build high speed rail so we can travel
             | comfortably for long distances.
             | 
             | There was a time Canada was a big exporter of engineering
             | technology. From mining to agriculture, satellites, and
             | nuclear technology. I want Canada to be competitive in
             | these ways, not making makeshift shacks out of planks and
             | nails for junkies that have given up on life and live in
             | the woods.
        
               | latexr wrote:
               | > They often drag propane burners into the woods which
               | often leads to fires. Perhaps this is what you mean?
               | 
               | I believe you very well know it's not, and are
               | transparently arguing in bad faith.
               | 
               | > shacks (...) for junkies that have given up on life
               | 
               | The insults you've chosen are quite telling. Not everyone
               | living in a way you disapprove of is an automatic junky.
        
               | zoogeny wrote:
               | You stated one ludicrous extreme (food comes out of the
               | ground! shelter is planks and nails!) and I stated
               | another ludicrous extreme. You can make my position look
               | simplistic and I can make your position look simplistic.
               | You can't then cry foul.
               | 
               | You are also assuming, in bad faith, an "all" where I did
               | not place one. It is an undeniable fact with evidence
               | beyond any reasonable doubt, including police reports and
               | documented studies by the district, that the makeshift
               | shacks in the rural woods near my house are made by drug
               | addicts that are eschewing the readily available social
               | housing for the specific reason that they can't go to
               | that housing due to its explicit restrictions on drug
               | use.
        
               | latexr wrote:
               | > ludicrous extreme
               | 
               | I don't understand this. Are you not familiar with
               | farming and houses? You know humans grow plants to eat
               | (including in backyards and balconies in cities) and make
               | cabins, chalets, houses, entire neighbourhoods (Sweden
               | currently planning the largest) with wood, right?
        
               | zoogeny wrote:
               | You are making a caricature of modern lifestyle farming,
               | not an argument for people literally living as they did
               | in the past. Going to your local garden center and buying
               | some seedlings and putting them on your balcony isn't
               | demonstrative of a life like our ancestors lived. Living
               | in one of the wealthiest countries to ever have existed
               | and going to the hardware store to buy expensive
               | hardwoods to decorate your house isn't the same as living
               | as our ancestors did.
               | 
               | You don't realize the luxury you have and for some reason
               | you assume that it is possible without that wealth. The
               | reality of that lifestyle without tremendous wealth is
               | more like subsistence farming in Africa and less like
               | Swedish planned neighborhoods.
        
               | latexr wrote:
               | > (...) not an argument for people literally living as
               | they did in the past. (...) isn't demonstrative of a life
               | like our ancestors lived. (...) isn't the same as living
               | as our ancestors did.
               | 
               | Correct. Nowhere did I defend or make an appeal to live
               | life "as they did in the past" or "like our ancestor
               | did". We should (and don't really have a choice but to)
               | live forward, not backward. We should take the good
               | things we learned and apply them positively to our lives
               | in the present and future, and not strive for change and
               | consumption for their own sakes.
        
               | roenxi wrote:
               | > I believe you very well know it's not, and are
               | transparently arguing in bad faith.
               | 
               | That is actually what you are talking about;
               | "uncompetitive" looks like something in the real world.
               | There isn't an abstract dial that someone twiddles to set
               | the efficiency of two otherwise identical outcomes - the
               | competitive one will typically look more advanced and
               | competently organised in observable ways.
               | 
               | To live in nice houses and have good food requires a
               | competitive economy. The uncompetitive version was
               | literally living in the forest with some meagre shelter
               | and maybe having a wood fire to cook food (that was
               | probably going to make someone very sick). The reason the
               | word "competitive" turns up so much is people living in a
               | competitive society get to have a more comfortable
               | lifestyle. People literally starve to death if the food
               | system isn't run with a competitive system that tends
               | towards efficiency; that experiment has been run far too
               | many times.
        
               | I-M-S wrote:
               | What the experiment has repeatedly shown is that people
               | living in non-competitive systems starve to death when
               | they get in the way of a system that has been optimized
               | solely for ruthless economic efficiency.
        
               | roenxi wrote:
               | The big one that leaps to mind was the famines with the
               | communist experiments in the 20th century. But there are
               | other, smaller examples that crop up disturbingly
               | regularly. Sri Lanka's fertiliser ban was a jaw-dropper;
               | Zimbabwe redistributing land away from whites was also
               | interesting. There are probably a lot more though,
               | messing with food logistics on the theory there are more
               | important things than producing lots of food seems to be
               | one of those things countries do from time to time.
               | 
               | People can argue about the moral and ideological sanity
               | of these things, but the fact is tolerating economic
               | inefficiencies into the food system can quickly leads to
               | there not being enough food.
        
               | Henchman21 wrote:
               | You, too, should read this and maybe try and tale it to
               | heart:
               | 
               | https://crimethinc.com/2018/09/03/the-mythology-of-work-
               | eigh...
        
             | Henchman21 wrote:
             | This may resonate with you:
             | 
             | https://crimethinc.com/2018/09/03/the-mythology-of-work-
             | eigh...
        
           | TFYS wrote:
           | It's because of competition that we are in this situation.
           | When the economic system and relationships between countries
           | are based on competition, it's nearly impossible to avoid
           | these races to the bottom. We need more systems based on
           | cooperation instead of competition.
        
             | JumpCrisscross wrote:
             | > _We need more systems based on cooperation instead of
             | competition._
             | 
             | That requires dissolving the anarchy of the international
             | system. Which requires an enforcer.
        
               | AnthonyMouse wrote:
               | Isn't this the opposite? If you want competition then you
               | need something like the WTO as a mechanism to prevent
               | countries from putting up trade barriers etc.
               | 
               | If some countries want to collaborate on some CERN
               | project they just... do that.
        
               | JumpCrisscross wrote:
               | > _If you want competition then you need something like
               | the WTO as a mechanism to prevent countries from putting
               | up trade barriers etc._
               | 
               | That's an enforcer. Unfortunately, nobody follows through
               | with its sanctions, so it's devolved into a glorified
               | opinion-providing body.
               | 
               | > _If some countries want to collaborate on some CERN
               | project they just... do that_
               | 
               | CERN is about doing thing, not _not_ doing things. You
               | can 't CERN your way to nuclear non-proliferation.
        
               | AnthonyMouse wrote:
               | > You can't CERN your way to nuclear non-proliferation.
               | 
               | Non-proliferation is, the US has nuclear weapons and
               | doesn't want Iran to have them, so is going to apply some
               | kind of bribe or threat. It's not cooperative.
               | 
               | The better example here is climate change. Everyone has a
               | direct individual benefit from burning carbon but it's to
               | our collective detriment, so how do you get anyone to
               | stop, especially the countries with large oil and coal
               | reserves?
               | 
               | In theory you could punish countries that don't stop
               | burning carbon, but that appears to be hard and in
               | practice what's doing the most good is making solar
               | cheaper than burning coal and making electric cars people
               | actually want, politics of infamous electric car man
               | notwithstanding.
               | 
               | So what does that look like for making AI "safe, secure
               | and trustworthy"? Maybe something like publishing state
               | of the art models for free with full documentation of how
               | they were created, so that people aren't sending their
               | sensitive data to questionable third parties who do who
               | knows what with it or using models with secret biases.
        
               | Henchman21 wrote:
               | I'd nominate either the AGI people keep telling me is
               | "right around the corner", or the NHI that seem to keep
               | popping up around nuclear installations.
               | 
               | Clearly humans aren't able to do this task.
        
             | zoogeny wrote:
             | I'm not certain of the balance myself. I was thinking as a
             | counterpoint of the band The Beatles where the two song
             | writers (McCartney and Lennon) are seen in competition.
             | There is a balance there between their competitiveness as
             | song writers and their cooperation in the band.
             | 
             | I think it is one-sided to see any situation where we want
             | to retain balance as being significantly affected by one of
             | the sides exclusively. If one believes that there is a
             | balance to be maintained between cooperation and
             | competition, I don't immediately default to believing that
             | any perceived imbalance is due to one and not the other.
        
             | int_19h wrote:
             | International systems are more organic than designed, but
             | the problem with cooperation is that it's not a
             | particularly stable arrangement without enforcement - sure,
             | everybody is better off when everybody cooperates, but you
             | can be even better off when you don't cooperate but
             | everybody else does.
        
             | pb7 wrote:
             | Competition is as old as time. There are single celled
             | organisms on your skin right now competing for resources to
             | live. There is nothing more innate to life than this.
        
               | sapphicsnail wrote:
               | Cooperation is as old as time. There are single celled
               | organisms living symbiotically on your skin right now.
        
               | XorNot wrote:
               | Yeah this isn't the analogy you want to use. The
               | mitochondria in my cells are also symbiotes but thats
               | just because whatever ancestor ate then found they were
               | hard to digest.
               | 
               | The naturalistic fallacy is still a fallacy.
        
           | tmnvix wrote:
           | > You don't want to be stuck in a country that outlaws AI
           | 
           | Just as you don't want to be stuck in the only town that
           | outlaws murder...
           | 
           | I am not a religious person, but I can see the value in
           | promoting shared taboos. The question is, how do we do this
           | in the modern world? We had some success with nuclear
           | weapons. I don't think it's any coincidence that contemporary
           | leaders (and possibly populations) seem to have forgotten how
           | bloody dangerous they are and how utterly stupid it is to
           | engage in brinkmanship with so much on the line.
        
         | pj_mukh wrote:
         | "We are able to think of thousands of hypothetical ways
         | technology could go off the rails in a catastrophic way"
         | 
         | Am I the only one here saying that this is no reason to
         | preemptively pass legislation? That just seems crazy to me.
         | Imagined horrors aren't real horrors?
         | 
         | I disagree with this administrations approach, I think we
         | should be vigilant, and keeping people who stand to gain so
         | much from the tech in the room, doesn't seem like a good idea,
         | but other than that, I haven't seen any real reason to do more
         | than wait and be vigilant?
        
           | saulpw wrote:
           | Predicted horrors aren't real horrors either. But maybe we
           | don't have to wait until the horrors are realized and
           | embedded into the fabric of society before we apply the
           | brakes a bit. How else could we possibly be vigilant? Reading
           | news articles and wringing our hands?
        
             | XorNot wrote:
             | There's a difference between the trolley speeding towards
             | someone tied to the tracks, versus someone tied to the
             | tracks but the trolley is stationary, and to someone
             | standing at the station looking at the bare ground and
             | saying "if we built some tracks and put a trolley on it,
             | and then tied someone to the tracks the trolley would kill
             | them! We need to regulate against this dangerous trolley
             | technology before it's too late". Then instead someone
             | builds a freeway because it turns out the area want well
             | suited rail trolley.
        
         | Gud wrote:
         | I wish your post wasn't so accurate.
         | 
         | Yet, I can't help but be hopeful about the future. We have to
         | be, right?
        
         | alfalfasprout wrote:
         | The harsh reality is that a culture of selfishness has become
         | too widespread. Too many people (especially in tech) don't
         | really care what happens to others as long as they get rich off
         | it. They'll happily throw others under the bus and refuse to
         | share wellbeing even in their own communities.
         | 
         | It's the inevitable result of low-trust societies infiltrating
         | high trust ones. And it means that as technologies with
         | dangerous implications for society become more available
         | there's enough people willing to prostitute themselves out to
         | work on society's downfall that there's no realistic hope of
         | the train stopping.
        
           | greenimpala wrote:
           | Profit over ethics, self-interest over communal well-being,
           | and competition over cooperation. You're describing
           | capitalism.
        
             | tmnvix wrote:
             | I don't necessarily disagree with you, but I think the
             | issue is a little more nuanced.
             | 
             | Capitalism obviously has advantages and disadvantages.
             | Regulation can address many disadvantages if we are
             | willing. Unfortunately, I think a particular (mostly
             | western) fetish for privileging individuals over
             | communities has been wrongly extended to capital itself
             | (e.g. corporations recognised as entities with rights
             | similar to - and sometimes over-and-above - those of a
             | person). We have literally created monsters. There is no
             | reason we had to go this far. Capitalism doesn't have to
             | mean the preeminence of capital above all else. It needs to
             | be put back in its place and not necessarily discarded. I
             | am certain there are better ways to practice capitalism.
             | They probably involve balancing it out with some other
             | 'isms.
        
               | ryandrake wrote:
               | Also, Shareholder Primacy is not some kind of natural
               | law, it's a choice that companies deliberately make in
               | their governance to prioritize shareholders' needs over
               | the needs of every other stakeholder.
        
               | FpUser wrote:
               | >"I think a particular (mostly western) fetish for
               | privileging individuals over communities has been wrongly
               | extended to capital itself (e.g. corporations recognised
               | as entities with rights similar to - and sometimes over-
               | and-above - those of a person)"
               | 
               | Possible remedy will be to tie corporation to a person -
               | person (or many if there are few owners and directors)
               | become personally liable for everything corporation does.
        
           | Aurornis wrote:
           | > The harsh reality is that a culture of selfishness has
           | become too widespread. Too many people (especially in tech)
           | don't really care what happens to others as long as they get
           | rich off it. They'll happily throw others under the bus and
           | refuse to share wellbeing even in their own communities.
           | 
           | This is definitely not a new phenomenon.
           | 
           | In my experience, tech has been one of the more considerate
           | areas of societal impact. Spend some time in other industries
           | and it's eye-opening to see the wanton disregard for
           | consumers and the environment.
           | 
           | There's a lot of pearl-clutching about social media,
           | algorithms, and "data", but you'll find far more people in
           | tech (including FAANG) who are actively working on privacy
           | technology, sustainable development and so on then you will
           | find people caring about the environment by going into oil &
           | gas, for example.
        
           | timacles wrote:
           | > reality is that a culture of selfishness has become too
           | widespread.
           | 
           | Tale as old as time. We're yet another society blinded by our
           | own hubris. Tell me what is happening now is not exactly how
           | Greece and Rome fell.
           | 
           | The scary part is that we as a species are becoming more and
           | more capable of large scale destruction. Seems like we are
           | doomed to end civilization this way someday
        
         | idiotsecant wrote:
         | Let's say we decide, today, that we want to prevent an AI
         | armageddon that we assume is coming.
         | 
         | How do you do that?
        
         | debbiedowner wrote:
         | Which books?
        
         | chasd00 wrote:
         | How to you prevent advancements in software? The barrier to
         | entry is so low, you just need a cheap laptop and an internet
         | connection and then day 1 you're right on the cutting edge
         | driving innovation. Current AI requires a lot of hardware for
         | training but anyone with a laptop and inet connection can still
         | do cutting edge research and innovate with architectures and
         | algorithms.
         | 
         | If a law is passed saying "AI advancement is illegal" how can
         | it ever be enforced?
        
           | palmotea wrote:
           | > How to you prevent advancements in software? The barrier to
           | entry is so low, you just need a cheap laptop and an internet
           | connection and then day 1 you're right on the cutting edge
           | driving innovation. Current AI requires a lot of hardware for
           | training but anyone with a laptop and inet connection can
           | still do cutting edge research and innovate with
           | architectures and algorithms.
           | 
           | > If a law is passed saying "AI advancement is illegal" how
           | can it ever be enforced?
           | 
           | Like any other real-life law? Software engineers (a class
           | which I'm a recovering member of) seem to have a pretty
           | common misunderstanding about the law: that it needs to be
           | air tight like secure software, otherwise it's pointless.
           | That's just not true.
           | 
           | So the way you "prevent advancements in [AI] software" is you
           | 1) punish them severely when detected and 2) restrict access
           | to information and specialized hardware to create a barrier
           | (see: nuclear weapons proliferation, "born secret" facts,
           | CSAM).
           | 
           | #1 is sufficient to control all the important legitimate
           | actors in society (e.g. corporations, university
           | researchers), and #2 creates a big barrier to everyone else
           | who may be tempted to not play by the rules.
           | 
           | It won't be perfect (see: the drug war), but it's not like
           | cartel chemists are top-notch, so it doesn't have to be. I
           | don't think the software engineering equivalent of a cartel
           | chemist will be able to "do cutting edge research and
           | innovate with architectures and algorithms" with only a
           | "laptop and inet connection."
           | 
           | Would the technology disappear? No? Will it be pushed to the
           | margins? Yes. Is that enough? Also yes.
        
             | AnimalMuppet wrote:
             | Punish them severely when detected? Nice plan. What if they
             | aren't in your jurisdiction? Are you going to punish them
             | severely when they're in China? North Korea? Somalia? Good
             | luck with that.
             | 
             | The problem is that the information can go anywhere that
             | has an internet connection, and the enforcement can't.
        
               | palmotea wrote:
               | > Punish them severely when detected? Nice plan. What if
               | they aren't in your jurisdiction?
               | 
               | https://en.wikipedia.org/wiki/Operation_Opera
               | 
               | https://en.wikipedia.org/wiki/2021_Natanz_incident
               | 
               | https://www.timesofisrael.com/israel-targeted-secret-
               | nuclear...
               | 
               | If were talking about technology that "could go off the
               | rails in a catastrophic way," don't dick around.
        
               | chasd00 wrote:
               | well let's assume an airstrike is on the table, what site
               | would you hit? AWS data centers in Virginia?
        
               | palmotea wrote:
               | The point wasn't _literally airstrike_ , it was _don 't
               | get hung up over "jurisdiction" when it comes to
               | "avoiding catastrophe."_ There are other options. Here
               | are a few from the Israeli example, https://en.wikipedia.
               | org/wiki/Assassination_of_Iranian_nucle...,
               | https://en.wikipedia.org/wiki/Stuxnet, but I'm sure there
               | are other innovative ways to answer the challenge.
        
       | dsign wrote:
       | I know I'm an oddball when it comes to the stuff that crosses my
       | mind, but here I go anyway.
       | 
       | It's possible to stop developing things. It's not even hard; most
       | of the world develops very little. Developing things requires
       | capital, education, hard work, social stability and the rule of
       | law. Many of us writing on this forum take those things for
       | granted but it's more the exception than the rule, when you look
       | at the entire planet.
       | 
       | I think we will face the scenario of runaway AI, where we lose
       | control, and we may not survive. I don't think it will be a sky-
       | net type of thing, sudden. At least not at first. What will
       | happen is that we will replace humans by AIs in more and more
       | positions of influence and power, gradually. Our ChatGPTs of
       | today will become board members and government advisors of
       | tomorrow. It will take some decades--though probably not many.
       | Then, a face-off will come one day, perhaps. Humans vs them.
       | 
       | But if we do survive and come to regret the development of
       | advanced AI and have a second chance, it will be trivially easy
       | to suppress them: just destroy the semiconductor fabs, treat them
       | the same way we treat ultra-centrifuges for enriching Uranium.
       | Cut off the dangerous data centers, and forbid the reborn
       | universities[1] from teaching linear algebra to the students.
       | 
       | [1]: We will lose advanced education for the masses on the way,
       | as it won't be economically viable nor necessary.
        
         | Simon_O_Rourke wrote:
         | > What will happen is that we will replace humans by AIs in
         | more and more positions of influence and power, gradually. Our
         | ChatGPTs of today will become board members and government
         | advisors of tomorrow.
         | 
         | Great, can't wait for even some small improvement over the
         | idiots in charge right now.
        
           | moffkalast wrote:
           | I for one, also welcome our new Omnissiah overlords.
        
           | realce wrote:
           | It's time to put an end to this fashionable and literal anti-
           | human attitude. There's no comparative advantage to AI
           | replacing humans en-masse because of how "stupid" we are.
           | This POV is advocating for incalculable suffering and death.
           | You personally will not be in a better or more rational
           | position after this transition, you'll simply be dead.
        
         | TheFuzzball wrote:
         | I am so tired of the AI doomer argument.
         | 
         | The entire thing is little more than a thought experiment.
         | 
         | > Look at how fast AI has advanced, it you just project that
         | trend out, we'll have human-level agents by the end of the
         | decade.
         | 
         | No. We won't. Scale up transformers as big as you like, this
         | won't happen without massive advances in architecture and
         | hardware.
         | 
         | I believe it is _possible_ , but the idea it'll happen _any day
         | now_ , and _by accident_ is bullshit.
         | 
         | This is one step from Pascal's Wager, but being presented as
         | fact by otherwise smart people.
        
           | dsign wrote:
           | > The entire thing is little more than a thought experiment.
           | 
           | Yes. Nobody can predict the future.
           | 
           | > but the idea it'll happen any day now, and by accident is
           | bullshit.
           | 
           | We agree on that one: it won't be sudden, and it won't be by
           | accident.
           | 
           | > I believe it is possible, but the idea it'll happen any day
           | now, and by accident is bullshit.
           | 
           | Exactly. Not by accident. But if you believe it's possible,
           | then we are both doomers.
           | 
           | The thing is, there are forces at play that want this. It's
           | all of us. We in society want to remove other human beings
           | from the chain of value. I use ChatGPT today to not pay a
           | human editor. My boss uses Suno AI to play generated music
           | with pro-productivty slogans before Teams meetings. The
           | moment the owners of my enterprise believe it's possible to
           | replace their highly paid engineers with AIs, they will do
           | it. My bosses don't need to lift a finger _today_ to ensure
           | that future. Other people have already imagined it, and thus,
           | already today we have well-founded AI companies doing their
           | best to develop the technology. Their investors see an
           | opportunity on making highly-skilled labor cheaper, and they
           | are dumping their money into that enterprise. Better
           | hardware, better models, better harnesses for those models.
           | All of that is happening at speed. I 'm not counting on
           | accidents there. If anything, I'm counting on accidents
           | Chernobyl style that make us realize, when there is still
           | time, if we are stepping into danger.
        
         | philomath_mn wrote:
         | > It's possible to stop developing things
         | 
         | If the US were willing to compromise some of it's core values,
         | then we could probably stop AI development domestically.
         | 
         | But what about the rest of the world? If China or India want to
         | reap the benefits of enhanced AI capability, how could we stop
         | them? We can hit them with sanctions and other severe measures,
         | but that hasn't stopped Russia in Ukraine -- plus the prospect
         | of world-leading AI capability has a lot more economic value
         | than what Ukraine can offer.
         | 
         | So if we can't stop the world from developing these things, why
         | hamstring ourselves and let our competitors have all of the
         | benefits?
        
           | hcurtiss wrote:
           | Exactly. Including military benefits. The US would not be a
           | nation for long.
        
           | hollerith wrote:
           | >the prospect of world-leading AI capability has a lot more
           | economic value than what Ukraine can offer.
           | 
           | The mere fact that you imagine that Moscow's motivation in
           | invading Ukraine is _economic_ is a sign that you 're missing
           | the main reasons Moscow or Beijing would want to ban AI: (1)
           | unlike in the West and especially unlike the US, it is
           | routine and normal for the government in those countries to
           | ban things or discourage their use, especially new things
           | that might cause large societal changes and (2) what Moscow
           | and Beijing want most is not economic prosperity, but rather
           | to prevent another one of those invasions or revolutions that
           | kills millions of people _and_ to prevent the country 's
           | ruling coalition from losing power.
        
             | philomath_mn wrote:
             | But this all comes back to the self-interest and game
             | theory discussion.
             | 
             | Let's suppose that, like you, both Moscow and Beijing do
             | not want AGI to exist. What could they do about it? Why
             | should they trust that the rest of the world will also
             | pause their AI development?
             | 
             | This whole discussion is basically a variation on the
             | prisoner's dilemma. Either you cooperate and AI risks are
             | mitigated, or you do not cooperate and try to take the best
             | outcome for yourself.
             | 
             | I think we can expect the latter. Not because it is the
             | right thing or because it is the optimal decision for
             | humanity, but because each individual will deem it their
             | best choice, even after accounting for P(doom).
        
               | hollerith wrote:
               | >Let's suppose that, like you, both Moscow and Beijing do
               | not want AGI to exist. What could they do about it? Why
               | should they trust that the rest of the world will also
               | pause their AI development?
               | 
               | That is why the US and Europe should stop AI in their
               | territories first especially as the US and Britain have
               | been the main drivers of AI "progress" up to now.
        
         | 627467 wrote:
         | Everyone wants to be the prophet of doom of their own religion.
        
         | simonw wrote:
         | "Our ChatGPTs of today will become board members and government
         | advisors of tomorrow."
         | 
         | That still feels like complete science fiction to me - more
         | akin to appointing a complicated Excel spreadsheet as a board
         | member.
        
           | fritzo wrote:
           | It feels like mere language difference. Certainly every
           | government official is advised by many Excel spreadsheets.
           | Were those spreadsheets "appointed", no.
        
             | simonw wrote:
             | The difference is between AI tools as augmentation and AI
             | tools as replacement.
             | 
             | Board members using tools like ChatGPT or Excel as part of
             | their deliberations? That's great.
             | 
             | Replacing a board member entirely with a black box
             | automation that makes meaningful decisions without human
             | involvement? A catastrophically bad idea.
        
             | vladms wrote:
             | People like having someone to blame and fire and maybe send
             | to jail. It's less impressive if someone blames everything
             | on their Excel sheet...
        
         | HelloMcFly wrote:
         | This is my oddball thought: the thing about AI doomerism is
         | that it feels to me like it requires substantially more
         | assumptions and leaps of logic than environmental doomerism.
         | And environmental doomerism seems only more justified as the
         | rightward lurch of western societies continues.
         | 
         | Note: I'm not quite a doomer, but definitely a pessimist.
        
         | jcarrano wrote:
         | What if we face the scenario of a Dr. Manhattan type AGI,
         | that's just fed up with people's problems and decides to leave
         | us for the stars?
        
         | anon291 wrote:
         | Right, let's go back to the stone age because we said so.
         | 
         | > What will happen is that we will replace humans by AIs in
         | more and more positions of influence and power,
         | 
         | With all due respect, and not to be controversial, but how is
         | this concern any more valid than the 'great replacement'
         | worries.
        
       | mytailorisrich wrote:
       | This declaration is just hand-waving.
       | 
       | Europe is hopeless so it does not make a difference. China can
       | sign and ignore it so it does not make a difference.
       | 
       | But it would not be wise for the USA to have their hands tied up
       | so early. I suppose that the UK wants to go their usual "lighter
       | touch regulation" than the EU route to attract investment. Plus
       | they are obviously trying hard to make friends with the new US
       | administration.
        
         | bostik wrote:
         | > * suppose that the UK wants to go their usual "lighter touch
         | regulation" than the EU route to attract investment.*
         | 
         | Not just that. A speaker in a conference I attended about a
         | month ago mentioned that UK is actively drifting away from EU's
         | stance, _particularly_ on the aspect of AI safety in practice.
         | 
         | The upcoming European AI act has "machine must not make
         | material decisions" as its cornerstone. UK are hell-bent to get
         | AI into government functions, to ostensibly make everything
         | more efficient. As part of that drive, the UK is aiming to
         | allow AI to make material decisions, without human review or
         | recourse. In a country still in the throes of the Post Office /
         | Horizon scandal, that really takes some nerve.
         | 
         | Those in charge in this country know fully well that "AI
         | safety" will be in violent conflict with the above.
        
       | stackedinserter wrote:
       | The declaration itself, if anyone's interested:
       | https://www.pm.gc.ca/en/news/statements/2025/02/11/statement...
       | 
       | Signed by 60 countries out of "more than 100 participants", it
       | just looks comically pathetic except "China" part:
       | 
       | Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Cambodia,
       | Canada, Chile, China, Croatia, Cyprus, Czechia, Denmark,
       | Djibouti, Estonia, Finland, France, Germany, Greece, Hungary,
       | India, Indonesia, Ireland, Italy, Japan, Kazakhstan, Kenya,
       | Latvia, Lithuania, Luxembourg, Malta, Mexico, Monaco, Morocco,
       | New Zealand, Nigeria, Norway, Poland, Portugal, Romania, Rwanda,
       | Senegal, Serbia, Singapore, Slovakia, Slovenia, South Africa,
       | Republic of Korea, Spain, Sweden, Switzerland, Thailand,
       | Netherlands, United Arab Emirates, Ukraine, Uruguay, Vatican,
       | European Union, African Union Commission.
        
       | tmpz22 wrote:
       | Why would any country align with US vision for AI policies after
       | how we've treated allies over the last two weeks?
       | 
       | Why would any country yield given the hard line negotiating
       | stance the US is now taking? And the flip flopping and unclear
       | messaging on our positions?
        
         | anon291 wrote:
         | People should be free to train AIs
        
       | jcarrano wrote:
       | When you are the dominant world power, you just don't let others
       | determine your strategy, as simple as that.
       | 
       | Attempts at curbing AI will come from those who are losing the
       | race. There's this interview where Edward Teller recalls how the
       | USSR used a moratorium in nuclear testing to catch up with the US
       | on the hydrogen bomb, and how he was the one telling the idealist
       | scientists that that was going to happen.
        
         | briankelly wrote:
         | I read in Supermen (book on Cray) that the test moratorium was
         | a strategic advantage for the US since labs here could simulate
         | nuclear weapons using HPC systems.
        
       | jameslk wrote:
       | What benefit do these AI regulations provide to progressing
       | AI/AGI development? Do they slow down progress? If so, how do the
       | countries that intend to enforce these regulations plan to
       | compete on AI/AGI with countries that don't have these
       | regulations?
        
       | Imnimo wrote:
       | Am I right in understanding that this "declaration" is not a
       | commitment to do anything specific? I don't really understand why
       | it matters who does or does not sign it.
        
         | layer8 wrote:
         | It's an indication of the values shared, or in this case, not
         | shared.
        
         | sva_ wrote:
         | Diplomatic theater, justification to get/keep more bureaucrats
         | on the payroll
        
         | karaterobot wrote:
         | Yep, it's got all the force of a New Year's resolution. It does
         | not appear to be much more specific than one, either. It's
         | about a page and a half long--the list of countries is as long
         | as than the declaration itself, and it basically says "we
         | talked about how we won't do anything bad".
        
       | rdm_blackhole wrote:
       | This declaration is not worth the paper it was written on. It
       | doesn't require to be enforced and it's non binding so, it's like
       | a kid's Christmas shopping list.
       | 
       | The US and the UK were right to reject it.
        
       | hintymad wrote:
       | Why would we trust Europe in the first place, given that they are
       | so full of regulations and they love to suffocate innovation by
       | introducing ever more regulations? I thought most people wanted
       | to deregulate anyway.
        
       | tnt128 wrote:
       | An AI arms race will be how we make sky net a reality.
       | 
       | If an enemy state gives AI autonomous control and gains massive
       | combat effectiveness, it puts the pressure to other countries to
       | do the same.
       | 
       | No one wants sky net. But if we continue the current path,
       | painting the world as we vs them. I m fearful sky net will be
       | what we get
        
         | bluescrn wrote:
         | If a rogue AI could take direct control of weapons systems,
         | then so could a human hacker - and we've got bigger problems
         | than just 'AI safety'.
        
       | seydor wrote:
       | Europe just loves signing declarations and concerned letters. It
       | would make no difference if they signed it.
        
         | swyx wrote:
         | leading in ai safety theater is actually worse than leading in
         | ai because the leadership of ai safety is actually just in
         | leading in ai period
        
       | anon291 wrote:
       | The world is the world. Today is today. Tomorrow is tomorrow.
       | 
       | You cannot face the world with how you want it to be, but only as
       | it is.
       | 
       | What we know today is that a relatively straightforward series of
       | matrix multiplications leads to what is perceived to be
       | intelligence. This is simply true no matter how many declarations
       | one signs.
       | 
       | Given that this is the case, there is nothing left to be done
       | unless we want to go full Butlerian Jihad
        
       | FloorEgg wrote:
       | What exactly is the letter declaring? There are so many
       | interpretations of "AI safety" with most of them not actually
       | having anything to do with maximizing distribution of societal
       | and ecosystem prosperity or minimizing the likelihood of
       | destruction or suffering. In fact some concepts of AI safety I
       | have seen are double speak for rules that are more likely to lead
       | to AI imposed tyranny.
       | 
       | Where is the nuanced discussion of what we want and don't want AI
       | to do as a society?
       | 
       | These details matter, and working through them collectively is
       | progress, in stark contrast to getting dragged into identity
       | politics arguments.
       | 
       | - I want AI to increase my freedom to do more and spend more time
       | doing things I find meaningful and rewarding. - I want AI to help
       | us repair damage we have done to ecosystems and reverse species
       | diversity collapse. - I want AI to allow me to consume more in a
       | completely sustainable way for me and the environment. - I want
       | AI that is an excellent and honest curator of truth, both in
       | terms of accurate descriptions of the past and nuanced
       | explanations of how reality works. - I want AI that elegantly
       | supports a diversity of values, so I can live how I want and
       | others can live how they want. - I don't want AI that forcefully
       | and arbitrarily limits my freedoms - I don't want AI that
       | forcefully imposes other people's values on me (or imposes my
       | values on others) - I don't want AI war that destroys our
       | civilization and creates chaos - I don't want AI that causes
       | unnecessary suffering - I don't want other people to use AI to
       | tyrannize me or anyone else.
       | 
       | How about instead of being so broadly generic about "AI safety"
       | declarations we get specific, and then ask people to make
       | specific commitments in kind. Then it would be a lot more
       | meaningful when they refuse, or when they oblige and then break
       | them.
        
       | FpUser wrote:
       | I watched JD Vance's speech. He had made few very reasonable
       | points to refuse joining the alliance. Still his speech left me
       | with some sour taste. I interpret it as - "we are fuckin America
       | and we do as we please. It is our sacred right to own the world.
       | The rest are to submit or be punished one or the other way".
        
       | PeterCorless wrote:
       | "Why do we want better artificial intelligence when we have all
       | this raw human stupidity as an abundant renewable resource we
       | haven't yet harnessed?"
        
       ___________________________________________________________________
       (page generated 2025-02-12 23:00 UTC)