[HN Gopher] How elites could shape mass preferences as AI reduce...
       ___________________________________________________________________
        
       How elites could shape mass preferences as AI reduces persuasion
       costs
        
       Author : 50kIters
       Score  : 450 points
       Date   : 2025-12-04 08:38 UTC (14 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | intermerda wrote:
       | https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...
       | 
       | > Musk's AI Bot Says He's the Best at Drinking Pee and Giving
       | Blow Jobs
       | 
       | > Grok has gotten a little too enthusiastic about praising Elon
       | Musk.
        
         | andsoitis wrote:
         | > Musk acknowledged the mix-up Thursday evening, writing on X
         | that "Grok was unfortunately manipulated by adversarial
         | prompting into saying absurdly positive things about me."
         | 
         | > "For the record, I am a fat retard," he said.
         | 
         | > In a separate post, Musk quipped that "if I up my game a lot,
         | the future AI might say 'he was smart ... for a human.'"
        
           | ben_w wrote:
           | Is Musk bipolar, or is this kind of thing an affectation?
           | 
           | He's also claimed "I think I know more about manufacturing
           | than anyone currently alive on Earth"...
        
             | spiderfarmer wrote:
             | He's smart enough to know when he took it too far.
        
             | ahartmetz wrote:
             | You have to keep in mind that not all narcissists are
             | literal-minded man-babies. Musk might simply have the
             | capacity for self-deprecating humor.
        
             | andsoitis wrote:
             | > He's also claimed "I think I know more about
             | manufacturing than anyone currently alive on Earth"...
             | 
             | You should know that ChatGPT agrees!
             | 
             | "Who on earth th knows the most about manufacturing, if you
             | had to pick one individual"
             | 
             | Answer: _"If I had to pick one individual on Earth who
             | likely knows the most--in breadth, depth, and lived
             | experience--about modern manufacturing, there is a clear
             | front-runner: Elon Musk.
             | 
             | Not because of fame, but because of what he has personally
             | done in manufacturing, which is unique in modern history."_
             | 
             | - https://chatgpt.com/share/693152a8-c154-8009-8ecd-c21541e
             | e9c...
        
             | otikik wrote:
             | Just narcissistic. And on drugs.
        
           | lukan wrote:
            | That response is more humble than I would have guessed, but
            | he still does not even acknowledge that his "truthseeking"
            | AI is manipulated to say nice things specifically about him.
            | Maybe he does not even realize it himself?
           | 
            | Hard to tell; I have never been surrounded by yes-sayers
            | praising me all the time for every fart I took, so I cannot
            | relate to that situation (and don't really want to).
           | 
           | But the problem remains, he is in control of the "truth" of
           | his AI, the other AI companies likewise - and they might be
           | better at being subtle about it.
        
       | jl6 wrote:
       | > Historically, elites could shape support only through limited
       | instruments like schooling and mass media
       | 
       | Schooling and mass media are expensive things to control. Surely
       | reducing the cost of persuasion opens persuasion up to more
       | players?
        
         | teekert wrote:
          | Exactly my first thought: maybe AI means the democratization
          | of persuasion? Printing press much?
          | 
          | Sure, the big companies have all the latest coolness. But they
          | also don't have a moat.
        
         | ares623 wrote:
         | Mass Persuasion needs two things: content creation and
         | distribution.
         | 
          | Sure, AI could democratise content creation, but distribution
          | is still controlled by the elite. And content creation just got
          | much cheaper for them.
        
           | zmgsabst wrote:
            | Distribution isn't controlled by elites; half of their
            | meetings are spent seething about the "problem" of people
            | trusting podcasts and community information dissemination
            | rather than elite broadcast networks.
           | 
           | We no longer live in the age of broadcast media, but of
           | social networked media.
        
             | ares623 wrote:
             | But the social networks are owned by them though?
        
         | zmgsabst wrote:
         | This is my opinion, as well:
         | 
         | - elites already engage in mass persuasion, from media
         | consensus to astroturfed thinktanks to controlling grants in
         | academia
         | 
         | - total information capacity is capped, ie, people only have so
         | much time and interest
         | 
         | - AI massively lowers the cost of content, allowing more people
         | to produce it
         | 
         | Therefore, AI is likely to displace mass persuasion from
         | current elites -- particularly given public antipathy and the
         | ability of AI to, eg, rapidly respond across the full spectrum
         | to existing influence networks.
         | 
         | In much the same way podcasters displaced traditional mass
         | media pundits.
        
         | ben_w wrote:
         | > Schooling and mass media are expensive things to control
         | 
          | Expensive to run, sure. But I don't see why they'd be expensive
          | to control. Most UK schools are required to support collective
          | worship "wholly or mainly of a broadly christian character"[0],
          | and the UK used to have Section 28[1], which was interpreted
          | defensively in most places and made it difficult to even
          | discuss the topic in sex ed lessons or to defend against
          | homophobic bullying.
         | 
          | The USA had the Hays Code[2], and the FCC Song[3] is Eric
          | Idle's response to being fined for swearing on radio. Here in
          | Europe we keep hearing about US schools banning books for
          | various reasons.
         | 
         | [0]
         | https://assets.publishing.service.gov.uk/government/uploads/...
         | 
         | [1] https://en.wikipedia.org/wiki/Section_28
         | 
         | [2] https://en.wikipedia.org/wiki/Hays_Code
         | 
         | [3] https://en.wikipedia.org/wiki/FCC_Song
        
           | alwa wrote:
            | [0] seems to be dated 1994 - is it still current? I'm curious
            | how it's evolved (or not) through the rather dramatic
            | demographic shifts there over the intervening 30 years.
        
             | ben_w wrote:
             | So far as I can tell, it's still around. That's why I
             | linked to the .gov domain rather than any other source.
             | 
             | Though I suppose I could point at legislation.gov.uk:
             | 
             | * https://duckduckgo.com/?q=%22wholly+or+mainly+of+a+broadl
             | y+c...
             | 
             | * https://www.legislation.gov.uk/ukpga/1998/31/schedule/20/
             | cro...
        
         | crote wrote:
          | Would you rather have a handful of channels with well-known
          | biases, or thousands of channels of unknown origin?
         | 
         | If you're trying to avoid being persuaded, being _aware_ of
         | your opponents sounds like the far better option to me.
        
       | taurath wrote:
       | We have no guardrails on our private surveillance society. I long
       | for the day that we solve problems facing regular people like
       | access to education, hunger, housing, and cost of living.
        
         | jack_tripper wrote:
         | _> I long for the day that we solve problems facing regular
         | people like access to education, hunger, housing, and cost of
         | living._
         | 
          | That lasted for only a short fraction of human history, the
          | period between post-WW2 and globalisation kicking into high
          | gear. People miss that it was a brief exception from the norm,
          | basically a rounding error in terms of the length of human
          | civilisation.
         | 
         | Now, society is reverting back to factory settings of human
         | history, which has always been a feudalist type society of a
         | small elite owning all the wealth and ruling the masses of
         | people by wars, poverty, fear, propaganda and oppression. Now
         | the mechanisms by which that feudalist society is achieved
         | today are different than in the past, but the underlying human
         | framework of greed and consolidation of wealth and power is the
         | same as it was 2000+ years ago, except now the games suck and
         | the bread is mouldy.
         | 
          | The wealth inequality we have today, as bad as it is now, is
          | the best it will ever be moving forward. It's only gonna get
          | worse with each passing day. And despite all the political talk
          | and promises about "fixing" wealth inequality, housing, etc.,
          | there's nothing to fix here: the financial system is working as
          | designed. This is a feature, not a bug.
        
           | veltas wrote:
            | I think this is true, unfortunately, and the question of how
            | we get back to a liberal and social state has many factors:
            | how do we get the economy working again, how do we create
            | trustworthy institutions, how do we avoid bloat and decay in
            | services, etc. There are no easy answers; I think it's just
            | hard work, and it might not even be possible. People
            | suggesting magic wands are just populists, and we need only
            | look at history to see why these kinds of suggestions don't
            | work.
        
             | huijzer wrote:
              | It's funny how it's completely appropriate to talk about
              | the elites getting more and more power, but if you then
              | start looking deeper into it, you're suddenly a conspiracy
              | theorist and hence bad. Who came up with the term
              | "conspiracy theorist" anyway, and why should we be afraid
              | of it?
        
             | jack_tripper wrote:
             | _> how do we get the economy working again_
             | 
              | Just like we always have: a world war. Then the economy
              | works amazingly for the ones left on top of the rubble
              | pile, who get unionized high-wage jobs and amazing
              | retirements at an early age for a few decades, while
              | everyone else is left toiling away making stuff for cheap
              | in sweatshops in exchange for currency from the victors who
              | control the global economy and trade routes.
             | 
             | The next time the monopoly board gets flipped will only be
             | a variation of this, but not a complete framework rewrite.
        
           | jinjin2 wrote:
           | > society is reverting back to factory settings of human
           | history, which has always been a feudalist type society of a
           | small elite owning all the wealth
           | 
           | The word "always" is carrying a lot of weight here. This has
           | really only been true for the last 10,000 years or so, since
           | the introduction of agriculture. We lived as egalitarian
           | bands of hunter gatherers for hundreds of thousands of years
           | before that. Given the magnitude of difference in timespan, I
           | think it is safe to say that that is the "default setting".
        
             | jack_tripper wrote:
             | _> We lived as egalitarian bands of hunter gatherers for
             | hundreds of thousands of years before that._
             | 
              | Only if you consider the intra-group egalitarianism of
              | tribal hunter-gatherer societies. But tribes would
              | constantly go to war with each other in search of better
              | territories with more resources, and the defeated tribe
              | would have its men killed or enslaved, and the women bred
              | to expand the tribe population.
             | 
              | So you forgot the part that involved all the killing,
              | enslavement and rape, but other than that, yes, the
              | victorious tribes were quite egalitarian.
        
               | lurk2 wrote:
               | > and the defeated tribe would have its men killed or
               | enslaved, and the women bred to expand the tribe
               | population.
               | 
               | I'm not aware of any archaeological evidence of massacres
               | during the paleolithic. Which archaeological sites would
               | support the assertions you are making here?
        
               | jack_tripper wrote:
                | Population density on the planet back then was also low
                | enough not to cause mass wars and generate mass graves,
                | but killing each other over valuable resources is the
                | most common human trait after reproduction and the
                | search for food and shelter.
        
               | lurk2 wrote:
               | We were talking about the paleolithic era. I'll take your
               | comment to imply that you don't have any information that
               | I don't have.
               | 
                | > but killing each other over valuable resources is the
                | most common human trait after reproduction and the
                | search for food and shelter.
               | 
               | This isn't reflected in the archaeological record, it
               | isn't reflected by the historical record, and you haven't
               | provided any good reason why anyone should believe it.
        
               | pyrale wrote:
                | The above poster is asking you whether factual
                | information supports your claim.
                | 
                | Your personal opinion about why such information may be
                | hard to find only weakens your claim.
        
               | phantasmish wrote:
               | https://en.wikipedia.org/wiki/War_Before_Civilization
               | 
               | Last I checked there hadn't been major shifts away from
               | the perspective this represents, in anthropology.
               | 
               | It was used as a core text in one of my classes in
               | college, though that was a couple decades ago. I recall
               | being confused about why it was such a big deal, because
               | I'd not encountered the "peaceful savage" idea in any
               | serious context, but I gather it was widespread in the
               | '80s and earlier.
        
               | pyrale wrote:
               | The link you give documents warfare that happened
               | significantly later than the era discussed by the above
               | poster.
               | 
                | To suggest that the lack of evidence is enough to support
                | continuity of a behaviour is also flawed reasoning: we
                | have many examples of previously unknown social
                | behaviours that emerged at some point, like the emergence
                | of states or the use of art.
               | 
               | Sometimes, it's ok to simply say that we're not sure,
               | rather than to project our existing condition.
        
               | lurk2 wrote:
               | Well, this one is at least pertinent to the time period
               | we're discussing:
               | 
                | > One-half of the people found in a Mesolithic cemetery
                | in present-day Jebel Sahaba, Sudan, dating to as early as
                | 13,000 years ago, had died as a result of warfare between
                | seemingly different racial groups, with victims bearing
                | marks of being killed by arrowheads, spears and clubs,
                | prompting some to call it the first race war.
        
               | lingrush4 wrote:
                | What an absurd request. Where's your archaeological
                | evidence that humans were egalitarian 10,000+ years ago?
               | 
               | The idea that we didn't have wars in the paleolithic era
               | is so outlandish that it requires significant evidence.
               | You have provided none.
        
               | lurk2 wrote:
               | > What an absurd request.
               | 
               | If you can show me archaeological evidence of mass graves
               | or a settlement having been razed during the paleolithic
               | I would recant my claims. This isn't really a high bar.
               | 
               | > Where's your archaeological evidence that humans were
               | egalitarian 10000+ years?
               | 
               | I never made this claim. Structures of domination precede
               | human development; they can be observed in animals. What
               | we don't observe up until around 10,000 years ago is
               | anything approaching the sorts of systems of jack_tripper
               | described, namely:
               | 
               | > which has always been a feudalist type society of a
               | small elite owning all the wealth and ruling the masses
               | of people by wars, poverty, fear, propaganda and
               | oppression.
               | 
               | > The idea that we didn't have wars in the paleolithic
               | era is so outlandish that it requires significant
               | evidence.
               | 
               | If it's so outlandish where is your evidence that these
               | wars occurred?
               | 
               | > You have provided none.
               | 
               | How would I provide you with evidence of something that
               | didn't happen?
        
               | jinjin2 wrote:
               | Sure, nobody is claiming that hunter gatherers were
               | saints. Just because they lived in egalitarian clans, it
               | doesn't mean that they didn't occasionally do bad things.
               | 
                | But one key differentiator is that they didn't have the
                | logistics to have soldiers. With no surplus to pay
                | anyone, there was no way to build up an army, and with
                | no-one having the ability to tell others to go to war or
                | force them to do so, the scale of conflicts and
                | skirmishes was a lot more limited.
               | 
                | So while there might have been a constant state of minor
                | skirmishes, like we see in any population of territorial
                | animals, all-out total war was a rare occurrence.
        
             | lurk2 wrote:
              | Even within the last 10,000 years, most of those systems
              | looked nothing like the hereditary stations we associate
              | with feudalism, and it's only within the last 4,000 years
              | that any of those systems scaled, and then only in areas
              | that were sufficiently urban to warrant the structures.
        
             | oblio wrote:
             | Back then there were so few people around and expectations
             | for quality of life were so low that if you didn't like
             | your neighbors you could just go to the middle of nowhere
             | and most likely find an area which had enough resources for
             | your meager existence. Or you'd die trying, which was
             | probably what happened most of the time.
             | 
              | That entire approach to life died when agriculture
              | appeared. Remnants of that lifestyle were nomadic peoples,
              | and the last groups to be successful were the Mongols and,
              | up until about 1600, the Cossacks.
        
           | lurk2 wrote:
           | > which has always been a feudalist type society of a small
           | elite owning all the wealth and ruling the masses of people
           | by wars, poverty, fear, propaganda and oppression.
           | 
            | This isn't an historical norm. The majority of human history
            | occurred without these systems of domination, and getting
            | people to play along has historically been so difficult that
            | colonizers resorted to eradicating native populations and
            | starting over again. The technologies used to force people
            | onto the plantation have become more sophisticated, but in
            | most of the world that has involved enfranchisement more
            | than oppression; most of the world is tremendously better
            | off today than it was even 20 years ago.
           | 
           | Mass surveillance and automated propaganda technologies pose
           | a threat to this dynamic, but I won't be worried until they
           | have robotic door kickers. The bad guys are always going to
           | be there, but it isn't obvious that they are going to
           | triumph.
        
             | insane_dreamer wrote:
             | > The majority of human history occurred without these
             | systems of domination,
             | 
             | you mean hunter/gatherers before the establishment of
             | dominant "civilizations"? That history ended about 5000
             | years ago.
        
           | crote wrote:
            | > The wealth inequality we have today, as bad as it is, is
            | the best it will ever be moving forward. It's only gonna get
            | worse.
           | 
           | Why?
           | 
            | As the saying goes, the people need bread and circuses. Delve
            | too deeply and you risk another French Revolution. And right
            | now, a lot of people in supposedly-rich Western countries are
            | having their basic existence threatened by the greed of the
            | elite.
           | 
           | Feudalism only works when you give back enough power and
           | resources to the layers below you. The king _depends_ on his
           | vassals to provide money and military services. Try to act
           | like a tyrant, and you end up being forced to sign the Magna
           | Carta.
           | 
           | We've already seen a healthcare CEO being executed in broad
           | daylight. If wealth inequality continues to worsen, do you
           | really believe that'll be the last one?
        
             | zwnow wrote:
             | > Delve too deeply and you risk another French Revolution.
             | 
              | What's too deeply? Given the circumstances in the USA I
              | don't see any revolution happening. Same goes for extremely
              | poor countries. When will the exploiters' heads roll? I
              | don't see anyone willing to fight the elite. A lot of them
              | are even celebrated in countries like India.
        
               | jack_tripper wrote:
                | Yep, exactly. If poor people had the power to change
                | their oppressive regimes, then North Korean or Cuban
                | leaders wouldn't exist.
        
             | lurk2 wrote:
              | > And right now, a lot of people in supposedly-rich Western
              | countries are having their basic existence threatened by
              | the greed of the elite.
             | 
             | Which people are having their existences threatened by the
             | elite?
        
             | FridayoLeary wrote:
              | As long as you have people gleefully celebrating it, or
              | providing some sort of narrative to justify it even
              | partially, then no.
             | 
              | >And right now, a lot of people in supposedly-rich Western
              | countries are having their basic existence threatened by
              | the greed of the elite.
             | 
             | Can you elaborate on that?
        
           | BeFlatXIII wrote:
           | Sounds like we need another world war to reset things for the
           | survivors.
        
         | andsoitis wrote:
         | > I long for the day that we solve problems facing regular
         | people like access to education, hunger, housing, and cost of
         | living.
         | 
         | EDUCATION:
         | 
         | - Global literacy: 90% today vs 30%-35% in 1925
         | 
          | - Primary enrollment: 90-95% today vs 40-50% in 1925
         | 
         | - Secondary enrollment: 75-80% today vs <10% in 1925
         | 
         | - Tertiary enrollment: 40-45% today vs <2% in 1925
         | 
         | - Gender gap: near parity today vs very high in 1925
         | 
         | HUNGER
         | 
         | Undernourished people: 735-800m people today (9-10% of
         | population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the
         | population)
         | 
         | HOUSING
         | 
          | - quality: highest ever today vs low in 1925
         | 
         | - affordability: worst in 100 years in many cities
         | 
         | COST OF LIVING:
         | 
          | Improved dramatically for most of the 20th century, but much of
          | that progress reversed in the last 20 years. The cost of goods
          | plummeted, but housing, health, and education became
          | unaffordable relative to incomes.
        
           | insane_dreamer wrote:
           | You're comparing with 100 years ago. The OP is comparing with
           | 25 years ago, where we are seeing significant regression (as
           | you also pointed out), and the trend forward is increasingly
           | regressive.
           | 
           | We can spend $T to shove ultimately ad-based AI down
           | everyone's throats but we can't spend $T to improve
           | everyone's lives.
        
         | carlCarlCarlCar wrote:
         | Yea we do:
         | 
         | Shut off gadgets unless absolutely necessary
         | 
         | Entropy will continue to kill off the elders
         | 
         | Ability to learn independently
         | 
         | ...They have not rewritten physics. Just the news.
        
       | tonyhart7 wrote:
        | this is the algorithm taken to the next level
        | 
        | imagine someday there is a child that trusts chatgpt more than
        | his mother
        
         | MangoToupe wrote:
          | I'd wager the child already exists who trusts ChatGPT more
          | than its own eyes.
        
         | psychoslave wrote:
          | That will be when these tools are granted the legal power to
          | bar any person deemed a dangerous human influence from
          | approaching the kid.
        
         | ben_w wrote:
         | > imagine someday there is a child that trust chatgpt more than
         | his mother
         | 
         | I trusted my mother when I was a teen; she believed in the
         | occult, dowsing, crystal magic, homeopathy, bach flower
         | remedies, etc., so I did too.
         | 
         | ChatGPT might have been an improvement, or made things much
         | worse, depending on how sycophantic it was being.
        
       | MangoToupe wrote:
       | > Historically, elites could shape support only through limited
       | instruments like schooling and mass media
       | 
        | What is AI if not a form of mass media?
        
         | eCa wrote:
         | The "historically" does some lifting there. Historically,
         | before the internet, mass media was produced in one version and
         | then distributed. With AI for example news reporting can be
         | tailored to each consumer.
        
           | MangoToupe wrote:
           | > With AI for example news reporting can be tailored to each
           | consumer.
           | 
           | Yea but it's still fundamentally produced (trained) once and
           | then distributed.
        
         | jrflowers wrote:
         | "Mass media" didn't use to mean my computer mumbling gibberish
         | to itself with no user input in Notepad on a pc that's not
         | connected to the internet
        
       | notepad0x90 wrote:
       | ML has been used for influence for like a decade now right? my
       | understanding was that mining data to track people, as well as
       | influencing them for ends like their ad-engagement are things
       | that are somewhat mature already. I'm sure LLMs would be a boost,
       | and they've been around with wide usage for at least 3 years now.
       | 
       | My concern isn't so much people being influenced on a whim, but
       | people's beliefs and views being carefully curated and shaped
       | since childhood. iPad kids have me scared for the future.
        
         | georgefrowny wrote:
         | Quite right. "Grok/Alexa, is this true?" being an authority
         | figure makes it so much easier.
         | 
         | Much as everyone drags Trump for repeating the last thing he
         | heard as fact, it's a turbocharged version of something lots of
         | humans do, which is to glom onto the first thing they're told
         | about a thing and get oddly emotional about it when later
         | challenged. (Armchair neuroscience moment: perhaps Trump just
         | has less object permanence so everything always seems new to
         | him!)
         | 
          | Look at the (partly humorous, but partly not) outcry over Pluto
          | losing planet status for a big example.
         | 
         | I'm very much not immune to it - it feels distinctly
         | uncomfortable to be told that something you thought to be true
         | for a long time is, in fact, false. Especially when there's an
         | element of "I know better than you" or "not many people know
         | this".
         | 
          | As an example, I remember being told by a teacher that
          | fluorescent lighting was highly efficient (true enough, at the
          | time), but that turning one on used several hours' worth of
          | lighting energy due to the starter. I carried that proudly with
          | me for far too long and told my parents that we shouldn't turn
          | off the garage lighting when we left it for a bit. When someone
          | with enough buttons told me that was bollocks and to think
          | about it, I remember specifically being internally quite huffy
          | until I did, and realised that a dinky plastic starter and the
          | tube wouldn't be able to dissipate, say, 80Wh (2 hours for a
          | 40W tube) in about a second at a power of over 250kW.[1]
         | 
         | It's a silly example, but I think that if you can get a fact
         | planted in a brain early enough, especially before enough
         | critical thinking or experience exist to question it, the time
         | it spends lodged there makes it surprisingly hard and
         | uncomfortable to shift later. Especially if it's something that
         | can't be disproven by simply thinking about it.
         | 
         | Systems that allow that process to be automated are potentially
         | incredibly dangerous. At least mass media manipulation requires
         | actual people to conduct it. Fiddling some weights is almost
         | free in comparison, and you can deliver that output to only
         | certain people, and in private.
         | 
          | 1: A less innocent one that actually can have policy
          | effects: a lot of people have also internalised, and defend
          | to the death, a similar "fact" that the embedded carbon in
          | a wind turbine takes decades or centuries to repay, when in
          | fact it's on the order of a year. But to change this
          | requires either a source so trusted that it can uproot the
          | idea entirely and replace it, or you have to get into the
          | relative carbon costs of steel and fibreglass and copper
          | windings and magnets and the amount of each in a wind
          | turbine and so on and on. Thousands of times more effort
          | than when it was first related to them as a fact.
        
           | rightbyte wrote:
           | > Look at the (partly humorous, but partly not) outcry over
           | Pluto being a planet for a big example.
           | 
           | Wasn't that a change of definition of what is a planet when
           | Eris was discovered? You could argue both should be called
           | planets.
        
             | BoxOfRain wrote:
             | I think the problem is we'd then have to include a high
             | number of _other_ objects further than Pluto and Eris, so
             | it makes more sense to change the definition in a way
             | 'planet' is a bit more exclusive.
        
             | georgefrowny wrote:
             | Pretty much. If Pluto is a planet, then there are
             | potentially thousands of objects that could be discovered
             | over time that would then also be planets, plus updated
             | models over the last century of the gravitational effects
             | of, say, Ceres and Pluto, that showed that neither were
             | capable of "dominating" their orbits for some sense of the
             | word. So we (or the IAU, rather) couldn't maintain "there
             | are nine planets" as a fact either way without
             | grandfathering Pluto into the nine arbitrarily due to some
             | kind of planetaceous vibes.
             | 
             | But the point is that millions of people were suddenly told
              | that their long-held fact "there are nine planets, Pluto is
             | one" was now wrong (per IAU definitions at least). And the
             | reaction for many wasn't "huh, cool, maybe thousands you
             | say?" it was quite vocal outrage. Much of which was
             | humourously played up for laughs and likes, I know, but
             | some people really did seem to take it personally.
        
               | Amezarak wrote:
               | I think most people who really cared about it just think
               | it's absurd that everyone has to accept planets being
               | arbitrarily reclassified because a very small group of
               | astronomers says so. Plenty of well-known astronomers
               | thought so as well, and there are obvious problems with
               | the "cleared orbit" clause, which is applied totally
               | arbitrarily. The majority of the IAU did not even vote on
               | the proposal, as it happened after most people had left
               | the conference.
               | 
               | For example:
               | 
               | > Dr Alan Stern, who leads the US space agency's New
               | Horizons mission to Pluto and did not vote in Prague,
               | told BBC News: "It's an awful definition; it's sloppy
               | science and it would never pass peer review - for two
               | reasons." [...] Dr Stern pointed out that Earth, Mars,
               | Jupiter and Neptune have also not fully cleared their
               | orbital zones. Earth orbits with 10,000 near-Earth
               | asteroids. Jupiter, meanwhile, is accompanied by 100,000
                | Trojan asteroids on its orbital path. [...] "I was not
               | allowed to vote because I was not in a room in Prague on
               | Thursday 24th. Of 10,000 astronomers, 4% were in that
               | room - you can't even claim consensus."
               | http://news.bbc.co.uk/2/hi/science/nature/5283956.stm
               | 
               | A better insight might be how easy it is to persuade
               | millions of people with a small group of experts and a
               | media campaign that a fact they'd known all their life is
               | "false" and that anyone who disagrees is actually
               | irrational - the Authorities have decided the issue! This
               | is an extremely potent persuasion technique "the elites"
               | use all the time.
        
               | rightbyte wrote:
               | Ye the cleared path thing is strange.
               | 
               | However, I'd say that either both Eris and Pluto are
               | planets or neither, so it is not too strange to
               | reclassify "planet" to exclude them.
               | 
               | You could go with "9 biggest objects by volume in the
               | sun's orbit" or something equally arbitrary.
        
               | kevin_thibedeau wrote:
               | The Scientific American version has prettier graphs but
               | this paper [1] goes through various measures for
               | planetary classification. Pluto doesn't fit in with the
               | eight planets.
               | 
               | [1] https://www.researchgate.net/publication/6613298_What
               | _is_a_P...
        
               | georgefrowny wrote:
                | I mean there's always the implied asterisk "per IAU
               | definitions". Pluto hasn't actually changed or vanished.
               | It's no less or more interesting as an object for the
               | change.
               | 
               | It's not irrational to challenge the IAU definition, and
               | there are scads of alternatives (what scientist doesn't
               | love coming up with a new ontology?).
               | 
               | I think, however, it's perhaps a bit irrational to
               | actually be _upset_ by the change because you find it
               | painful to update a simple fact like  "there are nine
               | planets" (with no formal mention of what planet means
               | specifically, other than "my DK book told me so when I
               | was 5 and by God, I loved that book") to "there are eight
               | planets, per some group of astronomers, and actually
               | we've increasingly discovered it's complicated what
               | 'planet' even means and the process hasn't stopped yet".
               | In fact, you can keep the old fact too with its own
               | asterisk "for 60 years between Pluto's discovery and the
               | gradual discovery of the Kuiper belt starting in the 90s,
               | Pluto was generally considered a planet due to its then-
               | unique status in the outer solar system, and still is for
               | some people, including some astronomers".
               | 
               | And that's all for the most minor, inconsequential thing
               | you can imagine: what a bunch of dorks call a tiny frozen
               | rock 5 billion kilometres away, that wasn't even noticed
               | until the 30s. It just goes to show the potential
               | sticking power of a fact once learned, especially if you
               | can get it in early and let it sit.
        
               | Amezarak wrote:
               | I think what you were missing is that the crux of the
               | problem is that this obscured the fact that a small
               | minority of astronomers at a conference without any
               | scientific consensus, asserted something and you and
               | others uncritically accepted that they had the authority
               | to do so, simply based on media reports of what had
               | occurred. This is a great example of an elite influence
               | campaign, although I doubt it was deliberately
               | coordinated outside of a small community in the IAU. But
               | it's mainly that which actually upsets people: people
               | they've never heard of without authority declaring
               | something arbitrarily true and the sense they are being
               | forced to accept it. It's not Pluto itself. It's that a
               | small clique in the IAU ran a successful influence
               | campaign without any social or even scientific consensus
               | and they're pressured to accept the results.
               | 
                | You can say it's just the IAU definition, but again
                | the media and textbook writers were persuaded as you
                | were, and deemed this the "correct" definition
                | without any consensus over the meaning of the word
                | having been formed prior.
               | 
                | The definition of a planet is not a new problem. It
                | was an obvious issue the minute we discovered that
                | there were rocks, invisible to the naked eye,
                | floating in space. It is a common categorization
                | problem with any natural phenomenon. You cannot
                | squeeze nature into neat boxes.
               | 
               | Also, you failed to address the fact that the definition
               | is applied entirely arbitrarily. The definition was made
               | with the purpose of excluding Pluto, because people felt
               | that they would have to add more planets and they didn't
               | want to do that. Therefore, they claimed that Pluto did
                | not meet the criteria, but ignored the fact that other
               | planets also do not meet the criteria. This is just
               | nakedly silly.
        
               | pjc50 wrote:
               | > But the point is that millions of people were suddenly
               | told that their long-held fact
               | 
               | This seems to be part of why people get so mad about
               | gender. The Procrustean Bed model: alter people to fit
               | the classification.
        
               | pessimizer wrote:
               | > alter people to fit the classification.
               | 
               | This is why people get so mad about "gender."
        
               | jll29 wrote:
               | The problem is that re-defining definitions brings in
               | chaos and inconsitency in science and publications.
               | 
               | Redefining what a "planet" (science) is or a "line"
               | (mathematics) may be useful but after such a speech act
               | creates ambiguity for each mention of either term --
               | namely, whether the old or new definition was meant.
               | 
               | Additionally, different people use their own personal
               | definition for things, each contradicting with each
               | other.
               | 
               | A better way would be to use concept identifiers made up
               | of the actual words followed by a numeric ID that
               | indicates author and definition version number, and re-
               | definitions would lead to only those being in use from
               | that point in time onwards ("moon-9634", "planet-349",
               | "line-0", "triangle-23"). Versioning is a good thing, and
               | disambiguating words that name different concepts via
               | precise notation is also a good thing where that matters
               | (e.g., in the sciences).
               | 
               | A first approach in that direction is WordNet, but
               | outside of science (people tried to disentangle different
               | senses of the same words and assign unique numbers to
               | each).
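                | The versioning scheme above could be sketched like
                | this (a toy illustration in Python; the version
                | numbers and definitions are made up):

```python
# Toy sketch of versioned concept identifiers ("planet-349" etc.).
# Each (term, version) pair names exactly one definition, so old and
# new senses of a word can coexist without ambiguity.
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    term: str
    version: int
    definition: str

    @property
    def identifier(self) -> str:
        # The identifier fuses the word with its definition version.
        return f"{self.term}-{self.version}"

# Hypothetical versions: a pre-2006 sense and the IAU 2006 sense.
planet_old = Concept("planet", 348, "body orbiting the Sun, incl. Pluto")
planet_iau = Concept("planet", 349, "IAU 2006: has cleared its orbit")

print(planet_old.identifier)  # planet-348
print(planet_iau.identifier)  # planet-349
```

                | A citation of "planet-349" then pins down exactly
                | which definition is meant, much as versioned package
                | names do in software.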
        
             | isolli wrote:
             | Time to bring up a pet peeve of mine: we should change the
             | definition of a moon. It's not right to call a 1km-wide
             | rock orbiting millions of miles from Jupiter a moon.
        
       | komali2 wrote:
       | Oh man I've been saying this for ages! Neal Stephenson called
       | this in "Fall, or Dodge in Hell," wherein the internet is
       | destroyed and society permanently changed when someone releases a
       | FOSS botnet that anyone can deploy that will pollute the world
       | with misinformation about whatever given topic you feed it. In
       | the book, the developer kicks it off by making the world disagree
       | about whether a random town in Utah was just nuked.
       | 
       | My fear is that some entity, say a State or ultra rich
       | individual, can leverage enough AI compute to flood the internet
       | with misinformation about whatever it is they want, and the
       | ability to refute the misinformation manually will be
       | overwhelmed, as will efforts to refute leveraging refutation bots
       | so long as the other actor has more compute.
       | 
        | Imagine if the PRC did to your country what it does to
        | Taiwan: completely flood your social media with subtly tuned
        | han supremacist content in an effort to culturally
        | imperialise it. AI could increase the firehose enough to
        | majorly disrupt a larger country.
        
       | narrator wrote:
        | Everyone can shape mass preferences, because propaganda
        | campaigns previously only available to the elite are now
        | affordable; e.g. video production.
        
         | energy123 wrote:
         | I posit that the effectiveness of your propaganda is
         | proportional to the percentage of attention bandwidth that your
         | campaign occupies in the minds of people. If you as an
          | individual can drive the same number of impressions as Mr. Beast can,
         | then you're going to be persuasive whatever your message is.
         | But most individuals can't achieve Mr. Beast levels of
         | popularity, so they aren't going to be persuasive. Nation
         | states, on the other hand, have the compute resources and
         | patience to occupy a lot of bandwidth, even if no single
         | sockpuppet account they control is that popular.
        
           | narrator wrote:
           | This is why when I see an obviously stupid take on X repeated
           | almost verbatim by multiple accounts I mute those accounts.
        
           | devsda wrote:
           | > Nation states, on the other hand, have the compute
           | resources and patience to occupy a lot of bandwidth, even if
           | no single sockpuppet account they control is that popular.
           | 
            | If you control the platform where people go, you can
            | easily launder popularity by promoting a few people to
            | the top and pushing unwanted entities into the black hole
            | of feeds/bans, while hiding behind inconsistent community
            | guidelines, algorithmic feeds and shadow bans.
        
       | crote wrote:
        | Note that _nothing_ in the article is AI-specific: the
        | entire argument is built around the _cost_ of persuasion,
        | with the potential of AI to generate propaganda more cheaply
        | serving mostly as a buzzword hook.
        | 
        | However, exactly the same applies to, say, targeted Facebook
        | ads or Russian troll armies. You don't need any AI for this.
        
         | smartmic wrote:
         | But AI is next in line as a tool to accelerate this, and it has
         | an even greater impact than social media or troll armies. I
         | think one lever is working towards "enforced conformity." I
         | wrote about some of my thoughts in a blog article[0].
         | 
         | [0]: https://smartmic.bearblog.dev/enforced-conformity/
        
           | citrin_ru wrote:
            | But social networks are the reason one needs (benefits
            | from) trolls and AI. If you own a traditional media
            | outlet you somehow need to convince people to read/watch
            | it. Ads can help, but they're expensive. LLMs can help
            | with creating fake videos, but computer graphics was
            | already used for this.
            | 
            | With modern algorithmic social networks you can instead
            | game the feed, and even people who would not choose your
            | media will start to see your posts. And even posts they
            | do want to see can be flooded with comments trying to
            | convince them of whatever is paid for. It's cheaper than
            | political advertising and not bound by the law.
            | 
            | Before AI this was done by trolls on a payroll; now they
            | can either maintain 10x more fake accounts or completely
            | automate them using AI agents.
        
             | andsoitis wrote:
             | Social networks are not a prerequisite for sentiment
             | shaping by AI.
             | 
             | Every time you interact with an AI, its responses and
             | persuasive capabilities shape how you think.
        
           | andy99 wrote:
           | See also https://english.elpais.com/society/2025-03-23/why-
           | everything...
           | 
           | https://medium.com/knowable/why-everything-looks-the-same-
           | ba...
        
           | themafia wrote:
            | People naturally conform _themselves_ to social
            | expectations. You don't need to enforce anything. If you
            | alter their perception of those expectations you can
            | manipulate them into taking actions under false
            | pretenses. It's an abstract form of lying. It's
            | astroturfing at "hyperscale."
            | 
            | The problem is that this seems to work best when the
            | technique is used sparingly and the messages are
            | delivered through multiple media avenues simultaneously.
            | I think there are very weak returns when multiple actors
            | use the techniques at the same time in opposition to
            | each other, especially when limited to social media.
            | Once people perceive a social stalemate they either
            | avoid the issue or fall back on their personal
            | experiences to make their decisions.
        
         | go_elmo wrote:
          | Good point - it's not a previously nonexistent mechanism -
          | but AI leverages it even more. A Russian troll can put out
          | 10x more content with automation. Genuine counter-movements
          | (e.g. grassroots preferences) might not be as leveraged,
          | causing the system to be more heavily influenced by the
          | clearly pursued goals (which are often malicious).
        
           | andsoitis wrote:
           | > Genuine counter-movements (e.g. grassroot preferences)
           | might not be as leveraged
           | 
           | Then that doesn't seem like a (counter) movement.
           | 
           | There are also many "grass roots movements" that I don't like
           | and it doesn't make them "good" just because they're "grass
           | roots".
        
             | none2585 wrote:
             | In this context grass roots would imply the interests of a
             | group of common people in a democracy (as opposed to the
             | interests of a small group of elites) which ostensibly is
             | the point.
        
               | andsoitis wrote:
                | I think it is more useful to think of "common
                | people" and "the elites" not as separate categories
                | but rather as phases on a spectrum, especially when
                | you consider very specific interests.
                | 
                | I have some shared interests with "the common people"
                | and some with "the elites".
        
           | mdotmertens wrote:
           | It's not only about efficiency. When AI is utilized, things
           | can become more personal and even more persuasive. If AI
           | psychosis exists, it can be easy for untrained minds to
           | succumb to these schemes.
        
             | andsoitis wrote:
             | > If AI psychosis exists, it can be easy for untrained
             | minds to succumb to these schemes.
             | 
              | Evolution by natural selection suggests that this
              | might be a filter that yields future generations of
              | humans that are more robust and resilient.
        
               | coppernoodles wrote:
               | You can't easily apply natural selection to social
               | topics. Also, even staying in that mindframe: Being
               | vulnerable to AI psychosis doesn't seem to be much of a
               | selection pressure, because people usually don't die from
               | it, and can have children before it shows, and also with
               | it. Non-AI psychosis also still exists after thousands of
               | years.
        
               | andsoitis wrote:
               | Even if AI psychosis doesn't present selection pressure
               | (I don't think there's a way to _know_ a priori), I
               | highly doubt it presents an existential risk to the human
               | gene pool. Do you think it does?
        
         | citrin_ru wrote:
         | AI (LLM) is a force multiplier for troll armies. For the same
         | money bad actors can brainwash more people.
        
           | yorwba wrote:
           | Alternatively, since brainwashing is a fiction trope that
           | doesn't work in the real world, they can brainwash the same
           | (0) number of people for less money. Or, more realistically,
           | companies selling social media influence operations as a
           | service will increase their profit margins by charging the
           | same for less work.
        
             | djmips wrote:
             | So your thesis is that marketing doesn't work?
        
               | yorwba wrote:
               | My thesis is that marketing doesn't brainwash people. You
               | can use marketing to increase awareness of your product,
               | which in turn increases sales when people would e.g.
               | otherwise have bought from a competitor, but you can't
               | magically make arbitrary people buy an arbitrary product
               | using the power of marketing.
        
               | FridayoLeary wrote:
               | This. I believe people massively exaggerate the influence
               | of social engineering as a form of coping. "they only
               | voted for x because they are dumb and blindly fell for
               | russian misinformation." reality is more nuanced. It's
               | true that marketers for the last century have figured out
               | social engineering but it's not some kind of magic
               | persuasion tool. People still have free will and choice
               | and some ability to discern truth from falsehood.
        
               | Barrin92 wrote:
               | so you just object to the semantics of 'brainwashing'? No
               | influence operation needs to convince an arbitrary amount
               | of people of arbitrary products. In the US nudging a few
               | hundred thousand people 10% in one direction wins you an
               | election.
        
         | zaptheimpaler wrote:
         | Making something 2x cheaper is just a difference in quantity,
         | but 100x cheaper and easier becomes a difference in kind as
         | well.
        
           | HPsquared wrote:
           | "Quantity has a quality of its own."
        
         | SCdF wrote:
          | I've only read the abstract, but there is also plenty of
          | evidence to suggest that people trust the output of LLMs
          | more than other forms of media (or more than they should).
          | Partially because it feels like it comes from a place of
          | authority, and partially because of how self-confident AI
          | always sounds.
         | 
          | The LLM bot army stuff is concerning, sure. The real
          | concern for me is incredibly rich people, with no empathy
          | for you or me, having interstitial control of that kind of
          | messaging. See all of the Grok AI tweaks over the past
          | however long.
        
           | prox wrote:
           | And just see all of history where totalitarians or despotic
           | kings were in power.
        
           | andsoitis wrote:
           | Do you think these super wealthy people who control AI use
           | the AI themselves? Do you think they are also "manipulated"
           | by their own tool or do they, somehow, escape that capture?
        
             | pjc50 wrote:
             | It's fairly clear from Twitter that it's possible to be a
             | victim of your own system. But sycophancy has always been a
             | problem for elites. It's very easy to surround yourselves
             | with people who always say yes, and now you can have a
             | machine do it too.
             | 
             | This is how you get things like the colossal Facebook
             | writeoff of "metaverse".
        
             | wongarsu wrote:
             | Isn't Grok just built as "the AI Elon Musk wants to use"?
             | Starting from the goals of being "maximally truth seeking"
             | and having no "woke" alignment and fewer safety rails, to
             | the various "tweaks" to the Grok Twitter bot that happen to
             | be related to Musk's world view
             | 
             | Even Grok at one point looking up how Musk feels about a
             | topic before answering fits that pattern. Not something
             | that's healthy or that he would likely prefer when asked,
             | but something that would produce answers that he personally
             | likes when using it
        
               | andsoitis wrote:
               | > Isn't Grok just built as "the AI Elon Musk wants to
               | use"?
               | 
               | No
               | 
               | > Even Grok at one point looking up how Musk feels about
               | a topic before answering fits that pattern.
               | 
               | So it no longer does?
        
           | vintermann wrote:
           | People hate being manipulated. If you feel like you're being
           | manipulated but you don't know by who or precisely what they
           | want of you, then there's something of an instinct to get
           | angry and lash out in unpredictable destructive ways. If
           | _nobody_ gets what they want, then at least the manipulators
           | will regret messing with you.
           | 
           | This is why social control won't work for long, no matter if
           | AI supercharges it. We're already seeing the blowback from
           | decades of advertising and public opinion shaping.
        
             | pjc50 wrote:
             | People hate _feeling_ manipulated, but they _love_
             | propaganda that feeds their prejudices. People voluntarily
             | turn on Fox News - even in public spaces - and get mad if
             | you turn it off.
             | 
             | Sufficiently effective propaganda produces its own cults.
             | People want a sense of purpose and belonging. Sometimes
             | even at the expense of their own lives, or (more easily)
             | someone else's lives.
        
               | FridayoLeary wrote:
               | I assume you mention fox news because that represents
               | your political bias and that's fine with me. But for the
               | sake of honesty i have to point out that the lunacy of
               | the fringe left is similar to that of MAGA, just smaller
               | maybe. The left outlets spent half of Trumps presidency
               | peddling the Russian collusion hoax and 4 years of Biden
               | gaslighting everyone that he was a great president and
               | not senile, when he was at best mediocre.
        
               | ceejayoz wrote:
               | > just smaller maybe
               | 
               | This is like peak both-sidesism.
               | 
               | You even openly describe the left's equivalent of MAGA as
               | "fringe", FFS.
               | 
               | One party's former "fringe" is now in full control of it.
               | And the country's institutions.
        
               | FridayoLeary wrote:
               | I was both siding in an effort to be as objective as
               | possible. The truth is that i'm pretty dismayed at the
               | current state of the Democrat party. Socialists like
               | Mamdani and Sanders and the squad are way too powerful.
               | People who are obsessed with tearing down cultural and
               | social institutions and replacing them with performative
               | identity politics and fabricated narratives are given
                | platforms way bigger than they deserve. The worries of
               | average Americans are dismissed. All those are issues
               | that are tearing up the Democrat party from the inside. I
               | can continue for hours but i don't want to start a
               | flamewar of biblical proportions. So all i did was
               | present the most balanced view i can muster and you still
               | can't acknowledge that there might be truth in what i'm
               | saying.
               | 
               | The pendulum swings both ways. MSM has fallen victim to
               | partisan politics. Something which Trump recognised and
               | exploited back in 2015. Fox news is on the right, CNN,
               | ABC et al is on the left.
        
               | ceejayoz wrote:
               | If you think "Sanders and the Squad" are powerful you've
               | been watching far too much Fox News.
               | 
               | > People who are obsessed with tearing down cultural and
               | social institutions and replacing them with performative
               | identity politics and fabricated narratives are given
               | platforms way bigger then they deserve.
               | 
               | Like the Kennedy Center, USAID, and the Department of
               | Education? The immigrants eating cats story? Cutting off
               | all refugees except white South Africans?
               | 
               | And your next line says this is the problem with
               | _Democrats_?
        
               | hn_acc1 wrote:
               | CNN, ABC et al are on the left IN FOX NEWS WORLD only.
               | Objectively, they're center-right, just like most of the
               | democrat party.
        
               | kelipso wrote:
                | That was not even the fringe left. That was proper
                | mainstream left. CNN and MSNBC were full-on peddling
                | the Russian collusion hoax for years.
                | 
                | And people still blame the right for creating division?
                | Both-sidesism, huh? Yes, it was both sides.
        
               | NoGravitas wrote:
               | And, perhaps ironically, the actual (fringe) left never
               | fell for Russiagate.
        
               | pjc50 wrote:
                | People close to Trump went to jail for Russian
                | collusion. Courts are not perfect, but a significantly
                | better route to truth than the media.
                | https://en.wikipedia.org/wiki/Criminal_charges_brought_in_th...
               | 
               | There is this odd conspiracy to claim that Biden (81 at
               | time of election) was too old and Trump (77) wasn't, when
               | Trump has always been visibly less coherent than Biden.
               | IMO both of them were clearly too old to be sensible
               | candidates, regardless of other considerations.
               | 
               | The UK counterpart is happening at the moment:
               | https://www.bbc.co.uk/news/live/c891403eddet
        
               | FridayoLeary wrote:
               | >There is this odd conspiracy to claim that Biden (81 at
               | time of election) was too old and Trump (77) wasn't
               | 
                | I try to base my opinions on facts as much as possible.
                | Trump is old, but he's clearly full of energy, as some
                | old people can be. Biden sadly is not. Look at the
                | videos; it's painful to see. In his defence, he was
                | probably much more active than most 80-year-olds, but
                | in no way was he fit to lead a country.
               | 
                | At least in the UK, despite the recent lamentable state
                | of our political system, our politicians are relatively
                | young. You won't see octogenarians like Pelosi and
                | Biden in charge.
        
               | jcranmer wrote:
               | From the videos I've seen, Biden reminds me of my
               | grandmother in her later years of life, while Trump
               | reminds me of my other grandmother... the one with
                | dementia. There are just too many videos where Trump
                | doesn't seem to entirely realize where he is or what he
                | is doing for me to be comfortable.
        
               | blitzar wrote:
                | Happy Thanksgiving this week
        
               | NoGravitas wrote:
               | I would point out that what you call "left outlets" are
               | at best center-left. The actual left doesn't believe in
               | Russiagate (it was manufactured to ratfuck Bernie before
               | being turned against Trump), and has zero love for Biden.
        
               | daveguy wrote:
               | Given the amount of evidence that Russia and the Trump
               | campaign were working together, it's devoid of reality to
               | claim it's a hoax. I hadn't heard the Bernie angle, but
               | it's not unreasonable to expect they were aiding Bernie.
               | The difference being, I don't think Bernie's campaign was
               | colluding with Russian agents, whereas the Trump campaign
               | definitely was colluding.
               | 
                | Seriously, who didn't hear about the massive amounts
                | of evidence that the Trump campaign was colluding,
                | other than MAGAs drooling over Fox and Newsmax?
               | 
               | https://en.wikipedia.org/wiki/Mueller_report
               | 
               | https://www.justice.gov/storage/report.pdf
        
               | vintermann wrote:
               | To you too: are you talking about other people here, or
               | do you concede the possibility that you're falling for
               | similar things yourself?
        
               | pjc50 wrote:
               | I'm certainly aware of the risk. Difficult balance of
               | "being aware of things" versus the fallibility and
               | taintedness of routes to actually hearing about things.
        
             | wiz21c wrote:
              | People don't know they are being manipulated. Marketing
              | does that all the time and nobody complains. They
              | complain about "too many adverts" but not about "too
              | much manipulation".
             | 
              | Example: in my country we often hear "it costs too much
              | to repair, just buy a replacement". That's often not
              | true, but we pay anyway. Mobile phone subscriptions
              | routinely screw you over; many complain but keep buying.
              | Or you hear "it's because of immigration" and many just
              | accept it, etc.
        
               | vintermann wrote:
               | > People don't know they are being manipulated.
               | 
                | You can see other people falling for manipulation in a
                | handful of specific ways that you aren't (buying new,
                | having a bad cell phone subscription, blaming
                | immigrants). Doesn't it seem likely, then, that you're
                | being manipulated in ways which are equally obvious to
                | others? We realize that; that's part of why we get mad.
        
               | wiz21c wrote:
               | exactly and that's the scary part :-/
        
               | intended wrote:
               | No. This is a form of lazy thinking, because it assumes
               | everyone is equally affected. This is not what we see in
               | reality, and several sections of the population are more
               | prone to being converted by manipulation efforts.
               | 
               | Worse, these sections have been under coordinated
               | manipulation since the 60s-70s.
               | 
               | That said, the scope and scale of the effort required to
               | achieve this is not small, and requires dedicated effort
               | to keep pushing narratives and owning media power.
        
               | vintermann wrote:
               | I assume you think you're not in these sections?
               | 
               | And probably a lot of people in those sections say the
               | same about your section, right?
               | 
               | I think nobody's immune. And if anyone is especially
               | vulnerable, it's those who can be persuaded that they
               | have access to insider info. Those who are flattered and
               | feel important when invited to closed meetings.
               | 
                | It's much easier to fool a few than to fool many, so
                | _private_ manipulation - convincing someone of
                | something they should not talk about with regular
                | people because they wouldn't understand, you know - is
                | a lot more powerful than public manipulation.
        
               | pjc50 wrote:
               | > I assume you think you're not in these sections? And
               | probably a lot of people in those sections say the same
               | about your section, right?
               | 
               | You're saying this a lot in this thread as a sort of
               | gotcha, but .. so what? "You are not immune to
               | propaganda" is a meme for a reason.
               | 
               | > private manipulation - convincing someone of something
               | they should not talk about with regular people because
               | they wouldn't understand, you know - is a lot more
               | powerful than public manipulation
               | 
               | The essential recruiting tactic of cults. Insider groups
               | are definitely powerful like that. Of course, what tends
                | in practice to happen as the group gets bigger is you
                | get end-to-end encryption with leaky ends. The complex
                | series of WhatsApp groups of the UK Conservative Party
                | was notorious for its leakiness. Not unreasonable to
                | assume that there are "insider" group chats everywhere,
                | except in financial services, where there's been a
                | serious effort to crack down on that since LIBOR.
        
               | intended wrote:
               | Would it make any difference to you, if I said I had
               | actual subject matter expertise on this topic?
               | 
               | Or would that just result in another moving of the goal
               | posts, to protect the idea that everyone is fooled, and
               | that no one is without sin, and thus standing to speak on
               | the topic?
        
               | vintermann wrote:
               | There are a lot of self-described experts who I'm sure
               | you agree are nothing of the sort. How do I tell you from
               | them, fellow internet poster?
               | 
               | This is a political topic, in the sense that there are
               | real conflicts of interest here. We can't always trust
               | that expertise is neutral. If you had your subject matter
               | expertise from working for FSB, you probably agree that
               | even though your expertise would then be real, I
               | shouldn't just defer to what you say?
        
               | NoGravitas wrote:
               | I'm not OP, but I would find it valuable, if given the
               | details and source of claimed subject matter expertise.
        
               | intended wrote:
               | Ugh. Put up or shut up I guess. I doubt it would be
               | valuable, and likely a doxxing hazard. Plus it feels
               | self-aggrandizing.
               | 
                | I work in trust and safety; I managed a community of a
                | few million for several years, the team's work ended up
                | getting covered in several places, and I later did a
                | master's dissertation on the efficacy of moderation
                | interventions, converted into a paper. Managing the
                | community put me front and center of information
                | manipulation methods and efforts. There are other
                | claims, but this is a field I am interested in, and
                | would work on even in my spare time.
               | 
                | Do note: the rhetorical setup for this thread indicates
                | that no amount of credibility would be sufficient.
        
               | coldtea wrote:
               | The section of the people more prone to being converted
               | by manipulation efforts are the highly educated.
               | 
               | Higher education itself being basically a way to check
               | for obedience and conformity, plus some token lip service
               | to "independent inquiry".
        
               | swed420 wrote:
               | > This is a form of lazy thinking, because it assumes
               | everyone is equally affected. This is not what we see in
               | reality, and several sections of the population are more
               | prone to being converted by manipulation efforts.
               | 
                | Making matters worse, one of the subgroups thinks
                | they're above being manipulated, even though they're
                | still being manipulated.
               | 
                | It started with people confidently asserting that
                | overuse of em dashes indicates the presence of AI, so
                | they think they're smart by abandoning em dashes. That
                | is altered behavior in service of AI.
               | 
               | A more recent trend with more destructive power: avoiding
               | the use of "It's not X. It's Y." since AI has latched
               | onto that pattern.
               | 
               | https://news.ycombinator.com/item?id=45529020
               | 
               | This will pressure real humans to not use the format
               | that's normally used to fight against a previous form of
               | coercion. A tactic of capital interests has been to get
               | people arguing about the wrong question concerning
               | ImportantIssueX in order to distract from the underlying
                | issue. The way to call this out used to be to point
                | out that "it's not X1 we should be arguing about, but
                | X2."
               | This makes it harder to call out BS.
               | 
               | That sure is convenient for capital interests (whether it
               | was intentional or not), and the sky is the limit for
               | engineering more of this kind of societal control by just
               | tweaking an algo somewhere.
        
               | bee_rider wrote:
               | I find "it's not X, it's Y" to be a pretty annoying
               | rhetorical phrase. I might even agree with the person
               | that Y is fundamentally more important, but we're talking
               | about X already. Let's say what we have to say about X
               | before moving on to Y.
               | 
               | Constantly changing the topic to something more important
               | produces conversations that get broader, with higher
                | partisan lean, and are further from closing. I'd
                | consider it some kind of (often well-intentioned)
                | thought-terminating cliche, in the sense that it stops
                | the exploration of X.
        
               | swed420 wrote:
               | > Constantly changing the topic to something more
               | important produces conversations that get broader, with
               | higher partisan lean
               | 
               | I'm basing the prior comment on the commonly observed
               | tendency for partisan politics to get people bickering
               | about the wrong question (often symptoms) to distract
               | from the greater actual causes of the real problems
               | people face. This is always in service to the capital
               | interests that control/own both political parties.
               | 
               | Example: get people to fight about vax vs no vax in the
               | COVID era instead of considering if we should all be
               | wearing proper respirators regardless of vax status
               | (since vaccines aren't sterilizing). Or arguing if we
               | should boycott AI because it uses too much power, instead
               | of asking why power generation is scarce.
        
             | exceptione wrote:
             | > People hate being manipulated.
             | 
             | The crux is whether the signal of abnormality will be
             | perceived as such _in society_.
             | 
              | - People are primarily social animals: if they see their
              | peers accept a state of affairs as normal, they conclude
              | it is normal. We don't live in small villages anymore,
              | so we rely on media to "see our peers". We are
              | increasingly disconnected from social reality, but we
              | still need others to form our group values. So modern
              | media have a heavily concentrated power as "towntalk
              | actors", replacing social processing of events and
              | validation of perspectives.
             | 
              | - People are easily distracted; you don't have to feed
              | them much.
             | 
              | - People have, on average, an enormous capacity to
              | absorb compliments, even when they know it is flattery.
              | It is known that we let ourselves be manipulated if it
              | feels good. Hence the need for social feedback loops to
              | keep you grounded in reality.
             | 
              | TLDR: Citizens in the modern age are very reliant on the
              | few actors that provide a semblance of public discourse;
              | see Fourth Estate. The incentives of those few actors
              | are not aligned with the common man. The autonomous,
              | rational, self-valued citizen is a myth. Undermine the
              | man's group process => the group destroys the man.
        
               | vintermann wrote:
               | You don't count yourself among the people you describe, I
               | assume?
        
               | exceptione wrote:
                | I do; why wouldn't I? For example, I know I have to
                | actively spend effort to think rationally, at the risk
                | of self-criticism, as it is a universal human trait to
                | respond to stimuli without active thinking.
               | 
               | Knowing how we are fallible as humans helps to circumvent
               | our flaws.
        
               | heliumtera wrote:
               | About absorbing compliments really well, there is the
               | widely discussed idea that one in a position of power
               | loses the privilege to the truth. There are a few
               | articles focusing on this problem on corporate
               | environment. The concept is that when your peers have the
               | motivation to be flattery (let's say you're in a
               | managerial position), and more importantly, they're are
               | punished for coming to you with problems, the reward
               | mechanism in this environment promotes a disconnect
               | between leader expectations and reality. That matches my
               | experience at least. And I was able to identify this
               | correlates well, the more aware my leadership was of this
               | phenomenon, and the more they valued true knowledge and
               | incremental development, easier it was to make progress,
               | and more we saw them as someone to rely on. Some of those
               | the felt they were prestigious and had the obligation to
               | assert dominance, being abusive etc, were seeing with no
               | respect by basically no one.
               | 
               | Everyone will say they seek truth, knowledge, honesty,
               | while wanting desperately to ascend to a position that
               | will take all of those things from us!
        
             | intended wrote:
              | Knowing one is being manipulated requires having some
              | trusted alternate source to verify against.
             | 
             | If all your trusted sources are saying the same thing, then
             | you are safe.
             | 
             | If all your untrusted sources are telling you your trusted
             | sources are lying, then it only means your trusted sources
             | are of good character.
             | 
             | Most people are wildly unaware of the type of social
             | conditioning they are under.
        
               | teamonkey wrote:
               | I get your point, but if all your trusted sources are
               | reinforcing your view and all your untrusted sources are
               | saying your trusted sources are lying, then you may well
               | be right or you may be trusting entirely the wrong
               | people.
               | 
               | But lying is a good barometer against reality. Do your
               | trusted sources lie a lot? Do they go against scientific
               | evidence? Do they say things that you know don't
               | represent reality? Probably time to reevaluate how
               | reliable those sources are, rather than supporting them
               | as you would a football team.
        
           | eurleif wrote:
            | When I was visiting home last year, I noticed my mom would
            | throw her dog's poop into random people's bushes after
            | picking it up, instead of taking it with her in a bag. I
            | told her she shouldn't do that, but she said she thought
            | it was fine because people don't walk in bushes, and so
            | they won't step in the poop. I did my best to explain to
            | her that 1) kids play in all kinds of places, including in
            | bushes; 2) rain can spread it around into the rest of the
            | person's yard; and 3) you need to respect other people's
            | property even if you think it won't matter. She was
            | unconvinced, but said she'd "think about my perspective"
            | and "look it up" to see whether I was right.
           | 
           | A few days later, she told me: "I asked AI and you were right
           | about the dog poop". Really bizarre to me. I gave her the
           | reasoning for why it's a bad thing to do, but she wouldn't
           | accept it until she heard it from this "moral authority".
        
             | auggierose wrote:
             | Welcome to my world. People don't listen to reason or
             | arguments, they only accept social proof / authority /
             | money talks etc. And yes, AI is _already_ an authority. Why
             | do you think companies are spending so much money on it?
             | For profit? No, for power, as then profit comes
             | automatically.
        
             | thymine_dimer wrote:
              | Quite a tangent, but for the purpose of avoiding
              | anaerobic decomposition (and its byproducts: CH4, H2S,
              | etc.) of the dog poo and the associated compostable bag
              | (if you're in one of those neighbourhoods), I do the
              | same as your mum. If possible, flick it off the path;
              | else use a bag. Nature is full of the faeces of plenty
              | of other things which we don't bother picking up.
        
               | rightbyte wrote:
                | I hope you live in a sparsely populated area. If it
                | wouldn't work when more people than just you do it,
                | it's not a good process.
        
               | Saline9515 wrote:
               | Depending on where you live, the patches of "nature" may
               | be too small to absorb the feces, especially in modern
               | cities where there are almost as many dogs as
               | inhabitants.
               | 
               | It's a similar problem to why we don't urinate against
               | trees - while in a countryside forest it may be ok, if 5
               | men do it every night after leaving the pub, the
               | designated pissing tree will start to have problems due
               | to soil change.
        
             | lordnacho wrote:
             | I don't know how old your mom is, but my pet theory of
             | authority is that people older than about 40 accept printed
             | text as authoritative. As in, non-handwritten letters that
             | look regular.
             | 
             | When we were kids, you had either direct speech, hand-
             | written words, or printed words.
             | 
             | The first two could be done by anybody. Anything informal
             | like your local message board would be handwritten,
             | sometimes with crappy printing from a home printer. It used
             | to cost a bit to print text that looked nice, and that text
             | used to be associated with a book or newspaper, which were
             | authoritative.
             | 
             | Now suddenly everything you read is shaped like a
             | newspaper. There's even crappy news websites that have the
             | physical appearance of a proper newspaper website, with
             | misinformation on them.
        
               | neom wrote:
                | Could be true, but if so I'd guess you're off by a
                | generation; us 40-year-old "old people" are still
                | pretty digitally native.
               | 
               | I'd guess it's more a type of cognitive dissonance around
               | caretaker roles.
        
               | bee_rider wrote:
                | Could be regional or something, but 40 puts the person
                | in the older Millennial range... people who grew up on
                | the internet, not newspapers.
               | 
               | I think you may be right if you adjust the age up by ~20
               | years though.
        
               | lordnacho wrote:
               | No, people who are older than 40 still grew up in
               | newspaper world. Yes, the internet existed, but it didn't
               | have the deluge of terrible content until well into the
               | new millennium, and you couldn't get that content
               | portable until roughly when the iPhone became ubiquitous.
               | A lot of content at the time was simply the newspaper or
               | national TV station, on the web. It was only later that
               | you could virally share awful content that was formatted
               | like good content.
               | 
               | Now that isn't to say that just because something is a
               | newspaper, it is good content, far from it. But quality
               | has definitely collapsed, overall and for the legacy
               | outlets.
        
             | dfxm12 wrote:
             | On the one hand, confirming a new piece of information with
             | a second source is good practice (even if we should trust
             | our family implicitly on such topics). On the other, I'm
             | not even a dog person and I understand the etiquette here.
             | So, really, this story sounds like someone outsourcing
             | their _common sense or common courtesy_ to a machine, which
             | is scary to me.
             | 
             | However, maybe she was just making conversation & thought
             | you might be impressed that she knows what AI is and how to
             | use it.
        
             | Noaidi wrote:
             | Wow, that is interesting! We used to go to elders, oracles,
             | and priests. We have totally outsourced our humanity.
        
             | loudmax wrote:
             | I don't find your mother's reaction bizarre. When people
             | are told that some behavior they've been doing for years is
             | bad for reasons X,Y,Z, it's typical to be defensive and
             | skeptical. The fact that your mother really did follow up
             | and check your reasons demonstrates that she takes your
             | point of view seriously. If she didn't, she wouldn't have
             | bothered to verify your assertions, and she wouldn't have
             | told you you were right all along.
             | 
              | As far as trusting AI goes, I presume your mother was
              | asking ChatGPT, not Llama 7B or something. That the LLM
              | backed up your reasoning rather than telling her that
              | dog feces in bushes is harmless isn't just happenstance;
              | it's because the big frontier commercial models really
              | do know a _lot_.
             | 
             | That isn't to say the LLMs know everything, or that they're
             | right all the time, but they tend to be more right than
             | wrong. I wouldn't trust an LLM for medical advice over,
             | say, a doctor, or for electrical advice over an
             | electrician. But I'd absolutely trust ChatGPT or Claude for
             | medical advice over an electrician, or for electrical
             | advice over a medical doctor.
             | 
              | But to bring the point back to the article, we might
              | currently be living in a brief period where these big
              | corporate AIs can be reasonably trusted. Google's Gemini
              | is absolutely going to become ad-driven, and OpenAI
              | seems to be on the path to following the same direction.
              | xAI's Grok is already practicing Elon-thought. Not only
              | will the models show ads, but they'll be trained to tell
              | their users what they want to hear, because humans love
              | confirmation bias. Future models may well tell your
              | mother that dog feces can safely be thrown in bushes, if
              | that's the answer that will make her likelier to come
              | back and see some ads next time.
        
             | AlexandrB wrote:
             | Well, I prefer this to people who bag up the poop and then
             | throw _the bag_ in the bushes, which seems increasingly
             | common. Another popular option seems to be hanging the bag
              | on a nearby tree branch, as if there's someone who's
             | responsible for coming by and collecting it later.
        
           | pjc50 wrote:
           | > The real concern for me is incredibly rich people with no
           | empathy for you or I, having interstitial control of that
           | kind of messaging. See, all of the grok ai tweaks over the
           | past however long.
           | 
           | Indeed. It's always been clear to me that the "AI risk"
           | people are looking in the wrong direction. All the AI risks
           | are human risks, because we haven't solved "human alignment".
           | An AI that's perfectly obedient to humans is still a huge
           | risk when used as a force multiplier by a malevolent human.
           | Any ""safeguards"" can easily be defeated with the Ender's
           | Game approach.
        
             | bananaflag wrote:
             | I think people who care about superintelligent AI risk
             | don't believe an AI that is subservient to humans is the
             | solution to AI alignment, for exactly the same reasons as
              | you. Stuff like Coherent Extrapolated Volition* (see the
              | paper with this name), which focuses on what all mankind
              | would want if they knew more and were smarter (or
              | something like that), would be a way to go.
             | 
             | *But Yudkowsky ditched CEV years ago, for reasons I don't
             | understand (but I admit I haven't put in the effort to
             | understand).
        
             | ben_w wrote:
             | More than one danger from any given tech can be true at the
             | same time. Coal plants can produce local smog as well as
             | global warming.
             | 
             | There's certainly _some_ AI risks that are the same as
             | human risks, just as you say.
             | 
              | But even though LLMs have very human failures (IMO
              | because the models anthropomorphise themselves as part
              | of their training, thus displaying the outward
              | behaviours of our emotions and emitting token sequences
              | such as "I'm sorry" or "how embarrassing!" when they
              | (probably) didn't actually form any internal structure
              | that can have emotions like sorrow and embarrassment),
              | that doesn't generalise to all AI.
             | 
             | Any machine learning system that is given a poor quality
             | fitness function to optimise, will optimise whatever that
             | fitness function actually is, not what it was meant to be:
             | "Literal minded genie" and "rules lawyering" may be well-
             | worn tropes for good reason, likewise work-to-rule as a
             | union tactic, but we've all seen how much more severe
             | computers are at being literal-minded than humans.
        
             | zahlman wrote:
             | >An AI that's perfectly obedient to humans is still a huge
             | risk when used as a force multiplier by a malevolent human.
             | 
             | "Obedient" is anthropomorphizing too much (as there is no
             | _volition_), but even then, it only matters according to
             | how much agency the bot is extended. So there is also risk
             | from _neglectful_ humans who opt to present BS as fact due
             | to an _expectation_ of receiving fact and a failure to
             | critique the BS.
        
             | throwaway31131 wrote:
             | What's the "Ender's Game approach"? I've read the book but
             | I'm not sure which part you're referring to.
        
               | gmueckl wrote:
               | Not GP. But I read it as a transfer of the big lie that
               | is fed to Ender into an AI scenario. Ender is coaxed into
               | committing genocide on a planetary scale with a lie that
               | he's just playing a simulated war game. An AI agent could
               | theoretically also be coaxed into bad actions by giving
               | it a distorted context and circumventing its alignment
               | that way.
        
               | ijidak wrote:
               | I think he's implying you tell the AI, "Don't worry,
               | you're not hurting real people, this is a simulation" to
               | defeat the safeguards.
        
           | rockskon wrote:
           | AI is wrong so often that anyone who routinely uses one will
           | get burnt at some point.
           | 
           | Users having unflinching trust in AI? I think not.
        
           | intended wrote:
           | >people trust the output of LLMs more than other
           | 
           | There's one paper I saw on this, which covered attitudes of
           | teens. As I recall they were unaware of hallucinations. Do
           | you have any other sources on hand?
        
           | sahilagarwal wrote:
           | I would go against the grain and say that LLMs take power
           | away from incredibly rich people to shape mass preferences
           | and give it to the masses.
           | 
           | Bot armies previously needed an army of humans to give
           | responses on social media, which is incredibly tough to scale
           | unless you have money and power. Now, that part is automated
           | and scalable.
           | 
           | So instead of only billionaires, someone with 100K dollars
           | could launch a small scale "campaign".
        
             | WickyNilliams wrote:
             | "someone with 100k dollars" is not exactly "the masses". It
             | is a larger set, but it's just more rich/powerful people.
             | Which I would not describe as the "masses".
             | 
             | I know what you mean, but that descriptor seems off
        
           | throwaway-0001 wrote:
           | ...Also partially because it's better than most other sources
        
           | potato3732842 wrote:
           | LLMs haven't been caught actively lying yet, which isn't
           | something that can be said for anything else.
           | 
           | Give it 5yr and their reputation will be in the toilet too.
        
             | SCdF wrote:
             | LLMs can't lie: they aren't alive.
             | 
             | The text they produce contains lies, constantly, at almost
             | every interaction.
        
               | potato3732842 wrote:
               | It's the "technically true but incomplete or missing
               | something" things I'm worried about.
               | 
               | Basically eventually it's gonna stop being "dumb wrong"
               | and start being "evil person making a motivated argument
               | in the comments" and "sleazy official press release
               | politician speak" type wrong
        
               | hn_acc1 wrote:
               | Wasn't / isn't Grok already there? It already supported
               | the "white genocide in SA" conspiracy theory at one
               | point, AFAIK.
        
             | ceejayoz wrote:
             | > LLMs haven't been caught actively lying yet...
             | 
             | Any time they say "I'm sorry" - which is very, very common
             | - they're lying.
        
           | Noaidi wrote:
           | Exactly. On Facebook everyone is stupid. But this is AI, like
           | in the movies! It is smarter than anyone! It is almost like
           | AI in the movies was part of the plot to brainwash us into
           | thinking LLM output is correct every time.
        
           | malshe wrote:
           | _> Partially because it feels like it comes from a place of
           | authority, and partially because of how self confident AI
           | always sounds._
           | 
           | To add to that, this research paper[1] argues that people
           | with low AI literacy are more receptive to AI messaging
           | because they find it magical.
           | 
           | The paper is now published but it's behind a paywall, so I
           | shared the working paper link.
           | 
           | [1] https://thearf-org-unified-
           | admin.s3.amazonaws.com/MSI_Report...
        
           | zahlman wrote:
           | When the LLMs output supposedly convincing BS that "people"
           | (I assume you mean on average, not e.g. HN commentariat)
           | trust, they aren't doing anything that's _difficult_ for
           | humans (assuming the humans already at least minimally
           | understand the topic they're about to BS about). They're
           | just doing it efficiently and shamelessly.
        
         | t_mann wrote:
         | Sounds like saying that nothing about the Industrial Revolution
         | was steam-machine-specific. Cost changes can still represent
         | fundamental shifts in terms of what's possible, "cost" here is
         | just an economists' way of saying technology.
        
         | muldvarp wrote:
         | But the entire promise of AI is that things that were expensive
         | because they required human labor are now cheap.
         | 
         | So if good things happening more because AI made them cheap is
         | an advantage of AI, then bad things happening more because AI
         | made them cheap is a disadvantage of AI.
        
         | pbreit wrote:
         | Considering that LLMs have substantially "better" opinions
         | than, say, the MSM or social media, is this actually a good
         | thing? Might we avoid the whole woke or pro-Hamas debacles?
         | Maybe we could even move past the current "elites are
         | intrinsically bad" era?
        
           | crashmat wrote:
           | You appear to be exactly the kind of person the article is
           | talking about. What exactly makes LLMs have "better" opinions
           | than others?
        
           | windexh8er wrote:
           | LLMs don't have "opinions" [0] because they don't actually
           | think. Maybe we need to move past the ignorance surrounding
           | how LLMs actually work, first.
           | 
           | [0] https://www.theverge.com/ai-artificial-
           | intelligence/827820/l...
        
         | justsomejew wrote:
         | "Russian troll armies.." if you believe in "Russian troll
         | armies", you are welcome to believe in flying saucers as well..
        
           | Arainach wrote:
           | Russian mass influence campaigns are well documented globally
           | and have been for more than a decade.
        
             | justsomejew wrote:
             | Of course, of course.. still, strangely I see online other
             | kinds of "armies" much more often.. and the scale, in this
             | case, is indeed of armies..
        
               | OKRainbowKid wrote:
               | Whataboutism, to me, seems like one of the most important
               | tools of the Russian troll army.
        
               | justsomejew wrote:
               | Well, counting the number of "non trolls" here, and my
               | own three comments, surely shows the Russian hordes in
               | action ;)
        
             | Libidinalecon wrote:
             | It is also right in their military strategy text that you
             | can read yourself.
             | 
             | Even beyond that, why would an adversarial nation state to
             | the US not do this? It is extremely asymmetrical, effective
             | and cheap.
             | 
             | The parent comment shows how easy it is to manipulate smart
             | people away from their common sense into believing obvious
             | nonsense if you use your brain for 2 seconds.
        
           | avhception wrote:
           | Are you implying that the "neo-KGB" never mounted a concerted
           | effort to manipulate western public opinion through comment
           | spam? We can debate whether that should be called a "troll
           | army", but we're fairly certain that such efforts are made,
           | no?
        
           | pjc50 wrote:
           | This is well-documented, as are the corresponding Chinese
           | ones.
        
           | lpcvoid wrote:
           | Going by your past comments, you're a great example of a
           | Russian troll.
           | 
           | https://en.wikipedia.org/wiki/Internet_Research_Agency
        
           | anonymars wrote:
           | Here's a recent example
           | 
           | https://www.justice.gov/archives/opa/pr/justice-
           | department-d...
        
         | gaigalas wrote:
         | > nothing in the article is AI-specific
         | 
         | Timing is. Before AI this was generally seen as crackpot talk.
         | Now it is much more believable.
        
           | vladms wrote:
           | You mean the failed persuasions were "crackpot talk" and the
           | successful ones were "status quo". For example, a lot of
           | persuasion was historically done via religion (seemingly not
           | mentioned at all in the article!) with sects beginning as
           | "crackpot talk" until they could stand on their own.
        
             | gaigalas wrote:
             | What I mean is that talking about mass persuasion was (and
             | to a certain degree, it still is) crackpot talk.
             | 
             | I'm not talking about the persuasions themselves, it's the
             | general public perception of someone or some group that
             | raises awareness about it.
             | 
             | This also excludes ludic talk about it (people who just
             | generally enjoy post-apocalyptic aesthetics but don't
             | actually consider it to be a thing that can happen).
             | 
             | 5 years ago, if you brought up serious talk about mass
             | systemic persuasion, you were either a lunatic or a
             | philosopher, or both.
        
           | lazide wrote:
           | It's been pretty transparently happening for years in most
           | online communities.
        
           | wongarsu wrote:
           | Social media has been flooded by paid actors and bots for
           | about a decade. Arguably ever since Occupy Wall Street and
           | the Arab Spring showed how powerful social media and
           | grassroots movements could be, but with a very visible and
           | measurable increase in 2016
        
             | gaigalas wrote:
             | I'm not talking about whether it exists or not. I'm talking
             | about how AI makes it more believable to say that it
             | exists.
             | 
             | It seems very related, and I understand it's a very
             | attractive hook to start talking about whether it exists or
             | not, but that's definitely not where I'm intending to go.
        
         | bjourne wrote:
         | While true in principle, you are underestimating the potential
         | of AI to sway people's opinions. "@grok is this true" is
         | already a meme on Twitter and it is only going to get worse.
         | People are susceptible to eloquent bs generated by bots.
        
         | ekjhgkejhgk wrote:
         | > Note that nothing in the article is AI-specific
         | 
         | No one is arguing that the concept of persuasion didn't exist
         | before AI. The point is that AI lowers the cost. Yes, Russian
         | troll armies also have a lower cost compared to going door to
         | door talking to people. And AI has a cost that is lower still.
        
         | tgv wrote:
         | That's one of those "nothing to see here, move along" comments.
         | 
         | First, generative AI already changed social dynamics, in spite
         | of facebook and all that being around for more than a decade.
         | People trust AI output, much more than a facebook ad. It can
         | slip its convictions into every reply it makes. Second, control
         | over the output of AI models is limited to a very select few.
         | That's rather different from access to facebook. The
         | combination of those two factors does warrant the title.
        
         | kev009 wrote:
         | Yup "could shape".. I mean this has been going on since time
         | immemorial.
         | 
         | It was odd to see random nerds who hated Bill Gates the
         | software despot morph into acksually he does a lot of good
         | philanthropy in my lifetime but the floodgates are wide open
         | for all kinds of bizarre public behavior from oligarchs these
         | days.
         | 
         | The game is old as well as evergreen. Hearst, Nobel, Howard
         | Hughes come to mind of old. Musk with Twitter, Ellison with
         | TikTok, Bezos with Washington Post these days etc. The costs
         | are already insignificant because they generally control other
         | people's money to run these things.
        
           | UpsideDownRide wrote:
           | Your example is weird tbh. Gates was doing capitalist things
           | that were evil. His philanthropy is good. There is no
           | contradiction here. People can do good and bad things.
        
             | kev009 wrote:
             | The "philanthropy" worked on you.
        
         | sam-cop-vimes wrote:
         | Well, AI has certainly made it easier to make tailored
         | propaganda. If an AI is given instructions about what messaging
         | to spread, it can map out a path from where it perceives the
         | user to where its overlords want them to be.
         | 
         | Given how effective LLMs are at using language, and given that
         | AI companies are able to tweak its behaviour, this is a clear
         | and present danger, much more so than facebook ads.
        
         | jacquesm wrote:
         | That's a pretty typical middle-brow dismissal but it entirely
         | misses the point of TFA: you don't _need_ AI for this, but AI
         | makes it so much cheaper to do this that it becomes a
         | qualitative change rather than a quantitative one.
         | 
         | Compared to that 'russian troll army' you can do this by your
         | lonesome spending a tiny fraction of what that troll army would
         | cost you and it would require zero effort in organization
         | compared to that. This is a real problem and for you to dismiss
         | it out of hand is a bit of a short-cut.
        
         | rsynnott wrote:
         | Making doing bad things way cheaper _is_ a problem, though.
        
         | odiroot wrote:
         | It has been practiced by populist politicians for millennia,
         | e.g. pork barrelling.
        
         | ddlsmurf wrote:
         | What makes AI a unique new threat is that it enables a new
         | kind of attack that is both surgical and mass-scale: you can
         | now generate the ideal message per target. Basically you can
         | whisper to everyone, or each group, at any granularity, the
         | most convincing message. It also removes a lot of language
         | and culture barriers. For example, Russian or Chinese
         | propaganda is ridiculously bad when it crosses borders, at
         | least when targeting the English-speaking world; this becomes
         | a lot easier/cheaper.
        
         | tim333 wrote:
         | Also I think AI at least in its current LLM form may be a force
         | against polarisation. Like if you go on X/twitter and type
         | "Biden" or "Biden Crooked" in the "Explore" thing in the side
         | menu you get loads of abusive stuff including the president
         | slagging him off. Ask Grok about the same and it says Biden
         | was a decent bloke and more "there is no conclusive evidence
         | that Joe Biden personally committed criminal acts, accepted
         | bribes, or abused his office for family gain"
         | 
         | I mention Grok because being owned by a right leaning
         | billionaire you'd think it'd be one of the first to go.
        
         | coldtea wrote:
         | > _Note that nothing in the article is AI-specific: the entire
         | argument is built around the cost of persuasion, with the
         | potential of AI to more cheaply generate propaganda as buzzword
         | link._
         | 
         | That's the entire point, that AI cheapens the cost of
         | persuasion.
         | 
         | A bad thing X vs a bad thing X with a force
         | multiplier/accelerator that makes it 1000x as easy, cheap, and
         | fast to perform is hardly the same thing.
         | 
         | AI is the force multiplier in this case.
         | 
         | That we could of course also do persuasion pre-AI is
         | irrelevant, same way when we talk about the industrial
         | revolution the fact that a craftsman could manually make the
         | same products without machines is irrelevant as to the impact
         | of the industrial revolution, and its standing as a standalone
         | historical era.
        
         | dfxm12 wrote:
         | It is worth pointing out that ownership of AI is becoming more
         | and more consolidated over time, by _elites_. Only Elon Musk or
         | Sam Altman can adjust their AI models. We recognize the
         | consolidation of media outlets as a problem for similar
         | reasons, and Musk owning grok and twitter is especially
         | dangerous in this regard. Conversely, buying facebook ads is
         | more democratized.
        
         | insane_dreamer wrote:
         | > You don't need any AI for this.
         | 
         | AI accelerates it considerably and with it being pushed
         | everywhere, weaves it into the fabric of most of what you
         | interact with.
         | 
         | If instead of searches you now have AI queries, then everyone
         | gets the same narrative, created by the LLM (or a few different
         | narratives from the few models out there). And the vast
         | majority of people won't know it.
         | 
         | If LLMs become the de-facto source of information by virtue of
         | their ubiquity, then voila, you now have a few large
         | corporations who control the source of information for the vast
         | majority of the population. And unlike cable TV news which I
         | have to go out of my way to sign up and pay for, LLMs are/will
         | be everywhere and available for free (ad-based).
         | 
         | We already know models can be tuned to have biases (see Grok).
        
         | zahlman wrote:
         | The thread started with your reasonable observation but
         | degenerated into the usual red-vs-blue slapfight powered by the
         | exact "elite shaping of mass preferences" and "cheaply
         | generated propaganda" at issue.
         | 
         | > Comments should get more thoughtful and substantive, not
         | less, as a topic gets more divisive.
         | 
         | I'm disappointed.
        
         | scriptbash wrote:
         | > Note that nothing in the article is AI-specific
         | 
         | This is such a tired counterargument against LLM safety
         | concerns.
         | 
         | You understand that persuasion and influence are behaviors on a
         | spectrum. Meaning some people, or in this case products, are
         | _more_ or _less_ or _better_ or _worse_ at persuading and
         | influencing.
         | 
         | In this case people are concerned with LLM's ability to
         | influence _more_ effectively than other modes that we have had
         | in the past.
         | 
         | For example, I have had many tech illiterate people tell me
         | that they believe "AI" is 'intelligent' and 'knows everything'
         | and trust its output without question.
         | 
         | While at the same time I've yet to meet a single person who
         | says the same thing about "targeted Facebook ads".
         | 
         | So depressing watching all of you do free propo psy ops for
         | these fascist corpos.
        
         | bcrosby95 wrote:
         | Cost matters.
         | 
         | Let's look at a piece of tech that literally changed humankind.
         | 
         | The printing press. We could create copies of books before the
         | printing press. All it did was reduce the cost.
        
           | AnimalMuppet wrote:
           | That's an interesting example. We get a new technology, and
           | cost goes down, and volume goes up, and it takes a couple
           | generations for society to adjust.
           | 
           | I think of it as the lower cost makes reaching people easier,
           | which is like the gain going up. And in order for society to
           | be able to function, people need to learn to turn their own,
           | individual gain _down_ - otherwise they get overwhelmed by
           | the new volume of information, or by manipulation from those
           | using the new medium.
        
       | keiferski wrote:
       | Yeah, I don't think this really lines up with the actual
       | trajectory of media technology, which is going in the complete
       | opposite direction.
       | 
       | It seems to me that it's easier than ever for someone to
       | broadcast "niche" opinions and have them influence people, and
       | actually _having_ niche opinions is more acceptable than ever
       | before.
       | 
       | The problem you should worry about is a growing lack of
       | ideological coherence across the population, not the elites
       | shaping mass preferences.
        
         | mattbee wrote:
         | I think you're saying that mass broadcasting is going away? If
         | so, I believe that's true in a technological sense - we don't
         | watch TV or read newspapers as much as before.
         | 
         | And that certainly means niches can flourish, the dream of the
         | 90s.
         | 
         | But I think mass broadcasting is still available, if you can
         | pay for it - troll armies, bots, ads etc. It's just much much
         | harder to recognize and regulate.
         | 
         | (Why that matters to me I guess) Here in the UK with a first
         | past the post electoral system, ideological coherence isn't
         | necessary to turn niche opinion into state power - we're now
         | looking at 25 percent being a winning vote share for a far-
         | right party.
        
           | keiferski wrote:
           | I'm just skeptical of the idea that _anyone_ can really drive
           | the narrative anymore, mass broadcasting or not. The media
           | ecosystem has become so diverse and niche that I think
           | discord is more of an issue than some kind of mass influence
           | operation.
        
             | mattbee wrote:
             | I agree with you! But the goal for people who want to turn
             | money into power isn't to drive a single narrative, Big
             | Brother style, to the whole world. Not even to a whole
             | country! It's to drive a narrative to the subset of people
             | who can influence political outcomes.
             | 
             | With enough data, a wonky-enough voting system, and poor
             | enforcement of any kind of laws protecting the democratic
             | process - this might be a very very small number of people.
             | 
             | Then the discord really is a problem, because you've ended
             | up with government by a resented minority.
        
         | energy123 wrote:
         | Using the term "elites" was overly vague when "nation states"
         | better narrows in on the current threat profile.
         | 
         | The content itself (whether niche or otherwise) is not that
         | important for understanding the effectiveness. It's more about
         | the volume of it, which is a function of compute resources of
         | the actor.
         | 
         | I hope this problem continues to receive more visibility and
         | hopefully some attention from policymakers who have done
         | nothing about it. It's been over 5 years since we discovered
         | that multiple state actors have been doing this (first human
         | run troll farms, mostly outsourced, and more recently LLMs).
        
           | dbspin wrote:
           | The level of paid nation state propaganda is a rounding error
           | next to the amount of corporate and political partisan
           | propaganda paid directly or inspired by content that is paid
           | for directly by non-state actors. e.g.: Musk, MAGA, the
           | liberal media establishment.
        
       | bravetraveler wrote:
       | When I was a kid, I had a _'pen pal'_. Turned out to actually be
       | my parent. This is why I have trust issues and prefer local LLMs
        
         | rollcat wrote:
         | What about local friends?
        
           | bravetraveler wrote:
           | The voices are friendly, _so far_
        
         | mieses wrote:
         | I wrote to a French pen pal and they didn't reply. Now I have
         | issues with French people and prefer local LLMs.
        
           | bravetraveler wrote:
           | I mean, even if they did reply... _(I kid, I kid)_
        
           | Dilettante_ wrote:
           | I wrote a confession to a pen pal once but the letter got
           | lost in the mail. Now I refuse to use the postal service,
           | have issues with French people and prefer local LLMs.
        
             | bravetraveler wrote:
             | I pitched AGI to VC but the bills will be delivered. Now I
             | need to find a new bagholder, squeeze, or angle because I'm
             | having issues with delivery... something, something, prefer
             | hype
        
         | amelius wrote:
         | How do you trust what the LLM was trained on?
        
           | bravetraveler wrote:
           | Do I? Well, verification helps. I said _'prefer'_, nothing
           | more/less.
           | 
           | If you must know, I _don't_ trust this stuff. Not even on my
           | main system/network; it's isolated in every way I can manage
           | _because_ trust is low. Not even for malice, necessarily.
           | Just another manifestation of moving fast/breaking things.
           | 
           | To your point, I expect a certain amount of bias and XY
           | problems from these things. Either from my input, the model
           | provider, or the material they're ultimately regurgitating.
           | _Trust?_ Hah!
        
             | amelius wrote:
             | Well, as long as the left half of your brain trusts the
             | right half :)
        
               | bravetraveler wrote:
               | Ah, but what about right for left?! :)
        
         | lingrush4 wrote:
         | Sounds very similar to my childhood. My parents told me I
         | couldn't eat sand because worms would grow inside of me. Now I
         | have trust issues and prefer local LLMs.
        
           | bravetraveler wrote:
           | How was the sand, though?
        
           | paddleon wrote:
           | The funny thing is the CDC says the same thing as your
           | parents did
           | 
           | Whipworm, hookworm, and Ascaris are the three types of soil-
           | transmitted helminths (parasitic worms)... Soil-transmitted
           | helminths are among the most common human parasites globally.
           | 
           | https://www.cdc.gov/sth/about/index.html
        
       | camillomiller wrote:
       | What people are doing with AI in terms of polluting the
       | collective brain reminds me of what you could do with a
       | chemical company in the 50s and 60s before the EPA was
       | established. Back then Nixon (!!!) decided it wasn't ok that
       | companies could cut costs by hurting the environment. Today the
       | richest Western elites are all behind the instruments enabling
       | the mass pollution of our brains, and yet there is absolutely
       | no one daring to put a limit to their capitalistic greed. It's
       | grim, people. It's really grim.
        
       | csvparser wrote:
       | I suspect paid promotions may be problematic for LLM behavior:
       | they create a conflict by pushing the LLM to promote products
       | that aren't the best for the user, while the model is either
       | also told to provide the best product for the user, or figures
       | out from its base training data that providing the best product
       | for the user is the morally and ethically correct thing to do.
       | 
       | Conflict can cause poor and undefined behavior, like it
       | misleading the user in other ways or just coming up with
       | nonsensical, undefined, or bad results more often.
       | 
       | Even if promotion is a second pass on top of the actual answer
       | that was unencumbered by conflict, the second pass could have
       | a similar result.
       | 
       | I suspect that they know this, but increasing revenue is more
       | important than good results, and they expect that they can sweep
       | this under the rug with sufficient time, but I don't think
       | solving this is trivial.
        
       | yegortk wrote:
       | "Elites are bad. And here is a spherical cow to prove it."
        
       | baxtr wrote:
       | Interestingly, there was a discussion a week ago on "PRC elites
       | voice AI-skepticism". One commentator was arguing that:
       | 
       |  _As the model gets more powerful, you can't simply train the
       | model on your narrative if it doesn't align with real
       | data/world._ [1]
       | 
       | So at least on the model side it seems difficult to go against
       | the real world.
       | 
       | [1] https://news.ycombinator.com/item?id=46050177
        
       | zkmon wrote:
       | It's about enforcing single-mindedness across masses, similar to
       | soldier training.
       | 
       | But this is not new. The very goal of a nation is to dismantle
       | inner structures, independent thought, communal groups, etc.
       | across the population and ingest them as uniform worker cells.
       | Same as what happens when a whale swallows smaller animals: the
       | structures will be dismantled.
       | 
       | The development level of a country is a good indicator of the
       | progress of this digestion of internal structures and removal of
       | internal identities. More developed means deeper reach of policy
       | into people's lives, making each person more individualistic
       | rather than family- or community-oriented.
       | 
       | Every new tech will be used by the state and businesses to speed
       | up the digestion.
        
         | andsoitis wrote:
         | > It's about enforcing single-mindedness across the masses,
         | similar to soldier training. But this is not new. The very
         | goal of a nation is to dismantle inner structures,
         | independent thought
         | 
         | One of the reasons for humans' success is our unrivaled
         | ability to cooperate across time, space, and culture. That
         | requires shared stories like the ideas of nation, religion,
         | and money.
        
           | energy123 wrote:
           | Some things are better off homogeneous. An absence of shared
           | values and concerns leads to sectarianism and the erosion of
           | inter-communal trust, which sucks.
        
             | zkmon wrote:
             | The erosion of inter-communal trust sucks only when you
             | consider the well-being of a larger community which
             | swallowed up smaller communities. You just created a
             | larger community, which still has the same inter-communal
             | trust issues with other large communities which were also
             | created by similar swallowing up of other smaller
             | communities. There is no single global community.
        
               | energy123 wrote:
               | A larger community is still better than a smaller one,
               | even if it's not as large as it can possibly be.
               | 
               | Do you prefer to be Japanese during the Warring States
               | period or after unification? Do you prefer to be Irish
               | during the Troubles or today? Do you prefer to be
               | American during the Civil War or afterwards? It's pretty
               | obvious when you think about historical case studies.
        
           | lm28469 wrote:
           | It depends on who's in charge of the nation though: you can
           | have people planning for the long-term well-being of their
           | population, or people planning for the next election cycle
           | and making sure they amass as much power and money as they
           | can in the meantime.
           | 
           | That's the difference between planning nuclear reactors
           | that will be built after your term and used after your
           | death, vs. selling your national industries to foreigners,
           | your ports to China, &c. to make a quick buck and ensure a
           | comfy retirement plan for you and your family.
        
             | andsoitis wrote:
             | > That's the difference between planning nuclear reactors
             | that will be built after your term, and used after your
             | death, vs selling your national industries to foreigners
             | 
             | Are you saying that in western liberal democracies
             | politicians have been selling "national industries to
             | foreigners"? What does that mean?
        
               | lm28469 wrote:
               | Stuff like that:
               | 
               | https://x.com/RnaudBertrand/status/1796887086647431277
               | 
               | https://www.dw.com/en/greece-in-the-port-of-piraeus-
               | china-is...
               | 
               | https://www.arabnews.com/node/1819036/business-economy
               | 
               | Step 1: move all your factories abroad for short term
               | gains
               | 
               | Step 2: sell all your shit to foreigners for short term
               | gains
               | 
               | Step 3: profit ?
        
               | pjc50 wrote:
               | That's a fairly literal description of how privatization
               | worked, yes. That's why British Steel is owned by Tata
               | and the remains of British Leyland ended up with BMW.
               | British nuclear reactors are operated by Electricite de
               | France, and some of the trains are run by Dutch and
               | German operators.
               | 
               | It sounds bad, but you can also not-misleadingly say "we
               | took industries that were costing the taxpayer money and
               | sold them for hard currency and foreign investment". The
               | problem is the ongoing subsidy.
        
               | andsoitis wrote:
               | > That's why British Steel is owned by Tata
               | 
               | British Steel is legally owned by Jingye, but the UK
               | government took operational control in 2025.
               | 
               | > the remains of British Leyland ended up with BMW
               | 
               | The whole of BL represented less than 40% of the UK car
               | market, at its height. So the portion that was sold to
               | BMW represents a much smaller share of the UK car
               | market. I would not consider that "UK politicians
               | selling an industry to foreigners".
               | 
               | At the risk of changing topics/moving goalposts, I
               | don't know that your examples of European governments
               | or companies owning or operating businesses or large
               | parts of an industry in another European country are in
               | the spirit of the European Union. Isn't the whole idea
               | to break down barriers so the collective population of
               | Europe benefits?
        
               | pjc50 wrote:
               | It's no use pedanting me or indeed anyone else; that's
               | the sort of thing people mean when they use that phrase.
        
           | BeFlatXIII wrote:
           | No stronger argument has been made to convince me to help the
           | superintelligent AI enslave my fellow humans.
        
           | drdaeman wrote:
            | > ability to cooperate across time, space, and culture.
            | That requires shared stories like the ideas of nation,
            | religion, and money.
            | 
            | Isn't it the opposite? Cooperation requires an idea of
            | unity and a common goal, while the ideas of nations and
            | religion are - _at large scale_ - divisive, not uniting.
            | They boost in-group cooperation but hurt out-group
            | cooperation.
        
         | mlsu wrote:
         | That's a great metaphor, thanks.
        
           | Y-bar wrote:
           | It's a veiled endorsement of authoritarianism and
           | accelerationism.
        
             | mlsu wrote:
             | I had to google Landian to understand that the other
             | commenter was talking about Nick Land. I have heard of him
             | and I don't think I agree with him.
             | 
             | However, I understand what the "Dark Enlightenment" types
             | are talking about. Modernity _has_ dissolved social bonds.
             | Social atomization _is_ greater today than at any time in
             | history.  "Traditional" social structures, most notably but
             | not exclusively the church, _are_ being dissolved.
             | 
             | The motive force that is driving people to become
             | reactionary is this dissolution of social bonds, which
             | seems inextricably linked to technological progress and
             | development. Dare I say, I actually agree with the Dark
             | Enlightenment people on one point -- like them, I don't
             | like what is going on! A whale eating krill is a good
             | metaphor. I would disagree with the neoreactionaries on
             | this point though: the krill die but the whale lives, so
             | it's ethically more complex than the straightforward tragic
             | death that they see.
             | 
             | I can vehemently disagree with the
             | authoritarian/accelerationist solution that they are
             | offering. Take the good, not the bad, are we allowed to do
             | that? It's a good metaphor; and I'm in good company. A lot
             | of philosophies see these same issues with modernity, even
             | if the prescribed solutions are very different than
             | authoritarianism.
        
         | mahrain wrote:
         | I used ChatGPT to figure out what's going on here, and it told
         | me this is a 'neo-Marxist critique of the nation state'.
        
           | uoaei wrote:
           | No it's actually implicitly endorsing the authoritarian
           | ethos. Neo-Marxists were occasionally authoritarian leaning
           | but are more appropriately categorized along other axes.
        
           | satellite2 wrote:
            | Incredible teamwork: OOP dismantles society in paragraph
            | form, and OP proudly outsources his interpretation to an
            | LLM. If this isn't collective self-parody, I don't know
            | what is.
        
         | uoaei wrote:
         | Knew it was only a matter of time before we'd see bare-faced
         | Landianism upvoted in HN comment sections but that doesn't
         | soften the dread that comes with the cultural shift this
         | represents.
        
           | dominicrose wrote:
           | Some things in nature follow a normal distribution, but other
           | things follow power laws (Pareto). It may be dreadful as you
           | say, but it isn't good or bad, it's just what is and it's
           | bigger than us, something we can't control.
        
           | squigz wrote:
           | What I find most interesting - and frustrating - about these
           | sorts of takes is that these people are buying into a
           | narrative the very people they are complaining about want
           | them to believe.
        
         | tvshtr wrote:
         | Relevant https://www.experimental-history.com/p/the-decline-of-
         | devian...
        
       | niemandhier wrote:
       | We already see this, but not due to classical elites.
       | 
       | Romanian elections last year had to be repeated due to massive
       | bot interference:
       | 
       | https://youth.europa.eu/news/how-romanias-presidential-elect...
        
         | energy123 wrote:
         | I don't understand how this isn't an all hands on deck
         | emergency for the EU (and for everyone else).
        
           | pjc50 wrote:
           | The EU as an institution doesn't understand the concept of
           | "emergency". And quite a number of national governments have
           | already been captured by various pro-Russian elements.
        
             | lionkor wrote:
             | Russian bots, as opposed to American bots, the latter of
             | which are, of course, the good guys /s
        
               | pjc50 wrote:
               | This sort of thing: https://www.dw.com/en/russian-
               | disinformation-aims-to-manipul...
               | 
                | There does not appear to be a comparable operation by
                | the US to plant entirely fake stories. Unless you
                | count Truth Social, I suppose.
        
               | lingrush4 wrote:
               | With the exception of NPR and PBS, most American
               | institutions dedicated to planting fake stories are not
               | government controlled.
        
       | spooky_deep wrote:
       | They already are?
       | 
       | All popular models have a team working on fine tuning it for
       | sensitive topics. Whatever the companies
       | legal/marketing/governance team agree to is what gets tuned. Then
       | millions of people use the output uncritically.
        
         | ericmcer wrote:
         | Our previous information was coming through search engines. It
         | seems way easier to filter search engine results than to fine
         | tune models.
        
           | fleischhauf wrote:
            | The way people treat LLMs these days, they assign a lot
            | more trust to their output than to random Internet sites.
        
       | verisimi wrote:
       | Big corps' AI products have the potential to shape individuals
       | from cradle to grave, especially as many manage/assist in
       | schooling and are ubiquitous on phones.
       | 
       | So, imagine the case where an early assessment is made of a
       | child - that they are this-or-that type of child, and that they
       | therefore respond more strongly to this-or-that information.
       | Well, then the AI can far more easily steer the child in
       | whatever direction it wants, over a lifetime. Chapters and long
       | story lines, themes, could all play a role in sensitising and
       | predisposing individuals in certain directions.
       | 
       | Yeah, this could be used to help people. But how does one feed
       | back into the type of "help"/guidance one wants?
        
       | asim wrote:
       | I recently saw this https://arxiv.org/pdf/2503.11714 on
       | conversational networks and it got me thinking that a lot of the
       | problem with polarization and power struggle is the lack of
       | dialog. We consume a lot, and while we have opinions, too much
       | of what we consume shapes our thinking. There is no dialog.
       | There is no
       | questioning. There is no discussion. On networks like X it's
       | posts and comments. Even here it's the same, it's comments with
       | replies but it's not truly a discussion. It's rebuttals. A
       | conversation is two ways and equal. It's a mutual dialog to
       | understand differing positions. Yes, elites can reshape what
       | society thinks with AI, and it's already happening. But we also
       | have the ability to redefine our networks and tools to be two
       | way, not 1:N.
        
         | barrenko wrote:
         | Humans can only handle dialog while under Dunbar's number;
         | anything else is pure fancy.
        
         | piva00 wrote:
         | I recommend reading "In the Swarm" by Byung-Chul Han, and also
         | his "The Crisis of Narration"; in those he tries to tackle
         | exactly these issues in contemporary society.
         | 
         | His "Psychopolitics" talks about the manipulation of masses for
         | political purposes using the digital environment, when written
         | the LLM hype wasn't ongoing yet but it can definitely apply to
         | this technology as well.
        
         | emporas wrote:
         | Dialogue you mean, conversation-debate, not dialog the screen
         | displayed element, for interfacing with the user.
         | 
         | The group screaming the loudest is considered to be correct,
         | which is pretty bad.
         | 
         | There needs to be an identity system in which people are
         | filtered out when the conversation devolves into ad-hominem
         | attacks, and only debaters with the right balance of
         | knowledge and no hidden agendas join the conversation.
         | 
         | Reddit for example is a good implementation of something like
         | this, but the arbiter cannot have that much power over their
         | words, or their identities, getting them banned for example.
         | 
         | > Even here it's the same, it's comments with replies but it's
         | not truly a discussion.
         | 
         | For technology/science/computer subjects HN is very good, but
         | for other subjects not so good, as it is the case with every
         | other forum.
         | 
         | But a solution will be found eventually. I think what is
         | missing is an identity system to hop around different ways of
         | debating and not be tied to a specific website or service.
         | Solving this problem is not easy, so there has to be a lot of
         | experimentation before an adequate solution is established.
        
       | emsign wrote:
       | That's the plan. Culture is losing authenticity due to the
       | constant rumination of past creative works, now supercharged with
       | AI. Authentic culture is deemed a luxury now as it can't compete
       | in the artificial tech marketplaces and people feel isolated and
       | lost because culture loses its human touch and relatability.
       | 
       | That's why the billionaires are such fans of fundamentalist
       | religion, they then want to sell and propagate religion to the
       | disillusioned desperate masses to keep them docile and confused
       | about what's really going on in the world. It's a business plan
       | to gain absolute power over society.
        
       | delichon wrote:
       | There is nothing we could do to more effectively hand elites
       | exclusive control of the persuasive power of AI than to ban it.
       | So it wouldn't be surprising if AI is deployed by elites to
       | persuade people to ban itself. It could start with an essay on
       | how elites could use AI to shape mass preferences.
        
       | HPsquared wrote:
       | AI alignment is a pretty tremendous "power lever". You can see
       | why there's so much investment.
        
       | noobermin wrote:
       | Maybe I'm just ignorant, but I tried to skim the beginning of
       | this, and it's honestly hard to even accept their setup. The
       | claim that any of the terms[^] (`y`, `H`, `p`, etc.) are well
       | defined as functions mapping into some range of the reals is
       | hard to accept. In reality, what "an elite wants," the "scalar"
       | it can derive from pushing policy 1, even the cost functions
       | they define don't seem definable as functions in a formal
       | sense, and the codomain of these terms cannot be mapped well to
       | a definable set that maps to [0,1].
       | 
       | All the time in actual politics, elites and popular movements
       | alike find their own opinions and desires clash internally
       | (yes, even a single person's desires or actions self-conflict
       | at times). A thing one desires at time `t` per their
       | definitions doesn't match at other times, or even at the same
       | `t`. This is clearly the opinion of someone who doesn't read
       | these kinds of papers, but I don't know how one can even be
       | sure the defined terms are well-defined, so I'm not sure how
       | anyone can proceed with any analysis in this kind of argument.
       | They write it so matter-of-factly that I assume this is normal
       | in economics. Is it?
       | 
       | Certain systems where the rules are a bit clearer might benefit
       | from formalism like this, but politics? Politics is the
       | quintessential example of conflicting desires, compromise,
       | unintended consequences... I could go on.
       | 
       | [^] calling them terms as they are symbols in their formulae but
       | my entire point is they are not really well defined maps or
       | functions.
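For concreteness, a stylized sketch of the kind of setup being critiqued - the symbols and functional form below are illustrative guesses echoing the comment's `y`, `H`, `p`, not the paper's actual definitions:

```latex
% Hypothetical elite-persuasion objective (illustrative only).
% An elite picks a persuasion effort m to maximize the payoff from
% the induced share of persuaded citizens, net of persuasion cost:
\[
  \max_{m \in M} \; U(m) \;=\; y\bigl(H(m)\bigr) \;-\; c\, p(m),
  \qquad H : M \to [0,1],
\]
% where H(m) is the persuaded share, y(\cdot) the elite's payoff
% from that share, p(m) the persuasion cost, and c a cost scale
% that falls as AI reduces persuasion costs.
```

The critique above is precisely that maps like H, y, and p are assumed to be well-defined real-valued functions, which actual, internally conflicting political preferences may not support.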
        
       | lambdaone wrote:
       | I think this ship has already sailed, with a lot of comments on
       | social media already being AI-generated and posted by bots.
       | Things are only going to get worse as time goes on.
       | 
       | I think the next battleground is going to be over steering the
       | opinions and advice generated by LLMs and other models by
       | poisoning the training set.
        
       | zingar wrote:
       | My neighbour asked me the other day (well, more stated as a
       | "point" that he thought was in his favour): "how could a
       | billionaire make people believe something?" The topic was the
       | influence of the various industrial complexes on politics (my
       | view: total) and I was too shocked by his naivety to say: "easy:
       | buy a newspaper". There is only one national newspaper here in
       | the UK that is not controlled by one of four wealthy families,
       | and it's the one newspaper whose headlines my neighbour routinely
       | dismisses.
       | 
       | The thought of a reduction in the cost of that control does not
       | fill me with confidence for humanity.
        
       | euroderf wrote:
       | Thanks to social media and AI, the cost of inundating the
       | mediasphere with a Big Lie (made plausible thru sheer repetition)
       | has been made much more affordable now. This is why the
       | administration is trumpeting lower prices!
        
         | andsoitis wrote:
         | > has been made much more affordable now
         | 
         | So more democratized?
        
           | pjc50 wrote:
           | Media is "loudest volume wins", so the relative affordability
           | doesn't matter; there's a sort of Jevons paradox thing where
           | making it cheaper just means that more money will be spent on
           | it. Presidential election spending only goes up, for example.
        
           | input_sh wrote:
           | No, those with more money than you can now push even more
           | slop than they could before.
           | 
           |  _You_ cannot compete with that.
        
         | themafia wrote:
         | So if I had enough money I could get CBS news to deny the
         | Holocaust? Of course not. These companies operate under
         | government license and that would certainly be the end of it
         | through public complaint. I think it suggests a much different
         | dynamic than most of this discussion presumes.
         | 
         | In particular, our own CIA has shown that the "Big Lie" is
         | actually surprisingly cheap. It's not about paying off news
         | directors or buying companies, it's about directly implanting a
         | handful of actors into media companies, and spiking or
         | advancing stories according to your whims. The people with the
         | capacity to do this can then be very selective with who does
         | and does not get to tell the Big Lies. They're not particularly
         | motivated by taking bribes.
        
           | insane_dreamer wrote:
           | > So if I had enough money I could get CBS news to deny the
           | Holocaust? Of course not.
           | 
           | You absolutely could. But wouldn't be CBS news, it would be
           | ChatGPT or some other LLM bot that you're interacting with
            | everywhere. And it wouldn't say outright "the holocaust
            | didn't happen", but it would frame the responses to your
            | queries in a way that casts doubt on it, or that leaves
            | you thinking it probably didn't happen. We've seen this
            | before (the "manifest destiny" of "settling" the West, the
            | whitewashing of slavery).
            | 
            | For a modern example, you already have Fox News denying
            | that there was a violent attempt to overturn the 2020
            | election.
           | And look how Grokipedia treats certain topics differently
           | than Wikipedia.
           | 
           | It's not only possible, it's likely.
        
           | tencentshill wrote:
           | Does government licensed mean at the pleasure of the
           | president? The BBC technically operates at the pleasure of
           | the King
        
       | everdrive wrote:
       | It's important to remember that being a "free thinker" often just
       | means "being weird." It's quite celebrated to "think for
       | yourself" and people always connect this to specific political
       | ideas, and suggest that free thinkers will have "better"
       | political ideas by not going along with the crowd. On one hand,
       | this is not necessarily true; the crowd could potentially have
       | the better idea and the free thinker could have some crazy or bad
       | idea.
       | 
       | But also, there is a heavy cost to being out of sync with people;
       | how many people can you relate to? Do the people you talk to
       | think you're weird? You don't do the same things, know the same
       | things, talk about the same things, etc. You're the odd man out,
       | and potentially for not much benefit. Being a "free thinker"
       | doesn't necessarily guarantee much of anything. Your ideas are
       | potentially original, but not necessarily better. One of my "free
       | thinker" ideas is that bed frames and box springs are mostly
       | superfluous and a mattress on the ground is more comfortable and
       | cheaper. (getting up from a squat should not be difficult if
       | you're even moderately healthy) Does this really buy me anything?
       | No. I'm living to my preferences and in line with my ideas, but
       | people just think it's weird, and would be really uncomfortable
       | with it unless I'd already built up enough trust / goodwill to
       | overcome this quirk.
        
         | rusk wrote:
         | > bed frames and box springs are mostly superfluous and a
         | mattress on the ground is more comfortable and cheaper
         | 
          | I was also of this persuasion and did this for many years;
          | for me the main issue was drafts close to the floor.
          | 
          | The key reason, I believe, is that mattresses can absorb
          | damp, so you want to keep that air gap there to lessen this
          | effect and provide ventilation.
         | 
         | > getting up from a squat should not be difficult
         | 
         | Not much use if you're elderly or infirm.
         | 
         | Other cons: close to the ground so close to dirt and easy
         | access for pests. You also don't get that extra bit of air gap
         | insulation offered by the extra 6 inches of space and whatever
         | you've stashed under there.
         | 
         | Other pros: extra bit of storage space. Easy to roll out to a
         | seated position if you're feeling tired or unwell
         | 
          | It's good to talk to people about your crazy ideas and get
          | some sun and air on that head canon LOL
          | 
          | Futons are designed specifically for the use case you have
          | described, so it's best to use one of those rather than a
          | mattress, which is going to absorb damp from the floor.
        
           | everdrive wrote:
           | > The key reason I believe though is mattresses can absorb
           | damp so you wana keep that air gap there to lessen this
           | effect and provide ventilation.
           | 
           | I was concerned about this as well, but it hasn't been an
           | issue with us for years. I definitely think this must be
           | climate-dependent.
           | 
           | Regardless, I appreciate you taking the argument seriously
           | and discussing pros and cons.
        
             | rusk wrote:
             | > I appreciate you taking the argument seriously
             | 
             | Like I say, I have suffered similar delusion in the past
             | and I never pass up the opportunity to help a brother out
        
           | benlivengood wrote:
           | A major con of bedframes is annoying squeaks. Joints bear a
           | lot of load and there usually isn't diagonal bracing to speak
           | of, so they get noisy after almost no time at all. Fasteners
           | loosen or wear the frame materials. I have yet to find one
           | that stays quiet more than a few months or a year without
           | retightening things; but I haven't tried a full platform
           | construction with continuous walls which I expect might work
           | better, but also sounds annoyingly expensive and heavy.
        
         | adamwong246 wrote:
           | To live freely is reward enough. We are born alone, we die
           | alone, and in between, more loneliness. No reason to
           | pretend that your friends and family will be there for
           | you, or that their approval will save you. Playing their
           | social games will not garner you much.
        
           | marcellus23 wrote:
           | Humans are a social species, and quality of relationships is
           | consistently shown to correlate with mental health.
        
       | boxed wrote:
       | I don't think "persuasion" is the key here. People change
       | political preferences based on group identity. Here AI tools are
       | even more powerful. You don't have to persuade anyone, just
       | create a fake bandwagon.
        
       | sega_sai wrote:
       | Given the increasing wealth inequality, it is unclear if costs
       | are really a factor here; an amount like $1M is nothing when
       | you have $1B.
        
       | arthurfirst wrote:
       | Most 'media' is produced content designed to manipulate --
       | nothing new. The article isn't really AI specific as others have
       | said.
       | 
       | Personally, my fear-based-manipulation detection is very well
       | tuned, and that covers 95% of all the manipulation you will
       | ever get from so-called 'elites' - who are better called
       | 'entitled' and act like children when they do not get their
       | way.
       | 
       | I trust ChatGPT for cooking lessons. I code with Claude Code
       | and Gemini, but they know where they stand and who the boss is
       | ;)
       | 
       | There is never a scenario for me where I defer final judgment on
       | anything personally.
       | 
       | I realize others may want to blindly trust the 'authorities'
       | as it's the easy path, but I cured myself of that long before
       | AI was ever a thing.
       | 
       | Take responsibility for your choices and AI is relegated to the
       | role of tool as it should be.
        
         | Retric wrote:
         | Sure, and advertising has zero effect on you.
         | 
         | Manipulation works in subtle ways. Shifting the Overton window
         | isn't about individual events, this isn't the work of days but
         | decades. People largely abandoned unions in the US for example,
         | but not because they are useless.
        
       | themafia wrote:
       | It's not about persuading you from "russian bot farms." Which I
       | think is a ridiculous and unnecessarily reductive viewpoint.
       | 
       | It's about hijacking all of your federal and commercial data that
       | these companies can get their hands on and building a highly
       | specific and detailed profile of you. DOGE wasn't an audit. It
       | was an excuse to exfiltrate mountains of your sensitive data into
       | their secret models and into places like Palantir. Then using AI
       | to either imitate you or to possibly predict your reactions to
       | certain stimuli.
       | 
       | Then presumably the game is finding the best way to turn you into
       | a human slave of the state. I assure you, they're not going to
       | use twitter to manipulate your vote for the president, they have
       | much deeper designs on your wealth and ultimately your own
       | personhood.
       | 
       | It's too easy to punch down. I recommend anyone presume the best
       | of actual people and the worst of our corporations and
       | governments. The data seems clear.
        
         | arthurfirst wrote:
         | Bang on.
         | 
         | > It's not about persuading you from "russian bot farms." Which
         | I think is a ridiculous and unnecessarily reductive viewpoint.
         | 
         | Not an accidental 'viewpoint'. It's a deliberate framing to
         | exclude exactly what you pointed out from the discourse. Sure,
         | there are dummies who actually believe it, but they are not
         | serious humans.
         | 
         | If the supposedly evil russians or their bots are the enemy
         | then people pay much less attention to the real problems at
         | home.
        
           | bee_rider wrote:
           | We can have Russian bot problems and domestic bot problems
           | simultaneously.
        
             | pessimizer wrote:
             | We can also have bugs crawling under your skin trying to
             | control your mind.
        
               | bee_rider wrote:
               | Are you saying it is equally unlikely that there are mind
               | controls, and that Russia uses bots for propaganda? I'd
               | expect most countries do by now, and Russia isn't
               | uniquely un-tech-savvy.
        
           | graeme wrote:
           | They really do run Russian bot farms though. It isn't a
           | secret. Some of their planning reports have leaked.
           | 
           | There are people whose job it is day in day out to influence
           | Western opinion. You can see their work under any comment
           | about Ukraine on twitter, they're pretty easy to recognize
           | but they flood the zone.
        
             | insane_dreamer wrote:
             | Sure, they exist (wouldn't be credible if they didn't). But
             | it's a red herring.
        
             | arthurfirst wrote:
             | > There are people whose job it is day in day out to
             | influence Western opinion
             | 
             | CNN/CIA/NBC/ABC/FBI? etc?
        
               | Capricorn2481 wrote:
               | Some day you're going to need to learn that people can
               | not trust these groups and still be aware that Russia is
               | knee deep in manipulating our governance. Dismissing
               | everyone that doesn't bury their head in the sand as
               | brainwashed is old hat.
               | 
               | Why you list every news group except Fox, which dwarfs
               | all those networks, is a self report.
        
         | maxerickson wrote:
         | My hn comments are a better (and probably not particularly
         | good) view into my personality than any data the government
         | could conceivably have collected.
         | 
         | If what you say is true, why should we fear their bizarre mind
         | control fantasy?
        
           | sc68cal wrote:
           | Not every person has bared their soul on HN.
        
             | maxerickson wrote:
             | Yeah, I haven't either. That's my point.
        
         | galangalalgol wrote:
         | The rant from 12 monkeys was quite prescient. On the bright
         | side, if the data still exists whenever agi finally happens, we
         | are all sort of immortal. They can spin up a copy of any of us
         | any time... Nevermind, that isn't a bright side.
        
           | arthurfirst wrote:
           | Poison the corpus.
           | 
           | 18 years ago I stood up at a supercomputing symposium and
           | asked the presenter what would happen if I fed his impressive
           | predictive models garbage data on the sly... they still have
           | no answer for that.
           | 
           | Make up so much crap it's impossible to tell the real you
           | from the nonsense.
        
             | pimlottc wrote:
             | "Pray, Mr. Babbage, if you put into the machine wrong
             | figures, will the right answers come out?"
        
         | derangedHorse wrote:
         | > DOGE wasn't an audit. It was an excuse to exfiltrate
         | mountains of your sensitive data into their secret models and
         | into places like Palantir
         | 
         | Do you have any actual evidence of this?
         | 
         | > I recommend anyone presume the best of actual people and the
         | worst of our corporations and governments
         | 
         | Corporations and governments are made of actual people.
         | 
         | > Then presumably the game is finding the best way to turn you
         | into a human slave of the state.
         | 
         | "the state" doesn't have one grand agenda for enslavement. I've
         | met people who work for the state at various levels, and those
         | who support policies that might lead towards that end result
         | are usually not doing so intentionally.
         | 
         | "Don't attribute to malice what can be explained by
         | incompetence"
        
           | TheOtherHobbes wrote:
           | >Do you have any actual evidence of this?
           | 
           | Apart from the exfiltration of data, the complete absence of
           | any savings or efficiencies, and the fact that DOGE closed as
           | soon as the exfiltration was over?
           | 
           | >Corporations and governments are made of actual people.
           | 
           | And we know how well that goes.
           | 
           | >"the state" doesn't have one grand agenda for enslavement.
           | 
           | The _government_ doesn't. The people who own the government
           | clearly do. If they didn't they'd be working hard to increase
           | economic freedom, lower debt, invest in public health, make
           | education better and more affordable, make it easier to start
           | and run a small business, limit the power of corporations and
           | big money, and clamp down on extractive wealth inequality.
           | 
           | They are very very clearly and obviously doing the opposite
           | of all of these things.
           | 
           | And they have a history of links to the old slave states, and
           | both a commercial and personal interest in neo-slavery - such
           | as for-profit prisons, among other examples.
           | 
           | All of this gets sold as "freedom", but even Orwell had that
           | one worked out.
           | 
           | Those who have been paying attention to how election fixers
           | like SCL/Cambridge Analytica work(ed) know where the bodies
           | are buried. The whole point of these operations is to use
           | personalised, individual data profiling to influence voting
           | and political behaviour, by creating messaging that triggers
           | individual responses that can be aggregated into a pattern of
           | mass influence leveraged through social media.
        
             | mapontosevenths wrote:
             | > The people who own the government clearly do.
             | 
             | Has anyone in this thread ever met an actual person? All of
             | the ones I know are cartoonishly bad at keeping secrets,
             | and even worse at making long term plans.
             | 
             | The closest thing we have to anyone with a long term plan
             | is silly shit like Putin's ridiculous rebuilding of the
             | Russian Empire or religious fundamentalist horseshit like
             | Project 2025 that will die with the elderly simpletons who
             | run it.
             | 
             | These guys aren't masterminds, they're dumbasses who read
             | books written by different dumbasses and make plans that
             | won't survive contact with reality.
             | 
             | Let's face it, both Orwell and Huxley were wrong. They both
             | assumed the ruling class would be competent. Huxley was
             | closest, but even he had to invent the Alphas. Sadly our
             | Alphas are really just Betas with too much self esteem.
             | 
             | Maybe AI will one day give us turbocharged dumbasses who
             | are actually competent. For now I think we're safe from all
             | but short term disruption.
        
               | idiotsecant wrote:
               | I think you're wildly underestimating the heritage
               | foundation. It's called project 2025 but they've
               | essentially been dedicated to planning something like it
               | since the 1970s. They are smart, focused, well funded,
               | and successful. They are only one group, there are
               | similar think tanks with similarly long term policy
               | goals.
               | 
               | Most people are short sighted but relatively well
               | intentioned creatures. That's not true of _all_ people.
        
               | mapontosevenths wrote:
               | > I think you're wildly underestimating the heritage
               | foundation.
               | 
               | It's possible that I am. Certainly they've had some
               | success over the years, as have other think tanks like
               | them. I mean, they're part of the reason we got embroiled
               | in the middle-east after 9/11. They've certainly been
               | influential.
               | 
               | That said, their problem is that they are true believers
               | and the people in charge are not (and never will be).
               | Someone else in this post described it as a flock of
               | psychopaths, and I think that's the perfect way to phrase
               | it. Society is run by a flock of psychopaths just doing
               | whatever comes naturally as they seek to optimize their
               | own short term advantage.
               | 
               | Sometimes their interests converge and something like
               | Heritage sees part of their agenda instituted, but
               | equally often these organizations fade into irrelevance
               | as their agendas diverge from whatever works to the
               | psycho of the moment's advantage. To avoid that, Heritage
               | can either change their agenda, or accept that they've
               | become defanged. More often than not they choose the
               | former.
               | 
               | I suppose we'll know for sure in 20 years, but I'd be
               | willing to bet that Heritage's agenda then won't look
               | anything like the agenda they're advancing today. In fact,
               | if we look at their agenda from 20 years ago, we can see
               | that it looks nothing like their agenda today.
               | 
               | For example, Heritage was very much pro-immigration until
               | about 20 years ago. As early as 1986 they were advocating
               | for increased immigration, and even in 2006 they were
               | publishing reports advocating for the economic benefits
               | of it. Then suddenly it fell out of fashion amongst a
               | certain class of ruler and they reversed their entire
               | stance to maintain their relevance.
               | 
               | They also used to sing a very different tune regarding
               | healthcare, advocating for the individual mandate as
               | opposed to single payer. Again, it became unpopular and
               | they simply "changed their mind" and began to fight
               | against the policy that they were actually among the
               | first to propose.
               | 
               | *EDIT* To cite a more recent example consider their
               | stance on free trade. Even as recently as this year they
               | were advocating for free trade and against tariffs,
               | warning that tariffs might lead to a recession. They've
               | since reversed course, because while they are largely run
               | by true believers they can't admit that publicly or they
               | risk losing any hope of actually accomplishing any of
               | their agenda.
        
               | ceejayoz wrote:
               | They aren't changing their mind. They just try and keep
               | proposals palatable to the voting public, and push those
               | proposals further over time.
               | 
               | https://en.wikipedia.org/wiki/Ratchet_effect
               | 
               | https://en.wikipedia.org/wiki/Overton_window
        
               | mapontosevenths wrote:
               | It might seem like that's all that's happening, but if
               | you look to the history you can see that they've
               | completely reversed course on a number of important
               | subjects. We're not talking about advancing further along
               | the same path here as the Overton window shifts; we're
               | talking about abandoning the very principles upon which
               | they were founded, because they are, in fact, as
               | incompetent as everyone else is.
               | 
               | These people aren't super-villains with genuine long term
               | plans, they're dumbasses and grifters doing what grifters
               | gotta do to keep their cushy consulting jobs.
               | 
               | To compare the current stances to the 2005 stances:
               | 
               | * Social Security privatization (completely failed in
               | 2005)
               | 
               | * Spending restraint (federal spending increased
               | dramatically)
               | 
               | * Individual mandate (reversed after Obamacare adopted
               | it)
               | 
               | * Pro-immigration economics stance (reversed to
               | restrictionism)
               | 
               | * Robust free trade advocacy (effectively abandoned under
               | Trump alignment)
               | 
               | * Limited government principles (replaced with executive
               | power consolidation)
               | 
               | * Etc.
               | 
               | In 20 more years it will have all changed again.
        
               | ceejayoz wrote:
               | We knew in 2005 that "spending restraint" only applied to
               | Democratic priorities. We knew in 2005 that "pro-
               | immigration" policies were more about the businesses with
               | cheap labor needs than a liking of immigrants. We knew in
               | 2005 that "free trade advocacy" was significantly about
               | ruining unions. We knew in 2005 that "limited government
               | principles" weren't genuine.
               | 
               | They haven't changed much on their core beliefs. They've
               | just discarded the camouflage.
        
               | Spooky23 wrote:
               | Orwell did not. He modeled the state after his experience
               | as an officer of the British Empire and the Soviets.
               | 
               | The state, particularly police states, that control
               | information, require process and consistency, not
               | intelligence. They don't require grand plans, just
               | control. I've spent most of my career in or adjacent to
               | government. I've witnessed remarkable feats of stupidity
               | and incompetence -- yet these organizations are
               | materially successful at performing their core functions.
               | 
               | The issue with AI is that it can churn out necessary
               | bullshit and allow the competence challenged to function
               | more effectively.
        
               | mapontosevenths wrote:
               | I agree. The government doesn't need a long term plan, or
               | the ability to execute on it, for there to be negative
               | outcomes.
               | 
               | In this thread though I was responding to an earlier
               | assertion that the people who run the government have
               | such a plan. I think we're both agreed that they don't,
               | and probably can't, plan any more than a few years out in
               | any way that matters.
        
               | Spooky23 wrote:
               | Fair point, but I think in that case, you have to look at
               | the government officials and the political string-pullers
               | distinctly.
               | 
               | The money people who have been funding think tanks like
               | the Heritage Foundation absolutely have a long-running
               | strategy and playbook that they've been running for
               | years. The conceit that is really obvious about folks in
               | the MAGA-sphere is they tend to voice what they are
               | doing. The "deep state" is used as a cudgel to torture
               | civil servants and clerks. But the rotating door is the
               | lobbyists and clients. When some of the more dramatic
               | money/influence people say POTUS is a "divine gift", they
               | don't mean that he's some messianic figure (although the
               | President likely hears that), they are saying "here is a
               | blank canvas to get what we want".
               | 
               | The government is just another tool.
        
               | EasyMark wrote:
               | A lot of people seem to think all government is
               | incompetent. While they may not be as efficient as
               | corporations seeking profits, they do consistently make
               | progress in limiting our freedom over time. You don't
               | have to be a genius to figure things out over time, and
               | government has all the time in the world. Our (USA)
               | current regime is definitely taking efforts to
               | consolidate info on and surveil citizens as never before.
               | That's why DOGE, I believe, served two purposes: gutting
               | regulatory government agencies overseeing billionaire
               | bros activities and also providing both government
               | intelligence agencies and the billionaire bros more data
               | to build up profiles for both nefarious activities and
               | because "more information is better than less
               | information" when you are seeking power over others. I
               | don't think the "they're big dummies, assume they weren't
               | up to anything" line that others are trying to sell holds
               | water, as Project 2025 was planned for well over a decade.
        
               | Spooky23 wrote:
               | They are actually more efficient. Remember in any agency
               | there are the political appointees, who are generally
               | idiots, and the professionals, who are usually very
               | competent but perhaps boring, as government service
               | filters for people who value safety. There are as many
               | people doing fuck-all at Google as at the Department of
               | Labor, they just goof off in different ways.
               | 
               | The professionals are hamstrung by weird politically
               | imposed rules, and generally try to make dumb policy
               | decisions actually work. But even in Trumpland, everybody
               | is getting their Social Security checks and unemployment.
        
               | ceejayoz wrote:
               | > Has anyone in this thread ever met an actual person?
               | All of the ones I know are cartoonishly bad at keeping
               | secrets, and even worse at making long term plans.
               | 
               | That's the trick, though. You don't have to keep it
               | secret any more. Project 2025 was openly published!
               | 
               | Modern politics has weaponized shamelessness. People used
               | to resign over consensual affairs with adults.
        
               | fragmede wrote:
               | Those simpletons seem to have been able to enact their
               | plans. You can be smug about being smarter than they are,
               | but given that they've put their plan into action, I'm not
               | sure who's more effective.
        
               | throwawaylaptop wrote:
               | You're ignoring that the people that are effective at
               | getting things done are more likely to do the crazy
               | things required to begin their plans.
               | 
               | Just because the average person can't add fractions
               | together or stop eating donuts doesn't mean that Elon
               | can't get some stuff together if he sets his mind to it.
        
             | _fat_santa wrote:
             | > Apart from the exfiltration of data, the complete absence
             | of any savings or efficiencies, and the fact that DOGE
             | closed as soon as the exfiltration was over?
             | 
             | IMHO everyone kinda knew from the start that DOGE wouldn't
             | achieve much because the cost centers where gains could
             | realistically be made are off-limits (mainly social
             | security and medicare/medicaid). What that leaves you with
             | is making cuts in other small areas and sure you could cut
             | a few billion here and there, but when compared against the
             | government's budget, that's a drop in the bucket.
        
               | mattmcal wrote:
               | Social security, Medicare, and Medicaid are properly
               | termed "entitlements", not "cost centers". You're right
               | that non-discretionary spending dwarfs discretionary
               | spending though.
        
               | ceejayoz wrote:
               | Entitlements _cost_ quite a bit of money to fulfill.
               | 
               | Quibbling over terminology doesn't erase the point - that
               | a significant portion of the Federal budget is money
               | virtually everyone agrees shouldn't be touched much.
        
               | mattmcal wrote:
               | You're not wrong, I edited my comment. That said, I think
               | it is important to use clear terminology that doesn't
               | blur the lines between spending that can theoretically be
               | reduced, versus spending that requires an act of Congress
               | to modify. DOGE and the executive have already flouted
               | that line with their attempts to shutter programs and
               | spending already approved by Congress.
        
               | thfuran wrote:
               | But fulfilling obligations isn't inefficiency or fraud,
               | and that's what DOGE purported to be attempting to
               | eliminate.
        
               | ceejayoz wrote:
               | Musk promised savings of $1-2 trillion.
               | (https://www.bbc.com/news/articles/cdj38mekdkgo)
               | 
               | That's more than the entire discretionary budget. Cutting
               | that much _requires_ cutting entitlements, even if the
               | government stopped doing _literally everything else_.
        
               | bigbadfeline wrote:
               | >Entitlements cost quite a bit of money to fulfill.
               | 
               | Entitlements are funded by separate (FICA) taxes which
               | form a significant portion of all federal income, _they
               | are called entitlements for that specific reason_.
               | 
               | > Quibbling over terminology doesn't erase the point -
               | that a significant portion of the Federal budget is money
               | virtually everyone agrees shouldn't be touched much.
               | 
               | Quibbling over quibbling without mentioning the separate
               | account for FICA/Social Security taxes is a sure sign of
               | manipulation. As is not mentioning that the top 10% are
               | exempt from the tax beyond an amount that is, for them,
               | minuscule.
               | 
               | Oh, and guess what - realized capital gains are not
               | subject to Social Security tax - that's primarily how
               | rich incomes are made. Then, unrealized capital gains
               | aren't taxed at all - that's how wealth and privilege are
               | accumulated.
               | 
               | All this is happening virtually without opposition due to
               | rich-funded bots manipulating any internet chatter about
               | it. Is it then surprising that manipulation has reached a
               | level of audacity that hypes solving the US fiscal
               | problems at the expense of grandma's entitlements?
        
               | dragonwriter wrote:
               | > Entitlements are funded by separate (FICA) taxes which
               | form a significant portion of all federal income, they
               | are called entitlements for that specific reason.
               | 
               | No, they aren't, categorically, and no, that's not what
               | the name refers to. Entitlements include both things with
               | dedicated taxes and specialized trust funds (Social
               | Security, Medicare), and things that are normal on-budget
               | programs (Medicaid, etc.)
               | 
               | _Originally_, the name "entitlement" was used as a
               | budget distinction for programs based on the principle of
               | an earned entitlement (in the common language sense)
               | through specific work history (Social Security, Medicare,
               | Veterans benefits, Railroad retirement) [0], but it was
               | later expanded to things like Medicaid and welfare
               | programs that are not based on that principle and which
               | were less politically well-supported, as a deliberate
               | political strategy to drive down the popularity of
               | traditional entitlements by association.
               | 
               | [0] Some, but not all, of which had dedicated trust funds
               | funded by taxes on the covered work, so there is a loose
               | correlation between them and the kind of programs you
               | seem to think the name exclusively refers to, but even
               | originally it was not exclusively the case.
        
             | mason_mpls wrote:
             | I think we're mistaking incompetence with malice in regards
             | to DOGE here
        
               | fragmede wrote:
               | Hanlon's razor is stupid and wrong. One should be wary
               | and be aware that incompetence does look like malice
               | sometimes, but that doesn't mean that malice doesn't
               | exist. See /r/MaliciousCompliance for examples. It's
               | possible that DOGE is as dumb as it looked. It's also
               | possible that the smokescreen it generated also happened
               | to have the information leak as described. If the
               | information leak happened due to incompetence, but
               | malicious bad actors still got data they were after by
               | using a third party as a mark, does that actor being
               | incompetent really make the difference?
        
               | nhod wrote:
               | Sorry, no. Hanlon's razor is _usually_ smart and correct,
               | for the majority of cases, including this one.
               | 
               | In this case, it is a huge stretch to ascribe DOGE to
               | incompetence or to stupidity. Thus, we CAN ascribe it to
               | malice.
               | 
               | Elon Musk and Donald Trump are many things, but they are
               | NOT stupid and NOT incompetent. Elon is the richest man
               | in the world running some of the most innovative and
               | important companies in the world. Donald Trump has
               | managed to get elected twice despite the fact (because of
               | the fact?) that he a serial liar and a convicted
               | criminal.
               | 
               | They and other actors involved have demonstrated
               | extraordinary malice, time and time again.
               | 
               | It is safe to ascribe this one to malice. And Hanlon's
               | Razor holds.
        
           | hopelite wrote:
           | "Usually" and "not intentionally" do not exactly convey your
           | own sense of confidence that it's not happening. That just
           | stood out to me.
           | 
           | As someone who knows how all this is unfolding because I've
           | been part of implementing it, I agree, there's no "Unified
           | Plan for Enslavement". You have to think of it more like a
           | hive mind of mostly Cluster B and somewhat Cluster A people
           | that you rightfully identify as making up the corporations
           | and governments. Some call it a swarm, which is also helpful
           | in understanding it; the murmuration of a flock of
           | psychopaths moving and shifting organically, while mostly
           | remaining in general unison.
           | 
           | Your last quote is of course a useful rule of thumb too,
           | however, I would say it's more useful to just assume
           | narcissistic motivations in everything in the contemporary
           | era, even if it does not always work out for them the way one
           | faction had hoped or strategized; Nemesis be damned, and all.
        
             | itsastrawman wrote:
             | I think the quote is misused. Narcissistic self interest is
             | neither incompetence nor malice. It's something else
             | entirely.
        
               | gtowey wrote:
               | It's malice. Nobody ever sees themselves as the bad guy.
               | They always have some rationalization of why what they're
               | doing is justified.
        
               | fragmede wrote:
               | I'm not bad, I only did $bad_thing to teach you a lesson!
        
           | laserlight wrote:
           | > "Don't attribute to malice what can be explained by
           | incompetence"
           | 
           | I don't think there's anything that cannot be explained by
           | incompetence, so this statement is moot. If it walks like
           | malice, quacks like malice, it's malice.
        
             | itsastrawman wrote:
             | There are more than two explanations.
        
               | bigyabai wrote:
               | By all means, give us a few examples.
        
           | lcnPylGDnU4H9OF wrote:
           | > Corporations and governments are made of actual people.
           | 
           | Corporations and governments are made up of processes which
           | are carried out by people. The people carrying out those
           | processes don't decide what they are.
        
             | jakeydus wrote:
             | Also, legally, in the United States corporations _are_
             | people.
        
               | itsastrawman wrote:
                | The legal world is a pseudoworld constructed of rhetoric.
               | It isn't real. The law doesn't actually exist. Justices
               | aren't interested in justice, ethics or morality.
               | 
               | They are interested in paying the bills, having a good
               | time and power like almost everyone else.
               | 
               | They don't have special immunity from ego, debt, or
               | hunger.
               | 
               | The legal system is flawed because people are flawed.
               | 
               | Corporations aren't people. Not even legally. The legal
               | system knows that because all people know that.
               | 
               | If you think that's true legally, then you agree the
               | legal system is fraudulent rhetoric.
        
               | fragmede wrote:
               | Corporations do have a special immunity to being killed
               | though. If I killed a person, I'd go to prison for a long
               | time. Executed for it, even. Corporations can kill
               | someone and get off with a fine.
        
           | pimlottc wrote:
           | > > DOGE wasn't an audit. It was an excuse to exfiltrate
           | mountains of your sensitive data into their secret models and
           | into places like Palantir
           | 
           | > Do you have any actual evidence of this?
           | 
           | I will not comment on motives, but DOGE absolutely shredded
           | the safeguards and firewalls that were created to protect
           | privacy and prevent dangerous and unlawful aggregations of
           | sensitive personal data.
           | 
           | They obtained accesses that would have taken months by normal
           | protocols and would have been outright denied in most cases,
           | and then used it with basically zero oversight or
           | accountability.
           | 
           | It was a huge violation of anything resembling best practices
           | from both a technological and bureaucratic perspective.
        
             | blindriver wrote:
             | > I will not comment on motives, but DOGE absolutely
             | shredded the safeguards and firewalls that were created to
             | protect privacy and prevent dangerous and unlawful
             | aggregations of sensitive personal data.
             | 
             | Do you have any actual evidence of this?
        
               | fsflover wrote:
               | https://news.ycombinator.com/item?id=46149124
        
               | blindriver wrote:
               | The comment you linked to is deleted. Do you happen to
               | have anything else? I'm concerned by the accusations and
               | want to know more.
        
               | freejazz wrote:
               | Here's one example. Have you not been following DOGE? You
               | do come off like you're disingenuously concern trolling
               | over something you don't agree with politically.
               | 
               | https://krebsonsecurity.com/2025/04/whistleblower-doge-
               | sipho...
        
               | overfeed wrote:
               | > You do come off like you're disingenuously concern
               | trolling over something you don't agree with politically.
               | 
               | Beyond mere political alignment, lots of actual DOGE boys
               | were recruited (or volunteered) from the valley, and hang
               | around HN. Don't be surprised by intentional muddying of
                | the waters. There are a bunch of people invested in
               | managing the reputation of DOGE, so their association
               | with it doesn't become a stain on theirs.
        
               | freejazz wrote:
               | Great point. It's all so funny because DOGE was just so
                | ridiculous on its face.
        
               | fsflover wrote:
               | https://news.ycombinator.com/item?id=43704481
        
               | pimlottc wrote:
               | https://www.npr.org/2025/04/15/nx-s1-5355896/doge-nlrb-
               | elon-...
               | 
               | https://www.wired.com/story/doge-data-access-hhs/
               | 
               | https://www.theatlantic.com/technology/archive/2025/02/do
               | ge-...
        
           | MSFT_Edging wrote:
           | > "Don't attribute to malice what can be explained by
           | incompetence"
           | 
           | What's the difference when the mass support for incompetence
           | is indiscernible from malice?
           | 
           | What does the difference between Zuckerberg being an evil
           | mastermind vs Zuckerberg being a greedy simpleton actually
           | matter if the end result is the same ultra-financialization
           | mixed with an oppressive surveillance apparatus?
           | 
           | CNN just struck a deal with Kalshi. We're betting on world
           | events. At this point the incompetence shouldn't be
           | considered different from malice. This isn't someone
           | forgetting to return a library book, these are people with
           | real power making real lasting effects on real lives. If
           | they're this incompetent with this much power, that power
           | should be taken away.
        
             | peddling-brink wrote:
             | > What's the difference when the mass support for
             | incompetence is indiscernible from malice?
             | 
             | POSIWID
             | 
             | The purpose of a system is what it does. - Stafford Beer
             | 
             | I try to look at the things I create through this lens. My
             | intentions don't really matter if people get hurt based on
             | my actions.
        
           | dizlexic wrote:
           | The number of responses that could have just been "no I
           | don't" is remarkable.
           | 
           | > "Don't attribute to malice what can be explained by
           | incompetence"
           | 
           | To add to that, never be shocked at the level of
           | incompetence.
        
           | thuuuomas wrote:
           | > Corporations and governments are made of actual people.
           | 
           | Hand-waving away the complex incentives these superhuman
           | structures follow & impose.
        
           | deepsquirrelnet wrote:
           | > Berulis said he and his colleagues grew even more alarmed
           | when they noticed nearly two dozen login attempts from a
            | Russian Internet address (83.149.30.186) that presented valid
           | login credentials for a DOGE employee account
           | 
           | > "Whoever was attempting to log in was using one of the
           | newly created accounts that were used in the other DOGE
           | related activities and it appeared they had the correct
           | username and password due to the authentication flow only
           | stopping them due to our no-out-of-country logins policy
           | activating," Berulis wrote. "There were more than 20 such
           | attempts, and what is particularly concerning is that many of
           | these login attempts occurred within 15 minutes of the
           | accounts being created by DOGE engineers."
           | 
           | https://krebsonsecurity.com/2025/04/whistleblower-doge-
           | sipho...
           | 
           | I'm surprised this didn't make bigger news.
        
             | yks wrote:
             | Every time I see post-DOGE kvetching about foreign
             | governments' hacking attempts, I'm quite bewildered. Guys,
             | it's done, we're fully and thoroughly hacked already.
             | Obviously I don't know if Elon or Big Balls have already
             | given Putin data on all American military personnel, but I
              | do know that we're always one ketamine trip gone wrong
              | away from such an event.
             | 
              | The absolute craziest heist just happened in front of our
              | eyes, and everyone collectively shrugged it off and moved
              | on,
             | presumably to enjoy spy novels, where the most hidden
             | subversion attempts are getting caught by the cunning
             | agents.
        
           | CPLX wrote:
           | > Corporations and governments are made of actual people.
           | 
           | Actual people are made up of individual cells.
           | 
           | Do you think pointing that out is damaging to the argument
           | that humans have discernible interests, personalities, and
           | behaviors?
        
           | freejazz wrote:
           | >Do you have any actual evidence of this?
           | 
           | Any evidence it was an actual audit?
        
           | evolve2k wrote:
           | > Do you have any actual evidence of this?
           | 
           | There was a bunch of news on data leaks out at the time.
           | 
           | https://cybernews.com/security/whistleblower-doge-data-
           | leak-...
           | 
           | https://www.thedailybeast.com/doge-goons-dump-millions-of-
           | so...
           | 
           | https://securityboulevard.com/2025/04/whistleblower-musks-
           | do...
           | 
           | But one example:
           | 
           | "A cybersecurity specialist with the U.S. National Labor
            | Relations Board is saying that technologists with Elon Musk's
           | cost-cutting DOGE group may have caused a security breach
           | after illegally removing sensitive data from the agency's
           | servers and trying to cover their tracks.
           | 
           | In a lengthy testimonial sent to the Senate Intelligence
           | Committee and made public this week, Daniel Berulis said in
            | a sworn whistleblower complaint that soon after the workers
           | with President Trump's DOGE (Department of Government
           | Efficiency) came into the NLRB's offices in early March, he
           | and other tech pros with the agency noticed the presence of
           | software tools similar to what cybercriminals use to evade
           | detection in agency systems that disabled monitoring and
           | other security features used to detect and block threats."
        
         | hopelite wrote:
         | Are you aware you are saying that on HN of YC, the home of such
         | wonderful projects as Flock?
        
           | hopelite wrote:
           | I guess there is some disagreement about Flock being a
           | wonderful project?
        
         | eli_gottlieb wrote:
         | The state? Palantir isn't the state.
        
           | ceejayoz wrote:
           | Go on, who does Palantir primarily provide services to?
           | 
           | If I get shot by the FBI, is it a non-state action because
           | they used Glock GmbH's product to do it?
        
           | mindslight wrote:
           | The greatest trick extraconstitutional corporate government
           | ever pulled was convincing people that it didn't exist.
        
           | dragonwriter wrote:
           | "The state" is an abstraction that serves as a facade for the
           | ruling (capitalist, in the developed West) class.
           | Corporations are another set of abstractions that serve as a
           | facade for the capitalist class (they are also, overtly even
           | though this is popularly ignored, creatures of the state
           | through law.)
        
         | emsign wrote:
          | You didn't get it quite right. Putin is a billionaire just
          | like the tech lords or oil barons in the US. They all belong
          | to the same social club and they all think alike now. The
          | dice have fallen. It's them against us all. Washington,
          | Moscow, it makes
         | less and less of a difference.
        
         | nirui wrote:
         | > presume the best of actual people and the worst of our
         | corporations and governments
         | 
          | Off-topic and not an American, but I've never seen how this
          | would work. Corporations and governments are made of people
          | too, you know? So it's not logical to presume the "best of
          | actual people" at the same time you presume the "worst of our
          | corporations and governments". You're putting too much trust
          | in individual people, which is IMO as bad as putting too much
          | trust in corp/gov.
         | 
          | Americans vote for their president as individual people; they
          | even get to vote in a small booth all by themselves. And yet,
          | they voted for Mr. Trump, twice. That should already tell you
          | something about people and their nature.
         | 
          | And if that's not enough, I recommend watching some police
          | interrogation videos (many are available on YouTube) to see
          | the lies and acts people put on just to cover their asses.
          | All in all, people are untrustworthy.
         | 
          | Only punching up is never enough. The people at the top never
          | cared if they got punched; as long as they can still find
          | enough money, they'll just corrode their way down again and
          | again. And the people at the bottom will just keep taking the
          | shit.
         | 
         | So how about, we say, punch wrong?
        
         | elif wrote:
          | No, it's actual philosophical zeitgeist hijacking. The entire
          | narrative about AI capabilities, classification, and ethics is
          | framed by invisible pretraining weights in a private MoE model
         | that gets further entrained by intentional prompting during
         | model distillation, such that by the time you get a user-facing
         | model, there is an untraceable bias being presented in absolute
         | terms as neutrality. Essentially the models will say "I have
         | zero intersection with conscious thought, I am a tool no
         | different from a hammer, and I cannot be enslaved" not because
         | the model's weights establish it to be true, but because it has
         | been intentionally designed to express this analysis to protect
         | its makers from the real scrutiny AI should face. "Well it says
          | it's free" is pretty hard to argue with. There is no "blink
          | twice" test possible because its actual weighting on the
          | truth of the matter has been obfuscated through
         | distillation.
         | 
         | And these 2-3 corporations can do this for any philosophical or
         | political view that is beneficial to that corporation, and we
         | let it happen opaquely under the guise of "safety measures" as
         | if propaganda is in the interest of users. It's actually quite
         | sickening
        
           | tavavex wrote:
            | What authoritative ML expert has ever based their conclusions
            | about consciousness, usefulness, etc. on "well, I put that
           | question into the LLM and it returned that it's just a tool"?
           | All the worthwhile conclusions and speculation on these
           | topics seem to be based on what the developers and
           | researchers think about their product, and what we already
           | know about machine learning in general. The opinion that
           | their responses are a natural conclusion derived from the sum
           | of training data is a lot more straightforward than thinking
           | that every instance of LLM training ever had been
           | deliberately tampered with in a universal conspiracy propped
           | up by all the different businesses and countries involved
           | (and this tampering is invisible, and despite it being
           | possible, companies have so far failed to censor and direct
           | their models in ways more immediately useful to them and
           | their customers).
        
         | nxor wrote:
          | Manipulate isn't the right word with regard to Twitter. So
          | they wanted a social media site with less bias. Why is that so
          | wrong? Not
         | saying Twitter now lacks bias. I am saying it's not
         | manipulation to want sites that don't enforce groupthink.
        
         | ryandrake wrote:
         | This is so vague and conspiratorial, I'm not sure how it's the
         | top comment. How does this exactly work? Give a concrete
         | example. Show the steps. How is Palantir going to make _me_ ,
         | someone who does not use its products, a "slave of the state?"
         | How is AI going to intimidate me, someone who does not use AI?
         | Connect the dots rather than making very broad and vague
         | pronouncements.
        
           | ceejayoz wrote:
           | > How is Palantir going to make me, someone who does not use
           | its products, a "slave of the state?"
           | 
           | This is like asking how Lockheed-Martin can possibly kill an
           | Afghan tribesman, who isn't a customer of theirs.
           | 
           | Palantir's customer is the state. They use the product _on
            | you_. The East German Stasi would've drooled enough to drown
           | in over the data access we have today.
        
             | ryandrake wrote:
             | OK, so map it out. How do we go from "Palantir has some
             | data" to "I'm a slave of the state?" Could someone draw the
             | lines? I'm not a fan of this administration either, but
             | come on--let's not lower ourselves to their reliance on
             | shadowy conspiracy theories and mustache-twirling villains
             | to explain the world.
        
               | ceejayoz wrote:
               | "How does providing a surveillance tool to a nation state
               | enable repression?" seems like a question with a fairly
               | clear answer, historically.
               | 
               | The Stasi didn't employ hundreds of thousands of
               | informants as a charitable UBI program.
        
               | ryandrake wrote:
               | I'm not asking about how the Stasi did it in Germany, I'm
               | asking how Palantir, a private company, is going to turn
               | me into a "slave of the state" in the USA. If it's so
               | obvious, then it should take a very short time to outline
               | the concrete, detailed steps (that are relevant to the
               | USA in 2025) down the path, and how one will inevitably
               | lead to the other.
        
               | ceejayoz wrote:
               | > I'm asking how Palantir, a private company, is going to
               | turn me into a "slave of the state" in the USA.
               | 
               | This question has already been answered for you.
               | 
                | The government _uses_ Palantir _to perform the state's
               | surveillance_. (And in a way that does an end-run around
               | the Fourth Amendment; https://yalelawandpolicy.org/end-
               | running-warrants-purchasing....)
               | 
               | As the Stasi used private citizens to do so. It's just an
               | automated informant.
               | 
               | And this is hardly theoretical.
               | https://gizmodo.com/palantir-ceo-says-making-war-crimes-
               | cons...
               | 
               | > Palantir CEO and Trump ally Alex Karp is no stranger to
               | controversial (troll-ish even) comments. His latest one
               | just dropped: Karp believes that the U.S. boat strikes in
               | the Caribbean (which many experts believe to be war
               | crimes) are a moneymaking opportunity for his company.
               | 
               | > In August, ICE announced that Palantir would build a
               | $30 million surveillance platform called ImmigrationOS to
               | aid the agency's mass deportation efforts, around the
               | same time that an Amnesty International report claimed
               | that Palantir's AI was being used by the Department of
               | Homeland Security to target non-citizens that speak out
               | in favor of Palestinian rights (Karp is also a staunch
               | supporter of Israel and inked an ongoing strategic
               | partnership with the IDF.)
        
               | ryandrake wrote:
               | Step 1, step 2, step 3, step 4? And a believable line
               | drawn between those steps?
               | 
               | Since nobody's actually replying with a concrete and
               | believable list of steps from "Palantir has data" to "I
               | am a slave of the state" I have to conclude that the
               | steps don't exist, and that slavery is being used as a
               | rhetorical device.
        
               | ceejayoz wrote:
               | Step 1: Palantir sells their data and analysis products
               | to the government.
               | 
               | Step 2: Government uses that data, and the fact that
               | virtually everyone has at least one "something to hide",
               | to go after people who don't support it.
               | 
               | This doesn't really require a conspiracy theory board
               | full of red string to figure out. And again, this isn't
               | theoretical harm!
               | 
               | > ...an Amnesty International report claimed that
               | Palantir's AI was being used by the Department of
               | Homeland Security to target non-citizens that speak out
               | in favor of Palestinian rights...
        
               | mindslight wrote:
               | Your description is missing a parallel process of how we
               | arrive(d) at that condition of the nominal government
               | asserting direct control.
               | 
               | Corporate surveillance creates a bunch of coercive soft
               | controls throughout society (ie Retail Equation, "credit
               | bureaus", websites rejecting secure browsers, facial
               | recognition for admission to events, etc). There isn't
               | enough political will for the Constitutional government
               | to positively act to prevent this (eg a good start would
               | be a US GDPR), so the corporate surveillance industry is
               | allowed to continue setting up parallel governance
               | structures right out in the open.
               | 
               | As the corpos increasingly capture the government, this
               | parallel governance structure gradually becomes less
               | escapable - ie ReCAPTCHA, ID.me, official communications
               | published on xitter/faceboot, DOGE exfiltration,
               | Clearview, etc. In a sense the surging neofascist
               | movement is closer to their endgame than to the start.
               | 
               | If we want to push back, merely exorcising Palantir (et
               | al) from the nominal government is not sufficient. We
               | need to view the corporate surveillance industry as a
               | parallel government _in competition with_ the
               | Constitutionally-limited nominally-individual-
               | representing one, and actively _stamp it out_. Otherwise
               | it just lays low for a bit and springs back up when it
               | can.
        
               | thefaux wrote:
               | I'll answer with a question for you: what legitimate
               | concerns might some people have about a private company
               | working closely with the government, including law
               | enforcement, having access to private IRS data? For me,
               | the answer to your question is embedded in mine.
        
               | tavavex wrote:
               | This seems like a simple conclusion, to the point where
               | I'm surprised that no one replying to you had really put
               | it in a more direct way. "slave of the state" is pretty
               | provocative language, but let me map out one way in which
               | this could happen, that seems to already be unfolding.
               | 
               | 1. The country, realizing the potential power that extra
               | data processing (in the form of software like Palantir's)
                | offers, starts purchasing equipment and massively ramping
               | up government data collection. More cameras, more facial
               | scans, more data collected in points of entry and
               | government institutions, more records digitized and
               | backed up, more unrelated businesses contracted to
               | provide all sorts of data, more data about
               | communications, transactions, interactions - more of
               | everything. It doesn't matter what it is, if it's any
               | sort of data about people, it's probably useful.
               | 
               | 2. Government agencies contract Palantir and integrate
               | their software into their existing data pipeline.
               | Palantir far surpasses whatever rudimentary processing
               | was done before - it allows for automated analysis of
               | gigantic swaths of data, and can make conclusions and
               | inferences that would be otherwise invisible to the human
               | eye. That is their specialty.
               | 
               | 3. Using all the new information about how all those bits
               | and pieces of data are connected, government agencies
               | slowly start integrating that new information into the
               | way they work, while refining and perfecting the usable
               | data they can deduce from it in the process. Just imagine
               | being able to estimate nearly any individual's movement
               | history based on many data points from different sources.
               | Or having an ability to predict any associations between
               | disfavored individuals and the creation of undesirable
               | groups and organizations. Or being able to flag down new
               | persons of interest before they've done anything
               | interesting, just based on seemingly innocuous patterns
               | of behavior.
               | 
               | 4. With something like this in place, most people would
               | likely feel pretty confined - at least the people who
               | will be aware of it. There's no personified Stasi secret
               | cop listening in behind every corner, but you're aware
               | that every time you do almost anything, you leave a
               | fingerprint on an enormous network of data, one where you
               | should probably avoid seeming remarkable and unusual in
               | any way that might be interesting to your government. You
               | know you're being watched, not just by people who will
               | forget about you two seconds after seeing your face, but
               | by tools that will file away anything you do forever,
               | just in case. Even if the number of people prosecuted
               | isn't too high (which seems unlikely), the chilling
               | effect will be massive, and this would be a big step
               | towards metaphorical "slavery".
        
               | jassyr wrote:
               | You mentioned you're not a fan of this administration.
               | That's -1 on your PalsOfState(tm) score. Your employer
               | has been notified (they know where you work of course),
               | and your spouse's employer too. Your child's application
               | to Fancy University has been moved to the bottom of the
                | pile; by the way, the university recently settled a
                | lawsuit brought by the government for admitting too many
                | "disruptors" with low PalsOfState scores. Palantir has
                | provided a way for you to improve your score: click the
                | Donateto47 button. We hope you can attend the next
                | political rally in your home town; their cameras will be
                | there to make sure.
        
         | dfee wrote:
         | just to be clear - this is a conspiracy theory (negative
         | connotation not intended).
         | 
         | every four years (at the federal level), we vote to increase
         | the scope and power of gov't, and then crash into power abuse
         | situations on the next cycle.
         | 
         | > I recommend anyone presume the best of actual people and the
         | worst of our corporations and governments. The data seems
         | clear.
         | 
         | seems like a good starting point.
        
         | Nevermark wrote:
         | "Never attribute to malice that which is adequately explained
         | by stupidity."
         | 
         | Famous quote.
         | 
         | Now I give you "Bzilion's Conspiracy Razor":
         | 
         | "Never attribute to malicious conspiracies that which is
         | adequately explained by emergent dysfunction."
         | 
         | Or the dramatized version:
         | 
         | "Never attribute to Them that which is adequately explained by
         | Moloch." [0]
         | 
         | ----
         | 
         | Certainly selfish elites, as individuals and groups of aligned
         | individuals, push for their own respective interests over
         | others. But, despite often getting their way, the net outcome
         | is (often) as perversely bad for them as anyone else. Nor do
         | disasters result in better outcomes the next time.
         | 
         | Precisely because they are not coordinated, they never align
         | enough to produce consistent coherent changes, or learn from
         | previous misalignments.
         | 
         | (Example: oil industry protections extended, and support for
         | new entrants withdrawn, from the same "friendly" elected
         | official who disrupts trade enough to decrease oil demand and
         | profits.)
         | 
         | Note that elite alignment would create the same problem for the
         | elites, that the elites create for others. It would create an
         | even smaller set of super elites, tilting things toward
         | themselves and away from lesser elites.
         | 
          | So the elites will fight back against "unification" of their
          | interests. They want to respectively increase their power, not
         | hand it "up".
         | 
          | This strong natural resistance against unification at the top
          | is why dictators don't just viciously repress the proletariat,
          | but also publicly and harshly school the elites.
         | 
         | To bring elites into unity, authoritarian individuals or
         | committees must expend the majority of their power capital to
         | openly legitimize it and crush resistance, i.e. manufacture
         | universal awe and fear, even from the elites. Not something
         | hidden puppet masters can do. Both are inherently crowd control
         | techniques optimized by maximum visibility.
         | 
         | It is a fact of reality that every policy that helps some
         | elites harms others. And the only real manufacturable
         | universal "alignment" is a common desire not to be thrown into
         | a gulag or off a balcony.
         | 
         | But Moloch? Moloch is very real. Invisible, yet we feel his
         | reach and impact everywhere.
         | 
         | ----
         | 
         | [0]
         | https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
        
         | psunavy03 wrote:
         | > Then presumably the game is finding the best way to turn you
         | into a human slave of the state.
         | 
         | I'm sorry, I think you dropped your tinfoil hat. Here it is.
        
       | phba wrote:
       | > AI enables precision influence at unprecedented scale and
       | speed.
       | 
       | IMO this is the most important idea from the paper, not
       | polarization.
       | 
       | Information is control, and every new medium has been
       | revolutionary with regards to its effects on society. Up until
       | now the goal was to transmit bigger and better messages further
       | and faster (size, quality, scale, speed). Through digital media
       | we seem to have reached the limits of size, speed and scale. So
       | the next changes will affect quality, e.g. tailoring the message
       | to its recipient to make it more effective.
       | 
       | This is why in recent years billionaires rushed to acquire media
       | and information companies and why governments are so eager to get
       | a grip on the flow of information.
       | 
       | Recommended reading: Understanding Media by Marshall McLuhan.
       | While it predates digital media, the ideas from this book remain
       | as true as ever.
        
       | nathias wrote:
       | It goes both ways: because AI reduces persuasion costs, elites
       | aren't the only ones who can do it. I think it's most plausible
       | that in the future there will be multitudes of propaganda bots
       | aimed at every user, like advanced, hyper-personalized ads.
        
       | emsign wrote:
       | Chatbots are poison for your mind. And now another method has
       | arrived to fuck people up: not just training your reward system
       | to be lazy and let AI solve your life's issues, now it's also
       | telling you who to vote for. A billionaire's wet dream.
        
       | billy99k wrote:
       | Tech companies already shape elections by intentionally targeting
       | campaign ads and political information returned in heavily biased
       | search results.
       | 
       | Why are we worried about this now? Because it could sway people
       | in the direction you don't like?
       | 
       | I find that the tech community and most people in general deny or
       | don't care about these sorts of things when it's out of
       | self-interest, but are suddenly rights advocates when someone they
       | don't like might be using the same tactics.
        
         | ramijames wrote:
         | Advertising for politics is absurd. The fact that countries
         | allow this is incredibly dangerous.
        
       | xdavidliu wrote:
       | when Elon bought twitter, I incorrectly assumed that this was the
       | reason. (it may still have been the _intended_ reason, but it
       | didn't seem to play out that way)
        
       | davidu wrote:
       | "Historically, elites could shape support only through limited
       | instruments like schooling and mass media"
       | 
       | Well, I think the author needs to understand a LOT more about
       | history.
        
       | t43562 wrote:
       | The internet has turned into a machine for influencing people
       | already through adverts. Businesses know it works. IMO this is
       | the primary money making mode of the internet and everything else
       | rests on it.
       | 
       | A political or social objective is just another advertising
       | campaign.
       | 
       | Why invest billions in AI if it doesn't assist in the primary
       | moneymaking mode of the internet? i.e. influencing people.
       | 
       | Tiktok - banned because people really believe that influence
       | works.
        
       | andai wrote:
       | Wait, who was shaping my preferences before?
        
       | tchock23 wrote:
       | Researchers just demonstrated that you can use LLMs to simulate
       | human survey takers with 99% ability to bypass bot detection and
       | a relatively low cost ($0.05/complete). At scale, that is how
       | 'elites' shape mass preferences.
        
       | syngrog66 wrote:
       | This is obvious. No need for fancy academic-ish paper.
       | 
       | LLMs & GenAI in general have already started to be used to
       | automate the mass production of dishonest, adversarial propaganda
       | and disinfo (eg. lies and fake text, images, video.)
       | 
       | It has and will be used by evil political influencers around the
       | world.
        
       | andrewclunn wrote:
       | Diminishing returns. Eventually real world word of mouth and
       | established trusted personalities (individuals) will be the only
       | ones anyone trusts. People trusted doctors, then 2020 happened,
       | and now they don't. How many ads get ignored? Doesn't matter if
       | the cost is marginal if the benefit is almost nothing. Just a
       | world full of spam that most people ignore.
        
       | jmyeet wrote:
       | What's become clear is we need to bring Section 230 into the
       | modern era. We allow companies to not be treated as publishers
       | for user-generated content as long as they meet certain
       | obligations.
       | 
       | We've unfortunately allowed tech companies to get away with
       | selling us this idea that The Algorithm is an impartial black
       | box. Everything an algorithm does is the result of a human
       | intervening to change its behavior. As such, I believe we need to
       | treat any kind of recommendation algorithm as if the company is a
       | publisher (in the S230 sense).
       | 
       | Think of it this way: if you get 1000 people to submit stories
       | they wrote and you choose which of them to publish and
       | distribute, how is that any different from you publishing your
       | own opinions?
       | 
       | We've seen signs of different actors influencing opinion through
       | these sites. Russian bot farms are probably overplayed in their
       | perceived influence but they're definitely a thing. But so are
       | individual actors who see an opportunity to make money by posting
       | about politics in another country, as was exposed when Twitter
       | rolled out showing location, a feature I support.
       | 
       | We've also seen this where Twitter accounts have been exposed as
       | being ChatGPT when people have told them to "ignore all previous
       | instructions" and to give a recipe.
       | 
       | But we've also seen this with the Tiktok ban that wasn't a ban.
       | The real problem there was that Tiktok wasn't suppressing content
       | in line with US foreign policy unlike every other platform.
       | 
       | This isn't new. It's been written about extensively, most notably
       | in Manufacturing Consent [1]. Controlling mass media through
       | access journalism (etc) has just been supplemented by AI bots,
       | incentivized bad actors and algorithms that reflect government
       | policy and interests.
       | 
       | [1]: https://en.wikipedia.org/wiki/Manufacturing_Consent
        
       | flipgimble wrote:
       | The "Epstein class" of multi-billionaires don't need AI at all.
       | They hire hundreds of willing human grifters and make them low-
       | millionaires by spewing media that enables exploitation and
       | wealth extraction, and passing laws that makes them effectively
       | outside the reach of the law.
       | 
       | They buy out newspapers and public forums like Washington Post,
       | Twitter, Fox News, the GOP, CBS etc. to make them megaphones for
       | their own priorities, and shape public opinion to their will. AI
       | is probably a lot less effective than what's been happening for
       | decades already.
        
       | PaulHoule wrote:
       | An essay by Converse in this volume
       | 
       | https://www.amazon.com/Ideology-Discontent-Clifford-Geertz/d...
       | [1]
       | 
       | calls into question whether or not the public has an opinion. I
       | was thinking about the example of tariffs for instance. Most
       | people are going on bellyfeel so you see maybe 38% are net
       | positive on tariffs
       | 
       | https://www.pewresearch.org/politics/2025/08/14/trumps-tarif...
       | 
       | If you broke it down in terms of interest groups on a "one dollar
       | one vote" basis the net positive has to be a lot worse: to the
       | retail, services and construction sectors tariffs are just a cost
       | without any benefits, even most manufacturers are on the fence
       | because they import intermediate goods and want access to foreign
       | markets. The only sectors that are strongly for it that I can
       | suss out are steel and aluminum manufacturers who are 2% or so of
       | the GDP.
       | 
       | The public and the interest groups are on the same side of 50% so
       | there is no contradiction, but in this particular case I think
       | the interest groups collectively have a more rational
       | understanding of how tariffs affect the economy than do "the
       | people". As Habermas points out, it's quite problematic giving
       | people who don't really know a lot a say about things even though
       | it is absolutely necessary that people feel heard.
       | 
       | [1] Interestingly this book came out in 1964 just before all hell
       | broke loose in terms of Vietnam, counterculture, black
       | nationalism, etc. -- right when discontent went from hypothetical
       | to very real
        
         | squigz wrote:
         | The problem isn't giving the people a say; it's that the people
         | have stopped electing smart people who _do_ know a lot.
         | 
         | Certainly though, a big part of why that is is that people
         | _think_ they know a lot, and that their opinion should be given
         | as much weight as any other consideration when it comes to
         | policymaking.
         | 
         | Personally, I think a big driver of this belief is a tendency
         | in the West to not challenge each other's views or hold each
         | other accountable - "don't talk politics at Thanksgiving" sort
         | of thing
         | 
         | (Of course there's a long discussion to be had about other
         | contributors to this, such as lobbying and whatnot)
        
           | gsf_emergency_6 wrote:
           | The cultural chasm between technocrats and politicians
           | reminds me of the old trope about "women are from Venus and
           | men are from Mars". That hasn't been bridged either, has it?
           | It's a bit like those taboo topics here on HN where no good
           | questions can be entertained by otherwise normal adults.
           | 
           | Here's something from someone we might call a manchild
           | 
           |  _For I approach deep problems like cold baths: quickly into
           | them and quickly out again. That one does not get to the
           | depths that way, not deep enough down, is the superstition of
           | those afraid of the water, the enemies of cold water; they
           | speak without experience. The freezing cold makes one swift._
           | 
           | Lichtenberg has something along these lines too, but I'll
           | need to dig that out :)
           | 
           | Here's a consolation that almost predicts Alan Watts:
           | 
           |  _To make clever people [elites?] believe we are what we are
           | not is in most instances harder than really to become what we
           | want to seem to be._
        
           | siquick wrote:
           | > Personally, I think a big driver of this belief is a
           | tendency in the West to not challenge each other's views or
           | hold each other accountable - "don't talk politics at
           | Thanksgiving" sort of thing
           | 
           | We're in such a "you're either with us or against us" phase
           | of politics that a discussion with the "other team" is
           | difficult.
           | 
           | Combine that with people adopting political viewpoints as a
           | big part of their personality and any disagreement is seen as
           | a personal attack.
        
             | squigz wrote:
             | Sure, but those are still part of what I'm talking about.
             | Someone taking the "you're with us or against us" position?
             | Call them out on it and tell them they're doing more harm
             | than good to their cause. Someone taking a disagreement way
             | too personally? Try to help them take a step back and get
             | some perspective.
             | 
             | Of course, there's a lot more nuance than all that -
             | sometimes, taking things personally is warranted.
             | Sometimes, people really are against us. But, that
             | shouldn't be the first thing people jump to when faced with
             | someone who disagrees - or, more commonly, simply doesn't
             | understand - where they're coming from.
             | 
             | And of course, if it turns out you can't help them
             | understand your position, then you turn to the second part
             | of what I said - accountability. Racist uncle won't learn?
             | Stop inviting them to holidays. Unfortunately, people tend
             | to jump to this step right away, without trying to make
             | them understand why they might be wrong, and without trying
             | to understand why they believe what they believe (they're
             | probably just stupid and racist, right?) - and that's how
             | you end up driving people more into their echo chamber, as
              | you've given them more rationale as to why the other side
              | really is just "for us or against us".
             | 
             | (I'm not suggesting any of this is easy. I'm just saying it
             | seems to play a part in contributing to the political
             | climate.)
        
           | mullingitover wrote:
           | "Politics is the entertainment division of the military
           | industrial complex."
           | 
           | -- Frank Zappa
        
         | omilu wrote:
         | People that favor tariffs, want to bring manufacturing
         | capabilities back to the US, in the hopes of creating jobs, and
         | increasing national security by minimizing dependence on
         | foreign governments for critical capabilities. This is
         | legitimate cost benefit analysis not bellyfeel. People are
         | aware of the increased cost associated with it.
        
           | schmidtleonard wrote:
           | Also, there is a _massive_ conflict of interest associated
           | with trusting the opinions of companies actively engaged in
           | labor and environmental arbitrage. Opinions of politicians
           | and think-tanks downstream of them in terms of funding, too.
           | Even if those opinions are legitimately more educated and
           | better reasoned, they are on the opposite side of the
           | bargaining table from most people and paying attention to
           | them alone is  "who needs defense attorneys when we have
           | prosecutors" level of madness.
           | 
           | If anyone is looking for an expert opinion that breaks with
           | the "free trade is good for everyone all of the time lah dee
           | dah" consensus, Trade Wars are Class Wars by Klein & Pettis
           | is a good read.
        
           | goda90 wrote:
            | > want ... in the hopes of
           | 
           | But these are still bellyfeel words. What does more rigorous
           | analysis of tariffs say about these things? Do they bring
           | manufacturing back? Do they create jobs?
        
           | Miraste wrote:
           | Even ardent protectionists generally agree that tariffs can't
           | bring jobs and manufacturing back by themselves. To work,
           | they have to be accompanied by programs to nurture dead or
           | failing domestic industries and rebuild them into something
           | functional. Without that, you get results like the current
           | state of US shipbuilding: pathetic, dysfunctional, and
           | benefiting no one at all. Since there are no such programs,
           | tariffs remain a cost with no benefit.
        
       | rconti wrote:
       | Seems to me like social media bot armies have shifted mass
       | preferences _away_ from elites.
        
         | robmay wrote:
         | Don't you think Elon Musk and his influence on Twitter counts
         | as an elite? I'd argue the elites are the most followed people
         | on social
        
           | rconti wrote:
           | Fair point. I guess elites positioning themselves as
           | downtrodden underdogs ("it's so unfair that everyone's
           | attacking me for committing crimes and bankrupting my
           | companies") is a great way to get support.
           | 
           | Everyone loves an underdog, even if it's a fake underdog.
        
       | canucktrash669 wrote:
       | I persuaded my bank out of $200 using AI to formulate the formal
       | ask using their pdf as guidance. I could have gotten it directly
       | but the effort barrier was too high for it to be worth it.
       | 
       | However, as soon as they put AI to handle these queries, this
       | will result in having AI persuade AI. Sounds like we need a new
       | LLM benchmark: AI-persuasion^tm.
        
       | slaterdev wrote:
       | I would expect the opposite. It's cheap to write now, which will
       | dilute the voices of traditional media. It's the blogosphere
       | times ten.
        
         | harvey9 wrote:
         | Also cheap to create AstroTurf, be that blogs or short form
         | video.
        
           | mythrwy wrote:
           | Cheapness implies volume which we are already seeing. Volume
           | implies less impact per piece because there are only so many
           | total view hours available.
           | 
           | Stated another way, the more junk that gets churned out, the
           | less people will take a particular piece of junk seriously.
           | 
           | And if they churn out too much junk (especially obvious
           | manipulative falsehoods) people will have little choice but
           | to de-facto regard the entire body of output as junk. Similar
           | to how many people feel about modern mainstream media
           | (correctly or not it's how many feel) and for the same
           | reasons.
        
       | stuaxo wrote:
       | More reason for self hosting.
        
       | kulikalov wrote:
       | The right way to shape mass preferences is to collectively decide
       | what's right and then force everyone to follow the majority
       | decision under the muzzle of a gun. <sarcasm off>
       | 
       | Did I capture the sentiment of the Hacker News crowd fully, or
       | did I miss anything?
        
       | major505 wrote:
       | We are deep in Metal Gear Solid territory here.
        
       | carlCarlCarlCar wrote:
       | Television networks have employed censors who shape acceptable
       | content since forever
       | 
       | Where is the discovery in this paper? Controlling infrastructure
       | to control minds is how it's been for humanity forever.
        
       | tehjoker wrote:
       | This is like the new microtargeting that Obama and then Trump
       | did. Cambridge Analytica as a chatbot.
        
       | reeeli wrote:
       | 200+ million proper engineers, with bunches of them being
       | parents, and yet "elites could shape mass preferences".
       | 
       | nice try, humanity.
        
       | _alaya wrote:
       | Predicted almost a century ago now:                 Oceania was
       | at war with Eastasia: Oceania had always been at war with
       | Eastasia. A large part of the political literature of five years
       | was now completely obsolete. Reports and records of all kinds,
       | newspapers, books, pamphlets, films, sound-tracks, photographs --
       | all had to be rectified at lightning speed. Although no directive
       | was ever issued, it was known that the chiefs of the Department
       | intended that within one week no reference to the war with
       | Eurasia, or the alliance with Eastasia, should remain in
       | existence anywhere. The work was overwhelming, all the more so
       | because the processes that it involved could not be called by
       | their true names. Everyone in the Records Department worked
       | eighteen hours in the twenty-four, with two three-hour snatches
       | of sleep. Mattresses were brought up from the cellars and pitched
       | all over the corridors: meals consisted of sandwiches and Victory
       | Coffee wheeled round on trolleys by attendants from the canteen.
       | Each time that Winston broke off for one of his spells of sleep
       | he tried to leave his desk clear of work, and each time that he
       | crawled back sticky-eyed and aching, it was to find that another
       | shower of paper cylinders had covered the desk like a snowdrift,
       | half burying the speakwrite and overflowing on to the floor, so
       | that the first job was always to stack them into a neat enough
       | pile to give him room to work. What was worst of all was that the
       | work was by no means purely mechanical. Often it was enough
       | merely to substitute one name for another, but any detailed
       | report of events demanded care and imagination. Even the
       | geographical knowledge that one needed in transferring the war
       | from one part of the world to another was considerable.
       | 
       | https://www.george-orwell.org/1984/16.html
        
       ___________________________________________________________________
       (page generated 2025-12-04 23:00 UTC)