[HN Gopher] Unauthorized Experiment on CMV Involving AI-Generate...
       ___________________________________________________________________
        
       Unauthorized Experiment on CMV Involving AI-Generated Comments
        
       Author : pavel_lishin
       Score  : 34 points
       Date   : 2025-04-27 13:48 UTC (9 hours ago)
        
 (HTM) web link (simonwillison.net)
 (TXT) w3m dump (simonwillison.net)
        
       | montroser wrote:
       | > I think the reason I find this so upsetting is that, despite
       | the risk of bots, I like to engage in discussions on the internet
       | with people in good faith. The idea that my opinion on an issue
       | could have been influenced by a fake personal anecdote invented
       | by a research bot is abhorrent to me.
       | 
       | I like Simon's musings in general, but are we not way past this
       | point already? It is completely and totally inevitable that if
       | you try to engage in discussions on the internet, you will be
       | influenced by fake personal anecdotes invented by LLMs. The only
       | difference here is they eventually disclosed it, but aren't
       | various state and political actors already doing this in spades,
       | undisclosed?
        
         | gryfft wrote:
         | I keep seeing this take, and it makes me mad. "The house is on
         | fire, didn't you expect people to start burning to death?
         | People will inevitably die, why discuss when it happens?"
         | 
         | Engineering is fundamentally about exercising the power of
         | intelligence to change something in the physical world. Posts
         | to the effect of "<bad thing> is inevitable and unstoppable, so
         | it isn't worth talking about" strike me as the opposite of the
         | hacker ethos!
        
           | drjasonharrison wrote:
            | I think the other thing to keep discussing is that doing
            | research, or otherwise using an LLM, to manipulate
            | people's emotions without disclosure is unethical.
           | 
           | By the way, people die in house fires from toxic smoke
           | inhalation and a lack of oxygen. Engineers created smoke
           | detectors and other devices to lower the risk of fire due to
           | electrical shorts, gas leaks, etc., and to create fire
           | suppression systems.
           | 
           | People still die because they didn't replace batteries,
           | didn't follow electrical cord/device warnings, or left
           | candles or other heat sources unattended. We discuss these
           | events as warnings and reminders that accidents kill when
           | warnings are not followed, when inattentiveness allows
           | failure to propagate, and as a reminder that rarely occurring
           | events still kill innocent people.
           | 
            | Maybe this will motivate people to meet in person rather
            | than relying only on online anecdotes, until that too is
            | corrupted by cyber-brain augmentation and in-person
            | propaganda actors.
        
         | simonw wrote:
         | Sure, but that doesn't mean I'm not furious when it happens.
        
         | drjasonharrison wrote:
         | I see this as further discounting the importance of anecdotes
         | and personal experiences when making decisions that affect
         | populations.
         | 
         | Yes, we know that personal stories can be compelling, and
         | communicating with someone with different experiences from ours
         | can be enlightening. Still, before applying these learnings to
         | larger groups, we should remember that individual experiences
         | do not capture the entire population.
        
       | robmerki wrote:
        | Unfortunately, there is no way to combat this, and it seems
        | like the end of the internet we once knew. Even with a "proof
        | of human" technology, people could still just paste whatever
        | AI-generated text they wanted, under their "real" account.
        | 
        | This has likely been going on since ChatGPT was first
        | released.
        
       | gnabgib wrote:
       | Discussion (212 points, 1 day ago, 144 comments)
       | https://news.ycombinator.com/item?id=43806940
        
       | bitshiftfaced wrote:
        | The subreddit has question-askers give feedback on whether
        | their view was changed. The askers are aware of how their
        | responses might appear publicly. This makes me wonder whether
        | "appeal to identity" is especially effective, at least
        | superficially if not in substance. The fine-tuning might've
        | been reacting to this.
        
       | knowitnone wrote:
       | "This project yields important insights, and the risks (e.g.
       | trauma etc.) are minimal." They can't possibly measure the
       | insights or claim that the trauma is minimal.
        
       | ivape wrote:
        | More of the same? Reddit's genesis included fake accounts and
        | content. I don't doubt that the upvotes and the front page are
        | fully curated:
       | 
       | https://economictimes.indiatimes.com/magazines/panache/reddi...
       | 
       | We all have an expectation that these message boards are like the
       | forums of the 2000s, but that's just not true and hasn't been for
        | a long time. We will never see that internet again, it seems,
        | because AI was the atomic bomb dropped on all this
        | astroturfing and engineered content. Educating people away
        | from these synthetic forums appears nearly impossible.
        
       | strathmeyer wrote:
       | > The idea that my opinion on an issue could have been influenced
       | by a fake personal anecdote invented by a research bot is
       | abhorrent to me.
       | 
       | Then stop basing your opinion on issues on personal anecdotes
       | from complete strangers. This is nothing new.
        
       ___________________________________________________________________
       (page generated 2025-04-27 23:01 UTC)