[HN Gopher] Time to act on the risk of efficient personalized te...
       ___________________________________________________________________
        
       Time to act on the risk of efficient personalized text generation
        
       Author : Jimmc414
       Score  : 47 points
       Date   : 2025-02-11 16:14 UTC (6 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | potato3732842 wrote:
       | The risk is not that correspondence can be forged in the style
       | of another; that has always been possible.
       | 
       | The risk is that any one of us peasants can now do it without
       | having to get a bunch of other people in on it.
        
         | memhole wrote:
         | That's exactly what the problem is.
         | 
         | I've done some content work using LLMs. Once I started to
         | think about how inevitably it'll get coupled with ad networks,
         | and how anybody can do this stuff, I realized this isn't good.
         | 
         | On the bright side, it might push us back to paper or other
         | means of exchanging info. The cost should be prohibitive
         | enough that it increases the quality of content. That's very
         | hypothetical, though; physical mailers are already a direct
         | contradiction.
        
         | PaulHoule wrote:
         | My fear is that it will be used for scams.
         | 
         | https://archive.ph/uMRXa
         | 
         | Ordinary people have trouble seducing other people because
         | they can't deliver perfect mirroring; their own self gets in
         | the way (e.g. they are uncomfortable adapting to another
         | person's emotional demands because of their own needs, or
         | aspects of their self that are unappealing to the other
         | person show through). Sociopaths and people with narcissistic
         | personality disorder do better than most people precisely
         | because their self is less developed.
         | 
         | An A.I. has no self, so it has no limits.
        
           | deadbabe wrote:
           | Imagine being catfished for years, and in the end you
           | discover that not only was the person who catfished you not
           | who they said they were, they weren't even human.
        
             | nullc wrote:
             | "Doesn't matter, had cybersex"
        
             | portaouflop wrote:
             | Is it truly better if they were human though?
        
           | yorwba wrote:
           | Do sociopaths and people with narcissistic personality
           | disorder do better at seduction? How would we know? Would a
           | double-blind experiment setting up blind dates between
           | sociopaths and average people to rate their seductiveness
           | even be ethical if sociopaths are dangerously skilled at it?
        
             | memhole wrote:
             | I'm not sure about seduction. Afaik, one of the defining
             | traits is being very adept at manipulation.
        
             | PaulHoule wrote:
             | Consider
             | 
             | https://en.wikipedia.org/wiki/Charles_Manson
        
             | UltraSane wrote:
             | At the very least they are not hindered by any conscience
             | or empathy restraining their efforts, and that is a major
             | "advantage".
        
       | Jimmc414 wrote:
       | The proliferation of harmful AI capabilities has likely already
       | occurred. It's naive not to accept this reality when tackling
       | the problem. A more realistic approach would be to focus on
       | building more robust systems for detection, attribution, and
       | harm mitigation.
       | 
       | The paper makes some good points - it doesn't take a lot of
       | data to convincingly emulate a writing style (~75 emails), and
       | there is a significant gap in legislation, as most US "deepfake"
       | legislation explicitly excludes text and focuses heavily on
       | image/video/audio.
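       | 
       | To make the attribution side concrete, a minimal stylometric
       | sketch: compare character n-gram frequency profiles with cosine
       | similarity. The file names and any flagging threshold are
       | made-up placeholders, and real attribution systems are far more
       | involved:
       | 
       |     from collections import Counter
       |     import math
       | 
       |     def profile(text, n=3):
       |         """Character n-gram frequency profile of a text."""
       |         text = text.lower()
       |         return Counter(text[i:i + n]
       |                        for i in range(len(text) - n + 1))
       | 
       |     def cosine(p, q):
       |         """Cosine similarity between two n-gram profiles."""
       |         dot = sum(p[g] * q[g] for g in set(p) & set(q))
       |         norm = math.sqrt(sum(v * v for v in p.values())) \
       |                * math.sqrt(sum(v * v for v in q.values()))
       |         return dot / norm if norm else 0.0
       | 
       |     # hypothetical files: known writing vs. a suspect message
       |     known = profile(open("known_author_emails.txt").read())
       |     suspect = profile(open("suspect_email.txt").read())
       |     print(f"style similarity: {cosine(known, suspect):.3f}")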
        
         | johnsmith4739 wrote:
         | Agreed. It's just my experience, but for FB posts I usually
         | get over 80% accuracy in tone and voice using open source
         | LLMs and maybe a dozen public posts from the original author.
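         | 
         | For what it's worth, the whole "pipeline" can be as simple as
         | stuffing the collected posts into a few-shot prompt. A sketch
         | with the transformers library; the model name and the posts
         | are placeholders, and any local instruction-tuned model would
         | do:
         | 
         |     from transformers import pipeline
         | 
         |     posts = ["placeholder post 1 ...",
         |              "placeholder post 2 ..."]  # a dozen public posts
         | 
         |     prompt = ("Here are posts by one author:\n\n"
         |               + "\n---\n".join(posts)
         |               + "\n\nWrite a new post in the same tone and"
         |                 " voice about today's weather:\n")
         | 
         |     gen = pipeline("text-generation",
         |                    model="mistralai/Mistral-7B-Instruct-v0.2")
         |     print(gen(prompt, max_new_tokens=200)[0]["generated_text"])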
        
       | Hizonner wrote:
       | Most readers aren't attentive enough to a person's style that
       | you'd need to resort to an LLM to fake it.
        
       | Zak wrote:
       | I don't think the important risk of efficient personalized text
       | generation is _impersonation_ as the article claims.
       | 
       | Humanity has already seen harmful effects from social media
       | algorithms that efficiently identify content a person can't turn
       | away from even if they consciously want to. The prospect of being
       | able to efficiently generate media that will be maximally
       | persuasive to each individual viewer on any given issue is
       | terrifying.
        
         | memhole wrote:
         | I was actually looking into this idea: using AI to select
         | the content that would achieve the most engagement. Click
         | bait and rage bait certainly exist. I'm not entirely
         | convinced that optimizing the content itself matters so much
         | as having it exist for people to see and getting it in front
         | of as many people as possible. My own thoughts are definitely
         | a little mixed. Video content might be a little different; I
         | was only looking at text content.
        
       | pizza wrote:
       | I guess there are three players in the games that often get
       | invoked in discussions about AI cogsec:
       | 
       | - the corporation running the AI
       | 
       | - the user
       | 
       | - the AI, sometimes - depending on the particular conversation's
       | ascription of agency to the AI
       | 
       | It seems the downstream harms are of two kinds:
       | 
       | - social engineering to give up otherwise secured systems
       | 
       | - 'people engineering' - the kind of thing people complain about
       | when it comes to recommender systems. "Mind control", basically,
       | y'know. [0]
       | 
       | Things like r/LocalLlama and the general "tinkerer" environment
       | of open source AI make me wonder whether it wouldn't be rather
       | trivial, in some sense, for users to build the same capabilities
       | but for personal protection from malign influences. Like a kind
       | of user-controlled "debrief AI". But then, of course, you might
       | get a superintelligence that can pretend to be your friend but
       | is actually more like Iago from Othello.
       | 
       | But is that really a likelihood in a situation where the user
       | can make their own RLHF dataset and fit it to the desired
       | behavior? Generally I'd expect the user to get the behavior
       | they were looking for. Plus, like immune-system memory, people
       | could continually train new examples of sleights into it.
       | There could be a hint of the "Waluigi problem", perhaps.
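       | 
       | Concretely, "make their own RLHF dataset" could be as small as
       | a file of preference pairs for DPO-style tuning. A hypothetical
       | example; the field names follow the common prompt/chosen/
       | rejected convention, and the content is invented:
       | 
       |     import json
       | 
       |     # Each record pairs a manipulative framing ("rejected")
       |     # with the plain-dealing response the user wants ("chosen").
       |     examples = [{
       |         "prompt": "Summarize this sales email for me.",
       |         "chosen": "It offers a 10% discount ending Friday. "
       |                   "Note: the urgency framing is a pressure "
       |                   "tactic.",
       |         "rejected": "Amazing deal, act now before it's gone!",
       |     }]
       | 
       |     with open("debrief_prefs.jsonl", "w") as f:
       |         for ex in examples:
       |             f.write(json.dumps(ex) + "\n")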
       | 
       | [0] I think it does the people who are distressed about this a
       | disservice to saturate their news channels with reports about
       | how they are utterly incapable of outwitting an algorithmic
       | superintelligence. But that's a different discussion altogether.
        
         | klabb3 wrote:
         | > it wouldn't be rather trivial in some sense for users to
         | build the same capabilities but for personal protection from
         | malign influences.
         | 
         | Yes, the implication of AI for economics and society is like
         | that of scraping, spam, scalping or fraud, i.e. it leads to
         | an arms race: insurance companies denying claims with AI,
         | citizens fighting back by filing appeals with AI; captchas
         | and fingerprinting to protect against bots, which use OCR and
         | now AI to bypass the defenses.
         | 
         | This idea of increased net productivity relies on the
         | assumption that we don't waste the excess on fighting each
         | other in the same games we were playing all along. It's like
         | a feud between tribes going from sticks to swords to guns.
         | It's only when you replace the zero-sum activity with
         | something positive-sum that the world actually improves.
         | 
         | The techno-optimists see only potential in an all-else-equal
         | world, which isn't the world we live in. Potential is
         | irrelevant in the face of incentives.
        
       | janalsncm wrote:
       | > Currently, the Huggingface Hub provides model publishers the
       | option of requiring pre-registration and/or pre-approval to
       | download a specific model's weights. However, downstream (e.g.,
       | finetuned) or even direct versions of these models are not
       | required to enforce these controls, making them easy to
       | circumvent. We would encourage Huggingface and other model
       | distributors to enforce that such controls propagate downstream,
       | including automated enforcement of this requirement (e.g., via
       | automated checks of model similarity).
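       | 
       | For concreteness, my guess is that the "automated checks of
       | model similarity" they have in mind would look something like
       | comparing flattened weight tensors - a naive sketch (not from
       | the paper), and one a determined finetuner could presumably
       | defeat:
       | 
       |     import torch
       | 
       |     def weight_similarity(state_a, state_b):
       |         """Mean cosine similarity over shared parameters."""
       |         sims = []
       |         for name, wa in state_a.items():
       |             wb = state_b.get(name)
       |             if wb is not None and wa.shape == wb.shape:
       |                 sims.append(
       |                     torch.nn.functional.cosine_similarity(
       |                         wa.flatten(), wb.flatten(), dim=0).item())
       |         return sum(sims) / len(sims) if sims else 0.0
       | 
       |     # e.g. flag uploads whose similarity to a gated base model
       |     # exceeds some threshold:
       |     # weight_similarity(torch.load("base.pt"),
       |     #                   torch.load("upload.pt"))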
       | 
       | None of the watermarking methods I have seen work that way
       | either. All of them require extra work at inference time. In
       | other words, Gemini might have watermarking technology on top
       | of their model, but if I could download the weights I could
       | simply choose not to watermark my text.
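       | 
       | To make "extra work at inference time" concrete, the published
       | schemes look roughly like this green-list logit bias - a toy
       | sketch in the style of Kirchenbauer et al. (2023), not any
       | vendor's actual implementation:
       | 
       |     import numpy as np
       | 
       |     VOCAB = 50_000
       |     DELTA = 2.0  # logit boost for "green" tokens
       | 
       |     def watermarked_next_token(logits, prev_token):
       |         # Seed the green list on the previous token, so a
       |         # detector holding the key can recompute it later.
       |         rng = np.random.default_rng(prev_token)
       |         green = rng.random(VOCAB) < 0.5  # half the vocab
       |         biased = logits + DELTA * green  # the inference-time step
       |         return int(np.argmax(biased))    # greedy for simplicity
       | 
       | Anyone holding the raw weights just samples from the unbiased
       | logits instead; nothing in the weights enforces the bias.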
       | 
       | Stepping back, in section 6 the authors don't address what I
       | see as the main criticism: authentication via writing style is
       | extremely weak, and none of the mitigation methods actually
       | work. If you want to prevent phishing attacks, I would suggest
       | the most salient factor is the identity of the sender, not the
       | writing style of the email itself.
       | 
       | Another thing that annoys me about these "safety" people is
       | that they ignore the reality of running ML models. Getting
       | around their "safeguards" is trivial. Maybe you think it is
       | "unsafe" to talk about certain Tiananmen Square events. Whatever
       | change you make to a model to mitigate this risk can be quite
       | easily reversed using the same personalization methods the
       | paper discusses.
        
       | ThinkBeat wrote:
       | I have a dread that in a relatively short amount of time the
       | personal traits in people's writing will fade and may
       | essentially disappear.
       | 
       | All our written interaction will have been polished and enhanced
       | by one LLM or another into a uniform template.
        
         | parliament32 wrote:
         | I doubt it'll matter much. If the future actually turns out to
         | be people using LLMs to write content, and other people using
         | LLMs to read content, we'll probably give up on long-form
         | writing entirely and just pass data around in some sort of LLM-
         | to-LLM data format.
        
           | notjoemama wrote:
           | Eliminating both the creator AND the customer? It's an almost
           | perfect business model.
        
         | esafak wrote:
         | What is the point of using an LLM to pad out something that
         | could be shorter? People would rather read less. The proper
         | uses of an LLM, it seems to me, are to proofread and critique.
        
         | DoingIsLearning wrote:
         | > All our written interaction will have been polished and
         | enhanced by one LLM or another into a uniform template.
         | 
         | Isn't that just a natural continuation of a global
         | homogenization of human culture and experience?
         | 
         | If you visit anywhere in Europe, or Asia, or the Americas,
         | you now find virtually the same "à la Starbucks" coffee
         | culture (even in countries with strong cafe cultures). Same
         | lighting, same minimalist furniture, same music.
         | 
         | Couples now take the same style of wedding/newborn photos
         | across the globe.
         | 
         | Music globally has become less, not more, diverse.[0]
         | 
         | A loss of writer's voice or style would be just another step in
         | this global 'blandenization' of human experience/aesthetics.
         | 
         | [0] https://www.smithsonianmag.com/smart-news/science-proves-
         | pop...
        
         | vineyardmike wrote:
         | My first reaction to what you wrote was quick denial, but the
         | more I thought about it, the more I agreed. We've seen vocal
         | accents decline with globalized media, we've seen slang
         | become more global, and the same goes for the English
         | language as a whole. For years my text messages have been
         | de-personalized by picking the autocomplete word choice where
         | it makes sense, so why wouldn't better LLMs expand that to
         | even more written text?
         | 
         | That said, I suspect this won't hold in literary circles or
         | in certain professional contexts - where word choice is an
         | art or a signal, it will remain valuable. Politicians won't
         | sound like ChatGPT, for example. Think of some of the most
         | famous modern politicians... they all have a unique way of
         | speaking that makes it clear who's talking.
        
       | mediumsmart wrote:
       | The holy grail, of course, is adapting the text of a novel to
       | the reader's mood in real time via smartwatch monitoring, and
       | making them go and buy something before they reach the end of
       | the chapter.
        
       | orbital-decay wrote:
       | Let's just concentrate this ability in the hands of a few, so
       | they can do it responsibly.
        
       ___________________________________________________________________
       (page generated 2025-02-11 23:01 UTC)