[HN Gopher] The consumption of AI-generated content at scale
       ___________________________________________________________________
        
       The consumption of AI-generated content at scale
        
       Author : ivansavz
       Score  : 11 points
       Date   : 2025-12-01 22:08 UTC (7 days ago)
        
 (HTM) web link (www.sh-reya.com)
 (TXT) w3m dump (www.sh-reya.com)
        
       | bryanrasmussen wrote:
        | Yeah, everything sounds like AI, and why is that? Well, it might
        | be because everything is AI, but I think that writing style is
        | more LinkedIn than LLM: the style of people who might get slapped
        | down if they wrote something individual.
       | 
       | Much of the world has agreed to sound like machines.
       | 
        | Another thing I've noticed is that weird stuff that is perhaps
        | off in some way also gets accused of being LLM output because it
        | doesn't feel right.
       | 
        | If you sound unique and weird you get accused of being a bad LLM
        | that can't falsify humanity well enough, and if you sound boring
        | and bland and boosterist, you get accused of being a good LLM.
       | 
       | You can't write like no one else, but you also can't write like
       | everybody else.
        
         | 1bpp wrote:
          | Text that feels awkward or doesn't flow very well has
          | ironically become a very strong signal of human authorship for
          | me, and it usually makes me pay more attention now.
        
       | chemotaxis wrote:
       | The best part is that this article is almost certainly AI-
       | generated or heavily AI-assisted too.
       | 
        | Before people get angry with me... there are plenty of small
        | tells, starting with the section headings, a lot of linguistic
        | choices, and the low information density... but more importantly,
        | the author
        | openly says she writes using LLMs:
        | https://www.sh-reya.com/blog/ai-writing/#how-i-write-with-ll...
        
         | absoluteunit1 wrote:
         | Was thinking this as well.
         | 
          | Just skimming through the first two paragraphs felt like I was
          | reading a ChatGPT response. That, and the fact that there are
          | multiple em dashes in the intro alone.
        
         | phainopepla2 wrote:
          | I would think a decent LLM would know the difference between a
          | metaphor and a simile, unlike the author.
        
       | SunshineTheCat wrote:
       | What's crazy is you're starting to see an overreaction to this
       | fact as well.
       | 
       | The other day I posted a short showcasing some artwork I made for
       | a TCG I'm in the process of creating.
       | 
        | Comments poured in saying it was "doomed to fail" because it was
        | just "AI slop".
       | 
       | In the video itself I explained how I made them, in Adobe
       | Illustrator (even showing some of the layers, elements, etc).
       | 
       | Next I'm actually posting a recording of me making a character
       | from start to finish, a timelapse.
       | 
        | It will be interesting to see if I get any more "AI slop"
        | comments, but it's becoming increasingly difficult to share
        | anything drawn now because people immediately assume it's
        | generated.
        
         | p_l wrote:
          | The people commenting about AI slop, or at least a considerable
          | portion of them, do so because it allows them to feel morally
          | superior at little effort.
          | 
          | Do not expect them to retract or stop if there's a way for them
          | not to see the making-of :P
        
         | phainopepla2 wrote:
          | I have seen this as well. Any nicely formatted medium-to-long
          | text without obvious errors immediately comes under suspicion,
          | even without the obvious tells.
        
       | furyofantares wrote:
        | Scroll through and read only the section headers. I would be
        | shocked if this wasn't at the very least run through an LLM
        | itself. The section headers certainly are; I'll skip the rest
        | unless someone posts that it's worth a read for some reason.
        | 
        | It doesn't appear to be section headings glued together with
        | bullet lists, so maybe the content really does retain the
        | author's perspective, but at this point I'd rather skip stuff I
        | know has been run through an LLM and miss a few gems than get
        | slopped daily.
        
       | krupan wrote:
       | "Now, with LLM-generated content, it's hard to even build mental
       | models for what might go wrong, because there's such a long tail
       | of possible errors. An LLM-generated literature review might cite
       | the right people but hallucinate the paper titles. Or the titles
       | and venues might look right, but the authors are wrong."
       | 
        | This is insidious, and if humans were doing it they would be
        | fired and/or cancelled on the spot. Yet we continue to rave about
        | how amazing LLMs are!
       | 
        | It's actually a complete reversal of self-driving car AI. Humans
        | crash cars and hurt people all the time. AI cars are already much
        | safer drivers than humans. However, we all go nuts when a Waymo
        | runs over a cat, but ignore the fact that humans do that on a
        | daily basis!
        | 
        | Something is really broken in our collective morals and
        | reasoning.
        
       | tensegrist wrote:
       | > There's a frustration I can't quite shake when consuming
       | content now--
       | 
       | perhaps even a frustration you can't quite name
        
       ___________________________________________________________________
       (page generated 2025-12-08 23:01 UTC)