Post Ayu2aC6nTM18k9btnU by mlohbihler@techhub.social
 (DIR) Post #Ayu2KFQRNB5GOgC7bk by futurebird@sauropods.win
       2025-10-05T15:18:40Z
       
       0 likes, 0 repeats
       
When you read LLM text do you think about the people who wrote the text the generated text is based on? Do you think "yeah this sounds a bit like reddit" or "this could have been from a sci-fi book" or "that phrase is probably based on some newspapers?" That is, does it *sound* recycled to you?
       
 (DIR) Post #Ayu2aC6nTM18k9btnU by mlohbihler@techhub.social
       2025-10-05T15:21:31Z
       
       0 likes, 0 repeats
       
@futurebird depends on how esoteric my question was, which relates to how many sources the LLM had available. I've had responses that were pretty much lifted from Stack Overflow
       
 (DIR) Post #Ayu2vWL1SGuDZ6QvVg by mensrea@freeradical.zone
       2025-10-05T15:25:23Z
       
       0 likes, 0 repeats
       
@futurebird as soon as i notice it's llm generated i stop reading it
       
 (DIR) Post #Ayu36ZH8sFohrQt040 by jamesdoesdev@mastodon.world
       2025-10-05T15:27:21Z
       
       0 likes, 0 repeats
       
       @futurebird A lot of it is mostly recognisable as general LLM output at this point, just because it’s so homogenised. But there are definitely occasions where I recognise the style of a particular website or, sometimes, a writer.
       
 (DIR) Post #Ayu37j6fo8DwdoXek4 by futurebird@sauropods.win
       2025-10-05T15:27:27Z
       
       0 likes, 0 repeats
       
       @apLundell I'm not talking about saying this is true for a particular model, more that it's true for sentence fragments and word combinations in ALL LLMs.
       
 (DIR) Post #Ayu3EYnAYUYWg6g4Cu by mattmcirvin@mathstodon.xyz
       2025-10-05T15:28:48Z
       
       0 likes, 0 repeats
       
       @futurebird it all seems so averaged out and blandified.
       
 (DIR) Post #Ayu3Ju3EwR5M3nk4i8 by MLE_online@social.afront.org
       2025-10-05T15:29:44Z
       
       0 likes, 0 repeats
       
       @futurebird To me, most of it feels generic, but with occasional turns of phrase that seem like they were pulled directly from someone else's writing.
       
 (DIR) Post #Ayu3Ki07EKIMyafHma by silvermoon82@wandering.shop
       2025-10-05T15:29:53Z
       
       0 likes, 0 repeats
       
       @futurebird To an extent, I find it reads like every SEO-scarred website, every vapid LinkedIn post.
       
 (DIR) Post #Ayu3d6GrjV3tSfGlA8 by futurebird@sauropods.win
       2025-10-05T15:33:12Z
       
       0 likes, 0 repeats
       
@apLundell Yup. I mostly use LLMs to ... well, see how far I can push them, so maybe I'm mostly digging around in the edge cases these days. One thing I have noticed is that for all the talk of "guard rails" there are none. None that really matter.
       
 (DIR) Post #Ayu3iytm3YT2KkGyLQ by TerryHancock@realsocial.life
       2025-10-05T15:34:17Z
       
       0 likes, 0 repeats
       
@futurebird What I notice most is how it seems to randomly meander around the point. The LLM is like watching a top spinning down a set of guardrails: occasionally it drifts too far off-topic, then bounces back into the lane, but it doesn't move purposefully. When a good writer writes, every sentence serves a purpose and moves the article closer to the point (or expands on the theme or promotes the atmosphere). But even bad writers aren't usually this bad. They have a purpose and they get to it.
       
 (DIR) Post #Ayu5l5Vge673KI8dqi by zenkat@sfba.social
       2025-10-05T15:57:05Z
       
       0 likes, 0 repeats
       
@futurebird No. LLMs are *transformers*, and one thing they do well is reframing the same underlying information in different styles. So the original voice of the authors is gone; it's all based on your prompt and/or what was favored by RLHF. E.g., think about the early image-generation models that would repaint a picture in the style of a famous artist. Retaining the underlying information while outputting in a different style is foundational to what these models do.
       
 (DIR) Post #Ayu8Lbvfn0CgCopv2O by michael_w_busch@mastodon.online
       2025-10-05T16:26:06Z
       
       0 likes, 0 repeats
       
@futurebird Yes. That would be why I refer to the text generators as automated plagiarism machines.
       
 (DIR) Post #Ayu8elU6UG2VLL4g08 by jhavok@mstdn.party
       2025-10-05T16:29:35Z
       
       0 likes, 0 repeats
       
       @futurebird I'm not sure, but I suspect most of the clickbait writing I run across is AI, simply because it is so bad and repetitive. Hard to imagine a person writing the same phrases over and over without advancing the point they are making.
       
 (DIR) Post #AyuMC0rxbfDk1DPwg4 by ryanjyoder@techhub.social
       2025-10-05T19:01:14Z
       
       0 likes, 0 repeats
       
       @futurebird Short answer, no I don't really think about the text the LLM was trained on. It doesn't seem easy to draw a direct line back to the source materials.
       
 (DIR) Post #AywZr6hWW5zVhMNgS8 by strangetruther@masto.ai
       2025-10-06T20:43:45Z
       
       0 likes, 0 repeats
       
@futurebird It surprises me by seeming to be written in my own style. Either LLMs write in my style, or they copy me as the asker of the question ...?