[HN Gopher] Anthropic's AI-generated blog dies an early death
       ___________________________________________________________________
        
       Anthropic's AI-generated blog dies an early death
        
       Author : Sourabhsss1
       Score  : 74 points
       Date   : 2025-06-09 15:25 UTC (7 hours ago)
        
 (HTM) web link (techcrunch.com)
 (TXT) w3m dump (techcrunch.com)
        
       | paxys wrote:
       | It's fascinating how creative these large AI companies are at
       | finding ways to burn through VC funding. Hire a team of
       | developers/content writers/editors, tune your models, set up a
       | blog and build an entire infrastructure to publish articles to
       | it, market it, and then...shut it all down in a week. And this is
       | a company burning through multiple billions of dollars every
       | quarter just to keep the lights on.
        
         | elzbardico wrote:
         | The joys of wealth transfer from the poor and the middle class
         | workers to the asset owning class via inflation and the
          | Cantillon Effect [1].
         | 
         | 1- https://www.adamsmith.org/blog/the-cantillion-effect
        
           | stanford_labrat wrote:
           | I've always thought of these VC fueled expeditions to nowhere
           | as the opposite. Wealth transfer from the owning class to the
           | middle class seeing as a lot of these ventures crash and burn
           | with nothing to show for it.
           | 
           | Except for the founders/early employees who get a modest
           | (sometimes excessive) paycheck.
        
             | chimeracoder wrote:
             | > I've always thought of these VC fueled expeditions to
             | nowhere as the opposite. Wealth transfer from the owning
             | class to the middle class seeing as a lot of these ventures
             | crash and burn with nothing to show for it.
             | 
             | That would be the case if VCs were investing their own
             | money, but they're not. They're investing on behalf of
             | their LPs. Who LPs are is generally an extremely closely-
             | guarded secret, but it includes institutional investors,
             | which means middle-class pensions and 401(k)s are wrapped
             | up in these investments as well, just as they were tied up
             | in the 2008 financial crisis.
             | 
             | It's not as clean-cut as it seems.
        
             | givemeethekeys wrote:
              | Can VCs get their funding from mutual funds and pension
              | plans?
        
               | rightbyte wrote:
               | I think that is the 'find bag holders' part of the plan?
        
             | hinkley wrote:
             | I think the chilling effect on mom and pop businesses
              | undoes all of that. When they (we) disrupt an industry the
             | power consolidates but in new hands. The idea is to get it
             | away from the entrenched interests but like a good cultural
             | revolution the second tier ends up in charge when the first
             | tier gets beheaded.
        
         | swyx wrote:
         | it's fascinating how you think being creative is an insult.
        
           | an-honest-moose wrote:
           | It's about how they're applying that creativity, not the
           | creativity itself.
        
           | bowsamic wrote:
           | What makes you think they think that? If someone says
           | "finding creative ways to murder people" you think they're
           | saying the problem is the "creative" part?
        
       | pscanf wrote:
       | People use AI to write blogs, passing them off as human-written.
       | AI companies use humans to write blogs, passing them off as AI-
       | written. :)
        
       | anon7000 wrote:
       | AI generated web content has got to be one of the most
       | counterproductive things to use AI on.
       | 
        | If I wanted an AI summary of a topic or an answer to a
        | question, my chatbot of choice could easily provide that.
        | There's no
       | need for yet another piece of blogspam that isn't introducing new
       | information into the world. That content is already available
       | inside the AI model. At some point, we'll get so oversaturated
       | with fake, generated BS that there won't be enough high quality
       | new information to feed them.
        
         | echelon wrote:
         | Using AI generated content to mass-scale torpedo the web could
         | be a tool to get people off of Google and existing social media
         | platforms.
         | 
         | I'm certainly using Google less and less these days, and even
         | niche subreddits are getting an influx of LLM drivel.
         | 
         | There are fantastic uses of AI, but there's an over-abundance
         | of low-effort growth hacking at scale that is saturating
         | existing conduits of signal. I have to wonder if some of this
         | might be done intentionally to poison the well.
        
           | tartoran wrote:
           | > Using AI generated content to mass-scale torpedo the web
           | could be a tool to get people off of Google and existing
           | social media platforms.
           | 
            | How? By filling the web with AI generated content, or just
            | by using LLMs to search for information? As more junk is
            | poured into training LLMs, this too will take a hit at some
            | point. I remember how great early web search was: one could
            | find thousands to millions of hits for a request. At some
            | point it got so polluted that it became nearly useless. And
            | it wasn't only spam that made it less useful; it was also
            | the search providers, who twisted the rules so they could
            | reap all the benefits.
        
         | h1fra wrote:
         | hear me out: seo
        
         | saulpw wrote:
         | This is pretty reductive. Many people want to pump some new
         | thoughts they had into an AI to generate something tolerable to
         | post on their blog. The writing isn't the point; the thoughts
         | are. But they can't just post 200 words of bullet points (or
         | don't feel like they can, anyway). So the AI is an assistant
         | which takes their thoughts and makes them look acceptable for
         | publication.
        
           | mjr00 wrote:
           | > The writing isn't the point; the thoughts are. But they
           | can't just post 200 words of bullet points (or don't feel
           | like they can, anyway).
           | 
           | Who or what is clamoring for that AI-generated padding which
           | turns 200 words of bullet points into 2000 words of prose,
           | though? It's not like there's suddenly going to be 10x more
           | insight, it's just 10x more slop to slog through that dilutes
           | whatever points the writer had.
           | 
           | If you have 200 words' worth of thoughts you want to share...
           | you can just write 200 words.
        
           | staticman2 wrote:
           | Blogging is a pretty niche activity in general these days.
           | 
           | I think if writing more than 200 words is painful for you,
           | blogging probably isn't for you?
        
           | Capricorn2481 wrote:
           | > The writing isn't the point; the thoughts are
           | 
            | This is so, so wrong. The writing _is_ the thoughts. A
            | person's un-articulated bullet points are not worth that
            | much. And
           | AI is not going to pull novel ideas out of your brain via
           | your bullet points. It's either going to summarize them
           | incorrectly or homogenize them into something generic. It
           | would be like dropping acid with a friend and asking ChatGPT
           | to summarize our movie ideas.
           | 
           | The idea that writing is an irrelevant way to gatekeep people
           | with otherwise brilliant ideas is not reality. You don't have
           | to be James Baldwin, but I will not get a sense for what your
           | ideas even are via an AI summary.
        
           | ausbah wrote:
            | if you can't write your thoughts as something cohesive to
            | begin with i don't think using LLMs is going to solve your
            | problem. writing is absolutely the point if you're trying to
            | communicate with text. lack of clarity is usually a sign of
            | lack of understanding imo, i see it in my own writing
        
           | Veen wrote:
           | The writing is the point. A well-structured, well-argued, and
           | well-written article indicates the writer has devoted
           | considerable time to understanding and thinking through the
           | topic -- if they haven't, it quickly becomes obvious. A
           | series of bullet points indicates the opposite, and using an
           | AI to hide the fact that the "writer" has invested minimal
           | cognitive effort is dishonest.
        
           | nkrisc wrote:
           | It's ridiculous to expect people to read something you
           | couldn't even be bothered to write.
           | 
           | If you just want to get the information out then just post
           | the bullet points, what do you care?
           | 
           | If you want to be recognized as a writer, then write.
        
           | 4ndrewl wrote:
           | > The writing isn't the point; the thoughts are.
           | 
           | Writing _is_ thinking.
        
         | fullshark wrote:
         | Anthropic cares about that, every individual content creator
        | does not. Their goal is to win the war for attention, which is
        | now close to zero-sum: everyone is on the internet, and there
        | are only 24 hours in the day.
        
         | jerf wrote:
         | This is the fundamental reason why I am in favor of a ban on
         | simply posting AI-generated content in user forums. It isn't
         | that AI is fundamentally bad per se, and to the extent that it
         | is problematic now, that badness may well be a temporary
         | situation. It's because there's not a lot of utility in you as
         | a human being basically just being an intermediary to what some
         | AI says today. Anyone who wants that can go get it themselves,
         | in an interactive session where they can explore the answer
         | themselves, with the most up-to-date models. It's fundamentally
         | no different than pasting in the top 10 Google results for a
         | search with no further commentary; if you're going to go that
         | route just give a letmegooglethat.com link. It's exactly as
         | helpful, and in its own way kind of carries the same sort of
         | snarkiness with it... "oh, are you too stupid to AI? let me
         | help you with that".
         | 
          | Similarly, I remember there were a lot of frothy startup
          | ideas around using AI to do very similar things. The
          | canonical one I
         | remember is "using AI to generate commit messages". But I don't
         | want your AI commit messages... again, not because AI is just
         | Platonically bad or something, but because if I want an AI
         | summary of your commit, I'd rather do it in two years when I
         | actually need the summary, and then use a 2027 AI to do it
         | rather than a 2025 AI. There's little to no utility to
         | basically caching an AI response and freezing it for me. I
         | don't need help with that.
        
           | verall wrote:
            | I fully agree with this, except that if an AI can auto-
            | generate a commit message that I then edit to make actually
            | correct and comprehensive, it will probably be a better,
            | more descriptive message than whatever I'd come up with in
            | my usual ~3 minutes.
           | 
           | The value is a nice starting point but the message is still
           | confirmed by the actual expert. If it's fully auto-generated
           | or I start "accepting" everything, then I agree it becomes
           | completely useless.
        
           | 9rx wrote:
            | _> It's because there's not a lot of utility in you as a
            | human being basically just being an intermediary to what
            | some AI says today._
           | 
           | To be fair, there has never been a lot of utility in you as a
           | human being involved, theoretically speaking. The users do
           | not use a forum _because_ you, a human, are pulling knobs and
           | turning levers somewhere behind a meaningless digital
           | profile. Any human involvement that has been required for the
           | software to function is merely an implementation detail. The
            | harsh reality, as software developers continually need to be
            | reminded, is that users really don't care about how the
            | software works under the hood!
           | 
           | For today, a human posting AI-generated content to a forum is
           | still providing all the other necessary functions required,
            | like curation and moderation. That is just as important as
           | the content itself, but something AI is still not very good
           | at. A low-value poster may not put much care into that,
           | granted, but "slop" would be dealt with the same way
           | regardless of whether it was generated by AI or hand written
           | by a person. The source of content is ultimately immaterial.
           | 
           | Once AI gets good, we'll all jump to AI-driven forums anyway,
           | so those who embrace it now will be more likely to stave off
           | the Digg/Slashdot future.
        
           | code_biologist wrote:
           | It's been interesting to watch this play out in microcosm in
           | different spaces. Danbooru and Gelbooru are two anime image
           | boards that banned AI image content, largely to their benefit
           | in my opinion. Rule34 is a similar image board that has
            | allowed AI images, and they've needed to make tagging and
            | searching adaptations to handle the high volume of AI
           | images versus human artists. I'm glad there's an ecosystem of
           | different options, but I find myself gravitating to the ones
           | that have banned AI content.
        
         | raincole wrote:
         | What we wish for: better search.
         | 
         | What we got: more content polluting search, aka worse search.
        
            | vouaobrasil wrote:
            | I don't think better search is exactly what we want. It
            | would also be great to have less quantity and more quality.
            | Optimizing search alone (including with AI) only furthers
            | the quantity aspect of content, not quality, so making
            | search better is the wrong goal IMO.
        
             | tartoran wrote:
             | Better search implies separating the wheat from the chaff.
             | Unfortunately SEO spam took over and poisoned the whole
             | space.
        
         | xorokongo wrote:
         | This only means that the web (websites and web 2.0 platforms)
         | for public usage is becoming redundant because any type of data
         | that can be posted on the web can now be generated by an LLM.
         | LLMs have been only around for a short while but the web is
          | already becoming infested with AI spam. Future generations
          | that are not accustomed to the old pre-AI web will prefer to
          | use AI rather than the web, and LLMs will eventually be able
          | to generate
         | all aspects of the web. The web will remain useful for private
         | communication and general data transfer but not for surfing as
         | we know it today.
         | 
         | Edit to add:
         | 
         | Projects like the Internet Archive will be even more important
         | in the future.
        
           | fallinditch wrote:
           | Editorial guidelines at many publications explicitly state
           | that AI can assist with drafts, outlines, and editing, but
           | not with generating final published stories.
           | 
            | AI is widely used for support tasks such as:
            | 
            |   - Transcribing interviews
            |   - Research assistance and generating story outlines
            |   - Suggesting headlines, SEO optimization, and copyediting
            |   - Automating routine content like financial reports and
            |     sports recaps
           | 
           | This seems like a reasonable approach, but even so I agree
           | with your prediction that people will mostly interact with
           | the web via their AI interface.
        
         | _fat_santa wrote:
         | > AI generated web content has got to be one of the most
         | counterproductive things to use AI on.
         | 
         | For something like a blog I would agree, but I found AI to be
         | fantastic at generating copy for some SaaS websites I run. I
         | find it to be a great "polishing engine" for copy that I write.
         | I will often write some very sloppy copy that just gets the
         | point across and then feed that to a model to get a more
          | polished version that is geared to a specific outcome.
          | Usually I will generate a couple of variants of the copy I
          | fed it, validate them for accuracy, slap them into my CMS,
          | and run an A/B test, then stick with whichever variant best
          | accomplishes the content's goal based on user engagement,
          | click-through, etc.
        
         | chermi wrote:
         | While I largely agree, I don't think it's quite correct to say
         | AI generated blogs contain no new information. At least not in
         | a practical sense. The output is a function of the the LLM and
         | the prompt. The output contains new information assuming the
         | prompt does. If the prompt/context contains internal
         | information no one outside the company has access to, then a
         | public post thus generated certainly contains information new
         | to the public.
        
         | hk1337 wrote:
         | Depends on the web content. I've been using Claude to generate
         | posts for things I am selling in Facebook Marketplace with good
         | results.
        
           | ysavir wrote:
           | What do you feel sets this apart from the rest?
        
       | neya wrote:
       | I can tell you this much - most people who are opposed to AI
       | writing blog articles are usually from the editorial team. They
       | somehow believe they're immune to being replaced by AI. And this
       | stems from the misconception that AI content will always sound
       | AI, soul-less, dry, boring, easy to spot and all that. This was
       | true with ChatGPT-3xx. It's not anymore. In fact, the models have
       | advanced so much so that you will have a really hard time
       | distinguishing between a real writer and an AI. We actually tried
       | this with a large Hollywood publisher in-house as a thought
       | experiment. We asked some of the naysayers from the editorial +
       | CXO team to sit in a room with us, while we presented on a large
       | white screen - a comparison of two random articles - one written
       | by AI, which btw wasn't trained, but just fed a couple of
       | articles of the said writer on the slide into the AI's context
       | window, and another which was actually written by the writer
       | themselves. Nobody in the room could tell which was AI and which
       | wasn't. This is where we stand today. Many websites you read
       | daily actually have so much AI in them, just that you can't tell
       | anymore.
        
         | koakuma-chan wrote:
         | Have you tried gptzero?
        
           | neya wrote:
            | Yep, it isn't able to recognize it. To be fair, it's not a
            | just-dump-it-into-ChatGPT-and-copy-paste kind of AI. We feed
            | text into the models in stages: we use 2-3 different models
            | for content generation, and another 2 later on to smooth the
            | tone. But all of these are just OTS models, not trained. For
           | example, we do use Gemini in one of the flows.
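[Editor's note] A staged, multi-model flow like the one described might be wired up along these lines. Every stage name and transform below is a hypothetical stand-in, with stubbed functions where a real pipeline would call actual model APIs:

```python
# Hypothetical sketch of a multi-stage generation pipeline: each stage is
# a (model_name, transform) pair applied in order. The transforms here
# are stubs standing in for real model calls.

def draft(text):
    return f"[draft based on: {text}]"

def expand(text):
    return f"[expanded: {text}]"

def smooth_tone(text):
    # Stand-in for a later-stage model pass that smooths the tone.
    return text.replace("[", "").replace("]", "")

PIPELINE = [
    ("model-a", draft),        # first content-generation model
    ("model-b", expand),       # second content-generation model
    ("model-c", smooth_tone),  # tone-smoothing pass
]

def run_pipeline(context):
    """Feed the context through each stage, in order."""
    text = context
    for model_name, stage in PIPELINE:
        text = stage(text)  # in practice: call model_name's API here
    return text
```

The point of the structure is that each stage only sees the previous stage's output, so models can be swapped per stage without touching the rest of the flow.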
        
         | A_D_E_P_T wrote:
         | Counterpoint: GPT-4 and later variants, such as o3 and 4.5,
         | have such a characteristic style that it's hard _not_ to spot
         | them.
         | 
         | Em dashes, "it's not just (x), it's (y)," "underscoring (z),"
         | the limited number of ways it structures sentences and
         | paragraphs and likes to end things with an emphasized
         | conclusion, and I could go on all day.
         | 
         | DeepSeek is _a little bit_ better at writing in a generic and
         | uncharacteristic tone, but still... it 's not good.
        
           | bpodgursky wrote:
           | If you ask them to speak in a different voice, they will.
           | It's only characteristic if the user has made no effort at
           | all to mask that it is AI generated content.
        
         | whywhywhywhy wrote:
         | > We asked some of the naysayers from the editorial + CXO team
         | to sit in a room with us, while we presented on a large white
         | screen - a comparison of two random articles - one written by
         | AI, which btw wasn't trained
         | 
            | That's needlessly close to bullying as a way to prove your
            | point.
        
           | neya wrote:
           | > We asked some
           | 
           | Which part of this looks like bullying? It was opt-in. They
           | attended the presentation because they were interested.
        
       | Powdering7082 wrote:
       | Did the reporter reach out to Anthropic for public comment on
        | this? They cite a "source familiar" with some details about
        | what the intended purpose was, but there's no mention of the
        | why.
        
       | jasonthorsness wrote:
       | Is there an archive anywhere? People can argue to no end based on
       | some whimsical assumptions of what the blog was and why it was
       | taken down, but it really comes down to the content. I have found
       | even o3 cannot write high-quality articles on the topics I want
       | to write about.
        
         | linkage wrote:
         | Have you tried Perplexity's Discover feed? It's my go-to source
         | of news these days. I don't know what model they use to
         | generate content but it's really good.
        
       | jsemrau wrote:
       | Up until a few weeks ago, my LinkedIn seemed to become better
       | because of AI, but now it seems everything is lazy AI slop.
       | 
       | We meatbags are great pattern recognizers. Here is a list of my
       | current triggers:
       | 
       | "The twist?",
       | 
       | "Then something remarkable happened",
       | 
        | That said, this is more of an indictment of the authors'
        | laziness in not providing clearer instructions on the style
        | needed, so the app defaults to such patterns.
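[Editor's note] A trigger list like this can be dropped into a crude substring filter. A sketch, using only the two phrases from the comment; real slop detection would need a far larger list and human judgment:

```python
# Known filler phrases that often signal low-effort LLM output.
TRIGGER_PHRASES = [
    "the twist?",
    "then something remarkable happened",
]

def looks_like_slop(text):
    """Very rough heuristic: flag text containing a known filler phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)
```

Substring matching will produce false positives on legitimate prose, so a filter like this is better used to rank posts for review than to reject them outright.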
        
       | jsnider3 wrote:
       | We try things, sometimes they don't work.
        
       ___________________________________________________________________
       (page generated 2025-06-09 23:01 UTC)