[HN Gopher] Death by AI
       ___________________________________________________________________
        
       Death by AI
        
       Author : ano-ther
       Score  : 89 points
       Date   : 2025-07-19 14:35 UTC (8 hours ago)
        
 (HTM) web link (davebarry.substack.com)
 (TXT) w3m dump (davebarry.substack.com)
        
       | rf15 wrote:
       | So many reports like this, it's not a question of working out the
       | kinks. Are we getting close to our very own Stop the Slop
       | campaign?
        
         | randcraw wrote:
          | Yeah, after working daily with AI for a decade in a domain
          | where it _does_ work predictably and reliably (image analysis),
          | I continue to be amazed how many of us trust LLM-based text
          | output as useful. If any human source got their facts wrong
          | this often, we'd surely dismiss them as a counterproductive
          | imbecile.
         | 
         | Or elect them President.
        
           | BobbyTables2 wrote:
           | HAL 9000 in 2028!
        
            | locallost wrote:
            | I am beginning to wonder why I use it, but the idea of it is
            | so tempting. Either google something and get stuck because
            | the answer is hard to find, or ask and get an instant
            | response. It's not hard to guess which one is more inviting,
            | but it ends up being a huge time sink anyway.
        
         | trod1234 wrote:
         | Regulation with active enforcement is the only civil way.
         | 
         | The whole point of regulation is for when the profit motive
         | forces companies towards destructive ends for the majority of
         | society. The companies are legally obligated to seek profit
         | above all else, absent regulation.
        
           | Aurornis wrote:
           | > Regulation with active enforcement is the only civil way.
           | 
           | What regulation? What enforcement?
           | 
           | These terms are useless without details. Are we going to fine
           | LLM providers every time their output is wrong? That's the
           | kind of proposition that sounds good as a passing angry
           | comment but obviously has zero chance of becoming a real
           | regulation.
           | 
            | Any country that instituted a regulation like that would see
           | all of the LLM advancements and research instantly leave and
           | move to other countries. People who use LLMs would sign up
           | for VPNs and carry on with their lives.
        
       | draw_down wrote:
       | Man, this guy is still doing it. Good for him! I used to read his
       | books (compendia of his syndicated column) when I was a kid.
        
       | hibert wrote:
       | Leave it to a journalist to play chicken with one of the most
       | powerful minds in the world on principle.
       | 
       | Personally, if I got a resurrection from it, I would accept the
       | nudge and do the political activism in Dorchester.
        
       | jwr wrote:
       | I'd say this isn't just an AI overview thing. It's a Google
       | thing. Google will sometimes show inaccurate information and
       | there is usually no way to correct it. Various "feedback" forms
       | are mostly ignored.
       | 
       | I had to fight a similar battle with Google Maps, which most
       | people believe to be a source of truth, and it took years until
       | incorrect information was changed. I'm not even sure if it was
       | because of all the feedback I provided.
       | 
        | I see Google as a firehose of information that they spit at me
        | ("feed"); they are too big to be concerned about any
        | inconsistencies, as these don't hurt their business model.
        
         | muglug wrote:
         | No, this is very much an AI overview thing. In the beginning
         | Google put the most likely-to-match-your-query result at the
         | top, and you could click the link to see whether it answered
         | your question.
         | 
         | Now, frequently, the AI summaries are on top. The AI summary
         | LLM is clearly a very fast, very dumb LLM that's cheap enough
         | to run on webpage text for every search result.
         | 
         | That was a product decision, and a very bad one. Currently a
         | search for "Suicide Squad" yields
         | 
         | > The phrase "suide side squad" appears to be a misspelling of
         | "Suicide Squad"
        
         | hughw wrote:
         | Well it was accurate if you were asking about the Dave Barry in
         | Dorchester.
        
         | o11c wrote:
         | I remember when the biggest gripe I had with Google was that
         | when I searched for Java documentation (by class name), it
         | defaulted to showing me the version for 1.4 instead of 6.
        
       | _ache_ wrote:
        | Can you please re-consult a physician? I just checked on
        | ChatGPT, and I'm pretty confident you are dead.
        
       | devinplatt wrote:
       | This reminds me a lot of the special policies Wikipedia has
       | developed through experience about sensitive topics, like
       | biographies of living persons, deaths, etc.
        
         | pyman wrote:
         | I'm worried about this. Companies like Wikipedia spent years
         | trying to get things right, and now suddenly Google and
         | Microsoft (including OpenAI) are using GenAI to generate
         | content that, frankly, can't be trusted because it's often made
         | up.
         | 
         | That's deeply concerning, especially when these two companies
         | control almost all the content we access through their search
         | engines, browsers and LLMs.
         | 
         | This needs to be regulated outside the US [0]. These companies
         | should be held accountable for spreading false information or
         | rumours, as it can have unexpected consequences.
         | 
         | [0] I say outside because in the US, big tech controls the
         | politicians.
        
           | Aurornis wrote:
           | > This needs to be regulated. They should be held accountable
           | for spreading false information or rumours,
           | 
           | Regulated how? Held accountable how? If we start fining LLM
           | operators for pieces of incorrect information you might as
           | well stop serving the LLM to that country.
           | 
           | > since it can have unexpected consequences
           | 
           | Generally you hold the person who takes action accountable.
           | Claiming an LLM told you bad information isn't any more of a
           | defense than claiming you saw the bad information on a Tweet
           | or Reddit comment. The person taking action and causing the
           | consequences has ownership of their actions.
           | 
           | I recall the same hand-wringing over early search engines:
           | There was a debate about search engines indexing bad
           | information and calls for holding them accountable for
           | indexing incorrect results. Same reasoning: There could be
            | consequences. The outrage died out as people realized they
           | were tools to be used with caution, not fact-checked and
           | carefully curated encyclopedias.
           | 
           | > I'm worried about this. Companies like Wikipedia spent
           | years trying to get things right,
           | 
           | Would you also endorse the same regulations against
           | Wikipedia? Wikipedia gets fined every time incorrect
           | information is found on the website?
           | 
            | EDIT: The parent comment was edited while I was replying to
            | add the point about outside of the US. I welcome some
            | country trying to regulate LLMs to hold them accountable for
            | inaccurate results, so we get a precedent for how bad an
            | idea that is and for how quickly citizens would switch to
            | VPNs to reach the LLM providers that get turned off for
            | their country in response.
        
         | eloeffler wrote:
         | I know one story that may have become such an experience. It's
         | about Wikipedia Germany and I don't know what the policies
         | there actually are.
         | 
         | A German 90s/2000s rapper (Textor, MC of Kinderzimmer
         | Productions) produced a radio feature about facts and how hard
         | it can be to prove them.
         | 
          | One personal example he added was about his Wikipedia article,
          | which stated that his mother used to be a famous jazz singer
          | in her birth country, Sweden. Except she never was. The story
          | had been added to an album review in a rap magazine years
          | before the article was written. Textor explains that this is
          | part of 'realness' in rap, which has little to do with facts
          | and more with attitude.
         | 
          | When they approached Wikipedia Germany, it was very difficult
          | to change this 'fact' in the biography of his mother: the
          | claim had been published in print, and she could not
          | immediately prove who she was. Unfortunately, Textor didn't
          | finish the story and moved on to the next topic in the radio
          | feature.
        
       | jh00ker wrote:
       | I'm interested how the answer will change once his article gets
       | indexed. "Dave Barry died in 2016, but he continues to dispute
       | this fact to this day."
        
       | SoftTalker wrote:
       | Dave Barry is dead? I didn't even know he was sick.
        
       | ChrisMarshallNY wrote:
       | Dave Barry is the best!
       | 
       | That is such a _classic_ problem with Google (from long before
       | AI).
       | 
       | I am not optimistic about anything being changed from this, but
       | hope springs eternal.
       | 
       | Also, I think the trilobite is cute. I have a [real fossilized]
       | one on my desk. My friend stuck a pair of glasses on it, because
       | I'm an old dinosaur, but he wanted to go back even further.
        
       | ChrisMarshallNY wrote:
       | This brings this classic to mind:
       | https://www.youtube.com/watch?v=W4rR-OsTNCg
        
       | jongjong wrote:
        | Maybe it's a genuine problem with AI that it can only hold
        | one idea, one possible version of reality, at any given time.
       | Though I guess many humans have the same issue. I first heard of
       | this idea from Peter Thiel when he described what he looks for in
       | a founder. It seems increasingly relevant to our social structure
       | that the people and systems who make important decisions are able
       | to hold multiple conflicting ideas without ever fully accepting
       | one or the other. Conflicting ideas create decision paralysis of
       | varying degrees which is useful at times. It seems like an
       | important feature to implement into AI.
       | 
        | It's interesting that an LLM produces each output token as a
        | probability distribution over its vocabulary, but in order to
        | generate the next token it has to commit to one specific token
        | and feed it back in. It can't just build more probabilities on
        | top of previous probabilities; it has to collapse each step's
        | distribution into a single choice as it goes.
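        | The "collapse" described above can be sketched in a few lines.
        | This is a hedged toy, not how any production LLM is implemented:
        | `step_logits_fn` stands in for the model, and all names are
        | hypothetical. The point is that although each step yields a full
        | distribution, autoregressive decoding must commit to one token
        | id before conditioning the next step on it.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over the vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(step_logits_fn, steps, seed=0):
    """Toy autoregressive decoding loop.

    step_logits_fn(tokens) -> list of logits, one per vocabulary entry
    (a stand-in for a real model). Each step produces a distribution,
    but we must collapse it to a single token id to continue.
    """
    rng = random.Random(seed)
    tokens = []
    for _ in range(steps):
        probs = softmax(step_logits_fn(tokens))
        # Collapse: sample one concrete token; the rest of the
        # distribution is discarded before the next step.
        token = rng.choices(range(len(probs)), weights=probs)[0]
        tokens.append(token)
    return tokens

# Hypothetical 3-token vocabulary whose scores don't depend on context.
toy_model = lambda tokens: [1.0, 2.0, 3.0]
sequence = decode(toy_model, steps=5, seed=42)
```

        | Beam search partially addresses the commenter's wish by keeping
        | several candidate sequences alive at once, but each hypothesis
        | is still a committed string of tokens, not a distribution.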
        
       ___________________________________________________________________
       (page generated 2025-07-19 23:00 UTC)