[HN Gopher] A.I. Is Getting More Powerful, but Its Hallucination...
       ___________________________________________________________________
        
       A.I. Is Getting More Powerful, but Its Hallucinations Are Getting
       Worse
        
       Author : dewarrn1
       Score  : 22 points
       Date   : 2025-05-05 19:33 UTC (3 hours ago)
        
 (HTM) web link (www.nytimes.com)
 (TXT) w3m dump (www.nytimes.com)
        
       | dewarrn1 wrote:
| So, regarding the "reasoning" models the article discusses: is
| it possible that their higher error rate vs. non-reasoning
| models is simply a function of the reasoning process putting
| more tokens into context, and that because each such token can
| itself introduce wrong information, the risk of error
| compounds? Or, put another way: doesn't generating more tokens
| at a fixed per-token error rate necessarily produce more errors
| on average?
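
        A rough back-of-the-envelope illustration of the compounding
        argument above, assuming (purely for illustration) a fixed and
        independent per-token error rate, which real models do not have:

            def p_at_least_one_error(per_token_error, n_tokens):
                """Chance of at least one bad token if every token is
                independently wrong with the same probability."""
                return 1.0 - (1.0 - per_token_error) ** n_tokens

            # e.g. a ~200-token direct answer vs. a ~2,000-token
            # reasoning trace, both at a 0.1% per-token error rate
            for n in (200, 2000):
                print(n, round(p_at_least_one_error(0.001, n), 3))
            # 200  -> 0.181
            # 2000 -> 0.865

        Under that oversimplified assumption, the longer trace is far
        more likely to contain at least one error even though the
        per-token error rate is unchanged.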
        
         | ActorNightly wrote:
| It's a symptom of asking the models for answers that are not
| exactly in the training set, so the internal interpolation the
| models do probably hits edge cases where, statistically, it
| goes down the wrong path.
        
       | datadrivenangel wrote:
       | This may be an issue with default settings:
       | 
       | "Modern LLMs now use a default temperature of 1.0, and I theorize
       | that higher value is accentuating LLM hallucination issues where
       | the text outputs are internally consistent but factually wrong."
       | [0]
       | 
       | 0 - https://minimaxir.com/2025/05/llm-use/
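
        For reference, the temperature setting rescales the model's
        next-token distribution before sampling. A minimal sketch (with
        made-up logit values) of how 1.0 vs. a lower value shifts
        probability toward less likely continuations:

            import numpy as np

            def next_token_probs(logits, temperature):
                """Divide logits by temperature, then softmax; lower
                temperature concentrates mass on the top token."""
                scaled = np.asarray(logits, dtype=float) / temperature
                scaled -= scaled.max()  # numerical stability
                probs = np.exp(scaled)
                return probs / probs.sum()

            logits = [4.0, 3.2, 2.5, 1.0]  # hypothetical candidates
            print(np.round(next_token_probs(logits, 1.0), 2))
            # ~[0.58 0.26 0.13 0.03]
            print(np.round(next_token_probs(logits, 0.3), 2))
            # ~[0.93 0.06 0.01 0.  ]

        At 1.0 the second- and third-best tokens keep meaningful
        probability mass; at 0.3 sampling is nearly greedy.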
        
       | dimal wrote:
       | I wish we called hallucinations what they really are: bullshit.
       | LLMs don't perceive, so they can't hallucinate. When a person
       | bullshits, they're not hallucinating or lying, they're simply
       | unconcerned with truth. They're more interested in telling a
       | good, coherent narrative, even if it's not true.
       | 
       | I think this need to bullshit is probably inherent in LLMs. It's
       | essentially what they are built to do: take a text input and
       | transform it into a coherent text output. Truth is irrelevant.
       | The surprising thing is that they can ever get the right answer
       | at all, not that they bullshit so much.
        
         | elpocko wrote:
         | Or maybe we could stop anthropomorphizing tech and call the
         | "hallucinations" what they really are: artifacts introduced by
         | lossy compression.
         | 
| No one calls the crap that shows up in JPEGs "hallucinations"
| or "bullshit"; they're commonly accepted side effects of a
| compression algorithm that makes up shit that isn't there in
| the original image. Now we're doing the same with language and
| suddenly it's "hallucinations" and "bullshit" because it's so
| uncanny.
        
       | scudsworth wrote:
       | https://archive.ph/Jqoqa
        
       | bdangubic wrote:
       | "self-driving cars are getting more and more powerful but the
       | number of deaths they are causing is rising exponentially" :)
        
       ___________________________________________________________________
       (page generated 2025-05-05 23:01 UTC)