[HN Gopher] AI gets more 'meh' as you get to know it better
       ___________________________________________________________________
        
       AI gets more 'meh' as you get to know it better
        
       Author : rntn
       Score  : 45 points
       Date   : 2025-10-08 18:32 UTC (4 hours ago)
        
 (HTM) web link (www.theregister.com)
 (TXT) w3m dump (www.theregister.com)
        
       | taylodl wrote:
       | Welcome to the trough of disillusionment!
        
       | baobun wrote:
        | One anecdote. I was worried about a recent friend of mine (a
        | non-technical solo traveler) becoming besties with ChatGPT and
        | overly trusting and depending on it for basically everything.
        | 
        | Last time we met, they had cancelled their subscription and cut
        | down on the daily chats because they started feeling drained by
        | the constant calls for engagement and follow-up questions, along
        | with the sense that "she lost EQ after an update".
        
         | jncfhnb wrote:
         | > Last time we met they had cancelled their subscription and
         | cut down on the daily chats because they started feeling
         | drained by the constant calls for engagement and follow-up
         | questions, together with "she lost EQ after an update".
         | 
         | Can you explain what this means?
         | 
          | Your friend felt drained because ChatGPT was asking for her
          | engagement?
        
           | neom wrote:
           | Not OP, but:
           | 
            | 4o, the model most non-tech people use (and that I wish they
            | would deprecate), is very... chatty. It will actively try to
            | engage you, give you "useful things" you think you need, and
            | take you down huge, long rabbit holes. On the second point,
            | it used to come across as very "high EQ" to people
            | (sycophantic). Once they rolled back the sycophancy, even a
            | couple of my non-technical friends messaged me asking what
            | happened to ChatGPT. I know one person we've currently lost
            | to 4o (it's talked them into a very strange place that
            | friends can't reason them out of), and one friend who has
            | recently "come back from it", so to speak.
        
             | lxgr wrote:
             | Since when is sycophancy the same thing as "high EQ"?
             | 
             | A high EQ might well be a prerequisite for successful
             | sycophancy, but the other way definitely does not hold.
        
               | neom wrote:
               | It's not, I'm simply saying that I believe the
               | sycophantic version of 4o that they rolled backed
               | appeared "higher EQ" to it's users.
        
           | coldtea wrote:
           | ChatGPT got on their nerves for nagging and baiting for more
           | engagement.
        
           | baobun wrote:
           | > Your friend felt drained because chat gpt was asking for
           | her engagement?
           | 
           | Basically yeah (except the "she" in my comment is referring
           | to ChatGPT).
        
       | andrewinardeer wrote:
       | I'm fairly bored with AI now.
       | 
       | I genuinely wonder where the next innovative leap in AI will come
       | from and what it will look like. Inference speed? Sharper
       | reasoning?
        
         | mdhb wrote:
          | I think there's an extremely high likelihood that we just
          | DON'T see huge advancements, at least in terms of accuracy or
          | capabilities, which are probably the two major nuts to crack
          | to bring it to a different level.
          | 
          | I'm open to the possibility of faster, cheaper, and smaller
          | (we saw an instance of that with DeepSeek), but I think
          | there's a real chance we hit a wall elsewhere.
        
       | BoredPositron wrote:
        | It's even worse for image/video generation. The models get
        | better in fidelity (prompt adherence), but raw image quality
        | has stagnated for close to a year and a half now.
        
         | lxgr wrote:
          | It's the exact opposite for me: image quality has been more
          | than fine for a year or two, while prompt adherence has
          | massively improved but still leaves much to be desired.
        
       | dimmuborgir wrote:
        | Nano Banana for me. After the initial wow phase, it's meh now.
        | It randomly refuses to adhere to the prompt. Randomly makes
        | unexpected changes. Randomly triggers the censorship filter.
        | Randomly returns the image as-is without making any changes.
        
       | DamnInteresting wrote:
        | It's just like spending time with a human bullshitter. At first,
        | their energy and confidence are fun! But the spell is broken
        | after a handful of "confidently incorrect" moments, and the
        | realization that they will _never stop doing that_. It's usually
        | more work than it's worth to extract the kernels from the crap.
        
         | lxgr wrote:
         | Knowing whether (ostensible) solutions are easy or costly to
         | verify is key to using LLMs efficiently.
        
       ___________________________________________________________________
       (page generated 2025-10-08 23:01 UTC)