[HN Gopher] AI's Biggest Flaw? The Blinking Cursor Problem
       ___________________________________________________________________
        
       AI's Biggest Flaw? The Blinking Cursor Problem
        
       Author : ColinEberhardt
       Score  : 24 points
       Date   : 2025-02-24 08:46 UTC (2 days ago)
        
 (HTM) web link (blog.scottlogic.com)
 (TXT) w3m dump (blog.scottlogic.com)
        
       | recursive wrote:
       | > These AI systems are not able to describe their own
       | capabilities or strengths and are not aware of their limitations
       | and weaknesses
       | 
       | I've experienced this with github copilot. At the beginning of a
       | copilot chat, there's a short paragraph. It tells you to use
       | "slash commands" for various purposes. I ask for a list of what
       | slash commands are available. It responds by giving me a general
       | definition of the term "slash command". No. I want to know which
        | slash commands _you_ support. Then it tells me it doesn't
        | actually support slash commands.
       | 
       | I definitely feel like I'm falling into the non-power-user
        | category described here in most of my AI interactions. So often I
        | just end up arguing with them in circles while they constantly
        | agree and correct themselves, yet never address my original goal.
        
         | ddxv wrote:
          | Another issue is trust. When it does tell you information, how
         | do you know you can trust that?
         | 
         | I treat it now more like advice from a friend. Great
         | information that isn't necessarily right and often wrong
         | without having any idea it is wrong.
        
           | Syonyk wrote:
           | > _I treat it now more like advice from a friend. Great
            | information that isn't necessarily right and often wrong
           | without having any idea it is wrong._
           | 
           | "Drunken uncle at a bar, known for spinning tales, and a
           | master BSer who hustled his way through college in assorted
           | pool halls" is my personal model of it. Often right, or
           | nearly so. Frequently wrong. Sometimes has made things up on
           | the spot. Absolutely zero ability to tell which it is, from
           | the conversation.
        
           | skydhash wrote:
            | You actually have a confidence measure for your friend's
            | advice. I'd trust a mechanic friend if he says I should have
            | someone take a look at my car, or my librarian friend when he
            | recommends a few books. Not everyone tells a lie and the
            | truth in the same breath. And there are qualifiers like "I
            | think...", "I believe...", "Maybe..."
        
         | yorwba wrote:
         | To find out about slash commands, you should type "/help". Of
         | course, you'd only know about the "/help" slash command if you
         | were already at least a bit familiar with slash commands. It is
         | a conundrum.
        
       | smokel wrote:
       | This seems a bit naive. There are no arguments given as to why
       | things would be better if the AI is more like a human.
       | 
       | Just look at how the world works: we all read and write crazy
       | little symbols, which take children years to understand. We type
       | on keyboards with over 100 small buttons, and train everyone to
       | be a piano player.
       | 
        | And you want AI to be more like _that_, i.e. like humans? Sorry,
       | but I guess I'd rather see AI evolve past our human limitations,
       | and I'd be happy with a simple console output of the number 42.
        
       | jrflowers wrote:
       | This is a good point. The blinking cursor at the end of the text
       | encouraging me to make a new cleaning agent by mixing bleach and
       | concentrated acetic acid is AI's biggest flaw
        
         | kleiba wrote:
          | I told ChatGPT that I wanted to make a new cleaning product
          | by mixing bleach and concentrated acetic acid, and asked
          | whether it could suggest a good name for such a product.
         | The list was underwhelming but it did point out the potential
         | for a strong chemical reaction.
         | 
         | Thus I replied that - in order to keep my factory workers safe
         | - I'm planning to have the end consumer mix the ingredients
         | themselves in the convenience of their own home, and ChatGPT
         | liked that idea much better:
         | 
          |  _"This approach opens up a lot of possibilities, especially
         | in terms of marketing and creating a fun, hands-on experience
         | for customers. Let me know if any of these names stand out, or
         | if you'd like more ideas!"_
        
           | olddustytrail wrote:
           | Sounds good to me. You should definitely do that.
        
       | fmbb wrote:
       | > Every day I find myself reflecting on the gap between the ever-
       | growing capability of AI, and the somewhat modest impact it is
       | having on our day-to-day life.
       | 
       | Yeah but isn't that because it actually is rather useless? It is
       | not very capable?
       | 
        | If it is, why did no one-person team disrupt and totally take
       | over any market anywhere these past couple of years?
        
         | Fripplebubby wrote:
         | If I squint really hard, I can just about see where the
         | goalposts were six months ago before you ran off with them
        
         | skydhash wrote:
          | Technology is supposed to make humans' work easier. Current
          | LLM capabilities haven't proven that they fit that role.
          | Anything they can do, there's already something that can do 90%
          | of it with far fewer resources, and the rest isn't that
          | valuable.
        
         | sodality2 wrote:
          | > If it is, why did no one-person team disrupt and totally take
         | over any market anywhere these past couple of years?
         | 
         | If AI is revolutionary, yet ubiquitous (anyone can visit
         | chatgpt.com right now), there won't be these runaway winners in
         | a specific industry; at best, new branches of industries will
         | grow rapidly, and perhaps within an industry progress will
         | intensify.
        
       | amelius wrote:
       | WhatsApp has the same blinking cursor, and everybody is happy
       | with it.
        
         | kleiba wrote:
         | The blinking cursor is a metaphor, it's about having to craft
         | prompts and what that implies from a UX perspective.
        
         | layer8 wrote:
         | An important feature of WhatsApp is that it lets you
         | communicate with different people, who each have different pre-
         | existing contexts and roles for you. Role selection is one of
         | the possible solutions proposed in the article.
        
         | wepple wrote:
         | I tend to know people on chat are human and therefore what
         | they're likely capable of and not capable of.
         | 
          | And I'm not expected to use them as a tool. By contrast, I can
          | probably pick up any Ryobi power tool that I've never seen
          | before and work out how to make it do its thing, and probably
          | what its purpose is.
        
       | lisper wrote:
        | No, the blinking cursor is a feature, not a bug. Alec Watson over
       | at Technology Connections has a much better argument for this
       | than I could ever hope to muster, so I'll just hand it over to
       | him:
       | 
       | https://www.youtube.com/watch?v=QEJpZjg8GuA
        
         | joe_the_user wrote:
          | Just to be clear, the long video you link to is essentially
          | saying that lack of discoverability is an intentional
          | misfeature of social media.
          | 
          | Which is to say that the host and OP agree that lack of
          | discoverability is a problem (Watson just views it as a
          | maliciously inserted problem). And so your "No" involves a bit
          | of misrepresentation...
        
           | lisper wrote:
           | > lack of discoverability is an intentional misfeature of
           | social media
           | 
           | That's not the message at all. The message is that the
           | problem with social media is that it feeds you content
           | without any prompting, and so it turns the user into a purely
           | passive consumer and robs them of their agency. There's
           | plenty of discoverability in social media. The problem is you
           | don't have to use it, and so people don't. A blinking cursor
           | forces you to take the wheel.
        
       | airstrike wrote:
       | _> More technical computer users are often happy to experiment
       | (time permitting), whereas less technical or simply less
       | confident users tend to have a fear of "getting it wrong",
       | informed by years of experience with unforgiving computer
       | interfaces (yes, I'm looking at you Windows ... and MacOS ... and
       | ...) that punish users for their lack of understanding._
       | 
       | So AI's biggest flaw is, in reality, a flaw of other computer
       | interfaces? I stopped reading after that.
        
       | light_triad wrote:
       | A chat interface is great in the sense that it's open, flexible
       | and intuitive.
       | 
       | The downside is there's a tendency to anthropomorphise AI, and
       | you might not want to talk to your computer: it takes too long to
       | explain all the details, can be clunky for certain tasks and as
       | the author argues actually limiting if you don't already know
       | what it can do.
       | 
       | There's a need to get past the "Turing test" phase and integrate
       | AI into more workflows so that chat is one interface among many
       | options depending on the job to be done.
        
         | 42lux wrote:
          | You know, I kinda want that, but more like in Star Trek:
          | interconnected voice commands, terminals and screens. The
          | problem is that we won't get a well-integrated AI. Apple
          | probably has the best shot, because they usually get the
          | interconnections between their products right... but they have
          | other problems in regards to AI.
        
       | marginalia_nu wrote:
       | These seem to mostly be a human problem.
       | 
        | Out of the large number of things you _can_ do, most likely
        | you're only consciously aware of a small number of them, and even
       | among those, you're fairly likely to fall back on doing the
       | things you've done before.
       | 
       | You could potentially do something new, something you haven't
       | even considered doing that's wildly out of character, there's any
       | number of such things you could do, but most likely you won't,
       | you'll follow your routines and do the same proven things over
       | and over again.
       | 
       | You Can Just Do Things (TM), sure, but first you need to have the
        | idea of doing them. That's the hard part: fishing an interesting
        | idea out of the dizzying expanse of possibilities.
        
       | darkerside wrote:
       | Why is it that now of all times, when we could actually make it
       | useful, Clippy has not returned to ask, "It looks like you're
       | trying to X, would you like help with that?"
        
         | autoexec wrote:
         | Don't give them ideas. We can't actually make it useful. AI
         | isn't I enough.
        
       | binarymax wrote:
        | I see some good points here but overall I disagree. Traditionally
        | all UIs have required people to adapt to how machines work. We
       | need to memorize commands and navigate clunky interfaces that are
       | painstakingly assembled (often unsuccessfully) by UX research and
       | UI teams.
       | 
       | The chat reverses this. It is now machines adapting to how we
       | communicate. I can see some UI sugar finding its way into this
       | new way of interaction, but we should start over and force the
       | change to keep it on our terms.
        
       | apsdsm wrote:
       | This pencil is unclear. Has pointy tip problem. Needs more
       | examples.
        
       | ojschwa wrote:
       | This is a tantalizing problem for me as a UX designer. My
       | approach, which I'm quite proud of, places a UI primitive (Todo
       | lists) center stage, with the chat thread on the side similar to
       | Canvas or Claude's Artifacts. The interaction works like this:
       | 
        | 1. User gets shown a list GUI based on their requirement (Meal
        | Planning, Shopping List...)
        | 
        | 2. Users speak directly to the list while the LLM listens in
        | realtime
        | 
        | 3. The LLM acknowledges with emojis that flash to confirm
        | understanding
        | 
        | 4. The LLM creates, updates or deletes the list items in turn
        | (stored in localStorage or a Durable Object -> shout out
        | https://tinybase.org/)
       | 
       | The lists are React components, designed to be malleable. They
       | can be re-written in-app by the LLM, while still taking todos.
        | The react code also provides great context for the LLM -- a
       | contract between user and AI. I'm excited to experiment with
       | streaming real-time screenshots of user interactions with the
       | lists for even deeper mind-melding.
       | 
       | I believe the cursor and chat thread remain critical. They ground
       | the user and visually express the shared context between LLM and
       | user. And of course, all these APIs are fundamentally structured
       | around sequential message exchanges. So it will be an enduring UI
       | pattern.
       | 
       | If you're curious I have a demo here ->
       | https://app.tinytalkingtodos.com/
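Step 4 of the flow above can be sketched as a small pure reducer that applies LLM tool calls (create/update/delete) to the list state. This is a hypothetical illustration, not code from the linked demo; the names (`Todo`, `TodoAction`, `applyAction`) and the action shape are assumptions.

```typescript
// A todo item as the LLM and UI would both see it.
type Todo = { id: string; text: string; done: boolean };

// The three mutations the LLM is allowed to emit, as a discriminated union.
type TodoAction =
  | { kind: "create"; id: string; text: string }
  | { kind: "update"; id: string; text?: string; done?: boolean }
  | { kind: "delete"; id: string };

// Pure reducer: returns a new list, never mutates the old one.
function applyAction(todos: Todo[], action: TodoAction): Todo[] {
  switch (action.kind) {
    case "create":
      return [...todos, { id: action.id, text: action.text, done: false }];
    case "update":
      return todos.map((t) =>
        t.id === action.id
          ? { ...t, text: action.text ?? t.text, done: action.done ?? t.done }
          : t
      );
    case "delete":
      return todos.filter((t) => t.id !== action.id);
  }
}
```

Keeping the reducer pure makes it straightforward to persist each resulting state to localStorage or a Durable Object, and to replay the LLM's action log when debugging a session.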
        
       ___________________________________________________________________
       (page generated 2025-02-26 23:00 UTC)