[HN Gopher] Chatbots are not the future
       ___________________________________________________________________
        
       Chatbots are not the future
        
       Author : paulshen
       Score  : 123 points
       Date   : 2023-05-01 15:14 UTC (7 hours ago)
        
 (HTM) web link (wattenberger.com)
 (TXT) w3m dump (wattenberger.com)
        
       | itake wrote:
       | > Good tools make it clear how they should be used.
       | 
       | This is such a weird statement from someone in the tech space.
        | Programming languages rarely have an opinion about how they are
        | to be used (for example, that JS MUST only run in the browser,
        | or which code style to use).
       | 
       | When I chat with customer support, I wish they could meet me
       | where I am instead of me needing to learn their tools. For
        | example, I want to say "cancel my subscription" and have my
        | subscription get cancelled. I don't want to have to figure out
        | which sub-menu of a sub-menu has the magic "end subscription"
        | button.
       | 
        | I know how to use my tool (English). LLMs teach computers how to
       | use that tool too.
        
         | mxuribe wrote:
         | Actually, as i recall, that statement is somewhat foundational
         | for human computer interaction...of course my recollection is
         | from my HCI college course a couple of decades ago...But, yeah,
         | whenever i make something - digital or otherwise - i hope to
         | design it properly enough that the intended user intuitively
         | understands how it should be used. (I do add documentation
         | beyond the base design, but because i want to further help the
         | person.)
        
         | karmakurtisaani wrote:
         | I think the problem in your example of cancelling the
         | subscription is that service providers often make it difficult
         | on purpose. I doubt they'll allow chatbots to make it any
         | simpler.
        
           | itake wrote:
            | I get support emails for "how do I cancel my Apple App Store
            | Subscription?" even though Apple governs the cancellation
            | process in a centralized and simple manner.
           | 
           | I also get support emails for password resets, which I try to
           | make as simple as possible.
           | 
           | People don't want to learn new tools if their existing tools
           | (language) work just fine.
        
         | phillipcarter wrote:
         | > Programming languages rarely have an opinion for how they are
         | to be used
         | 
         | Erm, they absolutely have an opinion. That's why I can't just
         | write however I like in whatever language. I need to stick to
         | the designer's opinions on syntax and semantics otherwise it
         | won't work.
        
           | itake wrote:
           | When chatting with a Chatbot, you also would need to stick
           | with the language's syntax and semantics otherwise it won't
           | work. You can't just write however and whatever you want and
           | expect the bot to understand you.
        
         | rurp wrote:
         | Subscription cancellation buttons are _intentionally_ confusing
          | and hard to find. It's easy to add a big red Cancel button in
         | an obvious place that works immediately, but companies choose
         | to avoid that route in order to extract more money from
         | customers. Nothing about that dynamic will change with LLMs,
         | those same companies will have the same priorities.
         | 
         | The only difference will be that script-following customer
         | service reps giving you the runaround will be replaced by
         | indefatigable chatbots giving you the runaround, which honestly
         | sounds pretty hellish to me.
        
       | swyx wrote:
       | Amelia presented a bit more on her demo at our meetup last week:
       | 
       | https://www.latent.space/p/build-ai-ux
       | 
       | full recording in the video at the bottom!
        
       | itsuka wrote:
       | Someone said that using a chat-only interface is like using CLI
       | tools. It's even worse, I would add, because there is no
       | autocomplete or man command to help you out. Most of us here
       | probably get it, but my dad had a hard time getting a good answer
        | from GPT when he first tried it, even with the GPT-4 model.
        
         | LesZedCB wrote:
         | isn't it ironic that there's no autocomplete on the tool that's
         | literally autocomplete? haha
         | 
         | where's the fine-tuned prompt helper model?
        
           | andrepd wrote:
           | We clearly need to ask chatgpt to write a model for the
           | prompt helper model.
        
           | abe-101 wrote:
            | Both Bard and Bing Chat have auto-completion.
        
         | twelve40 wrote:
         | Hence the need for specially trained people who can do the
         | back-and-forth with humans to understand exactly what is
         | required, then enter it (into the brittle prompt in this case)
         | in a way that the computer would understand.
        
           | moffkalast wrote:
           | Or more like a basic understanding of what the thing can do
           | and what it can't do. It's not that complicated.
        
       | born-jre wrote:
       | Very very offtopic:
       | 
        | He called ChatGPT an oracle; nice, but not enough.
        | 
        | I want someone to name their chatbot the `oracle of delphi` plz.
       | thank you.
        
         | hoppyhoppy2 wrote:
         | She*
        
       | pornel wrote:
       | When it's about the future, limitations of current
       | implementations aren't a strong argument.
       | 
       | ChatGPT can be confidently stupid, but what if it gets better?
       | 
       | You need explanations/affordances of what it can and can't do
       | only when its capabilities are limited. If it really could do
        | whatever you asked, you wouldn't need them. Just say what you want.
        
       | uxabhishek wrote:
        | Contextually relevant suggested options (that can be acted upon
        | with a single click) alongside the free-form input box will
        | emerge as the norm.
       | 
       | Down the line people will expect applications to be chat ready.
       | They will see an input box and expect the application to
       | understand natural language and respond in the most helpful way.
       | Which might be showing an error message or suggesting relevant
       | next steps.
        
       | MattPalmer1086 wrote:
       | TL;DR, more targeted tools could be more helpful for specific
       | tasks than an unstructured text interface for everything.
        
       | morelisp wrote:
       | > I've convinced you that chatbots are a terrible interface for
       | LLMs.
       | 
       | I was already convinced of this. What I'm not convinced of, and
       | the article has little to say about, is
       | 
       | > Chatbots Are Not the Future... chatbots are not the future of
       | interfaces.
       | 
       | Chatbots are a terrible interface to LLMs, and yet they are
       | absolutely going to be the future of every third godawful website
       | I must visit.
        
       | c7b wrote:
       | Not sure I'm convinced - natural language is one of the most
       | intuitive interfaces we have, it's also how most instructions in
        | professional contexts are delivered. What current chatbots are
        | missing right now isn't radio buttons for different styles but
       | context. No one who just reads my messages can know what kind of
       | style I'm looking for until they either get laborious
       | instructions, or, probably better, they see some examples.
       | 
       | I'm guessing that training custom language models on a company's
       | data must be one of the hottest things you can be working on
       | right now if you're looking for VC money (if there's something
       | out there that compares to how well StableDiffusion+Dreambooth
       | works for images, I'd be thankful for any pointers).
        
       | tekni5 wrote:
        | Not a very convincing argument. Your gizmo sliders and
        | checkboxes or whatever aren't going to replace chatbots, only
        | slightly extend them, and really aren't needed if AI gets better
        | over time.
       | 
       | Some neural connection to the brain that will interpret your
       | thoughts is the only logical thing that will supersede chat ai,
       | but that won't happen for a while. Maybe a connection to all your
       | data will happen first so the AI will better understand what type
       | of person you are and what you want, that's already probably
       | happening based on past responses.
        
       | skybrian wrote:
       | Learning how to use freeform text input can be pretty annoying
       | when only a few things work. Some examples are playing a text
       | adventure ("guess the verb") and using an unfamiliar command line
       | interface. Good error messages can help.
       | 
       | Web search changed that. Most queries work, at least somewhat.
       | 
       | There's a point where freeform text input becomes better than
       | structured input. A simple search box is what people mostly use
       | instead of an advanced search form, let alone a web directory
       | (like Yahoo! back in the day).
       | 
       | For web search, there are very few error messages. If you enter a
       | query that doesn't work very well, you get back results that
       | aren't very good or what you wanted, so you try something else.
       | 
       | With AI chatbots, expectations are sky-high, but there are times
       | when they _should_ refuse with a good error message, because they
        | really can't do what you're hoping to do. An example is when you
       | ask it to explain its reasoning. An LLM never knows why it wrote
       | what it did, but it will try to invent a plausible explanation
       | anyway. [1]
       | 
       | Better error messages that help users understand what chatbots
       | can actually do would help avoid misconceptions, but this won't
       | happen unless the error messages are trained in.
       | 
       | [1] https://skybrian.substack.com/p/ai-chatbots-dont-know-why-
       | th...
        
         | JusticeJuice wrote:
         | Yeah, exactly. A freeform input promises it can do 'anything at
         | all' but if it can't in reality, it's always a frustrating
         | interface.
         | 
         | Classic example, Siri. It's so easy to quickly find stuff you
         | feel like it should be able to do, but it just can't. "When was
         | my last message from Steve" etc.
        
       | rocketbop wrote:
       | I don't know why it hadn't occurred to me before now, that using
        | ChatGPT is quite similar to playing Zork and Infocom games
       | from the 1980s, with less trial and error needed to get something
       | out of it.
       | 
       | The point and click adventure games from Sierra and Lucas Arts
       | were a huge step forward in interaction, although you didn't have
       | to use your imagination as much to solve the puzzles.
       | 
       | And here we are again asking users to type their way to success.
        
         | cout wrote:
         | Chatgpt or a successor dynamically generating in-game dialogue
         | could be fun. Or maybe it won't be. I'm interested in seeing it
         | done.
        
         | notahacker wrote:
         | The other obvious UX comparison point is with sending instant
         | messages to a person until they get it right....
         | 
         | Provided the responses aren't _too_ brittle (and the LLM
          | getting it wrong isn't too upsetting or find-out-too-late)
         | lots of non power-users are going to prefer it, at least in
         | cases where a menu or form input with about six options won't
         | suffice.
        
       | autokad wrote:
       | > When I go up the mountain to ask the ChatGPT oracle a question,
       | I am met with a blank face. What does this oracle know?
       | 
        | I think if your attitude is that it's an oracle, then you already
        | have the wrong attitude for using the tool. ChatGPT is a tool; if
        | you don't know how to use the tool, stop complaining that you
       | don't like it. Imagine telling everyone scalpels are horrible
       | tools because they tried to perform surgery on someone and
       | botched it up.
       | 
        | One day, not too far off, we are going to be able to tell a 'chat
        | bot': make me a cartoon, it takes place in a steampunk fantasy.
        | Make it 2 seasons with 22 episodes each season. Great, add a
        | cliffhanger at the end of episode 11. Add a love story component
        | to it. Reduce it back down to 1 season but 44-minute episodes.
       | 
        | Content creation is no longer going to be tied to knowing how to
        | draw cartoons or having armies of writers. Yes, we can get
        | garbage out of the system. But it's a tool, and plenty of tools
        | produce garbage results if you don't know how to use them.
       | 
       | As an experiment, I asked chatgpt to write a business plan (one
       | my brother started). The business plan was very close to what my
       | brother produced, after working on it for a month. That's
       | powerful, that's worthy of being 'the future'.
        
         | vkou wrote:
          | > ChatGPT is a tool; if you don't know how to use the tool,
          | stop complaining that you don't like it.
         | 
         | It's a tool that can be opaquely configured to be used in a
         | million different ways, and when using it does not bring about
         | the desired result, its acolytes sneer, and suggest that you're
         | using it wrong.
         | 
          | It's like a multi-tool that only works when you're
         | blindfolded. Sure, it can be used to hammer nails and tighten
         | screws and strip wires and measure a tire's pressure, but it
         | makes it quite difficult to find the magical incantation that
         | will apply the right end to the job.
         | 
         | (And most of the time, it quietly leaves the screw untightened,
         | the wire clipped, and the tire with a hole in it. It's the user
         | who's wrong, of course.)
        
         | regularjack wrote:
         | Scalpel manufacturers don't advertise their scalpels as capable
         | of performing surgery on their own.
        
           | warent wrote:
           | Do you have an example of AI false advertising?
        
             | __loam wrote:
             | _gesticulates broadly at the entire field_
        
               | moffkalast wrote:
               | Fair enough
        
         | anonymouskimmer wrote:
         | > As an experiment, I asked chatgpt to write a business plan
         | (one my brother started). The business plan was very close to
         | what my brother produced, after working on it for a month.
         | That's powerful, that's worthy of being 'the future'.
         | 
         | How much knowledge and ideas did your brother personally
         | develop over this month in addition to the business plan? Being
         | handed a working plan is sometimes less useful than the
         | aggregate experience leading up to the plan.
        
         | ambicapter wrote:
         | Every "impressive chatgpt" story I hear is about someone
         | comparing what chatgpt produced to what a human produced, and
         | saying it's scary close. No one talks about all the times they
         | asked chatgpt to produce something, didn't compare it to what a
         | human could do, and then realized it was catastrophically wrong
         | once implemented.
        
         | MuffinFlavored wrote:
          | > ChatGPT is a tool
         | 
         | When you use a hammer or a drill, do you expect it to sometimes
         | not hit/screw the nail?
         | 
         | If ChatGPT is a tool for knowledge transfer/extraction, it
         | can't hallucinate/lie to you/be wrong/make stuff up.
         | 
         | If it's a tool for potentially discovering some knowledge that
         | may be true and needs to almost always be verified by either a
         | compiler or a followup "find me a reference/discussion" Google
         | search to make sure it's accurate, then sure. But I don't think
         | that's what it's primarily being advertised as.
        
           | brokencode wrote:
           | Beyond the obvious fact that you can accidentally hit your
           | thumb with a hammer or strip the head off a screw with a
           | screwdriver, I'd very much like to hear about any tool for
           | collecting knowledge that is perfect.
           | 
           | Web searches will for sure give you wrong answers. Even
           | professors or other experts in a field will be wrong
           | sometimes. Heck, even Einstein got some things wrong.
           | 
           | Your goalpost is in the wrong spot. Tools don't need to be
           | and probably never can be perfect. But that doesn't mean
           | they're not useful.
        
           | ukuina wrote:
           | > When you use a hammer or a drill, do you expect it to
           | sometimes not hit/screw the nail?
           | 
           | Not sure about drills, but this absolutely happens with
           | drivers if you fumble mating the bit to the screw head, or if
           | you miss the stud, or if you overtighten, or if you don't
           | sometimes pre-drill, or if you strip the head, or if you
           | don't correctly gauge underlying material composition, or
           | thickness, or if you...
        
         | tootie wrote:
         | I think the point of the article is that this does not
         | constitute a chatbot. It's not conversational. It's not really
         | general-purpose. That doesn't mean it isn't powerful, just that
         | telling users to have a freeform conversation isn't going to
         | work. It's also why everyone is getting excited for "prompt
         | engineering" to make better use of it. The biggest user value
         | is still going to be a level above the open-ended chat UI we
         | have right now. He's not saying GPT is useless, he's saying we
          | haven't put it into its optimal context yet.
        
         | willio58 wrote:
         | Exactly. I use ChatGPT for help coding sometimes and it's like
          | 50/50 whether I get an answer that is truly helpful, but that's
          | an infinitely better tool than I thought we'd have a year ago.
        
         | hammyhavoc wrote:
         | Yes, sure. But you know what makes most of human history's
         | output of art, music, literature et al great? Intent, attention
         | to detail and _self-expression_.
         | 
         | Broad strokes are broad strokes. I can procedurally generate
         | levels all day long in a video game, but for me, they're never
         | going to be as compelling or interesting as a lower-resolution
         | and low quality textured game from the '90s or '00s where every
         | single tree and rock is placed with intent.
         | 
         | I already think modern cartoons are fairly sterile and soulless
         | versus their hand drawn or hybrid counterparts. It isn't even
         | elitism, they just don't hold my attention or interest me
         | artistically, stylistically, or in terms of content.
         | 
         | If you choose to express yourself in broad strokes, that's
         | fine, whatever floats your boat. I'll continue to chase things
         | that have intent and artistry behind every aspect of them.
         | Generic and formulaic is generic and formulaic all day long.
          | It's also why I don't like most modern anime: it's visually
          | sterile and isn't why I enjoyed the medium.
        
           | autokad wrote:
            | I'm not arguing artists are going to go away, just that
            | storytelling is going to become a lot easier. It also doesn't
            | mean the content produced is necessarily painted with 'broad
            | strokes', and it absolutely doesn't mean it's soulless. It
            | means the barriers to entry for producing stories just came
            | down a whole lot - akin to what the printing press did for
            | books.
        
             | JohnFen wrote:
             | But is the person using the tool an artist? I think this is
             | an important question. If I give detailed instructions to a
             | human ghost writer about a story I want them to write, I
             | don't think anyone would say that I wrote the story. It was
             | written by the ghost writer.
             | 
             | If a piece of art is made by a computer based on detailed
             | instructions, that art was made by a computer, not a
             | person.
             | 
             | If you are in the camp that you don't care whether or not
             | art was made by a human, this isn't even a little problem.
             | If, however, you are in the camp that cares a lot about
             | that, then this is a very, very serious problem.
             | 
             | Either way, this means that this isn't "just a tool" like a
             | printing press. It's something completely different, and
             | more than a tool.
        
               | hammyhavoc wrote:
                | It comes under the category of "generative art". It isn't
                | intentional or self-expression, whatever the prompt
                | engineers may tell you. Is it "art"? Sure. Is it
                | traditional art? No.
        
               | krapp wrote:
               | It is intentional, it's just a generated approximation of
               | intention.
        
               | hammyhavoc wrote:
                | No, it isn't intentional, it's generative.
                | StableDiffusion, DALL-E, MidJourney et al are quite
                | literally by definition _generative art models_. This
                | isn't up for debate. That is quite literally how they
                | function.
        
               | JohnFen wrote:
               | I totally agree with this. But as an art lover, I want
               | some real way of being able to tell if a piece of art is
               | this, or is traditional.
        
               | hammyhavoc wrote:
               | Artistry is a beautiful thing. It's also why I love
               | hearing from industrial designers on how they arrived at
               | their design decisions, challenges they faced, intent,
               | nuances in the design you might overlook that were
               | difficult etc.
               | 
               | Unfortunately, short of more behind-the-scenes material
               | and interviews, the only way to really get a feel for it
               | is looking at their body of work as a whole. The great
               | ones always have a specific style that they refine or
               | evolve over time. It's unmistakably x.
               | 
               | I have to say, I've yet to see any AI artist hit a
               | signature style, and I've yet to have an AI generated
               | piece of art move me emotionally or conceptually.
               | 
               | Are they interesting? Sure. So's glitch art, but there's
               | not much substance to any of it, and I remember none of
               | it. Intent and self-expression is such a huge part of
               | art.
               | 
               | Can you imagine what Hitchcock would say about AI
               | _anything_? He wanted it 100% his way.
        
               | jhbadger wrote:
               | In the early days of Pixar, some traditionalist animators
               | claimed that Pixar was "cheating" because they used 3D
               | modelling tools rather than physically drawing the
               | frames. The traditionalists didn't see computer animation
               | as a "tool"; they saw it as the computer doing the work
               | of the animator. Were they right? Computers make making
               | things easier. But the human using the computer is the
               | real creator.
        
               | hammyhavoc wrote:
               | Toy Story was not generative art though. Pointless
               | comparison.
               | 
               | Compare generative art tools to other generative art
               | tools.
        
               | krapp wrote:
               | Artists using 3d modeling software actually design and
               | create the model, rigging and animation themselves, they
               | don't tell the software "Metal articulated lamp bouncing
               | across wooden table, 3d render, realistic lighting and
               | shadows" and call it a day.
               | 
               | It's a matter of how much control you have over the end
               | product, with AI it's very little. At best, if you want
               | to be charitable, you could describe the role of the
               | person using AI as an art director, but not an artist.
        
               | mejutoco wrote:
               | > If a piece of art is made by a computer based on
               | detailed instructions, that art was made by a computer,
               | not a person.
               | 
               | I completely disagree with this. According to this, no
               | code can be art. For instance, videogames.
               | 
                | It has been enough time since the ready-made (Duchamp's
                | urinal) and found objects, DJing and sampling, and
                | concept art. Art has not been only about drawing
                | beautiful illustrations since at least the two world
                | wars.
               | 
               | > If I give detailed instructions to a human ghost writer
               | about a story I want them to write, I don't think anyone
               | would say that I wrote the story.
               | 
               | This is exactly how many artists work today, with a small
               | army of workers, even interns, to execute the plan of the
               | artist. Even Rembrandt had people painting to produce
               | more pictures. Another example would be architects: does
               | the star architect execute everything, or do they have
               | the vision and instruct their very large teams?
               | 
               | IMO it is all about the intent and interpretation of a
               | human.
        
               | erksa wrote:
                | Yes, the artists who choose to use AI tools to generate
                | artwork are in fact artists.
                | 
                | Those who are afraid of AI being a content generator that
                | puts artists out of work will most likely be
                | disappointed. However, the technical gatekeeping some
                | artists do goes away, and it makes room for more people
                | being able to express themselves creatively.
                | 
                | Art is about the why. We as humans will always ask that
                | question, and we will produce answers, no matter the
                | tools.
                | 
                | We've gone through multiple iterations of technology
                | questioning if you are really an "artist" for using it,
                | but the new creations and the new generation of artists
                | put that to shame in my opinion.
                | 
                | Digital pixels are not paintbrushes. So if you do not
                | move your mouse/brush to generate a stroke, what does it
                | matter?
                | 
                | AI tools speed up the creative process, which for some
                | will let them go places we currently have a hard time
                | imagining.
        
               | JohnFen wrote:
               | > Digital pixels are not paint brushes. So if you do not
               | move your mouse/brush to generate a stroke? What does it
               | matter?
               | 
               | If you are literally explaining every stroke, then it
               | doesn't. But that's not what we're talking about here.
               | We're talking about describing something in pretty
               | general terms and allowing a computer to make the
               | creative decisions (what "brush strokes" to make).
        
               | renewiltord wrote:
               | And if you move a horsehair brush, do you determine where
               | each hair lands? If you spray paint, do you say where
               | each drop lands? We have, for long, handed control to
               | physical random processes. To modify that to land on
               | mathematical random processes is not some categorical
               | shift.
        
               | alimov wrote:
               | > If a piece of art is made by a computer based on
               | detailed instructions, that art was made by a computer,
               | not a person.
               | 
               | Tell that to music producers and digital artists. They
                | don't know what detailed instructions are run by the CPU
               | of the device they use, and yet it is still art and they
               | are still artists.
        
               | JohnFen wrote:
               | But that's entirely different. A digital artist is
               | directly engaging in art. The computer is, in that case,
               | just a tool like a paintbrush. The artist is still the
               | one making all the creative decisions.
               | 
               | To go back to my ghost writer analogy, the reason that
               | nobody would say I was the author is because I wasn't the
               | one who made the creative decisions. I just described
               | what I wanted to another person who made the creative
               | decisions. Therefore, the other person is the author, not
               | me.
        
           | anonylizard wrote:
           | Really?
           | 
            | For a writer, if you write out a plot, you can get the AI to
            | actually simulate the characters' responses and dialogue
            | (even voiced by AI!). There, you've LITERALLY brought a
            | character to life; each character is driven by a different
            | persona simulated by a different AI, and the quality of
            | stories that will create will annihilate what came before.
           | 
           | You want to write an adventure, but want to keep it
           | unpredictable. Ask the AI for ideas, there, the adventure is
           | now a true adventure, not a fake mirage created by the
           | writer.
           | 
           | No need to describe scenery, no need to describe character
           | appearances. Feed those descriptions into txt2img, and you
           | get portraits that would have cost $1000/pic from top tier
           | artists.
           | 
            | Genericness and formulaicness come from having TOO MANY
            | PEOPLE. Too many people involved in production means the
            | creator must dilute intent, appeal to wider audiences, and
            | limit risks, to ensure costs are reclaimed. Once AI gets
            | going, you'll see indie creators making full anime series and
           | releasing them on youtube. Because for an individual creator,
           | even ad + patreon revenue alone would be able to sustain a
           | comfortable existence, with no dependency on corporate or
           | teams.
           | 
            | I thought people who love art would be exhilarated by AI. I
            | realized the majority of artists don't love art. They love
           | drawing, but not art. They love socializing with artists, but
           | not art. They love receiving attention and income from their
           | art, but not art. That's all fair and fine. But there will be
           | people, who just want to create the best possible art, no
           | matter the method, no matter the reward, and with AI, this
           | latter group will outcompete the first, hard.
        
             | wwweston wrote:
             | > Generic and formulaicness, comes from having TOO MANY
             | PEOPLE.
             | 
             | You know what else comes from the output of many people?
             | What AIs produce.
             | 
             | > They love drawing, but not art. They love socializing
             | with artists, but not art. They love receiving attention
             | and income from their art, but not art.
             | 
             | Love of drawing, socializing with artists, and attention
             | and income when people connect and buy their art are all
              | forms of _deeper engagement/investment with art_.
             | 
              | The _generous_ reading I can make of the mistake that sets
              | these up as the other side of an inimical dichotomy, where
              | engaging in these things is _not_ loving art, is that
              | you're limited to the perspective of someone whose
              | experience with art is that of a consumer and equate that
              | perspective with loving art.
             | 
             | Being concerned about alienation from drawing, from
             | connection with a community of other artists making
             | efforts, and yes from attention and income that makes their
             | focus socially/economically viable seems pretty reasonable
             | (much more than the "they love drawing but not art"
             | bullshit).
             | 
             | It's also reasonable to be interested in what new tools can
             | do and I rather imagine there will be people who enjoy that
             | as well, some of whom may be able to produce visions that
             | were previously inaccessible to them. That's interesting
             | and exciting, but it doesn't mean there's no downsides.
        
             | akiselev wrote:
             | _> You want to write an adventure, but want to keep it
             | unpredictable. Ask the AI for ideas, there, the adventure
             | is now a true adventure, not a fake mirage created by the
             | writer._
             | 
             | There's something really funny about using an algorithm
             | that _predicts the next most likely token_ to generate
             | _unpredictable_ adventures.
        
               | LASR wrote:
               | Reminds me of:
               | 
               | https://en.wikipedia.org/wiki/Unexpected_hanging_paradox
               | 
               | You can certainly generate unpredictability from a
               | predictable mechanism.
        
               | pharrington wrote:
               | >algorithm that predicts the next most likely token to
               | generate unpredictable adventures
               | 
               | but enough about the human nervous system
        
             | starmftronajoll wrote:
             | Perhaps the resolution to your cognitive dissonance is not
             | to conclude that artists hate art. Instead I'd suggest that
             | often, artists' love of art doesn't stem from concepts like
             | "annihilating what came before" or "outcompeting hard."
             | Perhaps, like the parent comment said, it comes from self-
             | expression -- the pleasure, for both creator and audience,
             | of the "mirage created by the writer," to borrow your
             | poetic term.
        
               | bravura wrote:
               | "Art is not a mirror held up to reality, but a hammer
               | with which to shape it" - Bertolt Brecht.
               | 
                | I'm not dissenting with you or OP here, FYI.
               | 
               | Mediocre artists will just copy what exists more easily.
               | 
               | And powerful artists will have new tools to shape
               | reality.
        
               | renewiltord wrote:
               | There is some competitive notion, for certain, since
               | there is a lot of pushback on AI-generated art from
               | artists.
        
             | kookylooky wrote:
             | [dead]
        
             | hammyhavoc wrote:
             | Well? So what?
             | 
             | If I was writing dialogue, I already know exactly what I
             | want to say, that's why I'm writing it in the first place.
             | Sure, some people might like that, but I never feel devoid
             | of inspiration because I consider the greater narrative of
             | a project.
             | 
             | Why would I want to ask the AI for ideas? The joy is in the
             | process of having it my way by my own hand, word-for-word,
             | pixel-by-pixel, note-by-note.
             | 
             | If other people enjoy another method, fine, but you are
             | invalidating the traditional and highly enjoyable method.
             | Creativity isn't about efficiency or whatever. This is why
             | people leave AAA to go indie, and have it their way, not to
             | whatever the budget is or what the shareholders or managers
             | want, but because they have a vision and are naturally
             | creative, making a labour of love they know every detail of
             | because it's in their head.
             | 
             | I'd rather support the artists and pay the thousand bucks
             | because it's all a part of the traditional process.
             | 
             | Indies already release anime on YouTube, Netflix, TV et al,
             | and have done for decades.
             | 
             | Art isn't about the output for most artists. Art is about
             | the process. People consume art. Artists enjoy the creative
             | process and having it their way.
             | 
             | I hate attention. I haven't had a photo taken in 13 years.
             | I've stopped all interviews. I mostly work behind-the-
             | scenes on documentaries and games these days. I don't even
             | care about if a project makes a return, we do it because we
             | love it and enjoy the process.
             | 
             | There is no "best", art is subjective. Art isn't even a
             | competition. Crass and vulgar. Are you sure you actually
             | understand art and aren't conflating it with monetizing it
             | in a business context?
        
           | jutrewag wrote:
           | Procedural generation is miles away from what these LLMs are
           | doing. It's not just coming up with a random maze/map for a
           | fetch mission. Right now, it can generate a novel quest, with
           | all of its characters, come up with a unique set of mission
           | goals and theoretically generate those assets and characters
            | (at least in 2D). Turns out, all this self-expression or
           | whatever is cheap.
        
             | hammyhavoc wrote:
             | Not in this context of self-expression. Neither are
             | intentional, both are broad strokes.
             | 
             | Technologically? Yes, you're right, very different.
             | 
             | But in output? It's still generative.
        
               | jutrewag wrote:
               | "Intentional" is a weasel word in this context. It means
               | very little in terms of the end result. Humans making art
               | is generative as well even all the way back to cave
               | paintings which were interpretations of animals in
               | nature.
        
               | hammyhavoc wrote:
               | No, it isn't. Do you really think Alfred Hitchcock would
               | leave his film up to others, let alone a machine? No. It
               | wasn't his vision, thus it wasn't his self-expressive
               | intent. Read about auteur theory.
               | 
               | That's not what generative art is, mate.
               | https://en.wikipedia.org/wiki/Generative_art
        
           | moffkalast wrote:
           | > I can procedurally generate levels all day long in a video
           | game, but for me, they're never going to be as compelling or
           | interesting as a lower-resolution and low quality textured
           | game from the '90s or '00s where every single tree and rock
           | is placed with intent.
           | 
           | The popularity of Minecraft and other procedural games would
           | imply that there is still a large number of people who value
           | exploring the unknown generated content, even if it means
           | it's not curated.
           | 
           | Yes the quality won't be as good, but you do get quantity
           | instead. And the quality will improve.
        
             | lelandfe wrote:
              | Not to mention the middle ground of handmade pieces and
             | putting them together in a randomly generated way. That's
             | how a ton of games work - from Noita, to Cataclysm DDA, to
             | Dead Cells.
        
               | hammyhavoc wrote:
               | That's quite literally how Minecraft works. That's also
               | how games I've worked on function.
        
             | hammyhavoc wrote:
             | Yup, I ran a Minecraft server for about 8 years, I loved a
             | bunch of procedurally generated games. I've worked on games
             | in the past and present that use procedural generation.
             | Furthermore, the most interesting part of Minecraft was
             | what people built, not what world the seed generated, IME.
             | Usually we'd WorldEdit vast swathes of it away to have flat
             | areas to build and have fun, or we'd WE with brushes and
             | materials to hand create terrain because the procedural
             | generation was so-so.
             | 
             | My point still remains that there's a very different
             | experience in something intentional and exact. One feels
             | very human.
             | 
             | The quality isn't the issue, the broad strokes without
             | intent are.
             | 
             | This is like preferring impressionism over a Dutch master.
             | Audiences for both. I'm not invalidating any of them, I'm
             | just saying that for me, I prefer something more human and
             | different.
        
         | root_axis wrote:
         | ChatGPT including the GPT-4 variant sucks quite terribly at
         | creative writing, especially the kind of writing necessary to
         | create long-form narratives like that in serial television and
         | especially novels. The technology will certainly improve and it
         | will definitely become a staple in the editor's toolbox, but it
         | is so far away from being able to produce long form narratives
         | well that it's not a given it will eventually get there.
        
       | graiz wrote:
        | Chatbots are the future, but your points are valid. They don't
        | provide affordances; however, they do provide a form of
        | progressive disclosure and direct interaction that was previously
        | impossible.
       | 
       | Toolbars and menus provide affordances but you still need to know
       | what things are called and what order to use them. "I'd like to
       | email this file as a PDF and I'd also like to print it." may be
       | much easier in a chat UX than in a menu based UX. Often these
       | things can co-exist but chatUX has access to much more nuanced UI
       | that would otherwise be too complex to build or expose.
        
       | stephencoyner wrote:
       | I think the future interface is a smart assistant for your life
       | that gives you suggestions on what you should be doing (both for
       | work and personal life). Sure, there may be a prompting text box,
       | but the assistants will be so good at suggesting that you won't
       | need it very often (besides searching for the occasional thing or
       | giving feedback).
       | 
       | Driving these suggestions is all of your data as well as your
       | goals and values that you can give to the assistant in natural
       | language.
       | 
       | At work the goal might be: "I want to sell $100,000 worth of
       | widgets this quarter" and it will break down step by step how
       | that might be possible.
       | 
       | For personal life it might be "I want to get involved in the
       | kayaking community" and it will recommend activities, clubs, etc.
       | 
       | Once these assistants are good enough, it will be reckless to not
       | use one (especially at work). We will then live in a world where
        | AI and humans live together and make decisions together hand in
       | hand. Buckle up.
        
       | throwaway4837 wrote:
       | If you think the far future (100s of years) involves being able
       | to talk to a synthetic humanoid using spoken language, then
        | chatbots are almost certainly a point on the curve.
        
       | aiisahik wrote:
        | Google Search got to be a pretty successful business, and it is
        | probably still the single most popular information retrieval tool
        | on earth - all done with a single input box.
        
       | dahwolf wrote:
        | A lot of people are projecting that ChatGPT will come for the
        | search market, but I wonder how that will play out for
        | low-cognition queries.
       | 
       | Quite a few people use search by awkwardly typing on mobile one
       | or two words, probably misspelled and/or auto-completed as they
       | type it. The query isn't sophisticated, lacks a lot of context
       | and parameters, which the search engine then tries to guess.
       | 
        | When you use ChatGPT in that way, you'll get useless generic
        | answers. It seems to shine specifically when the input is more
        | specific and detailed, which also presumes users are willing and
        | able (education level) to give such rich input.
       | 
        | The idea that it's better than search for this specific normie
        | behavior is one I openly question. And let's not forget about the
       | economics. More expensive to run, vastly less ad space, and
       | content owners (the whole web) are going to be pissed and will
       | put up ever higher walls.
        
         | bigyikes wrote:
         | Just wait until we have a model that automatically translates
         | vague, awkward prompts into something more useful.
         | 
         | Put differently: if Google's search models already have the
         | ability to return great results for poor queries, why couldn't
         | a large language model (or a plug-in for one) learn the same
         | algorithm?
        
         | unshavedyak wrote:
          | > When you use ChatGPT in that way, you'll get useless generic
          | answers. It seems to shine specifically when the input is more
          | specific and detailed, which also presumes users are willing
          | and able (education level) to give such rich input.
         | 
         | Sidenote, i've found GPT useful enough to pay for (GPT Plus) by
         | doing the opposite. Or rather, i find it very useful when i
         | struggle to search for problems. ChatGPT helps guide me to new
         | search or research terms, sometimes even providing the answer
         | more directly.
         | 
         | It feels like the olden days where Google was great at finding
         | a movie based on some vague movie description. GPT does that
         | for a ton of things for me, enough that i found it useful.
         | 
         | It hasn't replaced online research but it has accelerated it
         | for me.
        
         | LASR wrote:
         | What people forget is the underlying capability - LLMs are able
         | to do reasoning.
         | 
         | So the one-track thinking of garbage-in-garbage-out is not the
         | limitation any more.
         | 
          | What we're now able to do, precisely, is garbage in, less
          | garbage out.
         | 
          | You can take a vague prompt in and ask GPT to hypothesize on
          | what it means and why the user is asking that question, and
          | then generate a detailed prompt. Then use that prompt to
          | perform the search.
          | 
          | Trying this out in the playground, I see a surprisingly capable
          | search experience.
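          | 
          | A minimal sketch of that two-step flow, assuming the OpenAI
          | Python client's ChatCompletion API; the prompt wording and the
          | search_index() backend are illustrative stand-ins, not anything
          | specified above:
          | 
          |     import openai
          | 
          |     SYSTEM = ("Rewrite the user's vague query as a detailed "
          |               "search query. Guess the likely intent and add "
          |               "any missing context.")
          | 
          |     def expand_query(vague_query: str) -> str:
          |         # Ask the model to hypothesize the intent behind a
          |         # vague query and produce a richer, more specific one.
          |         resp = openai.ChatCompletion.create(
          |             model="gpt-4",
          |             messages=[
          |                 {"role": "system", "content": SYSTEM},
          |                 {"role": "user", "content": vague_query},
          |             ],
          |         )
          |         return resp["choices"][0]["message"]["content"]
          | 
          |     def search(vague_query: str) -> list:
          |         # Expand first, then hand the detailed prompt to
          |         # whatever search backend you actually have.
          |         detailed = expand_query(vague_query)
          |         return search_index(detailed)  # hypothetical backend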
        
       | raggi wrote:
       | one thing I can say for certain: scroll handlers definitely
       | aren't the future
        
         | dang wrote:
         | I agree, but:
         | 
         | " _Please don 't complain about tangential annoyances--e.g.
         | article or website formats, name collisions, or back-button
         | breakage. They're too common to be interesting._"
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
           | morelisp wrote:
            | For once, the fact that everyone agrees scroll handlers are
            | godawful yet after years they're still omnipresent is germane
            | to the article's topic.
        
           | freediver wrote:
           | Comments like this confirm that you are indeed reading every
            | comment on the website, which should be outside the realm of
            | what's humanly possible.
        
             | dang wrote:
             | It's all an illusion.
        
             | slater wrote:
             | Or there's lots of people who click the flag link
        
       | sitkack wrote:
       | > Text inputs have no affordances
       | 
        | It has _all_ the affordances. You can turn it into anything you
        | want. Want it to respond with JSON? Check, it can do that. Turn a
        | wall of text into Python data structures? Check, it can do that
        | too.
        | 
        | You can take your LLM text interface and put whatever API you
        | want on top. You start with clay and mold it into anything you
        | need. You construct a parser so that data on the way in is
        | already constrained and validated. Same goes for the output.
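        | 
        | A minimal sketch of that molding, assuming you have already
        | prompted the model to answer in JSON; the task schema and field
        | names here are made up for illustration:
        | 
        |     import json
        | 
        |     PROMPT = (
        |         "Extract the tasks from the text below. Respond with "
        |         "JSON only, shaped like "
        |         '{"tasks": [{"title": str, "due": str or null}]}.\n\n'
        |     )
        | 
        |     def parse_tasks(raw_reply: str) -> list:
        |         # The parser is where the constraining happens: anything
        |         # that is not valid JSON in the expected shape fails
        |         # here, before it reaches the rest of the program.
        |         data = json.loads(raw_reply)
        |         tasks = data["tasks"]
        |         for task in tasks:
        |             if not isinstance(task.get("title"), str):
        |                 raise ValueError("bad task: %r" % task)
        |         return tasks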
        
         | disconcision wrote:
         | > Text inputs have no affordances
         | 
         | >It has all the affordances.
         | 
         | these are two different ways of saying the same thing
        
         | tdaltonc wrote:
         | Whether you want to call it an "empty set" or "the set of all
         | possible sets" isn't really relevant to the authors point that
         | an empty box has discoverability problems.
        
         | lxgr wrote:
         | But what if I don't know what I want?
         | 
         | A graphical UI can provide much more and much more intuitive
         | guidance than a chat input ever will. And I say that as a big
         | fan of Unix and the shell.
        
           | IanCal wrote:
           | You can ask it. You can explain your problem and ask how it
           | might be able to help. You can discuss and narrow down with
           | some back and forth what it is you want to do.
           | 
           | I could tell a chat bot I am finding the horizontal split in
           | my editor is annoying because I have a wide monitor, and have
           | it tell me there's a setting for that and ask if I want the
           | default changed.
           | 
            | With a GUI I might have to go through the files menu for
            | settings, check if it's in edit-preferences, check tools-
            | options, before maybe having to find out online that it's in
            | some settings file.
        
         | zmmmmm wrote:
         | Yeah this was so weird to read. This is probably the number one
         | property I think makes interfaces like ChatGPT compelling - you
         | don't _need_ to know how to use it - just use the human
          | language you already know. If you don't understand something
         | it says, just ask it to explain it. Essentially, it makes
         | affordances obsolete.
        
         | JusticeJuice wrote:
          | I dunno, I think her point is very valid. Chatbots can't
          | literally do anything and everything, and an empty text input
          | isn't going to help guide a user towards what it can do and
          | what it's good at. Just because a system has an LLM to interact
          | with it,
         | doesn't mean it'll suddenly support any desired action the user
         | wants done.
        
         | kmtrowbr wrote:
         | I thought this was a funny statement too. Text is the most
         | powerful, original medium. It's difficult to overstate how
         | powerful and valuable it is. Of course, text won't necessarily
         | be the only UI exposed for interacting with these "Large
         | Language Models." But that UI can be built on top of the text
         | -- which wouldn't work in the other direction.
        
           | alanbernstein wrote:
           | I think the point is that a "chatbot" is, by (the author's)
           | definition, a UI with only a bare text prompt. Once you start
           | building more UI on top of that, you're ... doing what the
           | article suggests.
        
             | kmtrowbr wrote:
             | I see -- my apologies I must not have read it closely
             | enough! :blush:
        
               | sitkack wrote:
               | I re-read the post. The author is arguing against a
               | statement that no one made and then uses the rest of the
               | essay to outline their work. It is a bit of a setup.
               | 
               | An LLM-Chatbot is an extremely flexible tool, but I
               | haven't seen anyone argue that all of our UIs and
               | applications should be replaced by chat. That is
               | ridiculous.
        
       | davidthewatson wrote:
       | No, they're not, but my reasons for reaching that conclusion are
       | somewhat different:
       | 
        | 1) I don't think chatting with anything, human or machine, is a
        | learning experience, particularly since the machine's veracity is
        | poor and untrustworthy, and Hinton's resignation today tells you
        | everything you need to know about the narrative inside big
        | research orgs right now.
       | 
        | 2) Recognition vs. recall. It's the equivalent of an
        | informal-language CLI (which I prefer, by the way), but there is
        | no recognition (as in symbols), only recall.
       | 
       | Long story short, I think the emergent need is for written
       | communication, with a tip of the hat to Daniele Procida:
       | 
       | https://ubuntu.com/blog/engineering-transformation-through-d...
       | 
       | Except that what's missing is a human-computer collaboration,
       | i.e. sensemaking with another tip of the hat to Peter Pirolli:
       | 
       | https://www.efsa.europa.eu/sites/default/files/event/180918-...
        
       | wilg wrote:
       | Yeah, chat isn't a universally great interface. But it's a great
       | default because it's totally free form.
       | 
       | It should be pretty easy (even with today's APIs and technology)
       | to have an LLM design a user interface for you for your current
       | task.
       | 
       | Simplest way: output a JSON of simple control definitions with
       | every answer.
       | 
       | Coolest way: Just have it generate a full-ass React front end or
       | whatever on every message.
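        | 
        | A minimal sketch of the "simplest way": the model returns an
        | answer plus a list of control definitions, and the front end
        | renders whatever it finds. The control vocabulary and the
        | render_widget() placeholder are made up for illustration:
        | 
        |     import json
        | 
        |     def render_widget(control: dict) -> None:
        |         # Placeholder: a real front end would map each control
        |         # type to an actual widget.
        |         print("render", control["type"], control["id"])
        | 
        |     # What a model reply might look like if you ask it to attach
        |     # UI controls to every answer.
        |     reply = """{
        |       "answer": "Here are flights to Lisbon next weekend.",
        |       "controls": [
        |         {"type": "slider", "id": "max_price",
        |          "min": 50, "max": 800},
        |         {"type": "checkbox", "id": "direct_only",
        |          "label": "Direct flights only"},
        |         {"type": "button", "id": "book", "label": "Book cheapest"}
        |       ]
        |     }"""
        | 
        |     ui = json.loads(reply)
        |     print(ui["answer"])
        |     for control in ui["controls"]:
        |         render_widget(control)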
        
         | fatherzine wrote:
          | Interesting. Futuristic idea: generate a front-end on the fly,
          | customized to the user's learned preferences. Motivation:
          | humans are poor learners, so save them the trouble of learning
          | a new interface style. At the limit, generate a whole
          | world-in-the-world in real time, fulfilling Zuckerberg's dream
          | -- an utterly lonely Matrix. Now that's a thinly sugar-coated
          | version of the Hell described in, e.g., Catholic doctrine. And
          | we will choose it ourselves. Deeply sobering.
        
           | efields wrote:
            | We are very good learners. That's how we got this far. Not
            | every new interface is good and worth learning. Sometimes it
           | feels best to stick with what works.
        
       | paulddraper wrote:
       | Quality quote:
       | 
       | "There's an ongoing trend pushing towards continuous consumption
       | of shorter, mind-melting content. Have a few minutes? Stare at
       | people putting on makeup on TikTok. Winding down for sleep? A
       | perfect time to doomscroll 180-character hot takes on Twitter."
        
         | BulgarianIdiot wrote:
         | 140 chars, he means.
         | 
         | *280 chars, he means.
         | 
         | *4000 chars, he means.
         | 
         | *10000 chars, he means.
        
           | skybrian wrote:
           | *she
        
       | efields wrote:
       | A contextualized chatbot is still a chatbot. I think they're
       | going to stick around for a while... we've effectively been
        | trying this out on the web since the Ask Jeeves days, and that
       | dream is mostly realized now.
        
       ___________________________________________________________________
       (page generated 2023-05-01 23:02 UTC)