[HN Gopher] UI vs. API. vs. UAI
       ___________________________________________________________________
        
       UI vs. API. vs. UAI
        
       Author : bckmn
       Score  : 61 points
       Date   : 2025-08-11 16:11 UTC (6 hours ago)
        
 (HTM) web link (www.joshbeckman.org)
 (TXT) w3m dump (www.joshbeckman.org)
        
       | metayrnc wrote:
       | This is already true for just UI vs. API. It's incredible that we
       | weren't willing to put the effort into building good APIs,
       | documentation, and code for our fellow programmers, but we are
       | willing to do it for AI.
        
         | bubblyworld wrote:
         | I think this can kinda be explained by the fact that agentic AI
         | more or less _has_ to be given documentation in order to be
         | useful, whereas other humans working with you can just talk to
         | you if they need something. There 's a lack of incentive in the
         | human direction (and in a business setting that means priority
         | goes to other stuff, unfortunately).
         | 
         | In theory AI can talk to you too but with current interfaces
         | that's quite painful (and LLMs are notoriously bad at admitting
         | they need help).
        
           | freedomben wrote:
           | I also think it makes a difference that an AI agent can read
           | the docs very quickly, and don't typically care about
           | formatting and other presentation-level things that humans
           | have to care about, whereas a human isn't going to read it
           | all, and may read very little of it. I've been at places
           | where we invested substantial time documenting things, only
           | to have it be glanced at maybe a couple of times before
           | becoming outdated.
           | 
           | The idea of writing docs for AI (but not humans) does feel a
           | little reflexively gross, but as Spock would say, it does
           | seem logical
        
           | zahlman wrote:
           | > agentic AI more or less has to be given documentation in
           | order to be useful, whereas other humans working with you can
           | just talk to you if they need something. ... In theory AI can
           | talk to you too but with current interfaces that's quite
           | painful (and LLMs are notoriously bad at admitting they need
           | help).
           | 
           | Another framing: documentation _is_ talking to the AI, in a
           | world where AI agents won 't "admit they need help" but will
           | read documentation. After all, they process documentation
           | fundamentally the same way they process the user's request.
        
         | righthand wrote:
         | We are only willing to have the Llm generate it for AI. Don't
         | worry people are writing and editing less.
         | 
         | And all those tenets of building good APIs, documentation, and
         | code are opposite the incentive of building enshittified APIs,
         | documentation, and code.
        
         | arscan wrote:
         | The feedback loop from potential developer users of your API is
         | excruciatingly slow and typically not a process that an API
         | developer would want to engage in. Recruit a bunch of
         | developers to read the docs and try it out? See how they used
         | it after days/weeks? Ask them what they had trouble with?
         | Organize a hackathon? Yuck. AI, on the other hand, gives you
         | immediate feedback as to the usability of your "UAI". It makes
         | something, in under a minute, and you can see what mistakes it
         | made. After you make improvements to the docs or API itself,
         | you can effectively wipe its memory by cleaning out the
         | context, and see if what you did helped. It's the difference
         | between debugging a punchcard based computing system and one
         | that has a fully featured repl.
        
       | darepublic wrote:
       | if you want your app to be automated wouldn't you just publish
       | your api and make that readily available? I understand the need
       | for agentic UI navigation but obviously an api is still easier
       | and less intensive right. The problem is that it isn't always
       | available, and there ui agents can circumvent that. But you want
       | to embrace the automation of your app so.. just work on your API?
       | You can put an invisible node in your UI to tell agents to stop
       | wasting compute and use the api.
        
       | jngiam1 wrote:
       | https://mcpui.dev/ is worth checking out, really nice project;
       | get the tools to bring dynamic ui to the agents.
        
       | showerst wrote:
       | I really vehemently disagree with the 'feedforward, tolerance,
       | feedback' pattern.
       | 
       | Protocols and standards like HTML built around "be liberal with
       | what you accept" have turned out to be a real nightmare. Best-
       | guessing the intent of your caller is a path to subtle bugs and
       | behavior that's difficult to reason about.
       | 
       | If the LLM isn't doing a good job calling your api, then make the
       | LLM get smarter or rebuild the api, don't make the API looser.
        
         | arscan wrote:
         | > Protocols and standards like HTML built around "be liberal
         | with what you accept" have turned out to be a real nightmare.
         | 
         | This feels a bit like the setup to the "But you have heard of
         | me" joke in Pirates of the Caribbean [2003].
        
           | paulddraper wrote:
           | Or "There are only two kinds of languages: the ones people
           | complain about and the ones nobody uses."
        
         | mort96 wrote:
         | I'm not sure it's possible to have a technology that's user-
         | facing with multiple competing implementations, and not also,
         | in some way, "liberal in what it accepts".
         | 
         | Back when XHTML was somewhat hype and there were sites which
         | actually used it, I recall being met with a big fat "XML parse
         | error" page on occasion. If XHTML really took off (as in a
         | significant majority of web pages were XHTML), those XML parse
         | error pages would become way more common, simply because
         | developers sometimes write bugs and many websites are server-
         | generated with dynamic content. I'm 100% convinced that some
         | browser would decide to implement special rules in their XML
         | parser to try to recover from errors. And then, that browser
         | would have a significant advantage in the market; users would
         | start to notice, "sites which give me an XML Parse Error in
         | Firefox work well in Chrome, so I'll switch to Chrome". And
         | there you have the exact same problem as HTML, even though the
         | standard itself is strict.
         | 
         | The magical thing of HTML is that they managed to make a
         | standard, HTML 5, which incorporates most of the special case
         | rules as implemented by browsers. As such, all browsers would
         | be lenient, but they'd all be lenient _in the same way_. A
         | strict standard which mandates e.g  "the document _MUST_ be
         | valid XML " results in implementations which are lenient, but
         | they're lenient _in different ways_.
         | 
         | HTML should arguably have been specified to be lenient from the
         | start. Making a lenient standard from scratch is probably
         | easier than trying to standardize commonalities between many
         | differently-lenient implementations of a strict standard like
         | what HTML had to do.
        
           | lucideer wrote:
           | History has gone the way it went & we have HTML now, there's
           | not much point harking back, but I still find it very odd
           | that people today - with the wisdom of foresight - believe
           | that the world opting for HTML & abandoning XHTML was the
           | sensible choice. It seems odd to me that it's not seen as one
           | of those "worse winning out" stories in the history of
           | technology, like betamax.
           | 
           | The main argument about XHTML not being "lenient" always
           | centred around client UX of error display - Chrome even went
           | on to actually implement a user-friendly partial-
           | parse/partial-render handling of XHTML files that literally
           | solved everyone's complaints via UI design without any spec
           | changes but by this stage it was already too late.
           | 
           | The whole story of why we went with HTML is somewhat
           | hilarious: 1 guy wrote an ill informed blog post bitching
           | about XHTML, generated a lot of hype, made zero concrete
           | proposals to solve its problems, & then somehow convinced
           | major browser makers (his current & former employers) to form
           | an undemocratic rival group to the W3C, in which he was
           | appointed dictator. An absolutely bizarre story for the ages,
           | I do wish it was documented better but alas most of the
           | resources around it were random dev blogs that link rotted.
        
             | integralid wrote:
             | >The whole story of
             | 
             | Is that really the story? I think it was more like
             | "backward compatible solution soon about more pure,
             | theoretically better solution"
             | 
             | There's enormous non-xhtml legacy than nobody wanted to
             | port. And tooling back in the day didn't make it easy to
             | write correct xhtml.
             | 
             | Also like it or not, HTML is still written by humans
             | sometimes, and they don't like parser blowing up because of
             | a minor problem. Especially since such problems are often
             | detected late, and a page which displays slightly wrong is
             | much better outcome than the page blowing up.
        
           | chowells wrote:
           | Are you aware of HTML 5? Fun fact about it: there's zero
           | leniency in it. Instead, it specifies a precise semantics (in
           | terms of parse tree) for every byte sequence. Your parser
           | either produces correct output or is wrong. This is the
           | logical end point of being lenient in what you accept -
           | eventually you just standardize everything so there is no
           | room for an implementation to differ on.
           | 
           | The only difference between that and not being lenient in the
           | first place is a whole lot more complex logic in the
           | specification.
        
             | mort96 wrote:
             | > Are you aware of HTML 5? Fun fact about it: there's zero
             | leniency in it.
             | 
             | I think you understand what I mean. Every byte sequence has
             | a
             | 
             | > The only difference between that and not being lenient in
             | the first place is a whole lot more complex logic in the
             | specification.
             | 
             | Not being lenient is how HTML started out.
        
           | com2kid wrote:
           | > I recall being met with a big fat "XML parse error" page on
           | occasion. If XHTML really took off (as in a significant
           | majority of web pages were XHTML), those XML parse error
           | pages would become way more common
           | 
           | Except JSX is being used now all over the place and JSX is
           | basically the return of XHTML! JSX is an XML schema with
           | inline JavaScript.
           | 
           | The difference now days is all in the tooling. It is either
           | precompiled (so the devs see the error) or generated on the
           | backend by a proper library and not someone YOLOing PHP to
           | super glue strings together, as per how dynamic pages were
           | generated in the glory days of XHTML.
           | 
           | We basically got full circle back to XHTML, but with a lot
           | more complications and a worse user experience!
        
           | pwdisswordfishz wrote:
           | lol CVE-2020-26870
        
         | wvenable wrote:
         | HTML being lenient is what made progressive enhancement
         | possible -- right down the original <img> tag. The web would
         | not have existed at all if HTML was strict right from the
         | start.
        
           | arccy wrote:
           | That's poor reasoning. The web now counts as strict but still
           | extensible: you just have to clearly define how to handle
           | unknown input. The web treats all unknowns as a div.
        
             | wvenable wrote:
             | > you just have to clearly define how to handle unknown
             | input.
             | 
             | That is being lenient. Allowing any unknown input is being
             | lenient in what you accept. Not allowing unknown input at
             | all is being strict.
        
         | dathinab wrote:
         | oh yes so true, but I would generalize it to "to flexible"
         | 
         | - content type sniffing spawned a whole class of attacks, and
         | should have been unnecessary
         | 
         | - a ton of historic security issues where related to html
         | parsing being too flexible, or some JS parts being to flexible
         | (e.g. Array prototype override)
         | 
         | - or login flows being too flexible creating a easy to overlook
         | way to bypass (part of) login checks
         | 
         | - or look at the mess OAuth2/OIDC had been for years because
         | they insisted to over-enginer it and how especially it being
         | liberal about quite many parts lead to more then one or two big
         | security incidents
         | 
         | - (more then strictly needed) cipher flexibility is by now
         | widely accepted to have been an anti pattern
         | 
         | - or how so much theoretically okay but "old" security tech is
         | such a pain to use because it was made to be supper tolerant to
         | everything, like every use case imaginable, every combination
         | of parameters, every kind of partial uninterpretable parts (I'm
         | looking at you ASN.1, X509 certs and many old CA software,
         | theoretically really not bad designed, practically such a
         | pain).
         | 
         | And sure you also can be too strict, high cipher flexibility
         | being an anti-pattern was incorporated into TLS 1.3. But TLS
         | still needs some cipher flexibility, so they fund a compromise
         | of (oversimplified) you can choose 1 of 5 cipher suites but
         | can't change any parameter of that suites.
         | 
         | Just today I read an article (at work, I don't have the link at
         | hand) about some so hypothetical but practically probably
         | doable (with a bunch of more work) scenarios to trick very
         | flexible multi step agents into leaking your secrets. The core
         | approach was that they found a way to have a relative small
         | snippet of text which if it end up in the context has a high
         | chance to basically override the whole context with just your
         | instruction (quite a bit oversimplified). In turn if you can
         | sneak it into someones queries (e.g. you GTP model is allowed
         | to read you mails and it's in a mail send to you) you can then
         | trick the multi step model to grab a secret from your computer
         | (because the agents often run with user permissions) and send
         | it to you (by e.g. instrumenting the agent to scan a website
         | under an url which happens to now contain the secret).
         | 
         | Its a bit hypothetical, its hard to pull of, but it's very well
         | in the realm of possibility due to how content and instructions
         | are on a very fundamental level not cleanly separated (I mean
         | AI vendors do try, but so far that never worked reliable it's
         | in the end all the same input).
        
       | kylecazar wrote:
       | Separating presentation layer from business logic has always been
       | a best practice
        
       | throwanem wrote:
       | So, this gets to a fundamental or "death of the author" ie
       | philosophical difference in how we define what an API is "for."
       | Do I as its publisher have final say, to the extent of forbidding
       | mechanically permissible uses? Or may I as the audience, whom the
       | publisher exists to serve, exercise the machine to its not
       | intentionally destructive limit, trusting its maker to prevent
       | normal operation causing (even economic) harm?
       | 
       | The answer of course depends on the context and the circumstance,
       | admitting no general answer for every case though the cognitively
       | self-impoverishing will as ever seek to show otherwise. What is
       | undeniable is that if you didn't specify your reservations API to
       | reject impermissible or blackout dates, sooner or later whether
       | via AI or otherwise you will certainly come to regret that. (Date
       | pickers, after all, being famously among the _least_ bug-prone of
       | UI components...)
        
       | cco wrote:
       | We recently released isagent.dev [1] exactly for this reason!
       | 
       | Internally at Stytch three sets of folks had been working on
       | similar paths here, e.g. device auth for agents, serving a
       | different documentation experience to agents vs human developers
       | etc and we realized it all comes down to a brand new class of
       | users on your properties: agents.
       | 
       | IsAgent was born because we wanted a quick and easy way to
       | identify whether a user agent on your website was an agent (user
       | permissioned agent, not a "bot" or crawler) or a human, and then
       | give you a super clean <IsAgent /> and <IsHuman /> component to
       | use.
       | 
       | Super early days on it, happy to hear others are thinking about
       | the same problem/opportunity.
       | 
       | [1] GitHub here: http://github.com/stytchauth/is-agent
        
       | kordlessagain wrote:
       | All you need is AHP: https://ahp.nuts.services
        
       ___________________________________________________________________
       (page generated 2025-08-11 23:00 UTC)