[HN Gopher] Show HN: TypeLeap: LLM Powered Reactive Intent UI/UX
___________________________________________________________________
Show HN: TypeLeap: LLM Powered Reactive Intent UI/UX
I'm building this resource to dive deeper into "TypeLeap," a UI/UX
concept where interfaces dynamically adapt based on as-you-type
_intent detection_. Seeking real-world examples of intent-driven
UIs in the wild and design mock-ups! Design inspiration &
contributions especially welcome.
Author : eadz
Score : 14 points
Date : 2025-03-08 20:37 UTC (2 hours ago)
(HTM) web link (www.typeleap.com)
(TXT) w3m dump (www.typeleap.com)
| artificialprint wrote:
 | Interesting, but isn't it also true that intent can be much more
 | accurately conveyed with words than inferred by guessing? Visit
 | apple.com, reorganize this list in alphabetical order, find me a ...
|
| You get it
| teaearlgraycold wrote:
| This seems like a very expensive way to do basic NLP
| keyserj wrote:
| Cool idea. FYI the GitHub link at the bottom leads to "page not
| found". Maybe the repo is not public?
| n49o7 wrote:
| What the Windows search box wanted to be.
| F7F7F7 wrote:
 | This is great. But the vast majority of human beings (the non-
 | gaming, non-terminal-loving-developer kind) want to avoid the
 | keyboard at all costs.
|
 | They are doing it via trackpads, mousepads, touch screens, etc.,
 | all inputs that transcend language or the ability to find
 | meaningful words.
| kevmo314 wrote:
 | Neat idea. Regarding performance, I think you could do a lot
 | better by training a small classifier model, essentially an
 | embedding model, using the LLM as the distillation source. This
 | would be much smaller, addressing your desire for it to run in
 | the browser, and much faster, addressing your quantization need.
 | Using the full LLM is a bit of overkill; you can extract the core
 | of what you're looking for with something a little custom.
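The distillation idea above can be sketched roughly as follows. This is a minimal illustration, not anything from the TypeLeap project: the prefix/intent pairs stand in for labels a full LLM would generate offline, the intent names are hypothetical, and scikit-learn is assumed to be available. A tiny character-n-gram classifier then predicts intent cheaply at keystroke time.

```python
# Sketch: distill LLM-produced intent labels into a tiny classifier.
# The pairs below are stand-ins for (typed prefix, intent) labels
# that an LLM would produce offline; the intent names are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

llm_labeled = [
    ("weather in par", "weather"),
    ("weather tomorrow", "weather"),
    ("directions to air", "navigation"),
    ("route to downtown", "navigation"),
    ("define ephemeral", "dictionary"),
    ("what does ubiquitous mean", "dictionary"),
]
texts, intents = zip(*llm_labeled)

# Character n-grams cope well with partial, as-you-type prefixes.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(list(texts), list(intents))

# Classify a new partial query as the user types it.
print(clf.predict(["weather in lon"])[0])
```

A model like this is small enough to ship to the browser (or port to ONNX/tfjs), so each keystroke costs a vector lookup rather than an LLM call.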
| smokel wrote:
| This looks like a neat idea, but I'm not too positive about it.
|
| This makes using computers even harder to explain to people who
| do not spend their entire day keeping up with the latest
| developments. They cannot form a mental image or reuse any memory
| of what will happen next, because it is all context dependent.
|
| On the other end of the spectrum, for power users, dynamically
| adapting user interfaces can also be quite annoying. One can't
| type ahead, or use shortcut keys, because one doesn't know what
| the context will be. Having to wait any positive amount of time
| for feedback is limiting.
|
 | Then again, there are probably tons of places where this _is_
 | useful. I'm just a bit disappointed that we (as a society)
| haven't gotten the basics covered: programming still requires
| text files that can be sent to a matrix printer, and the latency
| of most applications is increasing instead of decreasing as
| computers become faster.
| blueboo wrote:
 | LLMs generating just-in-time UI have a lot of interest and effort
 | going into them. It's usually called "generative UI" or "dynamic
 | UI generation", and it was a pretty hot topic about a year ago.
 | Here's an HF blog post on it:
 | https://huggingface.co/blog/airabbitX/llm-chatbots-30.
| Also check out Microsoft's Adaptive Cards. Nielsen Group wrote
| about it too. https://www.nngroup.com/articles/generative-ui/
|
 | The problem is that it's hard to come up with better examples
 | than the toy examples of weather and maps. Goodness, there are so
 | many travel planning demos. Who actually wants the context switch
 | of a UI popping up mid-typed-sentence? Is a date picker really
 | more convenient than typing "next school break"? Visualizations
 | are interesting, but that changes the framing from soliciting
 | input to densifying information presentation. Datagrids and
 | charts'll be valuable.
|
| Anyway, it's a space that's still starving for great ideas. Good
| luck!
___________________________________________________________________
(page generated 2025-03-08 23:00 UTC)