[HN Gopher] Show HN: Light like the Terminal - Meet GTK LLM Chat...
___________________________________________________________________
Show HN: Light like the Terminal - Meet GTK LLM Chat Front End
Author here. I wanted to keep my conversation with #Gemini about
code handy while discussing something creative with #ChatGPT and
using #DeepSeek in another window. I think Electron apps are
wasteful, so I wanted to chat with LLMs on my own terms. When
I discovered the llm CLI tool I really wanted to have convenient
and pretty looking access to my conversations, and so I wrote gtk-
llm-chat - a plugin for llm that provides an applet and a simple
window to interact with LLM models. Make sure you've configured
llm first (https://llm.datasette.io/en/stable/). I'd love to get
feedback, PRs and, who knows, perhaps a coffee!
https://buymeacoffee.com/icarito
Author : icarito
Score : 25 points
Date : 2025-04-21 16:36 UTC (6 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
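The llm configuration step mentioned in the post can be sketched as
follows. The `pip install` and `llm keys` commands come from the llm
docs; installing the plugin via `llm install gtk-llm-chat` is an
assumption based on the project name - check the project's README for
the exact steps:

```shell
# Install the llm CLI and configure a provider key
# (see https://llm.datasette.io/en/stable/ for details).
pip install llm
llm keys set openai          # paste your API key when prompted

# Assumed plugin install command; verify against the gtk-llm-chat README.
llm install gtk-llm-chat

# List the models llm can see, to confirm the setup worked.
llm models
```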
| Gracana wrote:
| This looks quite nice. I would like to see the system prompt and
| inference parameters exposed in the UI, because those are things
| I'm used to fiddling with in other UIs. Is that something that
| the llm library supports?
| icarito wrote:
| Yeah absolutely! I've just got to the point where I'm happy
| with the architecture, so I'll continue to add UI. I've just
| added support for fragments, and I'm thinking of presenting
| them as if they were attached documents. On my radar are
| switching models mid-conversation and perhaps the ability to
| roll back a conversation or remove some messages. But yeah,
| system prompt and parameters would be nice to expose too!
| Thanks for the suggestions!
| Gracana wrote:
| Awesome. It would be great to see a nice gtk-based open
| source competitor to lm-studio and the like.
| guessmyname wrote:
| It'd be better if it were written in C, or at least Vala. With
| Python, you have to wait a couple hundred milliseconds for the
| interpreter to start, which makes it feel less native than it
| could be. That said, the latency of the LLM responses is much
| higher than that of the UI, so I guess the slowness of Python
| doesn't matter.
| icarito wrote:
| Yeah, I agree - I've been thinking about using Rust. But
| ultimately it's a problem with GTK3 vs GTK4 too, because if
| we could reuse the Python interpreter from the applet, that
| would speed things up - but GTK4 doesn't have support for
| AppIndicator icons(!).
|
| I've been pondering whether to backport to GTK3 for this sole
| purpose. I find that after the initial delay to startup the
| app, its speed is okay...
|
| Porting to Rust is not really planned because I'd lose the
| llm-python base - but it's still something that triggers my
| curiosity.
| cma wrote:
| What's the startup time now with 9950X3D, after a prior start
| so the pyc's are cached in RAM?
| icarito wrote:
| I wonder! In my more modest setup, it takes a couple of
| seconds perhaps. After that it's quite usable.
| cma wrote:
| With a laptop 7735HS, using WSL2, I get 15ms for the
| interpreter to start and exit without any imports.
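The measurement described above is easy to reproduce (timings vary
widely by machine and cache state; the second command assumes
PyGObject's `gi` module is installed, since GTK bindings typically
dominate a PyGObject app's startup):

```shell
# Bare interpreter startup, no imports - what a "cold" Python
# process costs before any application code runs.
time python3 -c 'pass'

# The same, but loading the GTK bindings (skip if PyGObject is
# not installed; the fallback message keeps the command harmless).
time python3 -c 'import gi' || echo "PyGObject not installed"
```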
| icarito wrote:
| I've got a i5-10210U CPU @ 1.60GHz.
|
| You triggered my curiosity. The chat window takes
| consistently 2.28s to start. The python interpreter takes
| roughly 30ms to start. I'll be doing some profiling.
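For the profiling mentioned above, CPython's `-X importtime` flag
prints a per-import cost breakdown to stderr, which usually pinpoints
the slow dependency. The same idea can be sketched with only the
stdlib (the module names below are illustrative stand-ins, not
gtk-llm-chat's actual imports):

```python
import importlib
import time

# Time individual imports to see which dependencies dominate startup.
# These stdlib modules stand in for heavier ones like gi (PyGObject).
for name in ["json", "sqlite3", "email"]:
    start = time.perf_counter()
    importlib.import_module(name)
    ms = (time.perf_counter() - start) * 1000
    print(f"{name}: imported in {ms:.1f} ms")
```

Running the app itself under `python3 -X importtime` (pointed at its
entry point) gives the full cumulative breakdown without any extra
code.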
| indigodaddy wrote:
| Does this work on Mac or Linux only?
| icarito wrote:
| I'd truly like to know! But I've no access to a Mac to try. If
| you can, try it and let me know? If it does, please send a
| screenshot!
___________________________________________________________________
(page generated 2025-04-21 23:01 UTC)