[HN Gopher] gptel: a simple LLM client for Emacs
___________________________________________________________________
gptel: a simple LLM client for Emacs
Author : michaelsbradley
Score : 49 points
Date : 2024-11-03 17:52 UTC (5 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| michaelsbradley wrote:
| And here's a nice writeup:
|
| _gptel: Mindblowing integration between Emacs and ChatGPT_
|
| https://www.blogbyben.com/2024/08/gptel-mindblowing-integrat...
| kleiba wrote:
| This is really sweet. I've only recently started dabbling in
| AI-assisted programming, and I think this integration into
| Emacs is really smooth.
|
| What would be really neat is to add REPL-like functionality to an
| LLM buffer so that code generated by the LLM can be evaluated
| right away in place.
| foobarqux wrote:
| gptel can output org-babel.
| TeMPOraL wrote:
| Indeed; fire up gptel-mode in an Org Mode buffer, and you'll
| get to work with Org Mode, including code blocks with
| whatever evaluation support you have configured in your
| Emacs.
|
| Also I really like the design of the chat feature - the
| interactive chat buffer is still just a plain Markdown
| buffer, which you can simply _save to file_ to persist the
| conversation. Unlike with typical interactive buffers (e.g.
| shell), nothing actually breaks - gptel-mode just appends the
| chat settings to the buffer in the standard Emacs fashion
| (key/value comments at the bottom of the file), so to
| continue from where you left off, you just open the file and
| run M-x gptel.
|
| (This also means you can just run M-x gptel in a random
| Markdown buffer - or an Org Mode buffer, if you want the
| aforementioned org-babel functionality; as long as the gptel
| minor mode is active, saving the buffer will also update the
| persisted chat configuration.)
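|
| For reference, a minimal init sketch for this workflow (the
| only gptel-specific piece is gptel-default-mode; treat it as
| a sketch, not a complete config):
|
|   ;; Make new gptel chat buffers use Org Mode by default,
|   ;; so responses arrive as Org - including babel blocks.
|   (setq gptel-default-mode 'org-mode)
|
|   ;; To resume a saved chat: open the file and run M-x
|   ;; gptel; the settings persisted at the bottom of the
|   ;; file are picked up again.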
| kleiba wrote:
| Org code blocks are great but not quite the same as having
| a REPL. But like I said above, I think this is really a
| great piece of software. I can definitely see this being a
| game changer in my daily work with Emacs.
| TeMPOraL wrote:
| Used the right way, Org mode code blocks are _better_,
| though setting things up to allow this can be tricky, and
| so I rarely bother.
|
| What I mean is: the key difference between a REPL and an
| Org Mode block (of non-elisp code[0]) is that in a REPL,
| you evaluate code sequentially _in the same runtime
| session_; in contrast, org-babel will happily run each
| execution in a fresh interpreter/runtime, unless steps are
| taken to keep a shared, persistent session. But once you
| get that working (which may be more or less tricky,
| depending on the language), your Org Mode file effectively
| becomes a REPL with editable scrollback.
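|
| For example, with Python (assuming it's enabled in
| org-babel-load-languages), a shared session is just the
| :session header argument; the session name below is
| arbitrary:
|
|   #+begin_src python :session my-repl
|   x = 42          # defined in the shared session
|   #+end_src
|
|   #+begin_src python :session my-repl
|   x + 1           # sees x from the previous block => 43
|   #+end_src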
|
| This may not be what you want in many cases, but it is
| very helpful when you're collaborating with an LLM -
| being able to freely edit and reshape the entire
| conversation history is useful in keeping the model on
| point, and costs in check.
|
| --
|
| [0] - Emacs Lisp snippets run directly in your Emacs, so
| your current instance _is_ your session. It's nice that
| you get a shared session for free, but it also sucks, as
| there is only ever _one_ session, shared by all elisp
| code you run. Good luck keeping your variables from
| leaking out to the global scope and possibly overwriting
| something.
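|
| Concretely (a toy illustration of the caveat):
|
|   ;; In an emacs-lisp babel block, this mutates your
|   ;; running Emacs - `my-data' becomes a global variable:
|   (setq my-data '(1 2 3))
|
|   ;; Wrapping work in `let' at least keeps temporary
|   ;; bindings from escaping into the global scope:
|   (let ((my-data '(1 2 3)))
|     (length my-data))   ; => 3, binding stays local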
| sourcepluck wrote:
| In this post yesterday https://justine.lol/lex/ there is this
| quote:
|
| > The new highlighter and chatbot interface has made llamafile so
| pleasant for me to use, combined with the fact that open weights
| models like gemma 27b it have gotten so good, that it's become
| increasingly rare that I'll feel tempted to use Claude these
| days.
|
| That leaves me more tempted than ever to see if I can
| integrate some sort of LLM workflow locally. I would only
| consider doing it locally, and I have an older computer, so
| I didn't think this would be possible till reading that
| post yesterday.
|
| The only question I had was: how would I work it into
| Emacs? And then today, this post. It looks very well
| integrated. Does anyone have experience using gemma 27b it
| with llamafile and gptel? I know very little about the
| whole space, really.
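|
| From the docs it looks like llamafile exposes an
| OpenAI-compatible server, so something like this might be
| all the gptel side needs (a sketch assuming llamafile's
| default port 8080; the model name is just a label here):
|
|   (gptel-make-openai "llamafile"
|     :stream t
|     :protocol "http"
|     :host "localhost:8080"
|     :models '(gemma-27b-it))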
| whartung wrote:
| I'm on an Intel iMac, and llama can't leverage its GPU. So,
| it's 1990s slow. It's literally like talking to a machine in
| the 90s, from the slow response time to the 1200-2400 baud
| output.
|
| It's easy to give tasks to, but hard to have a conversation
| with. Just paste the task in, let it churn, and come back
| to it later.
| jfdi wrote:
| Anyone know of a similar option for vi/vim? (Not Neovim,
| etc.)
|
| Been searching and have found some but nothing stands out yet.
___________________________________________________________________
(page generated 2024-11-03 23:00 UTC)