[HN Gopher] Show HN: Magenta.nvim - AI coding plugin for Neovim ...
___________________________________________________________________
Show HN: Magenta.nvim - AI coding plugin for Neovim focused on tool
use
I've been developing this on and off for a few weeks. There are a
few videos on the README page showing demos of the plugin. I just
shipped an update today, which adds:
- inline editing with forced tool use
- better pinned context management
- prompt caching for Anthropic
- a port to Node (from Bun)
Check it out!
Author : anonymid
Score : 58 points
Date : 2025-01-21 03:07 UTC (3 days ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
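A hedged sketch (not the plugin's actual code) of how the update's
"forced tool use" and prompt caching map onto the Anthropic
TypeScript SDK; the tool name, prompt text, and schema here are
hypothetical:

    // Force the model to answer via a tool, and cache the system prompt.
    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic();

    const response = await client.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      system: [
        {
          type: "text",
          text: "You are an inline code editor. ...", // pinned context
          cache_control: { type: "ephemeral" },       // prompt caching
        },
      ],
      tools: [
        {
          name: "inline_edit", // hypothetical tool name
          description: "Replace the selected text with an edited version.",
          input_schema: {
            type: "object",
            properties: { replacement: { type: "string" } },
            required: ["replacement"],
          },
        },
      ],
      // "Forced tool use": the model must respond via this tool.
      tool_choice: { type: "tool", name: "inline_edit" },
      messages: [{ role: "user", content: "Rename foo to bar in: ..." }],
    });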
| vessenes wrote:
| Watched the video. I like this! Really smart idea to get us one
| step closer to workable human workflows.
|
| Watching you work with it, I found I wanted something that mushes
| aider and your stuff together. Where I spend time in aider is
| complaining about Claude using bad patterns, assuming bad
| structure/types/etc., or misunderstanding the purpose of a piece
| of code. Where you seem to spend time is in fixing coding errors
| and manually reviewing patches.
|
| At the very least, it seems like automating compilation and type
| checking, and making their results easy to surface, would be nice.
|
| Having used aider for over a year now, I know its author has
| spent a lot of time on prompt customization, both for improved
| code quality and for the DIFF format, and I wonder if you would
| benefit from some of those lessons in getting better code out of
| different models.
|
| Anyway, this is awesome, and I love the idea of giving some tools
| to the LLM to engage with the codebase, pull context and then
| code. Super cool.
| anonymid wrote:
| There are LSP tools available to the agent, so in theory it
| should be able to ask for types, references, and diagnostics.
|
| In using it I've found that the agent doesn't really make use
| of these tools unless I instruct it to. I think I need to do
| some messing around with prompts to encourage the agent to use
| them more.
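|
| A hedged sketch of what one of these LSP tools might look like on
| the node side (illustrative only; the tool name and handler are
| not magenta.nvim's actual definitions):
|
|     // Illustrative tool spec plus a handler that asks Neovim for
|     // the current buffer's diagnostics over RPC.
|     import { NeovimClient } from "neovim";
|
|     const lspDiagnosticsTool = {
|       name: "lsp_diagnostics", // hypothetical tool name
|       description:
|         "Get LSP diagnostics (errors/warnings) for the current buffer.",
|       input_schema: { type: "object" as const, properties: {} },
|     };
|
|     async function runLspDiagnostics(
|       nvim: NeovimClient
|     ): Promise<string> {
|       // vim.diagnostic.get(0) returns diagnostics for the current buffer.
|       const diags = (await nvim.lua(
|         "return vim.diagnostic.get(0)"
|       )) as Array<{ lnum: number; message: string }>;
|       return diags
|         .map((d) => `line ${d.lnum + 1}: ${d.message}`)
|         .join("\n");
|     }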
|
| I think the next major thing I want to add is commands to allow
| you to send symbol types / diagnostics to the agent before it
| asks for them, which should help speed up some of these
| workflows.
|
| Thanks for the pointer to take another look at aider - the DIFF
| format sounds really interesting. It seems aider's license is
| Apache 2.0, though, so I'll have to learn a bit about what that
| means, since mine is MIT-licensed.
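|
| For context, aider's DIFF edit format asks the model for
| conflict-marker-style SEARCH/REPLACE blocks; a rough sketch of
| parsing one (function and type names are hypothetical):
|
|     // Pull SEARCH/REPLACE pairs out of a model completion.
|     interface Edit {
|       search: string;
|       replace: string;
|     }
|
|     function parseSearchReplace(text: string): Edit[] {
|       const re =
|         /<<<<<<< SEARCH\n([\s\S]*?)\n=======\n([\s\S]*?)\n>>>>>>> REPLACE/g;
|       return [...text.matchAll(re)].map((m) => ({
|         search: m[1],
|         replace: m[2],
|       }));
|     }
|
|     // Each edit then applies as an exact-substring replacement:
|     //   fileText.replace(edit.search, edit.replace)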
| whimsicalism wrote:
| How does this compare to avante.nvim?
|
| And does it support R1 endpoints?
| dimtion wrote:
| Both those questions are answered clearly in the readme:
|
| > compared to avante
|
| > I think it's fairly similar. However, magenta.nvim is written
| in typescript and uses the sdks to implement streaming, which I
| think makes it more stable. I think the main advantage is the
| architecture is very clean so it should be easy to extend the
| functionality. Between typescript, sdks and the architecture, I
| think my velocity is pretty high. I haven't used avante in a
| while so I'm not sure how close I got feature-wise, but it
| should be fairly close, and only after a couple of weeks of
| development time.
|
| And:
|
| > Another thing that's probably glaringly missing is model
| selection and customization of keymappings, etc...
| whimsicalism wrote:
| Thank you - I missed that section at the bottom on my phone.
| bashtoni wrote:
| R1 doesn't support function calling, so I'd assume the answer
| is no.
| anonymid wrote:
| The interface for a provider does abstract over this, so you
| could implement it in the way that most other plugins do by
| rolling your own prompts and parsing:
| https://github.com/dlants/magenta.nvim/blob/main/node/provid...
|
| But yeah... it probably would take substantially more effort
| and be more brittle than having the model providers do it for
| you.
|
| I'm happy to accept PRs if folks want to go for it :)
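|
| A hedged sketch of that roll-your-own approach (names are
| hypothetical): instruct the model to emit a fenced JSON block for
| tool calls, then parse it back out of the completion:
|
|     interface ToolCall {
|       tool: string;
|       input: Record<string, unknown>;
|     }
|
|     const TOOL_INSTRUCTIONS =
|       "To call a tool, reply with only a fenced json block, e.g.\n" +
|       '```json\n{"tool": "get_file", "input": {"path": "src/app.ts"}}\n```';
|
|     function parseToolCall(completion: string): ToolCall | null {
|       const m = completion.match(/```json\s*([\s\S]*?)```/);
|       if (!m) return null; // no tool call: treat as plain prose
|       try {
|         return JSON.parse(m[1]) as ToolCall;
|       } catch {
|         return null; // malformed JSON: surface as text instead
|       }
|     }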
___________________________________________________________________
(page generated 2025-01-24 23:01 UTC)