[HN Gopher] LSP-AI: open-source language server serving as back ...
___________________________________________________________________
LSP-AI: open-source language server serving as back end for AI code
assistance
Author : homarp
Score : 128 points
Date : 2024-06-08 12:24 UTC (10 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| azinman2 wrote:
| I've tried these tools a bit, but I'm having trouble finding
| long-term value in them. I'd love to hear from people who use them
| what their specific workflow is.
|
| Personally I find it easier to either write the code myself,
| or ask ChatGPT/whatever for snippets for specific problems, which
| I then heavily modify to suit my needs (and fix its bugs, which
| happen quite often). But maybe existing behavior is just too
| ingrained for me.
| brainless wrote:
| I use RustRover/VS Code + Codeium or Zed + Supermaven and I
| have used Copilot before. To be honest, it takes some time to
| get used to the flow. I turned them off multiple times before
| the workflow finally settled in my brain. Now I feel very
| productive with them.
|
| I work full-time for my own product (very early stage) but I am
| happy to share my own journey of using AI code assistants.
| Please feel free to check the commits:
| https://github.com/brainless/dwata
| theptip wrote:
| The workflow of having AI suggestions pop up for the rest of the
| line is really nice. You can ignore them when you know what you
| are trying to write, but the really low-friction interaction of
| "write a very rigorous comment and then let the LLM
| autocomplete the implementation" is often enough to solve one-
| liners in languages where I'm not fluent in the standard lib,
| which means I don't need to break flow to go read docs or
| whatever.
|
| Seems small but I think it's actually a major productivity win
| for polyglot programming (which is a lot of my current
| $dayjob).
|
| I also like the convenience of "start a session with my current
| file automatically in the context", again, lowers the friction
| substantially.
| evilduck wrote:
| LLMs must be trained for fill-in-the-middle (FIM) completion to
| be useful in this scenario, but think "the next stage of
| autocomplete that uses the context around it" more than "writes
| entire functions".
|
| I've found it great when manipulating data between two formats,
| like a CSV export into a JSON config. Something that might be
| too short to write a script for but long enough to be tedious,
| you can now tab complete your way through it.
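FIM-trained models take the text before and after the cursor as separate inputs and generate the span in between. A minimal sketch of assembling such a prompt, assuming CodeLlama-style `<PRE>/<SUF>/<MID>` infill markers (other FIM models such as Codestral use their own special tokens, so check the model card before reusing this format):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the text around the cursor.

    Uses CodeLlama-style infill markers as an illustrative assumption;
    real FIM token conventions vary by model.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# The editor supplies everything before and after the cursor; the model
# is expected to generate only the missing middle (here, a JSON value).
prompt = build_fim_prompt(
    prefix='{"name": "',
    suffix='", "role": "admin"}',
)
```

This is why ordinary left-to-right chat models do poorly at inline completion: without FIM training they never learn to condition on the suffix.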
| smarvin2 wrote:
| You can specify FIM objective models for completion. I find
| that Codestral from Mistral works pretty well.
|
| That next stage is currently what I am working on. I'm building
| out a code splitter using TreeSitter right now and already
| have experimental vector search in the language server.
| CuriouslyC wrote:
| Do you find yourself having to go to 3 or 4 different files to
| get everything ChatGPT needs for its context to solve the
| problem? Tools like this can help with that use case.
| simonw wrote:
| You have to learn how to prompt tools like Copilot. A few
| tricks I use a lot:
|
| 1. Write comments saying what you want the next lines of code
| to do
|
| 2. Write function definitions with clear function names and
| type annotations on the arguments - this can result in the full
| function body being provided if your definitions are clear
| enough
|
| 3. For repetitive code (like parameterized unit tests) provide
| a couple of examples and then use comments to hint at what it
| should write for you next based on those examples
|
| 4. Sometimes it's good to temporarily copy and paste a chunk of
| code in from elsewhere. For example, copy a CREATE TABLE SQL
| statement into a Python file when you are writing code that
| will interact with that table - or even an HTML page when you
| are writing out the HTML for a form associated with that table
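Tips 1 and 2 above can be illustrated with a small hypothetical example: the comment states intent, and the descriptive name plus type annotations are usually enough for the model to propose a body like the one shown (the function here is my own illustration, not from the thread):

```python
import re

# Tip 1: a comment states what the next lines should do.
# Tip 2: a clear name and type annotations often let the model
# fill in the entire body from the signature alone.

# Convert a page title into a URL-safe slug.
def slugify(title: str) -> str:
    """Lowercase the title and replace runs of non-alphanumerics with '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")
```

For tip 3, writing two or three parameterized test cases by hand and then starting the next line is usually enough for the model to continue the pattern.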
| verdverm wrote:
| For 4, this is where I think using the target language's LSP
| / IntelliSense would be useful. The AI tools should know that
| I'm referring to specific tables/types/libraries through
| references/locality/imports and supply those automatically to
| the context for me.
| simonw wrote:
| Copilot does that in a really clever way already: any time
| it performs a completion it looks at other nearby files in
| the project and tries to find pieces of code that look
| semantically similar to the area you are editing, then
| includes those snippets in its prompt.
|
| This works well as a general rule, but sometimes it doesn't
| find the right snippets - which is why I occasionally help
| it out through copy and paste.
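Copilot's actual retrieval algorithm isn't public in detail, but the general idea of "find nearby snippets that look like the edit site" can be sketched as ranking candidates by token-set overlap (Jaccard similarity). This is a minimal illustration, not the real implementation:

```python
import re

def tokens(text: str) -> set[str]:
    # Crude lexical fingerprint: the set of identifiers in the text.
    return set(re.findall(r"[A-Za-z_]\w+", text))

def rank_snippets(cursor_context: str, snippets: list[str]) -> list[str]:
    """Order candidate snippets by Jaccard similarity to the edit site."""
    target = tokens(cursor_context)

    def score(snippet: str) -> float:
        other = tokens(snippet)
        union = target | other
        return len(target & other) / len(union) if union else 0.0

    return sorted(snippets, key=score, reverse=True)
```

The top-ranked snippets get prepended to the completion prompt; copy-pasting a schema into the file is effectively doing this step by hand when the heuristic misses.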
| verdverm wrote:
| As described, that seems to miss files that are far away
| but connected through imports, as well as code that isn't
| similar to anything existing, such as when writing unique code.
|
| My main thoughts behind this are that
|
| 1. The LLMs tend to hallucinate library functions
|
| 2. I don't want to have to copy and paste a schema
| simonw wrote:
| Definitely not disagreeing that this stuff can be done
| better!
| verdverm wrote:
| Yeah, it needs a lot of plumbing for sure, and I suspect it
| will take more complex (agent) systems over calling a single
| LLM
| noman-land wrote:
| Auto complete directly inline inside the editor is pretty
| magical feeling. Sometimes you'll pause for a moment and it
| will suggest the exact next 3-4 lines you were about to type.
| You press tab and move on.
| tosh wrote:
| Try Cody with GPT-4o and explicit prompting (option+k). It is a
| nice experience to have the LLM at your fingertips instead of
| having to do round-trips to a different UI.
|
| The models used for autocompletion in Github Copilot and other
| systems are usually not as strong but faster and cheaper.
|
| You can still get decent results from the autocomplete models
| if you guide them with comments but I find explicit prompting
| less frustrating when I care about getting a good result.
| brainless wrote:
| I have been thinking about an LSP-enabled AI code assistant;
| this looks lovely. I guess there are other efforts like this as
| well.
| smarvin2 wrote:
| Yes! I would love to hear your thoughts on our current features
| and roadmap. If you have any ideas or want to contribute, feel
| free to open a GitHub issue.
| eloh wrote:
| Nice. I saw this coming. Next up is a "generic" webserver which
| just serves HTTP response data based on some system prompt. :)
| noman-land wrote:
| A natural language http API sounds like a hilarious
| proposition.
| maturz wrote:
| A couple of other ones
|
| https://github.com/TabbyML/tabby
|
| https://github.com/fauxpilot/fauxpilot
| letmeinhere wrote:
| also https://github.com/ex3ndr/llama-coder
| smarvin2 wrote:
| These are all awesome projects! After skimming them, one big
| difference between these and LSP-AI is that LSP-AI is a
| language server. That means we work out of the box with almost
| all popular text editors and IDEs, you don't need any plugins
| to get completion.
|
| For custom use cases like chat windows and some of the things
| we are working on next, you will still need plugins, but making
| it an LSP simplifies things like synchronizing document changes
| and communicating with the client.
| icholy wrote:
| I want an LLM which uses an LSP to gather more context.
| smarvin2 wrote:
| This is actually what we are working on adding next! We are
| working on code crawling and a vector search for better context
| gathering. Stay tuned for more info on this.
| CGamesPlay wrote:
| I hope your LSP client module ends up being reasonably
| isolated and reusable! I found that client support for LSP is
| the weakest part of the ecosystem. The story generally seems
| to be "the editor has a bespoke client and there is never a
| reason to use LSP from any other context".
| convexstrictly wrote:
| Aider uses Treesitter to improve code generation.
| https://aider.chat/2023/10/22/repomap.html
|
| Aider: https://github.com/paul-gauthier/aider
|
| It is state of the art on SWE-Bench and SWE-Bench Lite.
| https://aider.chat/2024/06/02/main-swe-bench.html
| smarvin2 wrote:
| Hey everyone! Thank you so much for posting and upvoting this.
| Just wanted to say I'm the primary author here and happy to try
| and answer any questions anyone might have! Feel free to ask away
| here! This is a very new project and we have a lot of ground we
| are hoping to cover. Thank you for the support!
| FloatArtifact wrote:
| It would be very interesting to leverage this to edit documents
| as much as code. Obviously editor-dependent due to the language
| server.
| jsjohnst wrote:
| Curious how this compares to the locally run AI code assistance
| that Jetbrains has added to their products. I've found it to be
| pretty good for what I want (which is very minimal assistance),
| it's just a bit too eager to suggest code sometimes.
| smarvin2 wrote:
| You have complete control over the prompt configuration for
| LSP-AI. Feel free to check out the prompt section on our wiki:
| https://github.com/SilasMarvin/lsp-ai/wiki/Prompting It is very
| much still a work in progress, but I find the prompt I have
| listed as the default on that page does a good job of
| encouraging gpt-4o / llama 70b to provide only minimal
| completion suggestions.
| jsjohnst wrote:
| Awesome, thanks for the pointer to get me jump started!
| smarvin2 wrote:
| Of course, let me know how it goes and if I can help with
| anything else
| mark_l_watson wrote:
| Silas: I didn't see in the docs how to configure for Emacs. Link,
| please.
| smarvin2 wrote:
| There are no docs for configuring Emacs yet. That is
| something we need to add! I don't have experience using Emacs,
| but if anyone does and wants to create an issue with some
| examples, that would be awesome; otherwise I can find some time
| later to add it.
| FeepingCreature wrote:
| Hell yes! I've been waiting for somebody to do this.
| smarvin2 wrote:
| Thank you! Let me know if there are features you are wanting
| that are currently missing!
| anotherpaulg wrote:
| Very interesting approach. The folks at rift [0] worked on LSP as
| an integration point for AI coding. But I think they are focusing
| on other things now.
|
| Do you think the LSP abstraction can support interactions beyond
| copilot style autocomplete? While that's a super helpful UX, it's
| also pretty narrow and limited.
|
| My project aider [1] provides a pair-programming UX, which allows
| complex interactions like asking for a change that will modify
| multiple files. Could LSP servers support more general AI coding
| like this?
|
| [0] https://github.com/morph-labs/rift
|
| [1] https://github.com/paul-gauthier/aider
| smarvin2 wrote:
| LSP-AI is meant to work with plugins. It can provide auto
| complete features without them, but to have the kind of
| experience Copilot provides with VS Code, you will need editor
| specific plugins.
|
| LSP-AI makes writing these plugins easier. Check out "The Case
| for LSP-AI" https://github.com/SilasMarvin/lsp-ai?tab=readme-
| ov-file#the... for more info on why I think that is true
| skybrian wrote:
| It's an interesting experiment, but an issue is that you might
| not want to give up using your regular language server. Do
| editors let you use more than one at a time? How do they display
| the combined results?
|
| If you can't have more than one at a time, one possibility would
| be to make the AI language server a proxy that passes through any
| query results from the other language server, maybe modifying or
| improving them somehow. Whatever the other language server
| returns might or might not be useful context for the AI.
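The proxy idea above would mostly be plumbing around the LSP base protocol's `Content-Length` framing of JSON-RPC messages. A minimal sketch of that framing layer (the proxy itself and any response rewriting are the commenter's suggestion, not an existing tool):

```python
import json

def frame(message: dict) -> bytes:
    """Encode one JSON-RPC message with LSP base-protocol framing."""
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

def unframe(data: bytes) -> dict:
    """Decode a single framed message (a real proxy would stream these)."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])

# A proxy sits between editor and language server: it unframes each
# message from the editor, forwards it to the real server, and may
# rewrite the server's results (e.g. merging in AI completions)
# before framing the response back to the editor.
msg = {"jsonrpc": "2.0", "id": 1, "method": "textDocument/completion"}
```

Since both sides speak the same protocol, the proxy can stay transparent for every method it doesn't care about and only intercept the few it wants to augment.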
| smarvin2 wrote:
| Most editors should let you use multiple language servers at
| once; they merge the results together, which can be annoying.
|
| LSP-AI is not meant to replace plugins. It works best when
| wrapped and used by a plugin. Our VS Code plugin is a great
| example of this: https://github.com/SilasMarvin/lsp-
| ai/wiki/Plugins
|
| LSP-AI abstracts complex implementation details like
| maintaining document / context parity and providing different
| LLM backends.
|
| I have a section on the GitHub README titled "The Case for
| LSP-AI" that you might like: https://github.com/SilasMarvin/lsp-
| ai?tab=readme-ov-file#the...
| jryb wrote:
| neovim allows multiple LSPs. In my setup, the different sources
| show virtual-text comments with a unique color for each LSP, one
| message after the other - I think this is the default behavior,
| but I'm not sure
___________________________________________________________________
(page generated 2024-06-08 23:00 UTC)