[HN Gopher] Show HN: Continue - Open-source coding autopilot
       ___________________________________________________________________
        
       Show HN: Continue - Open-source coding autopilot
        
       Hi HN, we're Nate and Ty, co-founders of Continue, an open-source
       autopilot for software development built to be deeply customizable
       and continuously learn from development data. It consists of an
       extended language server and (to start) a VS Code extension.  Our
       GitHub is https://github.com/continuedev/continue. You can watch a
       demo of Continue and download the extension at https://continue.dev
        -- -- --
        
        A growing number of developers are replacing Google + Stack
        Overflow with Large Language Models (LLMs) as their primary
        approach to get help, similar to how developers previously
        replaced reference manuals with Google + Stack Overflow.
        
        However, existing LLM developer tools are cumbersome black
        boxes. Developers are stuck copy/pasting from ChatGPT and
        guessing what context Copilot uses to make a suggestion. As we
        use these products, we expose how we build software and give
        implicit feedback that is used to improve their LLMs, yet we
        don't benefit from this data nor get to keep it.
        
        The solution is to give developers what they need:
        _transparency, hackability,_ and _control_. Every one of us
        should be able to reason about what's going on, tinker, and
        have control over our own development data. This is why we
        created Continue.
        
        -- -- --
        
        At its most basic, Continue removes the need for copy/pasting
        from ChatGPT--instead, you collect context by highlighting and
        then ask questions in the sidebar or have an edit streamed
        directly to your editor.
        
        But Continue also provides powerful tools for managing context.
        For example, type '@issue' to quickly reference a GitHub issue
        as you are prompting the LLM, '@README.md' to reference such a
        file, or '@google' to include the results of a Google search.
        
        And there's a ton of room for further customization. Today, you
        can write your own
        
        - slash commands (e.g. '/commit' to write a summary and commit
          message for staged changes, '/docs' to grab the contents of a
          file and update documentation pages that depend on it,
          '/ticket' to generate a full-featured ticket with relevant
          files and high-level instructions from a short description)
        
        - context sources (e.g. GitHub issues, Jira, local files,
          StackOverflow, documentation pages)
        
        - templated system messages (e.g. "Always give maximally
          concise answers. Adhere to the following style guide whenever
          writing code: {{ /Users/nate/repo/styleguide.md }}")
        
        - tools (e.g. add a file, run unit tests, build and watch for
          errors)
        
        - policies (e.g. define a goal-oriented agent that works in a
          write code, run code, read errors, fix code, repeat loop)
        
        Continue works with any
       LLM, including local models using ggml or open-source models hosted
       on your own cloud infrastructure, allowing you to remain 100%
       private. While OpenAI and Anthropic perform best today, we are
       excited to support the progress of open-source as it catches up
       (https://continue.dev/docs/customization#change-the-default-l...).
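        To make the customization above concrete, here is a rough
        Python sketch of the shapes involved. The names and fields
        below are hypothetical, for illustration only; see the docs for
        the real config API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical shapes for illustration -- NOT Continue's actual config API.

@dataclass
class SlashCommand:
    name: str                   # typed as "/commit", "/docs", ...
    description: str
    run: Callable[[str], str]   # takes the user's input, returns a prompt

@dataclass
class ContinueConfig:
    system_message: str = ""
    slash_commands: List[SlashCommand] = field(default_factory=list)

def commit_prompt(user_input: str) -> str:
    # Build the prompt a "/commit" command might send to the LLM.
    return "Summarize the staged changes and write a commit message. " + user_input

config = ContinueConfig(
    system_message=(
        "Always give maximally concise answers. "
        "Adhere to the style guide at {{ /Users/nate/repo/styleguide.md }}."
    ),
    slash_commands=[
        SlashCommand("/commit", "Write a commit message for staged changes",
                     commit_prompt),
    ],
)

print([c.name for c in config.slash_commands])  # ['/commit']
```

        The point is that each customization is just a small piece of
        code or data you own, rather than a black box.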
        When you use Continue, you automatically collect data on how
        you build software. By default, this development data is saved
        to `.continue/dev_data` on your local machine. When combined
        with the code that you ultimately commit, it can be used to
        improve the LLM that you or your team use (if you allow).
        
        You can read more about how development data is generated as a
        byproduct of LLM-aided development and why we believe that you
        should start collecting it now:
        https://medium.com/@continuedev/its-time-to-collect-data-on-...
        
        Continue has an Apache 2.0 license. We plan to make money by
        offering organizations a paid development data engine--a
        continuous feedback loop that ensures the LLMs always have
        fresh information and code in their preferred style.
        
        -- -- --
        
        We'd love for you to try out Continue and give us feedback! Let
        us know what you think in the comments : )
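        P.S. Since the development data is just files on your machine,
        you can poke at it with a few lines of Python. A rough sketch
        (we create a sample record in a temp directory so the snippet
        runs anywhere; treat the filenames and schema as illustrative):

```python
import json
import tempfile
from pathlib import Path

# Illustrative only: the real records live under .continue/dev_data,
# and their exact schema may differ from this sketch.
dev_data = Path(tempfile.mkdtemp()) / "dev_data"
dev_data.mkdir()

# Pretend one accepted edit was logged as a JSON-lines record.
sample = {"event": "edit_accepted", "model": "gpt-4", "tokens": 512}
(dev_data / "edits.jsonl").write_text(json.dumps(sample) + "\n")

# Tally events across all .jsonl files -- the kind of quick analysis
# a team could run before training on its own data.
counts = {}
for f in dev_data.glob("*.jsonl"):
    for line in f.read_text().splitlines():
        event = json.loads(line)["event"]
        counts[event] = counts.get(event, 0) + 1

print(counts)  # {'edit_accepted': 1}
```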
        
       Author : sestinj
       Score  : 75 points
       Date   : 2023-07-26 18:04 UTC (4 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | satvikpendem wrote:
       | Looks like this is similar to GitHub Copilot Chat [0], just open
       | source right? I like that you're supporting open models as well
       | rather than just ChatGPT. Is there a way for your extension to
       | read the file you're in as input before you ask any questions, so
       | that it has the context of what you want to do?
       | 
       | [0] https://github.blog/2023-07-20-github-copilot-chat-beta-
       | now-...
        
         | sestinj wrote:
         | Right now we are similar, but the open-source part is really
         | key. We think that the ability to write custom plugins will
         | make for a completely different kind of product.
         | 
         | And yes, by default Continue sees your open file, but you can
         | also highlight multiple code snippets or type '@' to include
         | context from outside your codebase, like GitHub issues.
        
       | milani wrote:
       | In my experience working with GPT4, if I give enough context on
       | types, other functions definitions and the libraries I use, I get
       | very accurate results. But it is a tedious task to copy paste
       | from multiple places (type definitions, function definitions,
       | packages, etc.).
       | 
       | In addition to the selected lines, does Continue support getting
       | related definitions from the language server and inject them in
       | the prompt? That would be huge.
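        | 
        | To sketch what I mean, here's a rough stdlib-only illustration
        | of pulling definitions referenced by a selected snippet (a
        | real language server would do this properly, across files):

```python
import ast

# The rest of the module (in reality: the whole project, via a
# language server rather than a single string).
module_source = """
def helper(x):
    return x * 2

def unrelated():
    pass
"""

# The lines the user highlighted.
snippet = "result = helper(21)"

# Names used in the snippet.
used = {n.id for n in ast.walk(ast.parse(snippet)) if isinstance(n, ast.Name)}

# Collect matching top-level definitions to inject into the prompt.
tree = ast.parse(module_source)
defs = [ast.get_source_segment(module_source, node)
        for node in tree.body
        if isinstance(node, ast.FunctionDef) and node.name in used]

print(defs)  # prints the source of helper() only
```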
        
       | johnfn wrote:
        | When I installed it, I immediately got this error:
       | 
       | > You are using an out-of-date version of the Continue extension.
       | Please update to the latest version.
        
         | sestinj wrote:
         | Just fixed, thanks for the heads up. Latest version should be
         | v0.0.207.
        
       | krono wrote:
        | A cursory look through the source reveals the presence of
        | three different telemetry suites, of which only PostHog looks
        | to be properly documented. I could have overlooked it, but do
       | any more information on what Segment and Sentry are doing there?
        
         | sestinj wrote:
         | Just deleted:
         | https://github.com/continuedev/continue/commit/eba2f57a6462f...
         | 
         | Neither were doing anything, we simply forgot to `npm
         | uninstall` after (a while ago) playing around to decide which
         | service to use. Thanks for pointing it out.
        
           | krono wrote:
           | That clears it up, cheers and all the best with this project!
        
       | weekay wrote:
        | Seems interesting, will definitely give it a try. A few
        | observations/questions from reading the documentation:
        | 
        | > Continue will only be as helpful as the LLM you are using
        | to power the edits and explanations
       | 
        | Are there any others apart from gpt4 suitable for programming
        | copilot tasks?
       | 
       | > If files get too large, it can be difficult for Continue to fit
       | them into the limited LLM context windows. Try to highlight the
       | section of code that include the relevant context. It's rare that
       | you need the entire file.
       | 
        | Most of the value and real-world benefit comes from
        | brownfield development, where legacy code isn't well
        | understood and is large (exceeding current LLM context
        | windows?).
       | 
        | > telemetry through posthog
        | 
        | Can organisations set up their own telemetry and development
        | data collection to further analyse how and where the Copilot
        | is being used?
       | 
        | > Finops
        | 
        | How does one get visibility into token/API usage and track
        | API spend?
        
         | sestinj wrote:
         | Appreciate the deep read into the docs!
         | 
          | > We've found claude-2 very capable, especially for chat
          | functionality, and in situations where you're looking for
          | the equivalent of a faster Google search, even smaller
          | models will do. For inline edits, gpt4 well outperforms the
          | others, but we've only optimized the prompt for gpt4.
          | There's a LOT of tinkering to be done here, and it seems
          | clear that OSS models will be capable soon.
         | 
         | > Definitely value there. We have an embeddings search plugin
         | heading out the door soon, but we very consciously avoided this
         | for a while - it obstructs understanding of what code enters
         | the context window, and we think transparency is underrated.
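          | 
          | To make the context-window point concrete, here's a
          | simplified illustration (not our exact logic) of fitting
          | highlighted snippets into a token budget:

```python
# Simplified illustration (not Continue's exact logic): fit highlighted
# snippets into a model context window using a rough token estimate.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English/code.
    return len(text) // 4

def fit_snippets(snippets, budget_tokens=6000):
    """Keep whole snippets, newest first, until the budget is spent."""
    kept, used = [], 0
    for s in reversed(snippets):
        cost = estimate_tokens(s)
        if used + cost > budget_tokens:
            break
        kept.append(s)
        used += cost
    return list(reversed(kept))  # restore original order

snippets = ["a" * 8000, "b" * 8000, "c" * 8000, "d" * 8000]
print(len(fit_snippets(snippets)))  # 3 -- the oldest snippet is dropped
```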
         | 
         | > Yes! You could have your own PostHog telemetry by simply
         | switching out the key, but we also deposit higher quality
         | development data on your machine (we never see it). Benefits
         | being both 1) understanding ROI of the tool, and 2) being able
         | to train custom models.
         | 
         | > This is a reasonable request! We'll add a feature for this.
         | Right now, you can use the usage dashboard of whichever
         | provider's key you use.
        
       | jierlich wrote:
        | Been using Continue for a few weeks in combination with GH
        | Copilot. Overall it's been a solid experience. After a few
        | days of adjusting, it's become my go-to because I don't feel
        | like I need to leave VSCode to get questions answered.
        | Although there are constraints, the edit functionality works
        | ~80% of the time after figuring out how to prompt it.
       | 
       | It's clear the team is shipping a ton too since almost every day
       | I see VSCode popup about restarting my editor for the new version
       | of Continue.
       | 
       | Excited to see where things go with this!
        
         | mritchie712 wrote:
         | What LLM are you using it with?
        
           | jierlich wrote:
           | I'm using the default, which is GPT4 iirc
        
       | nraf wrote:
       | Any plans for supporting Jetbrains IDEs?
        
         | sestinj wrote:
          | Focusing on VS Code for at least the next few weeks, but
          | we've planned for other IDEs from the start! You can read
          | more here (https://continue.dev/docs/how-continue-works),
          | but the Continue server abstracts over the IDE APIs, so it
          | will be easier to support any IDE, even letting you run
          | Continue in "headless mode" from a Python script, the CLI,
          | or in CI/CD.
        
       | rodrigodlu wrote:
        | Hey! Thanks for this tool. I was testing and paying for
        | Copilot, including the new integrated chat tool, but I feel
        | your workflow proposal is more compelling.
        | 
        | That said, I'm not sure what the difference is between
        | providing my own OpenAI key or not. The "Customization" doc
        | is not entirely clear on what using my own key enables me to
        | do.
        | 
        | For instance, is this required for gpt4? What are the limits
        | of the free trial key?
        | 
        | I don't want to evaluate this without knowing which model
        | it's really using, and it's not clear what difference the key
        | makes.
       | 
       | Edit: Also when I asked the Continue chat how to change my key
       | and the model being used, it said that this is not possible since
       | the key is yours, instead of pointing me to the "Extension
       | Settings" inside the extension tab using the cog wheel.
        
         | sestinj wrote:
         | We wanted to make it as easy as possible for people to try
         | Continue, so we allow 250 free requests. These use gpt4. If you
         | plug in your own API key in VS Code settings, it will also use
         | gpt4 by default.
         | 
         | We'll update the /help messaging so it knows this, and you can
         | read more about choosing a model here:
         | https://continue.dev/docs/customization
        
       | dimal wrote:
       | This looks great! I've been pretty underwhelmed with the UX of
       | the other VS Code extensions, for just the reasons you list. This
       | looks a lot like how I imagined an AI extension should work.
       | Gonna try it out.
        
         | sestinj wrote:
          | Really appreciate this! Would love to hear feedback once
          | you try it
        
       ___________________________________________________________________
       (page generated 2023-07-26 23:00 UTC)