[HN Gopher] Launch HN: Continue (YC S23) - Create custom AI code...
___________________________________________________________________
Launch HN: Continue (YC S23) - Create custom AI code assistants
Hi HN. We are Nate and Ty, co-founders of Continue
(https://www.continue.dev), which enables developers to create,
share, and use custom AI code assistants. Today, we are launching
Continue Hub and sharing what we've learned since our Show HN that
introduced our open-source VS Code extension in July 2023
(https://news.ycombinator.com/item?id=36882146). At Continue,
we've always believed that developers should be amplified, not
automated. A key aspect of this philosophy is providing choices
that let you customize your AI code assistant to fit your specific
needs, workflows, and preferences. The AI-native development
landscape constantly evolves with new models, MCP servers,
assistant rules, etc. emerging daily. Continue's open architecture
connects this ecosystem, ensuring your custom code assistants
always leverage the best available resources rather than locking
you into yesterday's technology. The Continue Hub makes it even
easier to customize with a registry for defining, managing, and
sharing building blocks (e.g. models, rules, MCP servers, etc.).
These building blocks can be combined into custom AI code
assistants, which you can use with our open-source VS Code and
JetBrains extensions (https://github.com/continuedev/continue).
Here are a few examples of different custom AI code assistants that
we've built to show how it works: A custom assistant that
specializes in helping with data load tool (dlt) using their MCP:
https://www.loom.com/share/baf843d860f44a91b8c580063fcfbf4a?... A
custom assistant that specializes in helping with Dioxus using only
models from Mistral:
https://www.loom.com/share/87583774753045b1b3c12327e662ea38?... A
custom assistant that specializes in helping with LanceDB using the
best LLMs from any vendor via their public APIs (Anthropic, Voyage
AI, etc):
https://www.loom.com/share/3059a35f8b6f436699ab9c1d1421fc8d?...
Over the last 18+ months since our Show HN, our community has
rapidly grown to 25k+ GitHub stars, 12.5k+ Discord members, and
hundreds of thousands of users. This happened because developers
want to understand how their tools work, figure out how to better
use them, and shape them to fit their development practices /
environments. Continue does not constrain their creativity like the
vertically integrated, proprietary black box AI code assistants
that lack transparency and offer limited customizability. Before
Continue Hub, developers faced specific technical challenges when
building custom AI assistants. They manually maintained separate
configuration files for different models, wrestled with breaking
API changes from providers, and built redundant context retrieval
systems from scratch. We've seen teams spend weeks setting up
systems that should take hours. Many developers abandoned the
effort entirely, finding it impossible to keep up with the rapidly
evolving ecosystem of models and tools. Our open-source IDE
extensions now read a standardized configuration format that fully
specifies an AI code assistant's capabilities--from models and
context providers to prompts and rules. Continue Hub hosts these
configurations, syncs them with your IDE, and adds versioning,
permissions, and sharing. Assistants are composed of atomic
"blocks" that use a common YAML format, all managed through our
registry with both free solo and paid team plans. We're releasing
Continue 1.0 today, which includes both Continue Hub and the first
major release of our Apache 2.0 licensed VS Code and JetBrains
extensions. While the Hub currently only supports our IDE
extensions, we've designed the underlying architecture to support
other tools in the future (https://blog.continue.dev/continue-1-0).
The config format is intentionally tool-agnostic--if you're
interested in integrating with it or have ideas for improvement,
we'd love to hear your thoughts!
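To make this concrete, here's a rough sketch of the kind of YAML an
assistant definition uses (field names and values here are
illustrative rather than the authoritative schema; see
https://docs.continue.dev/reference for the full spec):

    name: my-assistant
    version: 1.0.0
    models:
      - name: Claude 3.7 Sonnet
        provider: anthropic
        model: claude-3-7-sonnet-latest
    rules:
      - Always write unit tests alongside new functions.

Each top-level key is a block that can be defined once on the hub
and reused across assistants.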
Author : sestinj
Score : 133 points
Date : 2025-03-27 15:06 UTC (7 hours ago)
(HTM) web link (hub.continue.dev)
(TXT) w3m dump (hub.continue.dev)
| talos_ wrote:
| Congrats on the launch HN!
|
| I've been following the IDE + LLM space. What's Continue's
| differentiator vs GitHub Copilot, Cursor, Cline, Claude Desktop,
| etc. ?
|
| What are you looking to build over the next year?
| sestinj wrote:
| Thank you! The biggest difference in our approach is the goal
| of allowing for custom assistants. What we've found is that
| most developers work in entirely different environments,
| whether that be IDE, tech stack, best practices, enterprise
| requirements, etc. The baseline features that we've come to
| expect from a coding assistant are amazing, but to truly meet
| people where they are it takes something different for
| everyone. Over the last two years we've seen tons of people
| customizing, and with hub.continue.dev we just want to make
| that accessible to all, and hope that people will share what
| they learn.
|
| We're going to keep building at the edge of what's possible in
| the IDE, including focusing on Agent mode
| (https://docs.continue.dev/agent/how-to-use-it), next edit
| prediction, and much more. And we're going to keep making it
| easier to build custom assistants as the ecosystem grows!
| technoabsurdist wrote:
| Congrats on the launch! Big fan of Continue <3
| sestinj wrote:
| Appreciate it!!
| changhis wrote:
| Congrats on the launch! I think this is totally the right next
| level abstraction for AI-assisted coding. Don't generate
| everything from scratch but make it easy to plug in the tools you
| care about and make the generation way more accurate. Way to go!
| sestinj wrote:
| Yeah we think that MCP was a really solid building block at a
| lower level but ultimately a higher-level abstraction is what
| will make customization really accessible. Being able to define
| rules, models, MCP servers, docs, prompts, and data flow all in
| one place seems to be important.
| johnisgood wrote:
| What does this mean exactly? I checked the website, and it
| seems to have very specific assistants. I do not know if I
| personally could take advantage of this, unless there is going
| to be a "C assistant" or "OCaml assistant" (or just a "coder"
| one) or something.
| sestinj wrote:
| The goal of hub.continue.dev isn't to pre-build exactly what
| people will need (this might not be possible). We've started
| with a few examples for inspiration, but the hope is that
| hub.continue.dev makes it easier for developers to build
| assistants for themselves that match their personal needs
|
| Even within the subset of developers that use C or OCaml,
| there are likely to be a large variety of best practices,
| codebase layouts, and internal knowledge--these are the
| things we want to empower folks to codify
| johnisgood wrote:
| Okay, that sounds cool. I hope I will be able to take
| advantage of this. Right now I am using LLMs (sadly not
| local, can't afford GPUs, my PC is pretty obsolete).
| addandsubtract wrote:
| This looks great, but there's a bug on the "Remix" page that
| prevents me from actually customizing my own bot. Whenever
| there's a new API request to "/remix" (GET or POST), the form
| elements reset to their original value, making the changes
| impossible to save. At least in Firefox.
| sestinj wrote:
| Thanks for the report, we should be able to fix this in the
| next hour or so. A workaround would be to copy the YAML
| definition displayed on the page of the assistant / block you
| want to remix. Will keep you updated!
| sestinj wrote:
| Fix is released! Thanks for catching this so early
|
| Should now be able to remix PyTorch rules for example:
| https://hub.continue.dev/starter/pytorch-rules/remix
| woah wrote:
| Is the idea here that the assistant is going to be better at
| handling specific libraries and languages that you give it
| documentation for?
| bhouston wrote:
| I think that is a temporary problem. General-purpose agents
| will get better at working with libraries without needing
| specialized agents.
| sestinj wrote:
| Languages, libraries, internal company codebases, common tasks
| (e.g. writing unit tests in your style, scaffolding a CRUD
| backend, etc.), personal preferences, and much more. Language
| models are so general that I probably haven't thought of all
| the possibilities myself
|
| And in some cases not just "better", but if you hook up custom
| tools then your assistant can do entirely new things!
| thomasfromcdnjs wrote:
| Do you run the mcp servers in the cloud, or just download them to
| be installed in vscode?
| sestinj wrote:
| Right now they are defined as a command (typically npx, uvx,
| docker or another way of running code) and run as a subprocess
| in VS Code, which is the same starting point that tools like
| Claude Desktop have used. We're also going to support SSE-based
| servers though, which will make it possible to hook Continue up
| to an MCP that runs anywhere.
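|
| For illustration, a server definition in the assistant YAML
| looks roughly like this (treat the exact fields as approximate;
| the package shown is the reference GitHub MCP server from the
| modelcontextprotocol repo):
|
|     mcpServers:
|       - name: github
|         command: npx
|         args: ["-y", "@modelcontextprotocol/server-github"]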
|
| I certainly feel that running them locally isn't the end state,
| curious if others have started to feel pains there yet?
| atonse wrote:
| I'm of two minds. I currently use 3 MCPs regularly
| (Atlassian, Git, and GitHub). And I suppose if GitHub or
| Atlassian actually hosted first-party MCP endpoints for their
| services, we probably wouldn't need to self-host?
|
| But I wouldn't want to have a third-party host where all
| they're doing is just running the npx/uvx command themselves
| and I still have to give them keys, etc.
|
| At that point, I'd rather just host them locally.
| sestinj wrote:
| Auth definitely seems like the biggest outstanding question
| with MCP. Local is a great first solution to make it
| simple, but maybe OAuth integration in the future makes
| this easier
| bhouston wrote:
| As someone who has done a lot of work with agentic coding, I am
| not sure specialized agents are the best solution. I think
| standardized knowledge packs that any agent can read to
| understand a domain or library would be more useful. In
| particular, this allows an agent to know multiple domains at
| the same time.
|
| Basically knowledge packs could be specified in each npm
| package.json or similar.
|
| And we should view a knowledge pack as just a cache in a way.
| Because agents these days are capable of discovering that
| knowledge themselves, via web browsing and running tests, it is
| just costly to do so on every agent run or for every library they
| don't know.
|
| I sort of view specialized agents as akin to microservices,
| great if you have perfect domain decomposition, but likely to
| introduce artificial barriers and become inconvenient as the
| problem domain shifts from the original decomposition design.
|
| I guess I should write this up as a blog post or something similar.
|
| EDIT: Newly written blog post here:
| https://benhouston3d.com/blog/crafting-readmes-for-ai
| sestinj wrote:
| We think about this a lot, and I think there are merits to the
| viewpoint. If I were to write a rule that said "make sure all
| code you write uses best practices", it should already be
| obvious enough to a good language model that this is always the
| case. It's "common knowledge". In some cases today there might
| be "common knowledge" that is a bit more rare, and the language
| model doesn't quite know this. I might agree that this could be
| obviated as well.
|
| A situation to think about: if I were to write a rule that said
| "I am using Tailwind CSS for styling", then this is actually
| information that can't be just known. It's not "common
| knowledge", but instead "preference" or "personal knowledge". I
| do think it's a fair response to say "can't it just read my
| package.json"? Probably this works in a handful of cases, but
| I've come to find that even so there are a few benefits to
| custom rules that I expect to hold true regardless of LLM
| progress:
|
| - It's more efficient to read a rule than to call a tool to
| read package.json on every request
|
| - Especially in large enterprise codebases, the majority of
| knowledge is highly implicit (oftentimes in detrimental ways,
| but that's how the world works)
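|
| For concreteness, that Tailwind preference is just a one-line
| rule in the assistant YAML (a sketch, not the exact schema):
|
|     rules:
|       - I am using Tailwind CSS for styling; avoid inline
|         styles and plain CSS files.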
|
| But yeah this is a majorly important and interesting question
| in my mind. What types of customization will last, and which
| won't? A blog post would be amazing
| r_singh wrote:
| > As someone who has done a lot of work with agentic coding
|
| Can you please share what your favourite tools are and for what
| exactly? It would be helpful.
|
| I've been using Cline a lot with the PLAN + ACT modes and
| Cursor for the Inline Edits but I've noticed that for anything
| much larger than Claude 3.7's context window things get less
| reliable and it's not worth it anymore.
|
| Have you found a way to share knowledge packs? Any conventions?
| How do you manage chat histories / old tasks and do you create
| documentation from it for future work?
| bhouston wrote:
| > Can you please share what your favourite tools are and for
| what exactly? It would be helpful.
|
| I wrote my own open source one here:
| https://github.com/drivecore/mycoder. Covered on hacker news
| here: https://news.ycombinator.com/item?id=43177117
|
| I've also studied coding with it and wrote a lot about my
| findings here:
|
| - https://benhouston3d.com/blog/lean-into-agentic-coding-
| mista...
|
| - https://benhouston3d.com/blog/building-an-agentic-code-
| from-...
|
| - https://benhouston3d.com/blog/agentic-coder-automation
|
| - https://news.ycombinator.com/item?id=43177117
|
| - https://benhouston3d.com/blog/the-rise-of-test-theater
|
| My findings are generally that agentic coders are relatively
| interchangeable; they work primarily because of the LLM's
| intelligence, which is a result of the training LLMs are
| undergoing on agentic coding tasks. I think that both LLMs and
| agentic coding tools are converging quite quickly in terms of
| capabilities.
|
| > Have you found a way to share knowledge packs? Any
| conventions? How do you manage chat histories / old tasks and
| do you create documentation from it for future work?
|
| I've run into this wall as well. I am working on it right
| now. :) Here is a hint of the direction I am exploring:
|
| https://benhouston3d.com/blog/ephemeral-software-in-the-
| era-...
|
| But using Github as external memory is a near term solution:
|
| https://benhouston3d.com/blog/github-mode-for-agentic-coding
| r0b05 wrote:
| Very interesting. I'd like to give "Github" mode a try. Are
| you able to use some local instance instead?
| bhouston wrote:
| It can, but Claude 3.7 is the best model for it right
| now. Using other models with mycoder right now is just an
| exercise in frustration. I will fix that eventually.
| r0b05 wrote:
| What I meant to ask is, instead of pushing your code to
| Github, is it possible to use a local self hosted
| instance of a similar tool like GitLab or Bitbucket?
| bhouston wrote:
| It is just a prompt change if there is a cli tool for
| GitLab or Bitbucket. I just tell Claude to use the gh cli
| tool and to use it as external memory to track tasks and
| to submit PRs.
| r0b05 wrote:
| I see, so it depends on a cli tool being available. I
| will check what other options are available out there.
|
| It would be great to run local models at the same level
| one day. I am sure that Claude is making your wallet feel
| quite light :)
| bhouston wrote:
| It costs about $1 to implement a major feature, so no, the
| cost is marginal compared to my salary.
| r0b05 wrote:
| That's not bad. Btw your docs are really good. Will be
| checking out the Discord.
| sestinj wrote:
| On another note, this rang true to me:
|
| > Basically knowledge packs should be specified in each npm
| package.json or similar.
|
| Our YAML-based file format for assistants
| (https://docs.continue.dev/reference) is hoping to do just this
| by allowing you to "import knowledge packs".
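|
| For example, importing shared blocks from the hub looks roughly
| like this (assuming the hub's "uses" reference syntax; the
| slugs are ones mentioned elsewhere in this thread):
|
|     rules:
|       - uses: starter/pytorch-rules
|     docs:
|       - uses: vercel/shadcn-ui-docs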
|
| Does it need to be decoupled from package.json, etc.? One of
| the most interesting reasons we decided not to go that route
| was that it can be cumbersome for all of your many dependencies
| to be taken into account at once. Another is the question of
| how the ecosystem will evolve. I definitely think that each
| package author should take the time to encode the rules and
| best practices of using their library; however, it might be
| difficult for the community to help out if this is gated behind
| getting in a pull request.
|
| At the same time, one of the soon-to-be-released features we
| are working on is the ability to auto-generate or suggest rules
| (based on package.json, etc.).
| bhouston wrote:
| > Does it need to be decoupled from package.json, etc.?
|
| Knowledge packs should be decoupled from package.json just as
| eslint rules or commit-lint rules are. You can include them
| in package.json or in separate files. But including pointers
| to the main files in a package.json helps with discovery.
|
| All packages across all languages should support AI-friendly
| knowledge packs, so a level of decoupling is required.
|
| EDIT: After thinking about it I think README.md should just
| be written with Agentic Coders in mind. I wrote up my
| thoughts on that here:
| https://benhouston3d.com/blog/crafting-readmes-for-ai
| mentalgear wrote:
| How are "knowledge packs" different from just the package's
| README? (If present and well written, it should be as usable
| to devs as to an LLM. If not, maybe consider letting the LLM
| write its own "README" for a package on the hub by scanning
| the source/types of the package.)
| sestinj wrote:
| I'd say rules are quite similar to a README, just tailored
| to LLMs, which often benefit from slightly different
| information than a human would. One way to think about the
| difference is that we as developers have the chance to
| build up memory/context over time, whereas LLMs are
| "memoryless" so you want to efficiently load all of the
| necessary high-level understanding.
|
| > consider letting the LLM write its own "README" for a
| package on the hub by scanning the source/types of the package
|
| This is something we're looking to ship soon
| bhouston wrote:
| I agree with you on the READMEs. In response to the earlier
| suggestion that I write a blog post on the idea of knowledge
| packs, I just spent the last 30 minutes on it and, by
| coincidence, landed on your suggestion. Written up here:
|
| https://benhouston3d.com/blog/crafting-readmes-for-ai
| sestinj wrote:
| wow that was fast--some gems in there
| orliesaurus wrote:
| Any benchmark vs. just using Claude Sonnet 3.5/3.7 to see if
| there's an actual performance gain rather than just a
| well-defined PRD/prompt/context?
| sestinj wrote:
| On one hand, I think it would be useful to have a couple of
| benchmarks for super common tech stacks like Python, React,
| etc. that allowed comparing variants of rules to find the best
| ones. On the other hand, if something can be turned into a
| benchmark then it can probably be learned in the weights of a
| model.
|
| A lot of the benefits of rules are completely unique to your
| situation and the definition of "better" is likely to differ
| between people. I think the videos above are the best way I
| currently have to display this.
|
| There are potential solutions though to measure improvement,
| and we actually made it possible for you to obtain the
| necessary data. If you add a "data" block to your custom
| assistant you can capture all of the accept/reject, thumbs
| up/down, etc. data and use it to do an analysis. We definitely
| will be working more in that direction
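|
| A "data" block in the assistant YAML might look roughly like
| this (the shape is illustrative; the destination is a
| placeholder endpoint you would host yourself):
|
|     data:
|       - name: dev-data
|         destination: https://your-server.example.com/dev-data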
| outside1234 wrote:
| Having used the agentic GitHub Copilot in VS Code Insiders, I
| find it hard to understand why this is necessary given how well
| that functions.
| sestinj wrote:
| I think Copilot plays an important role in the world of code
| assistants, and it's great that they've implemented Agent mode
| as well.
|
| We'd actually love for them to take part in the standard we're
| building here--the more people build custom assistants
| together, the stronger the ecosystem can grow!
|
| If I were to share any one reason why we continue to build
| Continue when there are so many other coding assistants, it
| really comes down to one word: choice. We hope to live in a
| world where developers can build custom prompts, rules, tools,
| etc. just like we currently build our own bash profiles, Vim
| shortcuts, and other personal or internal company tools.
| There's plenty of room for many products, but we want to lead
| the way on allowing developer choice over models, prompts, and
| much, much more.
| thelastbender12 wrote:
| Congrats on the release! I've been using Cursor but am somewhat
| annoyed with the regular IDE affordances not working quite right
| (absence of Pylance), and would love to go back to VS Code.
|
| I'd love it if you lean into pooled model usage, rather than it
| being an add-on. IMO it is the biggest win for Cursor usage: a
| reasonable number of LLM calls per month, so I never have to do
| token math or fiddle with API keys. Of course, it is available
| as a feature already (I'm gonna try Continue), but the
| difference in response time between Cursor and GitHub Copilot
| (who don't seem to care) is drastic.
| sestinj wrote:
| Excited to hear how it goes for you!
|
| Our Models Add-On is intended to give the same flat monthly fee
| as you're accustomed to with other products. What did you mean
| by leaning into pooled, just making it more front-and-center?
| thelastbender12 wrote:
| Yep, exactly that. IMO agent workflows, MCP and tool usage
| bits are all promising, but the more common usage of LLMs in
| coding is still chat. AI extensions in editors just make it
| simple to supply context, and apply diffs.
|
| An add-on makes it seem like an afterthought, which I'm
| certain you are not going for! But still, making it as
| seamless as possible would be great. For example, response
| time for Claude in Cursor is much better than even the Claude
| web app for me.
| sestinj wrote:
| This is a good callout, we'll definitely work to improve
| our messaging
| paradite wrote:
| I think this is an interesting pivot, but Cursor's project-level
| rules and custom modes will probably quickly evolve to cover all
| the aspects listed on your hub. (Maybe they already do.)
|
| This also allows developers to switch between projects quickly
| while simultaneously switching the setup for the AI agent.
| talos_ wrote:
| My main gripe with Cursor is that they put MCP usage behind a
| paywall and their support for the protocol is weaker than
| Continue's in meaningful ways[1]. Given the big community push
| on MCP development, I'm a bit annoyed that Cursor monetizes OSS
| work with a paywall...
|
| [1]https://modelcontextprotocol.io/clients
| sestinj wrote:
| Agreed! If there's value here, they'll definitely follow behind
| and have already taken their own creative steps.
|
| One interesting note is that rather than a pivot, the hub was
| just a continuation of work we'd already been doing--we had a
| configuration file that tons of people were customizing, but it
| was just a bit too complicated to get started, so we decided to
| make that "1 click" easy with the hub
|
| In the long run, we feel that it's not the "features" in the
| IDE that will separate different AI coding tools (most are a
| matter of building out new UI), but that the ecosystem will be
| one of the biggest differentiators
| betimsl wrote:
| Why separate? It's not like there's limited capacity...you can
| have an assistant knowing all the languages at once.
| talos_ wrote:
| I guess you would be paying for pushing all of these tokens to
| the LLM. Also, too much irrelevant context can "confuse" the
| model about the task at hand
| betimsl wrote:
| Check out NVIDIA's latest releases. Paying for tokens is going
| to be history in about 6 months. You'll run the model on your
| laptop.
|
| Maybe you're right about the confusion...but given the
| velocity, that's going to be fixed also.
|
| All the knowledge about the field of programming is digitized;
| one could argue that a model that has digested all that
| information in the right way is better than separate
| assistants.
|
| Just a thought. I don't care all that much.
| sestinj wrote:
| Absolutely +1 to the progress of local models! We hope
| Continue is and continues to be a great place to use them.
| Tons of blocks in the Ollama page for example that can be
| used: https://hub.continue.dev/ollama
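|
| e.g. a local model block can be as simple as this sketch:
|
|     models:
|       - name: Llama 3.1 8B
|         provider: ollama
|         model: llama3.1:8b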
| sestinj wrote:
| You're 100% correct, most models these days know most
| programming languages and will quickly learn the syntax of more
| niche languages.
|
| For users that end up benefitting from rules, it's typically
| because of encoding "personal knowledge" or "preferences". When
| building small side projects from scratch, this typically
| matters much less: you probably want it to "just work" using
| the canonical tech stack. Pretty quickly though projects end up
| with a lot of implicit knowledge and idiosyncratic practices
| that are great candidates for custom rules!
| serjester wrote:
| I think I have trouble understanding what this is doing other
| than maybe some fine-tuned prompts tailored to a specific stack?
| I'm looking at the data science kit and I don't see why anyone
| would use this, much less pay for it?
|
| I guess you guys have some MCP connections too, but this seems
| like such a marginal value add (how often am I really pinging
| these services and do I really want an agent doing that).
|
| Regardless congrats on the launch.
| sestinj wrote:
| This is fair feedback--we're so early in building an ecosystem
| here that the prompts we've shared as starting points are
| relatively general. But already we've seen people start to
| build much more carefully-crafted and specific assistants.
| These are some examples that we've used internally and found to
| be super useful:
|
| - https://hub.continue.dev/continuedev/playwright-e2e-test
|
| - https://hub.continue.dev/continuedev/vscode
|
| - https://hub.continue.dev/continuedev/service-test-prompt
|
| Importantly, you don't have to pay to use custom assistants!
| You can think of hub.continue.dev like the NPM registry: it's
| just a medium for people to create, share, and pull the
| assistants they want to use. It's always possible to bring your
| own API key, and we provide a Models Add-On mostly for the
| convenience of not needing to keep track of API keys.
|
| Value prop of MCP is definitely early, but I would recommend
| giving Agent mode (https://docs.continue.dev/agent/how-to-use-
| it) a try if you haven't had the chance--there's something
| compelling about having the model take the obvious actions for
| you (though you can of course require approval first!)
| prophesi wrote:
| Very excited for this. The two pain points I've had with LLMs
| lately:
|
| - Mediocre knowledge of Erlang/Elixir
|
| - Reluctance to use the new runes feature in Svelte 5
|
| It sounds like I'd be able to point my local LLM to the official
| docs for Erlang, Elixir, Phoenix, and all the dependencies in my
| project. And same for Svelte 5.
| sestinj wrote:
| I was actually talking to someone the other day who was
| building an MCP for Continue that could call the Elixir type
| checker / compiler (which I've heard is quite powerful). I'll
| need to find this and share--they were saying it made for a
| really powerful edit -> check -> rewrite loop in Agent mode
|
| Also might be interesting to take a look at these Phoenix rules
| that someone built: https://hub.continue.dev/peter-
| mueller/phoenix-rules
| prophesi wrote:
| I didn't think of that! I'd definitely be interested in an
| Elixir MCP. And thank you for pointing out the Phoenix/Elixir
| block. You'll find me lurking in the Discord.
| TIPSIO wrote:
| FYI I'm sure you're aware, Svelte has one of the best migration
| guides ever [1].
|
| It's too large to be a Cursor rule though. But if you dump it
| into Google Gemini (which is phenomenal at large context
| windows) it will write you a solid condensed version.
|
| [1] https://svelte.dev/docs/svelte/v5-migration-guide
| atonse wrote:
| I found that Copilot sucked for elixir.
|
| But Cursor has been much better with suggestions for elixir
| codebases.
|
| Probably still not as good as JS/Python but way way way better
| than Copilot.
| jareds wrote:
| What is the accessibility status of the Continue platform? I am a
| totally blind developer and have found Cursor to be an absolute
| mess when it comes to accessibility. There are enough tools
| available that I'd like to know about there accessibility ahead
| of time if possible instead of spending a bunch of time trying
| all of them out only to find they are not accessible.
| sestinj wrote:
| We have support for text-to-speech in the chat window and have
| also worked with developers who code entirely through voice and
| have been quite successful with Continue.
|
| I don't claim that we're perfect and would love to hear how we
| can improve if you have the chance to give it a try
| jareds wrote:
| What would be the best way to provide feedback? I'm not sure
| when I will get a chance to look at Continue, but suspect it
| may be after comments are closed on this thread.
| sestinj wrote:
| If you want to keep in touch going forward, you're welcome
| to join our Discord or share a GitHub issue, we'll try to
| be quite responsive
| mannanj wrote:
| here's their discord link to help out
| https://discord.gg/vapESyrFmJ
| swyx wrote:
| was great to meet you guys in NYC last month. congrats on your
| launch and it's exciting to see Continue Hub go GA!
| sestinj wrote:
| thanks!!
| RVRC wrote:
| TLDR: Is there a way to make money from creating these
| specialized agents?
| sestinj wrote:
| Nothing built-in yet, but if you have a service that lets users
| pay for API keys then you can achieve this in effect. There are
| already some good examples:
|
| - Models like Relace's Instant Apply
| (https://hub.continue.dev/relace/instant-apply)
|
| - Codebase context services like Greptile
| (https://hub.continue.dev/continuedev/greptile-context)
|
| - MCP servers for services like Exa
| (https://hub.continue.dev/exa/exa-mcp)
| croemer wrote:
| It's nice that it plugs into VS Code. I got very put off by
| Cursor breaking after it imported all my extensions.
|
| Also cool that one can easily select different models for
| different modes.
|
| Is there a competitor extension with similar offerings?
| SlackingOff123 wrote:
| I use the Continue extension in both IntelliJ and VSCode and it's
| great. Although, I'm just connecting it to my own providers and
| not using your hub. So I'm more of a free-loader of the extension
| than a Continue customer. Anyway, thank you!
| sestinj wrote:
| I wouldn't say that's free-loader behavior :) It's exactly what
| we want to make possible--if you have strong reason to use your
| own models (price, convenience, security, remaining local, or
| other) then Continue is built for that
| danielhanchen wrote:
| Congrats guys on the release I love this!! :D
| sestinj wrote:
| :D thank you!!
| erichocean wrote:
| Go read "The Bitter Lesson" and then ask yourself which path this
| startup took.
| sestinj wrote:
| I hear this often (I had the same gut reaction before thinking
| below the surface) and will share why I think that our bet is
| perfectly aligned with the Bitter Lesson being true
|
| 1. The Bitter Lesson extends to test-time compute (some call
| this the "Bitter-er Lesson" https://yellow-
| apartment-148.notion.site/AI-Search-The-Bitte...), and we've
| bet that agentic LLMs will become a major transformation in how
| software is built. Agent mode
| (https://docs.continue.dev/agent/how-to-use-it) is here to
| stay. This means that models are going to take very extended
| action for you. It could be 1, 15, 60, or more minutes of work
| at a time without requiring interjection. As these trajectories
| become longer it becomes _more_, not less, important to give
| the model the correct initial conditions. This is the role of
| rules and prompts.
|
| 2. Cheap inference matters and the trend in frontier models
| (for those watching) is distillation, not increased parameter
| count. There's great reason to believe that we're headed toward
| a future where a few billion parameter model can contain all of
| the reasoning circuits necessary to solve difficult problems
| and that when combined with a massive context window will
| become the "engine" in every AI tool. The difficult part is
| obtaining that context, and if you watch the actions of people
| who work at companies, a large majority of their time is spent
| on reading, writing, sharing the right context with each other.
|
| 3. My co-founder Ty wrote a piece 2 years ago describing the
| path where language models automate increasing amounts of
| software and we use live coding interaction data to make them
| even better, in a positive feedback loop of automation:
| https://blog.continue.dev/its-time-to-collect-data-on-how-
| yo.... If you believe in this future, then you're going to want
| to collect your own data to post-train (e.g.
| https://arxiv.org/pdf/2502.18449v1) rather than letting another
| tool absorb all of the intellectual property without giving it
| back. They aren't going to train a model that knows the private
| details of every company's workflows, they will train on a
| distribution that helps primarily with the most basic tech
| stacks.
|
| 4. No matter how many parameters a foundation model has,
| there's no way for it to know in the weights that "We (at some
| particular team within some larger company) organize our unit
| tests into separate files for selectors, actions, and tests"
| (e.g.
| https://hub.continue.dev/continuedev/playwright-e2e-test). This
| is purely team knowledge and preference, and is often private
| data. The next thought in the chain here is "can't it just use
| tools to inspect the repository and find this out?". And the
| answer is absolutely, but that quickly gets expensive, slow,
| annoying. And you're going to end up writing a rule to save
| both money and time. Next: can't the model just write the rules
| for me? Again, absolutely! We're working on this. And to us the
| natural outcome of this is that the model writes the rules and
| you want to share this potentially expensive "indexing" step
| with your team or the world.
|
| 5. Probably the most obvious, but worth saying: advanced
| language models will use tools much more. Hooking up the right
| MCP is a non-negotiable part of getting out of the way so they
| can do their work.
| seveibar wrote:
| Is this something that could help people write code using my
| framework (tscircuit) more easily? I'm confused about how I
| could add docs or recommend that users use Continue with a
| custom tscircuit assistant.
| sestinj wrote:
| Totally, I actually think this would make for a really great
| custom assistant, probably even something we'd feature on the
| front page: https://hub.continue.dev/explore/assistants
|
| You could start here by adding a new block for docs:
| https://hub.continue.dev/new?type=block&blockType=docs
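|
| A docs block is essentially just a name plus a start URL for
| the crawler to index, roughly like this (the URL is a
| placeholder):
|
|     docs:
|       - name: tscircuit
|         startUrl: https://your-docs-site.example.com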
| bluelightning2k wrote:
| How is this customisation not equivalent to just a Cursor or
| Windsurf rules file?
| sestinj wrote:
| Rules are a subset of what we've allowed you to do on the Hub
| (and an important one). There are other building blocks, like
| models, MCP servers, docs, prompts, data, and more. We want to
| make all of this easy to customize
| badmonster wrote:
| congrats on the launch!
| sestinj wrote:
| thanks!
| ripped_britches wrote:
| I would try a Flutter assistant on your hub. Can I use the hub
| with Cursor instead of your extension? How easy is it to make
| your own assistant?
| sestinj wrote:
| It looks like people have created a few already:
| https://hub.continue.dev/search?q=flutter
|
| I think it would be great to have one on the front page as well
| if there's something you find works really well. The nice thing
| about assistants is that a lot of the work is just browsing and
| clicking to add the blocks you want, and then when you really
| end up going deeper to customize the rules, this is just
| writing plain English!
|
| I don't believe Cursor has added support for assistant files
| (https://docs.continue.dev/reference) yet, but think it would
| be great if they did! We've designed the format to be totally
| client agnostic and hopefully to be a simple, portable way to
| share assistants across any tool
| croemer wrote:
| It's a bit disingenuous that you claim on your banner that your
| launch was covered by AP when it was in fact a paid content Press
| Release that only shared the domain: "PRESS RELEASE: Paid Content
| from EZ Newswire. The AP news staff was not involved in its
| creation.", see https://apnews.com/press-release/ez-
| newswire/artificial-inte...
| sestinj wrote:
| We didn't claim to be covered by AP, but I see what you mean:
| the previous text ("Learn more on TechCrunch and The Associated
| Press") wasn't clear enough. We've updated it to say "Check out
| our TechCrunch coverage and press release on AP News". It's a
| bit clunky but should avoid that misunderstanding. Thanks for
| pointing this out!
| FloorEgg wrote:
| Can someone make an assistant for firestore security rules, and
| another for shadcn with the latest tailwindcss version? Like,
| yesterday?
|
| These are the two cases where Claude 3.7/windsurf shits in my
| bed. :(
| sestinj wrote:
| Ooh +1 to both of these. We use shadcn as well :) and have been
| leveraging these docs: https://hub.continue.dev/vercel/shadcn-
| ui-docs, but there should totally be more in-depth rules for it
| and Firestore
| dimal wrote:
| I've enjoyed using Continue and really appreciated the focus on
| customizability.
|
| But my problem with Continue has been the lack of stability. I'll
| often bounce from one tool to another, so I might not use it for
| a couple weeks. Almost every time I come back to it, the
| extension is broken in some way. I've reinstalled it many many
| times. I kinda gave up on it and stuck with Cody, even though I
| like the feature set of Continue better. (Cody eventually broke
| on me, too, but that's another can of worms.)
|
| Is the Continue team aware of this stability issue and are you
| planning on focusing on that more now that you've launched? It
| seems like you've been moving fast and breaking things, which
| makes sense for a while, but I can't hitch my wagon to something
| that's going to break on me.
| sqs wrote:
| What broke on you when using Cody? Sorry to hear about that and
| want to fix it for you.
___________________________________________________________________
(page generated 2025-03-27 23:00 UTC)