[HN Gopher] Show HN: Aide, an open-source AI native IDE
___________________________________________________________________
Show HN: Aide, an open-source AI native IDE
Hey HN, we are Sandeep and Naresh, the creators of Aide. We are
happy to open source Aide and invite the community to try it out.
Aide is a VSCode fork built with LLMs integrated. To walk through
the features, we engineered the following:

- A proactive agent: the agent iterates on linter errors (powered
by the Language Server) and pulls in relevant context by doing
go-to-definition, go-to-references, etc., then proposes fixes or
asks for more files which might be missing from the context.

- Developer control: we encourage you to edit on top of your
coding sessions. To enable this, we built a VSCode-native rollback
feature which gets rid of all the edits made by the agent in a
single click if there were mistakes, without messing up your
changes from before.

- A combined chat+edit flow which you can use to brainstorm and
edit: you can brainstorm a problem in chat by @'ing the files and
then jump into edits (which can happen across multiple files), or
go from a smaller set of edits and discuss their side effects.

- Inline editing widget: we took inspiration from the macOS
Spotlight widget and created a similar one inside the editor. You
can highlight part of the code, hit Cmd+K, and give your
instructions freely.

- Local-running AI brain: we ship a binary called sidecar which
takes care of talking to the LLM providers, preparing the prompts,
and driving the editor on behalf of the LLM. All of this is
local-first, and you get full control over the prompts/responses
without anything leaking to our end (unless you choose to use your
subscription and share the data with us).

We spent the last 15 months learning about the internals of VSCode
(it's a non-trivial codebase) and also powering up our AI game;
the framework is also at the top of swebench-lite with a 43%
score. On top of this, since the whole AI side of the logic runs
locally on your machine, you have complete control over the data,
from the prompts to the responses, and you can use your own API
keys (for any LLM provider) and talk to them directly.

There's still a whole lot to build and we are at 1% of the
journey. Right now the editor feels robust and does not break on
any of the flows we aimed to solve for. Let us know if there's
anything else you would like to see us build. We also want to
empower extensibility and work together with the community to
build the next set of features and set a new milestone for
AI-native editors.
Author : skp1995
Score : 109 points
Date : 2024-11-06 15:01 UTC (8 hours ago)
(HTM) web link (aide.dev)
(TXT) w3m dump (aide.dev)
| morgante wrote:
| Is the "sidecar" open source too?
| ghostwriternr wrote:
| Yes, it is! https://github.com/codestoryai/sidecar
| WillAdams wrote:
| Confusingly, "Sidecar" is the name Apple uses for their
| feature of having an iPad serve as a second screen/touch
| interface for a Mac:
|
| https://support.apple.com/en-us/102597
| Validark wrote:
| I believe I get the metaphor. Why is it confusing?
| WillAdams wrote:
| Overloading the term with a second technological meaning.
| yjftsjthsd-h wrote:
| It's also the name of a k8s thing:
| https://kubernetes.io/docs/concepts/workloads/pods/sidecar-c...
| ilrwbwrkhv wrote:
| So what? Just because Apple calls something a "retina
| display" doesn't mean that others cannot call stuff displays.
| homarp wrote:
| read it as sAIDEcar
| pseudo_rand000 wrote:
| What differentiates Aide from all the existing tools in this
| space like Cursor?
| skp1995 wrote:
| VSCode forks are not new; there are many companies out there
| building towards this vision. What sets us apart is partly our
| philosophy (deeply integrating into the editor), partly the
| tech stack (running everything, down to the dot, locally), and
| giving developers control over the LLM usage, plus other
| niceties (like rollbacks, which I think are paramount).
| swyx wrote:
| > What sets us apart is partly our philosophy (deeply
| integrating into the editor)
|
| i'm so sorry but what do you think cursor's philosophy is
|
| > also other niceness (like rollbacks
|
| yep in cursor too
|
| i know you're new so just being gentle but try to focus on
| some kind of killer feature (which i guess is sidecar?)
|
| also https://x.com/codestoryai seems suspended fyi
| Alifatisk wrote:
| > I'm so sorry but what do you think cursor's philosophy is
|
| I've never understood why people say sorry for cases like
| these?
| skp1995 wrote:
| Fair point! We are not taking a stab at Cursor in any way
| (it's a great product).
|
| In terms of features I do believe we are differentiated
| enough; the workflows we came up with are different and we
| are all about giving the data (prompt+responses) back to
| the user.
|
| The sidecar is not the killer feature, it's one of the many
| things that tie the whole experience together.
|
| Good callout on the codestoryai account being suspended, we
| are at @aide_dev
| swyx wrote:
| you still link to it on your home page
| skp1995 wrote:
| great catch! Thank you for pointing this out
| OsrsNeedsf2P wrote:
| Aide seems to have a good open source license (Cursor is
| proprietary)
| skp1995 wrote:
| Open source; giving full ownership of the data to the users;
| running completely locally. We want to make sure you can use
| Aide no matter the environment you are in.
| xpasky wrote:
| Any short-term plans for Claude via AWS Bedrock? (That's for me
| personally a blocker for trying it on our main codebase.)
| skp1995 wrote:
| Thanks for your interest in Aide!
|
| If I understood that correctly, it would mean supporting Claude
| via the AWS Bedrock endpoint; we will make that happen.
|
| If the underlying LLM does not change, then adding more
| connectors is pretty easy. I will ping the thread with updates
| on this.
| xpasky wrote:
| Yep! And AWS Bedrock also gives you plenty of other models on
| the back end, plus better control over rate limits. (But for
| us the important thing is data residency: the code isn't
| uploaded anywhere.)
|
| Is it ~just about adding another file to
| https://github.com/codestoryai/sidecar/blob/main/llm_client/... ?
|
| I could take a look too - another way for me to test Aide by
| working with it to implement this. :-)
|
| (https://github.com/pasky/claude.vim/blob/main/plugin/claude_...
| is sample code with a basic wrapper emulating the Claude
| streaming API with an AWS Bedrock backend.)
| skp1995 wrote:
| yup! feel free to add the client support, you are on the
| right track with the changes.
|
| To test the whole flow out, here are a few things you will
| want to do:
|
| - https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
| (you need to create the LLMProperties object over here)
|
| - add support for it in the broker over here:
| https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
|
| - after this you should at the very least be able to test
| out Cmd+K (highlight a section and ask it to edit it)
|
| - in Aide, if you go to User Settings: "aide self run", you
| can tick this and then run your local sidecar so you are
| hitting the right binary (kill the binary running on port
| 42424; that's the webserver binary that ships along with
| the editor)
|
| If all of this sounds like a lot, you can just add the
| client and I can also take care of the plumbing!
| xpasky wrote:
| Hmm, looks like this is still a pretty early project for
| me. :)
|
| My experience:
|
| 1. I didn't have a working installation window after
| opening it for the first time. Maybe what fixed it was
| downloading and opening some random javascript repo, but
| maybe it was rather switching to "Trusted mode" (which
| makes me a bit nervous but ok).
|
| 2. Once the assistant window input became active, I wrote
| something short like "hi", but nothing happened after
| pressing ctrl-Enter. I rageclicked around a bit; it's
| possible I queued multiple requests. About 30 seconds
| later, I suddenly got a reply (something like "hi, what do
| you want me to do"). That's .. not great latency. :)
|
| 3. Once I got it working, I opened the sidecar project
| and sent my _second_ assistant prompt. I got back this
| response after a few tens of seconds: "You have used up
| your 5 free requests. Please log in for unlimited
| requests." (Idk what these 5 requests were...)
|
| I gave it one more go by creating an account. However,
| after logging in through the browser popup, "Signing in
| to CodeStory..." spins for a long time, then disappears,
| but Aide still isn't logged in. (Even after trying again
| after a restart.)
|
| One more thought: maybe you got DDoS'd by HN?
| skp1995 wrote:
| > 2. Once the assistant window input became active, I
| wrote something short like "hi", but nothing happened
| after pressing ctrl-Enter. I rageclicked around a bit;
| it's possible I queued multiple requests. About 30
| seconds later, I suddenly got a reply (something like
| "hi, what do you want me to do"). That's .. not great
| latency. :)
|
| Yup, that's because of the traffic and the LLM rate
| limits :( We are getting more TPM right now, so the
| latency spikes should go away. I had half a mind to spin
| up multiple accounts to get higher TPM, but oh well... If
| you do end up using your own API key, then there is no
| latency at all; right now the requests get pulled into a
| global queue, so that's probably what's happening.
|
| > 3. Once I got it working, I opened the sidecar project
| and sent my second assistant prompt. I got back this
| response after a few tens of seconds: "You have used up
| your 5 free requests. Please log in for unlimited
| requests." (Idk what these 5 requests were...)
|
| The auth flow being wonky is on us; we did fuzzy test it
| a bit, but as with any software, it slipped through the
| cracks. We were even wondering whether to skip the auth
| completely if you are using your own API keys; that way
| there is zero-touch interaction with our LLM proxy infra.
|
| Thanks for the feedback though, I appreciate it and we
| will do better.
| solarkraft wrote:
| Not only is the name Aide already used by another project, it's
| even also an IDE.
|
| https://www.android-ide.com/
| skp1995 wrote:
| TIL, I thought we covered the ground when grepping for Aide.
| Funny that it's also an editor.
| memsom wrote:
| It is a pretty well established IDE. I used it back on a
| Nexus 4 when that phone was actually "recent" to give you
| some context.
| skp1995 wrote:
| I do remember the Nexus 4 (Jelly Bean OS). I was fascinated
| at the time that you could play games on Android, and I ran
| the Android emulator on my desktop at that point (I was
| young and needed the games haha)
| fermigier wrote:
| AIDE has been around for 25 years: https://aide.github.io/
|
| IMHO the right thing would be to use another name.
| skp1995 wrote:
| I ... did not know that.
|
| We should probably pick another name then
| dsabanin wrote:
| You probably shouldn't.
| lioeters wrote:
| The name is perfect, AI + IDE = Aide. You should keep it.
| handfuloflight wrote:
| There's also https://aider.chat/, which is... close.
| MyFedora wrote:
| This is literally a totally different piece of software with
| a completely unrelated use case. Changing the name would make
| as much sense as renaming a hammer because someone invented a
| screwdriver.
| mellosouls wrote:
| Links to the project, I'm guessing these :)
|
| https://github.com/codestoryai/aide
|
| https://aide.dev/
| skp1995 wrote:
| you missed this one: https://github.com/codestoryai/sidecar
| (Sidecar: the AI brains). Aide, the editor, is at
| https://github.com/codestoryai/aide
| unshavedyak wrote:
| Any tips for using Aide with another text editor? I.e. I'm
| not going to work outside of my preferred text editor (Helix
| atm), so I'm curious about software which has a workflow
| around this, rather than trying to move me to a new text
| editor.
| skp1995 wrote:
| hmm.... I do think it can be extended to work outside of
| just the VSCode environment.
|
| If you look at the sidecar side of things, these are the
| main APIs we use:
| https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
|
| On the editor side, these are the access points we need:
| https://github.com/codestoryai/ide/blob/0eb311b7e4d7d63676ad...
|
| The binary is fairly agnostic to the environment, so there
| is a possibility to make it work elsewhere. It's a bit
| non-trivial, but I would be happy to brainstorm and talk
| more about this.
| swlkr wrote:
| I also use Helix, and I've been getting some mileage out of
| aider, the CLI tool. Confusing name, as I don't believe
| aider is affiliated with Aide.
| skp1995 wrote:
| do you know if Helix exposes the LSP APIs all the way to the
| editor? If it does, doing the integration should be trivial.
| drag0s wrote:
| Looks like the download links from your landing page are
| broken? The page just says: "Looks like our build pipeline
| is broken! Click here to let us know?"
| skp1995 wrote:
| wooops... on it (we got rate limited by GitHub). In the
| meanwhile, check this out:
| https://github.com/codestoryai/binaries/releases/tag/1.94.2....
| skp1995 wrote:
| it's fixed!
| renewiltord wrote:
| You have just woken up from the cryosleep you entered in 2024.
| The year is 2237. GPT-64 and its predecessors have been around
| for nigh on 100 years. But there has been no civilizational
| upheaval. Your confusion is cleared when you check the inter-
| agent high-speed data bus. You expect this to be utterly
| incomprehensible, but both the human and AI data is clearly
| visible. It is a repeating pattern. The agents are mimicking
| human behavior perfectly and you can't tell which is which. All
| data transmitted has the same form: $word is
| already a name for a project. Stop copying it. Change your name.
|
| Mankind and His Machine Children have met The Great Filter.
| skp1995 wrote:
| hahaha
| __MatrixMan__ wrote:
| I don't think it's going to take us 200 years to kick the habit
| of using global namespaces for friendly names, maybe 80.
| Recognizing a name and rendering it as a disambiguation based
| on my location in the trust graph should be a feature of the
| text box, not something that I have to think about.
| DesiLurker wrote:
| sounds like the scene in the movie Idiocracy where the
| Roomba is stuck in a corner and keeps repeating "floor is
| now clean".
| Alifatisk wrote:
| Aide.dev is similar to aider.chat except Aide being an IDE while
| Aider is a CLI
| skp1995 wrote:
| AIDE == AI + IDE (that was our take on the name)
| james_marks wrote:
| Looks interesting, is there a binary for macOS? I'd rather
| not build from source just to demo it.
|
| For the people comparing to Cursor on features, I suspect the
| winner is going to be hard to articulate in an A:B comparison.
|
| There's such a difference in feel that may be rooted in a
| philosophy, but boils down to how much the creator's vision
| aligns with my own.
| skp1995 wrote:
| Yes there is! We have the binary links on our website, but
| putting them here:
|
| - arm64 build:
| https://github.com/codestoryai/binaries/releases/download/1....
|
| - x86 build:
| https://github.com/codestoryai/binaries/releases/download/1....
|
| > There's such a difference in feel that may be rooted in a
| philosophy, but boils down to how much the creator's vision
| aligns with my own.
|
| Hard agree! I do think AI will find its way into our
| productivity toolkit in different ways. There are still so
| many ways we can go about doing this; A:B comparisons aside,
| I do feel that giving people the power to mold the tool to
| work for themselves is the right way.
| h1fra wrote:
| Genuine question, with vscode going all-in in this direction,
| what's left for forks like this?
| skp1995 wrote:
| There are quite a few things! On VSCode's direction (I am
| making my own assumptions from the learnings I have):
|
| - VSCode is working on the working-set direction of making
| multi-file edits work
|
| - Their idea of bringing in other extensions is via the
| provider API, which only Copilot has access to (so you
| can't use them if you are not a Copilot subscriber)
|
| So just taking these things at face value, I think there is
| lots to innovate. No editor (biased view of mine) has really
| captured the idea of a pair programmer working alongside
| you. Even now the most beloved feature is Copilot or Cursor
| tab with the inline completions.
|
| So we are ways off from a saturated market, or even
| feature-set-level saturation. Until we get there, I do think
| forks have a way to work towards their own version of an
| ideal AI-native editor, and I do think the editors of the
| future will look different given the upwards trend of AI
| abilities.
| arjunaaqa wrote:
| Betting on Microsoft messing up on the UX side, as always.
| hubraumhugo wrote:
| I'm curious - what does the AI coding setup of the HN community
| look like, and how has your experience been so far?
|
| I want to get some broader feedback before completely switching
| my workflow to Aide or Cursor.
| arjunaaqa wrote:
| Using cursor and it's been great !
|
| Founders care about development experience a lot and it shows.
|
| Yet to try others, but already satisfied so not required.
| skp1995 wrote:
| I can give my broader feedback:
|
| - Codegen tools today are still not great: the lack of
| context and not using the LSP really burn down the quality
| of the generated code.
|
| - Autocomplete is great: autocomplete is pretty nice. IMHO
| it helps finish your thoughts and code faster; it's like
| IntelliSense but better.
|
| If you are working on a greenfield project, AI codegen
| really shines today and there are many tools in the market
| for that.
|
| With Aide, we wanted it to work for engineers who spend >= 6
| months on the same project, where there are deep
| dependencies between classes/files and the project overall.
|
| For quick answers, I have a renewed habit of going to
| o1-preview or sonnet3.5 and then fact-checking that with
| Google (I have not been to Stack Overflow in a long while
| now).
|
| Do give AI coding a chance; I think you will be excited, to
| say the least, for what's coming, and you will develop
| habits on how to best use the tool.
| SparkyMcUnicorn wrote:
| > Codegen tools today are still not great: The lack of
| context and not using LSP really burns down the quality of
| the generated code
|
| Have you tried Aider?
|
| They've done some discovery on this subject, and it's
| currently using tree-sitter.
| skp1995 wrote:
| Yup, I have.
|
| We also use tree-sitter for the smartness of understanding
| symbols
| (https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...)
| and also the editor for talking to the Language Server.
|
| What we found was that it's not just about having access to
| these tools, but about smartly performing the
| `go-to-definition`, `go-to-reference` etc. to grab the right
| context as and when required.
|
| Every LLM call in between slows down the response time, so
| there is a fair bit of heuristics we use today to sidestep
| that process.
| tomr75 wrote:
| cursor works well - it uses RAG on your code to give
| context, and can directly reference the latest docs of
| whatever you're using
|
| not perfect, but good for incrementally building things and
| finding bugs
| nprateem wrote:
| I tried GH copilot again recently with Claude. It was complete
| shit. Dog slow and gave incomplete responses. Back to aider.
| skp1995 wrote:
| what was so bad about it? genuinely curious, because they
| did make quite a bit of noise about the integration.
| nprateem wrote:
| It kept truncating files only about 600 lines long. It also
| seems to rewrite the entire file each time instead of just
| sending diffs like aider does, making it super slow.
| skp1995 wrote:
| oh, I see your point now. It's weird that they are not
| doing the search-and-replace style of editing. Although,
| now that OpenAI also has Predicted Outputs, I think this
| will improve and it won't make mistakes while rewriting
| longer files.
|
| The 600-line limit might be due to the output token limit
| of the LLM (not sure what they are using for the code
| rewriting)
| nprateem wrote:
| Yeah I guess it's a response limit. It makes it a deal
| breaker though.
| xpasky wrote:
| Besides Claude.vim for "AI pair programming"? :) (tbh it works
| well only for small things)
|
| I'm using Codeium and it's pretty decent at picking up the
| right context automatically; it usually autocompletes quite
| flawlessly within a ~100kLoC project. (So far I haven't been
| using the chat much, just autocomplete.)
| skp1995 wrote:
| any reason you don't use the chat often, or is it maybe not
| your use case?
| viraptor wrote:
| Cursor works amazingly day to day. Copilot is not even
| comparable there. I like but rarely use aider and plandex;
| I'd use them more if their interfaces didn't take me
| completely away from the IDE. Currently they're closer to
| "work on this while I'm taking a break".
| gman83 wrote:
| Why is a fork required? I use the cline plugin for VS Code
| and it seems to be able to do more things, like update code
| directly, create new files, etc.
| skp1995 wrote:
| The fork was necessary for the UX we wanted to go for. I do
| agree that an extension can also satisfy your needs (and it
| clearly does in your case).
|
| Having a deeper integration with the editor allows for some
| really nice paradigms:
|
| - Rollbacks feel more native, in the sense that I do not
| lose my undo or redo stack
|
| - Cmd+K is more in line with what you would expect, with a
| floating widget for input instead of it being shown at the
| very top of your screen, which is the case for any
| extension for now.
|
| Going further, the changes which Microsoft is making to
| enable the Copilot editing features are only open to
| "copilot-chat" and no other extension (fair game for
| Microsoft IMHO). So keeping these things in mind, we
| designed the architecture in a way that we can go towards
| any interface (editor/extension). We did put energy into
| making this work deeply with the VSCode ecosystem of APIs,
| and also added our own.
|
| If the editor does not work to our benefit, we will take a
| call on moving to a different interface, and that's where an
| extension or cloud-based solution might also make sense.
| SilentM68 wrote:
| Hmm, any time frame for when Linux (.deb,flatpak) binaries will
| be available?
| skp1995 wrote:
| you should be able to use this:
| https://github.com/codestoryai/binaries/releases/download/1....
| let me know if that does not work.
|
| All our binaries are listed out here:
| https://github.com/codestoryai/binaries/releases/tag/1.94.2....
| ghostwriternr wrote:
| You could also use this script to set everything up:
|
| curl -sL https://raw.githubusercontent.com/codestoryai/binaries/main/... | bash
|
| (you can see the source of the script too)
| DrBenCarson wrote:
| _sigh_ more Electron
| skp1995 wrote:
| I know... we could have built something fresh from the
| ground up (like Zed did), but we had to pick a battle
| between building a new editor from the ground up or building
| from a solid foundation (VSCode). We are a small team right
| now (4 of us) and have been users of VSCode, so instead of
| building something new, putting energy into building from
| VSCode made a lot more sense to us.
| jamie_ca wrote:
| FYI the youtube embed on https://docs.codestory.ai/features is
| broken (both Firefox and Chrome, MacOS).
|
| https://support.mozilla.org/1/firefox/132.0.1/Darwin/en-US/x...
| skp1995 wrote:
| RIP, didn't expect that to happen. This is the embedded
| video btw: https://www.youtube.com/watch?v=i8ZXMgnFSo8
| (putting it here for posterity)
| CyberCatMeow wrote:
| I see Qwen 2.5 is not listed on your website; is plugging in
| different LLMs supported as well?
| skp1995 wrote:
| Honestly, we can. I haven't prompted it enough; what do you
| want to use the model for?
| CyberCatMeow wrote:
| Just general coding, mostly Python. It seems to me that Qwen
| 2.5, especially the upcoming bigger coder model, might be
| the best performing coding model for 24GB VRAM setups.
| jwilber wrote:
| This looks great. Would love some blog posts about your
| experience building this out with Rust!
| skp1995 wrote:
| Oh for sure! I do want to talk about how Rust really helped
| us so many times when doing refactors or building new
| features; it's part of the reason why we were able to
| iterate so quickly on the AI side of things and ship
| features.
| ilrwbwrkhv wrote:
| This is very similar to the Zed editor. How much did you get
| inspired by them? And what are the differences between yours and
| their implementations?
| skp1995 wrote:
| I would take that as a compliment, big fan of Zed (I hope
| their extension ecosystem soon allows us to plug sidecar
| into Zed).
|
| Tbh I did try out their implementation and it still feels
| early; one of the key differences we went for was to allow
| the user to move freely between chat and editing modes.
| rco8786 wrote:
| Is there a comparison with Cursor I can read?
| nsonha wrote:
| first editor I've seen recently that defaults to turning off
| the minimap.
|
| I won't shut up about this; I don't understand how such a
| useless "feature" became the norm in modern IDEs.
| gitgud wrote:
| This is a fork of VScode, which means people can't use the
| extension store anymore right?
| skp1995 wrote:
| They can, from the Open VSX store: https://open-vsx.org/
|
| We also import your extensions automatically (safeguarding
| against the ones under Microsoft's license).
|
| You can also just download an extension from the VSCode
| marketplace webpage and drag and drop it in.
___________________________________________________________________
(page generated 2024-11-06 23:02 UTC)