[HN Gopher] Zed now predicts your next edit with Zeta, our new o...
___________________________________________________________________
Zed now predicts your next edit with Zeta, our new open model
Author : ahamez
Score : 455 points
Date : 2025-02-14 06:50 UTC (16 hours ago)
(HTM) web link (zed.dev)
(TXT) w3m dump (zed.dev)
| aqueueaqueue wrote:
| Ah, that's a model that can run on a shitty old PC, right? I
| like the idea of tools being local again.
| pointlessone wrote:
| Maybe the model itself can be run locally, but the way it's
| currently integrated in Zed, it runs on the server.
| afandian wrote:
| Zed doesn't run on the PC you describe. It needs a modern GPU
| that supports Vulkan (not for AI, just to show text on the
| screen).
|
| I liked Zed so much I bought a new graphics card!
|
| https://github.com/zed-industries/zed/discussions/23623
| guipsp wrote:
| I have sympathy for the reporter, but that CPU is literally
| 10 years old.
| afandian wrote:
| My own graphics card was 5 years old. It was just an Nvidia
| with outdated Linux drivers. There is no other application
| on my computer that complained about it.
|
| In any case I'd understand needing bleeding edge hardware
| for 3D gaming, or CAD, or multimedia. But a 10 year old
| machine _should_ be able to run a text editor IMHO.
| boxed wrote:
| I tried Copilot for a while and my biggest gripe was tab for
| accepting the suggestion. I very often got a ton of AI garbage
| when I was just trying to indent some code.
|
| Tab just doesn't seem like the proper interface for something
| like this.
| fstephany wrote:
| You are not alone!
| ljm wrote:
| I actually wish more editors had emacs style indenting where
| hitting tab anywhere on the line would re-indent it or
| otherwise cycle through indent levels if it was unclear,
| especially because you're unlikely to get copilot suggestions
| in the middle of a word. Plus, it doesn't break if there's a
| syntax error elsewhere in the file.
| fhd2 wrote:
| It's one of those weird Emacs things that you get _so_ used
| to that everything else seems to waste your time.
|
| Using code formatters and formatting on save or with a
| shortcut is OK, but not really the same to me.
|
| That's probably why I'm stuck with Emacs:
|
| 1. No need to use the mouse.
|
| 2. Extremely efficient keyboard usage (maybe not the most
| efficient, but compared with common IDEs, certainly).
|
| Makes it feel like I'm actually using a brain computer
| interface. The somewhat regular yak shaving is a bit of a
| bummer though. I like that I can modify everything to be
| exactly how I like it, but I wouldn't mind sensible defaults.
| Haven't found a distribution yet that works well without
| tinkering.
| ljm wrote:
| Exactly, and sometimes I don't want to save a file just for
| indentation to happen as a side-effect, especially if I can
| do `C-x H TAB` to correctly re-indent the entire thing.
|
| That's particularly more helpful when some formatters will
| actually rewrite your code to either break lines up or
| squish things back into a one-liner.
| boxed wrote:
| Oh, didn't know about that. That makes a lot of sense.
| Reminds me of how hitting cmd+c on a line in PyCharm copies
| the entire line if there is no selection. Because what else
| would make sense?
| yencabulator wrote:
| I thought that was a great feature. Now I'm writing mostly
| languages with well-defined formatting rules and simply never
| need tab. This is even better.
| janaagaard wrote:
| After using Prettier to format my code and turning on format-
| on-save, I pretty much don't use the tab key anymore. This
| doesn't invalidate your point - I am merely guessing as to why
| the tab key seemingly has been reassigned.
| moritzruth wrote:
| Ctrl+Return works quite well for me in IntelliJ.
| Mashimo wrote:
| > to indent some code.
|
| Why are you manually indenting code?
|
| I don't remember ever doing that in IDEA or VS Code for
| ~~python,~~ java or ts.
|
| Edit: I borked about python, my bad.
| rfoo wrote:
| For example, you have
|
|     if check_something():
|         wall_of_text = textwrap.dedent("""
|             this is
|             a multiline string
|         """.strip("\n"))
|
| I hope you agree it's ugly. And you want to make it
|
|     if check_something():
|         wall_of_text = textwrap.dedent("""\
|             this is
|             a multiline string
|             """.strip("\n"))
|
| But I don't know any formatter which does this automatically.
| Because the change here not only changes the looking, it
| changes semantic. The formatter has to understand that after
| `textwrap.dedent(_.strip("\n"))` the result does not change
| and that's hard. So formatter just leave it alone. But it's
| extremely obvious to a human. Or maybe extremely obvious to a
| LLM too.
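The equivalence claimed above can be checked directly. A minimal sketch (the string contents are illustrative): both spellings produce the same dedented result, because `strip("\n")` removes the leading newline that the backslash-free version introduces.

```python
import textwrap

# Without the backslash, the literal starts with a newline,
# which strip("\n") removes before dedenting.
a = textwrap.dedent("""
    this is
    a multiline string
""".strip("\n"))

# With the backslash, the leading newline never exists,
# so strip("\n") finds nothing to remove at the front.
b = textwrap.dedent("""\
    this is
    a multiline string
""".strip("\n"))

assert a == b == "this is\na multiline string"
```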
| Mashimo wrote:
| Yeah, string """ blocks I also have to do manually. Though
| that example never interfered with AI autocomplete in my
| experience.
| oneeyedpigeon wrote:
| Isn't indentation 'non deterministic' in Python? E.g. if I
| have the following:
|
|     if (foo):
|         bar()
|     hum()
|
| how can anything other than a human decide if that 3rd line
| should be indented as it is or by one more level?
| saghm wrote:
| Even humans have trouble with that sometimes:
| https://www.blackduck.com/blog/understanding-apple-goto-
| fail...
| oneeyedpigeon wrote:
| Although, not in Python of course! :)
| yurishimo wrote:
| And this is one reason I still don't understand the use of
| python in large teams!
|
| I suppose with very good tests you might be able to catch
| something like this, but it seems impossible to me that a PR
| reviewer would catch a "bug" like this rather than just assume
| that it's intentional.
| DemetriousJones wrote:
| One of my biggest gripes with python is the fact that the
| only way to create a local scope for variables is with
| functions.
|
| I understand if statements not having their own scope for
| simplicity's sake, but the fact that `with` blocks don't
| is simply mind-boggling to me.
|
|     with open("text.txt", 'w') as f:
|         f.write("hello world")
|     f.write("hello world")
|     # at least the handle was automatically closed
|     # so it will give an IO error
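A runnable sketch of the behavior described above (the file path is illustrative): `f` survives the `with` block because `with` creates no new scope, but the handle has been closed, so the second write raises.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "text.txt")
with open(path, "w") as f:
    f.write("hello world")

# `f` is still bound here; the handle was closed on block exit,
# so writing again fails with "I/O operation on closed file".
try:
    f.write("hello world")
    raised = False
except ValueError:
    raised = True
```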
| DemetriousJones wrote:
| Sorry for the borked code formatting
| orf wrote:
| It's actually very useful to have context managers
| outlive their with blocks. They are not only used for
| files:
|
| One example would be a timing context manager:
|
|     with Timer() as t:
|         ...
|     print(t.runtime)
|
| Another example is mocks, where you want to inspect how
| many times a mock was called and with what arguments,
| _after_ the mock context manager has finished.
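A minimal sketch of such a timing context manager (a hypothetical class, not from any particular library): the measurement is recorded in `__exit__`, and `t` remains readable after the block ends.

```python
import time

class Timer:
    """Context manager whose result outlives its with block."""
    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, *exc_info):
        # Record the elapsed time when the block ends.
        self.runtime = time.perf_counter() - self._start
        return False  # don't swallow exceptions

with Timer() as t:
    sum(range(10_000))

# `t` is still in scope here, so the measurement is available.
print(t.runtime)
```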
| DemetriousJones wrote:
| I know it makes sense in the "scope-less" Python
| philosophy, but it still feels weird to me as a scope
| (ab)user in C++, and it has caused me a headache or two in
| the past.
| kstrauser wrote:
| OTOH I've written Python professionally for about 25
| years and I truly don't think I've ever seen a bug due to
| accidental mis-indentation like that.
| lostdog wrote:
| When I worked in python I made this exact mistake
| monthly.
| kstrauser wrote:
| How, though? Every code editor I've used supports holding
| indentation at a certain level until you change it, so if
| you write:
|
|     if foo:
|         bar()
|
| and hit enter after the "bar()", it would drop you down
| so that the next thing you type would be under the "b".
| It's not really different from using curly brackets from
| the perspective of typing in code.
| lostdog wrote:
| Cut and pasting code to move it around. Every editor was
| just slightly jittery about keeping the indentation
| levels consistent.
| fragmede wrote:
| How does IDEA/VSCode know when you're done with the if?
|
|     if foo:
|         bar             # Obviously this is indented
|         but_is_this()   # supposed to be under the if?
|     how.about(this)     # ?
|     how.do_you(de)      # indent in Python, if not manually?
| MaikuMori wrote:
| It doesn't exactly fix the issue, but you can cancel the
| suggestion with ESC and then press Tab.
|
| Changing the shortcut should be possible, but I haven't tried.
| boxed wrote:
| It's a timing issue too. Your hand can be travelling to the
| keyboard, and between that and the keypress registering in the
| OS, the AI suggestion inserts itself.
| madmulita wrote:
| Don't give them ideas! I can already see the useless AI key
| next to Fn in my next keyboard.
| card_zero wrote:
| That already happened, just over a year ago, we have Copilot
| keys now.
|
| I guess it invokes the AI rather than controlling it; maybe
| there'll be another key soon.
| yencabulator wrote:
| See, you're thinking of Microsoft Copilot, but code
| completion is provided by Github Copilot, so it'll need its
| own key. Which will also be labeled Copilot.
| as-cii wrote:
| Hey! Zed founder here.
|
| We totally agree with this and that's why Zed will switch the
| keybinding for accepting an edit prediction to `alt-tab` when
| the cursor is in the leading whitespace of a line. This way you
| can keep using `tab` for indenting in that situation.
|
| Also, when there's both an edit prediction and an LSP
| completion, Zed switches the keybinding to `alt-tab` to prevent
| the conflict with accepting an LSP completion.
|
| Curious to hear what you think!
| maxloh wrote:
| Hi. Could you explain how you plan to make money with the
| model while open-sourcing it?
|
| It seems contradictory to me.
| recov wrote:
| Not everyone can, or wants to, set up a local model. And
| it'll probably be slower on most users' GPUs than what
| Zed runs it on.
| danielsamuels wrote:
| For reasons that should be obvious, that's not going to work
| on Windows.
| as-cii wrote:
| Sorry, I assumed macOS: but you're right! For Linux (and
| Windows, once we ship support for it) the keybinding is
| alt-l to avoid conflicting with tab switching.
| VWWHFSfQ wrote:
| Is there a way to change this keybinding (tab to accept)
| right now? Because otherwise I have to stop using this
| program. It is absolutely obnoxious.
| awfulneutral wrote:
| Ohhh, is that why I keep pressing tab and it doesn't accept
| the prediction lately? I thought it was a bug. It feels weird
| for tab to double-indent when it could be accepting a
| prediction - I wonder if alt-tab to do a manual indent rather
| than accept the current prediction might be preferable?
|
| Edit - On the other hand, a related issue is that if the
| prediction itself starts with whitespace, in that case it
| would be good if tab just indents like normal; otherwise you
| can't indent without accepting the prediction.
| daliusd wrote:
| Yes, Copilot's tab in vim is what made me think that AI is
| useless. However, the next iteration of AI coding tools made me
| rethink this (I am using
| https://github.com/olimorris/codecompanion.nvim with nvim now).
| windward wrote:
| AI coding tool implementers seem to be fans of novel editor
| fonts.
| marcosdumay wrote:
| It's way better than the other Microsoft favorites of space and
| enter...
|
| It's as if the people developing autocomplete don't really code.
| relistan wrote:
| Yeah, IMO tab makes no sense for this as the default.
|
| Since I code in Go and use tabs regularly, I remapped my auto-
| complete AI key for Supermaven in Neovim to ctrl-L which I have
| no other occasion to use regularly. Now tab works properly and
| I can get auto-complete.
| notsylver wrote:
| This looks a lot more impressive than a lot of GitHub Copilot
| alternatives I've seen. I wonder how hard it would be to port
| this to vscode - using remote models for inline completion always
| seemed wrong to me, especially with server latency and network
| issues
| coder543 wrote:
| Based on the blogpost, this appears to be hosted remotely on
| baseten. The model just happens to be released openly, so you
| can also download it, but the blogpost doesn't talk about any
| intention to help you run it locally within the editor. (I
| agree that would be cool, I'm just commenting on what I see in
| the article.)
|
| On the other hand, network latency itself isn't really that big
| of a deal... a more powerful GPU server in the cloud can
| typically run so much faster that it can make up for the added
| network latency _and then some_. Running locally is really
| about privacy and offline use cases, not performance, in my
| opinion.
|
| If you want to try local tab completions, the Continue plugin
| for VSCode is a good way to try that, but the Zeta model is the
| first open model that I'm aware of that is more advanced than
| just FIM.
| notsylver wrote:
| I'm stuck using somewhat unreliable starlink to a datacenter
| ~90ms away, but I can run 7b models fine locally. I agree
| though, cloud completions aren't unusably slow/unreliable for
| me, it's mostly about privacy and it being really fun.
|
| I tried continue a few times, I could never get consistent
| results, the models were just too dumb. That's why I'm
| excited about this model, it seems like a better approach to
| inline completion and might be the first okay enough(tm)
| model for me. Either way, I don't think I can replace copilot
| until a model can automatically fine tune itself in the
| background on the code I've written
| coder543 wrote:
| > Either way, I don't think I can replace copilot until a
| model can automatically fine tune itself in the background
| on the code I've written
|
| I don't think Copilot does this... it's really just a
| matter of the editor plug-in being smart enough to grab all
| of the relevant context and provide that to the model
| making the completions; a form of RAG. I believe
| organizations can _pay_ to fine-tune Copilot, but it sounds
| more involved than something that happens automatically.
|
| Depending on when you tried Continue last, one would hope
| that their RAG pipeline has improved over time. I tried it
| a few months ago and I thought codegemma-2b (base) acting
| as a code completion model was fine... certainly not as
| good as what I've experienced with Cursor. I haven't tried
| GitHub Copilot in over a year... I really should try it
| again and see how it is these days.
| littlestymaar wrote:
| > A few weeks out from launch, we ran a brief competitive
| process, and we ended up being really impressed with Baseten.
|
| What? I really fail to see how it can make sense for a company
| like Zed: Baseten bills by the minute, so it can be really useful
| if you need to handle small bursts of compute, but on the flip
| side they charge you a 5x premium if you end up being billed for
| complete hours...
| lukaslalinsky wrote:
| Zed is a VC funded startup. Wasting money in exchange for ease
| of deployment is expected, no?
| ramon156 wrote:
| > zeta won't be free forever
|
| Well, that's a bummer, but also very understandable. I hope they
| don't make the jump too early, because I still want to grow into
| Zed before throwing my wallet at them. So far it's very promising!
| norman784 wrote:
| But the Zeta model is open source, so I suppose you could run it
| locally if you want. I haven't tried it yet, but I suppose
| that's their intention in open sourcing it.
| pr337h4m wrote:
| Yeah, a 7B param LLM (quantized) is pretty fast on even a
| previous-generation base model MacBook Air with 8 GB RAM.
| freehorse wrote:
| can we run it locally to get autocomplete?
| gardenhedge wrote:
| Personally, before I hit tab to confirm a change, I would want to
| see the before and after rather than just the after
| vinnyhaps wrote:
| I believe the after change hovers over the before line. So if
| you go back to the video, e.g. at 27s in, there's a lightning
| bolt highlighting which line is going to be changed, then
| there's a box, with "tab" at the end, above the line,
| highlighting the change that will be performed :)
| keyle wrote:
| Good, charge for Zed and secure its future.
|
| I find myself wanting to use Zed more and more every day, and
| shifting away from other editors whenever possible. Some LSP
| implementations are lacking... But it's getting damn close!
|
| I love the new release every week. Zed is my recent love, and
| Ghostty which is also stellar.
|
| Hanging by a thread for some sort of lldb/gdb integration with
| breakpoints and inspection! Hopefully some day, without becoming
| a bag of turd.
| returnInfinity wrote:
| Future CPUs must be able to run this model locally. This is the
| way. I have spoken.
| coder543 wrote:
| Two immediate issues that I noticed:
|
| 1. If I make a change, then undo, so that change was never made,
| it still seems to be in the edit history passed to the model, so
| the model is interested in predicting that change again. This
| felt too aggressive... maybe the very last edit should be
| forgotten if it is immediately undone. Maybe only edits that
| exist against the git diff should be kept... but perhaps that is
| too limiting.
|
| 2. It doesn't seem like the model is getting enough context. The
| editor would ideally be supplying the model with type hints for
| the variables in the current context, and based on those type
| hints being put into the context, it would also pull in some type
| definitions. (I was testing this on a Go project.) As it is, the
| model was clearly doing the best it could with the information
| available, but it needed to be given more information. Related, I
| wonder if the prediction could be performed in a loop. When the
| model suggests some code, the editor could "apply" that change so
| that the language server can see it, and if the language server
| finds an error in the prediction, the model could be given the
| error and asked to make another prediction.
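The predict-verify loop proposed above could be sketched like this (all helper names are hypothetical, not Zed's actual API): apply the predicted edit to a scratch copy, ask the language server for diagnostics, and feed any error back into the next prediction.

```python
def predict_with_feedback(buffer, predict, apply_edit, diagnostics,
                          max_rounds=3):
    """Ask the model for an edit; if the language server flags the
    result, feed the first error back in and predict again."""
    error = None
    edit = None
    for _ in range(max_rounds):
        edit = predict(buffer, error)
        candidate = apply_edit(buffer, edit)  # scratch copy, not the buffer
        errors = diagnostics(candidate)
        if not errors:
            return edit
        error = errors[0]
    return edit  # best effort after max_rounds attempts
```

With stub functions standing in for the model and the language server, the loop retries once when the first prediction is flagged and returns the corrected edit.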
| yellow_lead wrote:
| Seems like you can't run it locally. I don't like my code being
| sent to a third party, especially when my employer may not agree
| with it.
|
| I also edit secret/env files in my IDE, so for instance, a
| private key or API key could get sent, right?
|
| I hope there will be a local option later.
| _flux wrote:
| They use a backend configurable via the environment variable
| ZED_PREDICT_EDITS_URL https://github.com/zed-
| industries/zed/blob/2f734cbd5e2452647... , but I don't know if
| the /predict_edits/v2 endpoint is something other projects
| provide or not.
|
| At least the model is available and interacting with it seems
| simple, so it's probably quite realistic to have an
| open/locally runnable version of it. The model isn't very big.
| levzzz wrote:
| yeah, i'd like to be able to run it locally. it should fit
| well onto my 12gb gpu
| mbitsnbites wrote:
| The model is based on Qwen2.5-Coder-7b it seems. I
| currently run some quantized variant of Qwen2.5-Coder-7b
| locally with llama.cpp and it fits nicely in the 8GB VRAM
| of my Radeon 7600 (with excellent performance BTW), so it
| looks like it should be perfectly possible.
|
| I would also only use Zeta locally.
| jbk wrote:
| Are you happy with the speed with your 8GB GPU?
| mikaylamaki wrote:
| > a private key or API key could get sent, right?
|
| You can disable this feature on a per-file basis, here's the
| relevant setting: https://github.com/zed-
| industries/zed/blob/39c9b1f170cd640cd...
| yencabulator wrote:
| Sending files to a remote server is never something that
| should merely need to be disabled; this _must_ be opt-in, or
| it's time for a fork.
| 85392_school wrote:
| It is opt in. You have to manually sign in to Zed and
| enable the feature.
| mikaylamaki wrote:
| Agreed. The predict edit feature needs to be actively
| enabled before it'll do anything. And once it's enabled, it
| won't send up your private keys or environment variables,
| provided their filename matches a glob in this list, or a
| list you configure.
| lordnacho wrote:
| Is there a thing that does this for the terminal? I hate it when
| I'm fiddling with some complicated command and I have to juggle
| the flags as well as my personal inputs like paths and such.
| djvv wrote:
| If you mean an autocomplete for the terminal, there is
| https://www.warp.dev/.
| diggan wrote:
| Seeing "Contact Sales" for a terminal application is scary,
| but whatever, I gave it a try.
|
| And it seems like it requires a constant internet
| connection?! What the hell is this? Sure, we put all our
| coding-baskets into GitHub, so you can't collaborate when
| GitHub/internet isn't working, but you want the same
| experience for your terminal?
| minib wrote:
| Zed's Inline AI Assistant works in terminal too!
| jamiedumont wrote:
| Using fish shell has largely solved my gripes with complex
| commands. It's not AI autocomplete, but it remembers the
| complex commands perfectly, so I only need to work them out
| once. Its suggestions are also uncanny - the right command at
| just the right time.
| elashri wrote:
| It seems that someone has already published different quantized
| versions of the model [1]. These can be used to define a
| Modelfile to use with ollama locally. But I am not sure that Zed
| allows changing the endpoint of this feature yet (or ever). Of
| course it is open source and you can change it, but then you
| will need to build it yourself.
|
| [1] https://huggingface.co/mradermacher/zeta-GGUF
| _flux wrote:
| I found https://github.com/zed-
| industries/zed/blob/2f734cbd5e2452647... which leads me to
| believe the environment variable ZED_PREDICT_EDITS_URL does
| control the endpoint.
| elashri wrote:
| But is there a setting you can modify after the binary has
| been built? Something like in the application settings? Or do
| you need to build it yourself?
| master-lincoln wrote:
| It's an environment variable you can set before starting
| the program. No need to recompile.
| as-cii wrote:
| Hey elashri, Zed co-founder here.
|
| There currently is no official way of configuring Zed to use
| Ollama for edit prediction, but I would love to accept a pull
| request that implements it!
|
| It should be relatively straightforward and we're happy to
| accept contributions here: this has been something I wanted to
| experiment with for a while but didn't get around to for the
| launch.
| shinryuu wrote:
| I'm curious how they plan to fund the company?
| thomascountz wrote:
| If you were looking for the configuration like I was[1][2]:
|
|     {
|       "show_edit_predictions": <true|false>,
|       "edit_predictions": {
|         "disabled_globs": [<globs>],
|         "mode": <"eager_preview"|"auto">
|       },
|       "features": {
|         "edit_prediction_provider": <"copilot"|"supermaven"|"zed"|"none">
|       }
|     }
|
| [1]: https://zed.dev/docs/completions
|
| [2]: https://zed.dev/docs/configuring-zed#edit-predictions
| tripplyons wrote:
| Thanks for sharing, it would have taken me some time to find
| this. It should really be included in the article.
| mihaaly wrote:
| Sensitive readers, please be advised: quite a bit of a rant and
| some angry reactions are coming, in an overreacting style;
| please stop here if you are the sensitive type. The comments are
| unrelated to this particular product but aimed at the universal
| approach to the broad topic nowadays. No offence to any specific
| person is intended.
|
| I am fed up with all this predicting of what I want to do. Badly!!
| Don't guess! Wait, and I will do what I want to do. I do not
| appreciate it when my wife tries to figure out what I want to
| say in the middle of my sentence and interrupts before I finish
| what I am saying; imagine how much I tolerate it from a f
| computer! I know what I am going to do, you do not! Let me do
| that already! This level of predicting our asses off everywhere
| has grown to be a f nuisance by now; I cannot simply do and
| focus on what I want to do because the many distractions and
| suggestions and guesses and predictions of me and my actions are
| in the way all the f time! Wait, and see! At this overly eager
| level, pushing into everything, it is a nuisance now! Too many
| times the acceptance of the - wrong - 'helping suggestion' is in
| the way too, hijacking a keyboard action usable elsewhere,
| breaking my flow, dragging in the unwanted stupid guess!
| Recovering my way of working from a pushy incoming "feature"
| that hides or collides with my usual actions, forced on me in a
| "security update" or other bullshit - turning it off and
| restoring the working practice that was already in place and
| worked - is an unwelcome obstacle too; ruined, now colliding
| with "smart prediction", not helping. In the long term it is not
| a definitive help but an approximately zero-sum game. Locally,
| in specific situations, too many times it is a strong negative
| for the wrong it does! Too many problems here
| and there, accuracy and implementation wise. Forced everywhere.
| Don't be a smartass, you are just an algorithm not a mind reader!
| Lay back and listen.
|
| If prediction is that smart - being with us since the turn of
| the millennium here and there - then it should do my job
| perfectly and I could go walk outside and collect the money!
| Until then, f off!
| gkbrk wrote:
| Is someone forcing you at gunpoint to use AI autocomplete while
| you code? If you think it's not good, just don't use it.
| heeton wrote:
| Right? If you don't like the tool, turn it off.
|
| I find autocomplete _exceptionally_ useful. It's right in
| most of the simple tasks I'm trying to do and speeds me up a
| lot.
| botanical76 wrote:
| Well, I notice there is a lot of pressure in organizations
| for individual developers to start making use of these tools.
| I was already using AI extensively before my company picked
| up on it, so it doesn't really affect me negatively, but I
| notice some of my coworkers starting to ask questions like
| "Do I have to use it?". The status quo seems to imply that
| you {refuse to accept change,aren't willing to grow,aren't
| interested in increasing efficiency in workflow} if you don't
| use AI tools / autocomplete.
|
| So while it is unlikely anyone is _forcing_ you to use AI-
| enabled efficiency boosters, there may be a strong managerial
| pressure felt to do so, and it may even be offered as an
| action item in yearly reviews, and therefore strongly linked
| to compensation / incentives.
|
| That is all to say, I understand if people in this group are
| frustrated with the AI hype train at the moment, even if they
| can appreciate that these tools do indeed improve efficiency
| in some places and for some people.
| pritambarhate wrote:
| If an employee demonstrates the same level of productivity
| without using AI, most managers would likely be fine with
| that approach. However, if a manager observes that several
| team members are more productive with AI and are achieving
| business goals more quickly, they will naturally expect
| everyone to adopt it. Those who refuse to use AI and cannot
| match the efficiency of their peers may eventually be
| replaced. While this outcome may be emotionally
| challenging, economic realities primarily drive these
| decisions.
| pipo234 wrote:
| Show me a manager that can realistically gauge
| productivity first, and I'd be happy to consider having
| this type of argument.
|
| Otherwise, I'll stick to developing software in a small
| company where my boss trusts me to get the job done at my
| own pace with whatever tools I chose.
| lionkor wrote:
| I find that AI autocomplete, even when autocompleting full
| functions, is capable enough to use. I need to review all my
| code in detail before pushing, I need to write unit tests,
| and I need to give it a "run it once" test as well.
|
| It gets it mostly right most of the time, and often times it
| quite literally suggests what I was about to type.
|
| This is mostly in Rust and C#, maybe other languages have more
| of a hurdle for AI.
| powerhugs wrote:
| So it can do Rust now? That's impressive!
|
| Last time I tried, it had no way of producing valid Rust code
| beyond the hello-world level, constantly producing code that
| failed the borrow checker.
| lionkor wrote:
| I rarely if ever have to worry about the borrow checker; my
| stumbling blocks are mostly move/copy/clone semantics.
|
| GitHub Copilot does a good job of generating correct Rust,
| it just has the usual subtle-but-annihilating-if-not-caught
| logic bugs, like in all other languages.
| diggan wrote:
| > rarely if ever have to worry about the borrow checker
| mostly stumbling blocks are move/copy/clone semantics
|
| I don't write Rust for a living - I've only been
| experimenting on-and-off with the language - but isn't
| stumbling on move/copy/clone semantics literally
| stumbling on the borrow checker? Or are there issues
| regarding move/copy/clone that aren't related to the
| borrow checker?
| ajayka wrote:
| In my experience, Copilot is not able to fix Rust code
| flagged by the borrow checker. Its suggestions are almost
| always wrong. This is a hard problem and often requires
| restructuring the code (and sometimes using interior
| mutability constructs such as RefCell and so on).
| gizmo wrote:
| If you fight the system you're not going to have a good time.
| For incremental improvements (faster grep, mixed-language
| syntax highlighting) you get the benefits for free. You don't
| have to change anything about the way you work.
| Revolutionary/disruptive technologies are not like that. They
| demand that the world adapts to them. They demand _you_ change.
|
| Almost anywhere you can go on foot, you can get there faster
| with a horse. By contrast, a car only drives on flat roads. Cars are
| inflexible, fragile, unwieldy. A car demands that the world
| adapts to it. And adapt we did. We paved the world and are
| better for it.
|
| AI tools are amazing. You just have to approach them with a
| beginner's mindset.
| mihaaly wrote:
| I am not having a good time, that's true. That's how it is with
| systems you do not need and that are not working for you. You
| can lie down for it, but you could also fight it, go around it,
| not just give up and do what a random self-promoted something
| tries to dictate. True, it is not good now, or just not yet; and
| as for the amazingness, well, I believe the jury is still out on
| that... let's say there are moments when it is.
|
| Disruption can f off! I am not a slave, to lie down before
| self-serving ideas forced on us. If ideas do not serve humanity
| and demand change for its own sake, then those are bad ideas!
| Down with technological authoritarianism! I am for liberal
| things mainly anyway.
|
| About cars: don't use one inside high-rise buildings if you
| live in one, or in dense cute cities, because you will have
| more problems than help. One can walk, cycle or take mass
| transit, with occasional renting, for a big part of life. Cars
| have their places, but they are not something to wrap humanity
| around. Those are just box-like objects with four wheels, for
| f's sake! For humans, and not the other way around.
|
| All depends on circumstances in the end, naturally.
| gizmo wrote:
| You, individually, can make whichever choices you think are
| best for you. But change is forced on societies. No country
| can afford to ignore AI. Countries must adapt or get run
| over. This is not a moral justification. I don't believe
| all technological change is for the better. Countries have
| to brace for impact regardless.
| lstodd wrote:
| Ran over with what? Insane valuations of a glorified
| autocomplete?
|
| Mimicking a true Montenegrin (which I'm not), I say: this
| too shall pass.
| brabel wrote:
| > And adapt we did. We paved the world and are better for it.
|
| I don't know, it must've been awesome going everywhere
| mounted on a powerful live animal, not being limited to
| roads, feeling the fresh air, not destroying the planet.
| card_zero wrote:
| I don't need insurance, I don't need no parkin' space
|
| And if you try to clamp my horse, he'll kick you in the
| face
| redacted wrote:
| Can't believe you're getting downvoted for one of
| Ireland's greatest cultural contributions. Behold The
| Rubberbandits, Horse Outside (helpfully timestamped to
| the lyric in question)
|
| https://www.youtube.com/watch?v=ljPFZrRD3J8&t=85s
| gizmo wrote:
| New York at one point had 150,000 horses, each producing 15
| to 30 pounds of manure daily. On top of that they produced
| 40,000 gallons of horse urine. Imagine the stench.
|
| https://danszczesny.substack.com/p/the-great-horse-manure-
| cr...
| JoshTriplett wrote:
| While I certainly would not advocate going back from
| vehicles to horses, I _would_ observe that the
| replacement for horses had even _more_ toxic emissions.
| Particularly in the era of leaded gasoline. The smell of
| manure was far less _damaging_.
| nprateem wrote:
| Constant interruptions aren't free though. They break your
| train of thought, knocking you out of flow.
| wruza wrote:
| _Revolutionary /disruptive technologies are not like that.
| They demand that the world adapts to them. They demand you
| change._
|
| That's just coping for "my revolutionary technology is a pile
| of crap around the main feature that barely works", imo.
|
| _AI tools are amazing. You just have to approach them with a
| beginner 's mindset._
|
| Makes no sense when they still fail at the job. Mindset
| doesn't change reality, only your relation to it. You're
| suggesting we like bullshit because you learned to like it.
| AI tools are at best mediocre, just like a huge part of the
| people who benefit from them, and their creations.
| portaouflop wrote:
| We paved the world and now it's fucked -- progress is not
| always a good thing, in this case it was a mistake that we
| and our descendants will continue to pay the price for.
| will5421 wrote:
| Indeed, calling it "progress" is political.
| taurknaut wrote:
| To be disruptive you need layoffs. Where is all this labor
| that can be replaced with chatbots?
|
| I, personally, strongly hate cars and think they're moronic.
| Sometimes revolutions aren't a good thing.
| computerthings wrote:
| Colonizers _demand_ you change. Things that make life
| better for people just need to exist. You just need to see
| your neighbor enjoying something to want some of that
| yourself. If you need to make changes for that, you'll
| _ask_ what changes those are, and make them, if the
| trade-offs are worth it.
|
| If you need to scold people for being backwards for not
| accepting your great gift, it's not a great gift.
|
| > AI tools are amazing. You just have to approach them with a
| beginner's mindset.
|
| IMO they're not even tools. A screwdriver is a tool; the
| phone number of a place that sends people that loosen or
| fasten screws isn't. A switch on the steering column is a
| tool, while a button on a touch screen is a middleman that
| _at best_ works like the switch, at worst just does whatever.
| A keyboard is a tool, you can learn it well enough to know
| what keys you pressed without needing a screen. Predictive
| typing is not a tool.
| gizmo wrote:
| Technological change is like a force of nature. It's not
| polite. It doesn't care about your or your neighbor's
| preferences. It doesn't present itself as a gift. It
| doesn't even matter whether AI is harmful or beneficial
| because AI is here to stay regardless. You may detest AI
| but it will still become ubiquitous and you will be forced
| to adapt to this new reality. The history of the world is
| in a sense a history of technology. Some countries adapt
| and other countries get left behind, but no country can
| escape the rippling influence of change.
| computerthings wrote:
| > Technological change is like a force of nature.
|
| No, progress is. Change is often just the result of
| bone-headed butchering by people who want to enforce their
| crap onto others, instead of simply enjoying their own
| medicine and its supposed benefits.
|
| > It doesn't even matter whether AI is harmful or
| beneficial
|
| Yeah, because you're talking about "change" now. It's a
| constant. Everything is change, including a bunch of
| Wall-E type people withering in their cyber fortress
| while a world they no longer have access to keeps
| blooming.
|
| "You may not like change, but it's not polite.", what a
| killer argument. Your argument that what we have now
| isn't shit is that it will not not shit in the future.
| Let that sink in.
|
| And people don't like it now, because it provides no real
| value for them, and you say they'll be forced anyway? Not
| "they'll come to like it and adopt it", just instantly
| this creepy thing you wish came true you wrote?
|
| > _[The method of infallible prediction] is foolproof
| only after the movements have seized power. Then all
| debate about the truth or falsity of a totalitarian
| dictator's prediction is as weird as arguing with a
| potential murderer about whether his future victim is
| dead or alive - since by killing the person in question
| the murderer can promptly provide proof of the
| correctness of his statement. The only valid argument
| under such conditions is promptly to rescue the person
| whose death is predicted._
|
| -- Hannah Arendt
|
| > You may detest AI
|
| What AI? We have "AI". I detest sophistry, and toys that
| replace tools. I like using tools, learning complex things
| by using them, insofar as they're useful, until I forget
| they exist and just _think_ what I want to do, with
| something that never weighed or slowed me down. I don't
| like needlessly inserting a middleman. You offering
| yourself up doesn't force _me_ to do shit.
|
| > The history of the world is in a sense a history of
| technology.
|
| Of course, in another sense it's a history of how much
| Earth weighs and how warm it is. Or how fast it spins,
| just a clean 2D graph of rotational velocity. But that'd
| be nonsense, _too_.
|
| > Some countries adapt and other countries get left
| behind, but no country can escape the rippling influence
| of change.
|
| What does this have to do with countries now, lol? Or is
| this just poetry?
| barrell wrote:
| I had GitHub copilot for nearly two years. I built the entire
| first version of an application for almost a year in a
| language I didn't know (Python) using LLMs and prompting.
|
| Over the last year I've had to rewrite every single part of
| the application. My amount of checked-in code from an LLM
| has dropped to literally zero. I occasionally ask ChatGPT a
| question
| on my phone.
|
| I turned off copilot a few months ago and honestly it feels
| more like turning off push notifications, not like giving up
| a car.
|
| And even if it were comparable to an automobile, as an
| American who moved to Amsterdam, I can attest that the bike
| life is still much more enjoyable than the car life :)
| franktankbank wrote:
| > We paved the world and are better for it.
|
| Questionable.
| oneeyedpigeon wrote:
| I think there may be a good point here, but it's buried in a
| wall of poorly-written text and inappropriate references to
| your relationship problems.
| capital_guy wrote:
| Finding a way to complain about your wife in a comment about
| an IDE is peak hacker news.
| brookst wrote:
| Perhaps, but using a marriage as an analogy for one's
| preferred IDE doesn't seem like a huge stretch.
| tomw1808 wrote:
| Same here, I basically turned off all the auto-complete things
| everywhere in all the tools I am using, can't stand it. And
| just before reading your comment, I had a google doc I edited
| in the other tab and thought: how annoying these auto
| suggestions actually are. Not helping at all, instead a
| distraction
| (to me).
|
| For AI coding I'm using Aider as a docker container in the
| terminal in the IDE and I love it. I can write what I want how
| I feel the prompt to be necessary and then (and only then) it
| makes the changes or runs whatever I requested. The IDE runs
| uninterrupted and without any "smart suggestions". A tool for
| every job. Sometimes I do a lot in Aider, sometimes I don't
| open it at all, but it's all separated: what happens where,
| and when.
|
| But yeah, anyways, while not feeling as strongly as you
| (probably) do about auto-suggestions midway through my
| sentence, I at least feel they are more distraction than
| help to me.
| deagle50 wrote:
| Same. I also configured my editor not to show LSP
| diagnostics until I save. Something you can't do in Zed.
| beefnugs wrote:
| No one has even tried to do it properly: it would have to
| be constant, highly parallel (locally running, not
| pay-per-use) simulations going on in the background, with
| feedback from constantly changing user input and some kind
| of new reward detection about it converging on something
| worth suggesting.
|
| These loops and simulations would have to happen at
| multiple levels of abstraction all at the same time. Not
| even sure how that would work or coordinate properly, and
| thus: never gonna happen.
| dmix wrote:
| Cursor's predictions work for me the vast majority of the time
| (far more so than Copilot+VSCode). Might be
| language/framework dependent though.
| tolerance wrote:
| I see that the truth may be that there are too many men on
| earth who are deprived of cognitive fortitude, starved to think
| and willing to off-load thought to another...
|
| ...Wife...Machine...or what have you.
| Falimonda wrote:
| Go hug your wife
| linsomniac wrote:
| The wife finishing your sentences is an interesting analogy...
| My wife and I are usually on the same page about things, so for
| many topics we can use short-hand or otherwise cut discussions
| short. It's like in the movie _Hackers_: "It's in the place I
| put that thing that time." We can say just enough between us
| that we verify we have a shared state, and if we aren't sure we
| can verify and adjust.
|
| With an LLM, if what I'm starting to say gives it a direction
| on where I'm going, I'd like to see what it thinks, so if it's
| largely or entirely right I can just continue on.
|
| For example, I just asked ChatGPT o3-mini to complete the code
| "def download_uri_to_file(", and it came up with the entire
| function including type annotations, a very reasonable
| docstring, error handling, and streaming download. In fact,
| reviewing the code I'm sure it's better than I would have
| written on a first pass (I probably wouldn't have done the
| error handling or the streaming unless I knew up front that
| I was going to be downloading huge files).
| aidenn0 wrote:
| > The wife finishing your sentences is an interesting
| analogy... My wife and I are usually on the same page about
| things, so for many topics we can use short-hand or otherwise
| cut discussions short.
|
| My wife and I are just too different for this to happen. For
| the first 10 years or so we had the opposite happen a lot
| (multiple times a day for the first few years), where we
| thought we were on the same page, but had actually
| under-communicated. It still happens occasionally, but now
| we mostly overcommunicate about anything of any importance.
|
| Our kids learned pretty quickly that if one parent was
| helping them with their homework, but had to leave to do
| something else, that asking the other parent for help was
| going to confuse them more, since we come at any given
| problem from a completely different direction.
| qaq wrote:
| Yep, it should be configurable: let me type at the very
| least the function name before you start predicting.
| NoboruWataya wrote:
| I haven't used Zed much but RustRover seems to have recently
| switched to a more aggressive/ambitious autocomplete. IIRC tab
| used to just complete the current word, now it tries to complete
| the rest of the line. Only it usually gets it wrong. Enter now
| seems to do what tab used to do and it's been quite annoying
| having to unlearn tab completing everything.
|
| Maybe Zed's prediction is better (though to be honest I don't
| really care to find out). But I feel like autocomplete is
| something where usefulness drops off _very_ quickly as the amount
| of predicted text increases. The thing is, it really has to be
| 100% correct, because correcting your mostly-correct auto-
| generated code seems more tedious and frustrating to me than just
| typing the correct code in the first place.
| homebrewer wrote:
| It's a new local full-line completion model they've been
| enabling for one language after another in all their IDEs. I
| too agree that it's a waste of CPU cycles. The old completion
| mechanism (a bunch of hardcoded rules plus a bit of machine
| learning on the side) already was miles ahead of everything
| else, and with far fewer false-positives.
|
| You can disable it by going into 'File | Settings | Plugins'
| and turning off "Full Line Code Completion".
| paradite wrote:
| DeepSeek also has a FIM (Fill In the Middle) completion model via
| API, if anyone is interested to try out:
|
| https://api-docs.deepseek.com/guides/fim_completion
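For a concrete sense of the request shape, a FIM call sends the code before the cursor as `prompt` and the code after it as `suffix`, and the model generates what fits between. A minimal sketch of the payload (the model name is an assumption to check against the linked docs; nothing is sent over the network here):

```python
import json

def build_fim_request(prefix: str, suffix: str,
                      max_tokens: int = 128) -> dict:
    """Assemble a fill-in-the-middle completion request body."""
    return {
        "model": "deepseek-chat",   # assumed name; verify in the docs
        "prompt": prefix,           # text before the gap
        "suffix": suffix,           # text after the gap
        "max_tokens": max_tokens,
        "temperature": 0.0,         # deterministic output suits code
    }

payload = build_fim_request("def fib(n):\n    ", "\n    return b")
print(json.dumps(payload, indent=2))
```

The same body can then be POSTed to the endpoint in the linked guide with an API key.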
| sarosh wrote:
| Interesting that the underlying model, a LoRA fine-tune of
| Qwen2.5-Coder-32B, relies on synthetic data from Claude[1]:
| But we had a classic chicken-and-egg problem--we needed data to
| train the model, but we didn't have any real examples yet. So we
| started by having Claude generate about 50 synthetic examples
| that we added to our dataset. We then used that initial fine-tune
| to ship an early version of Zeta behind a feature flag and
| started collecting examples from our own team's usage.
| ... This approach let us quickly build up a solid
| dataset of around 400 high-quality examples, which improved the
| model a lot!
|
| I checked the training set, but couldn't quickly identify
| which examples were Claude-produced[2]. It would be
| interesting to see them distinguished.
|
| [1]: https://zed.dev/blog/edit-prediction [2]:
| https://huggingface.co/datasets/zed-industries/zeta
| hereonout2 wrote:
| Yes this is very interesting!
|
| The hardware, tooling, and time required to do a LoRA fine
| tune like this are extremely accessible.
|
| Financially this is also not a big expense; I assume it
| would have cost on the order of hundreds of dollars in GPU
| rentals, possibly less if you ignore experimentation time.
|
| So what is the barrier to entry here? The data? Well, they
| didn't have that either, so they automatically generated a
| dataset of just 500 examples to achieve the task.
|
| I'm sure they spent some time on that but again it doesn't
| sound an incredibly challenging task.
|
| It's worth realising, if you've not delved into fine-tuning
| LLMs before, that in terms of time, scale, and financial
| cost there is a world of difference between building a
| product like this and building a base model.
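The parameter arithmetic alone shows why LoRA is so cheap: a rank-r adapter replaces a full weight-matrix update with two thin matrices, adding only r * (d_in + d_out) trainable parameters per adapted matrix. A quick sketch (the dimensions below are illustrative, not Qwen2.5-Coder-32B's actual config):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters a rank-r LoRA adapter adds to one
    d_out x d_in weight matrix (two factors: r x d_in and d_out x r)."""
    return r * (d_in + d_out)

# Illustrative: a 5120-wide square projection adapted at rank 16.
full = 5120 * 5120                    # parameters in the frozen matrix
lora = lora_params(5120, 5120, 16)    # parameters actually trained
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

At these assumed dimensions the adapter trains roughly 1/160th of the matrix's parameters, which is why consumer or rented GPUs suffice.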
| lionkor wrote:
| > Edit prediction won't be free forever, but right now we're just
| excited to share and learn.
|
| I love Zed, and I'm happy to pay for AI stuff, but I won't be
| using this until they are done with their rug pull. Once I know
| how much it costs, I can decide if I want to try integrating it
| into my workflow. Only THEN will I want to try it, and would be
| interested in a limited free trial, even just 24 hours.
|
| Considering I've seen products like this range from free to
| hundreds of dollars per month, I'd rather not find out how good
| it is and then find out I can't afford it.
|
| Other than that for anyone wanting to try Zed:
|
| - You can only run one LSP per file type, so your Rust will work
| fine, your C++, too, your Angular will not.
|
| - Remote editing does not work on Windows (it's not
| implemented at all), so if you are on Windows, you cannot
| SSH into anything with
| the editor remote editing feature. This means you cannot use your
| PC as a thin client to the actual chunky big work machine like
| you can with vscode. I've seen a PR that adds windows ssh
| support, but it looked very stale.
| vasco wrote:
| With the speed that models are coming out at and the amount of
| VC subsidies in trials my approach is the opposite, I don't get
| too attached and keep trying different tools and models.
| winternewt wrote:
| And that's why most of these endeavors are doomed to fail.
| Every time they enshittify one service there's a new one with
| attractive UX that VC's essentially pay you to use instead.
| And so the previous investment would be lost if they couldn't
| dump it on naive stock traders by going public before the cat
| is out of the bag.
| jstummbillig wrote:
| When products evolve rapidly, pricing will too. Whatever Zed or
| any actor is doing now or in 6 months on either front says
| little about what will happen next.
|
| Just try what people offer right now, at a price point that you
| are okay with, right now. In a year, both product and price
| will probably be moot.
| mstade wrote:
| > - You can only run one LSP per file type, so your Rust will
| work fine, your C++, too, your Angular will not.
|
| As a web developer that's an immediate deal breaker. I use
| Sublime today and being able to run multiple LSP servers per
| file is a _huge_ boon, it turns a very capable text editor into
| a total powerhouse. The way it's set up in Sublime, with
| configuration options that can be applied very broadly or very
| specifically, while having defaults that just works is also
| just incredible.
|
| While I'm super pleased with Sublime and have been a happy
| paying customer for at least a decade (and at this rate may
| well be for another decade), I'm always keeping my ear to
| the ground for other editors, if nothing else just to stay
| current. Zed's been looking pretty cool, but things like
| this will keep me from even just trying it. There's years
| of muscle memory and momentum built up in my editor choice;
| I'm not switching on a whim.
|
| Thank you very much for sharing this nugget of gold!
| urschrei wrote:
| I'm not a regular Zed user, but this isn't true: I
| simultaneously ran the Ruff and Pyright LSPs when I used it
| last week.
| rootnod3 wrote:
| In the same file?
| tuzemec wrote:
| You can run multiple LSPs on the same file.
|
| In my currently opened project I have: vtsls (for
| typescript), biome, emmet, and the snippets LSP running
| on the same file.
|
| You can configure which LSPs you can run on a language
| basis. Globally and per project. You can also configure
| the format on save actions the same way. Globally and per
| project.
|
| I have astro project that on save runs biome for the
| front-matter part followed by prettier for the rest.
|
| I would say that's pretty flexible.
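For reference, the per-language setup described above lives in Zed's settings.json; the shape is roughly as follows. Server and formatter names here mirror the comment, but treat the exact option names as assumptions to check against Zed's configuration docs:

```json
{
  "languages": {
    "TypeScript": {
      "language_servers": ["vtsls", "biome", "emmet-language-server"],
      "format_on_save": "on",
      "formatter": { "language_server": { "name": "biome" } }
    }
  }
}
```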
| viraptor wrote:
| With this feature they're competing with features Cursor
| has had for quite a while now, so I would expect price
| competition there. That means close to $20/month. (Windsurf
| without that feature is $15.)
| matt-p wrote:
| I would quite happily pay 100 for Cursor, don't tell
| them.
| d_tr wrote:
| > Other than that for anyone wanting to try Zed:
|
| Also, no support for debugging yet.
| delduca wrote:
| You do not need debugging if you have AI.
| nurumaik wrote:
| You need more debugging if you have AI
| Nuzzerino wrote:
| I'm sure we'll see a microbubble in this space before
| long.
| rootnod3 wrote:
| Oh hell yeah, an AI driven debugger that hallucinates
| memory values and instruction pointer positions.
| d_tr wrote:
| Genuinely not sure whether you are joking or not. The thing
| is, I do not need a debugger very often, but when I need
| it, I need it.
|
| I also have no idea what kind of code people "write" that
| they can rely on A.I. so much. I have found these tools
| helpful for gathering info but not for much more, yet.
| SketchySeaBeast wrote:
| I don't even find it that useful for gathering info. If
| it's good at that then there's some documentation out
| there that's going to be faster to comb through. It's
| useful for generating boilerplate for well defined unit
| tests and the occasional tab-complete.
| crimsoneer wrote:
| Yeah, this is a moronic take. I'm all in on AI programming,
| but when you need debugging, you _really_ need it -
| sometimes the model will get utterly fixated on a solution
| that is just wrong, and you just need to follow the stack
| trace.
| onionisafruit wrote:
| I keep getting interested in Zed then rediscovering how much
| I like the easy debugging in jetbrains IDEs. Last time I
| checked Zed had a PR in progress. Maybe next time I think
| about it, Zed will be ready for me.
| trcarney wrote:
| I asked them about this on X and they are working on one. I
| use Zed for everything now but must keep VS Code around just
| for the debugger. I can't wait to delete it.
| jeremy_k wrote:
| Debugger PR is here if you want to check it out:
| https://github.com/zed-industries/zed/pull/13433
| atdt wrote:
| What is it about Zed that you find superior to VS Code?
| trcarney wrote:
| I downloaded Zed when it took way too long for VS Code to
| load the monorepo at work. It took almost as long for me
| to download and install Zed and then open the monorepo as
| it did for VS Code to load the monorepo. I think that was
| a fluke with VS Code, as it didn't normally take this long,
| but it did happen often enough to be annoying.
|
| I also find Zed to be snappier than VS Code. It's hard to
| quantify but it just feels better to use Zed.
|
| For reference, I mainly work on a Node/Typescript
| monorepo that is made up of a bunch of serverless
| services and is deployed with SST v2
| diodak wrote:
| Hey, my name is Piotr and I work on language servers at Zed.
|
| Right now you can run multiple language servers in a single
| project. Admittedly you cannot have multiple instances of a
| single language server in a single worktree (e.g. two rust-
| analyzers) - I am working on that right now, as this is a
| common pain point for users with monorepos.
|
| I would love to hear more about the problems you are having
| with running language servers in your projects. Is there any
| chance for us to speak on our community Discord or via an
| onboarding call (which you can book via
| https://dub.sh/zed-c-onboarding)?
| Nuzzerino wrote:
| I'm curious if you've given thought to improving json-schema
| support. Zed just packages VSCode's implementation
| (https://github.com/zed-industries/json-language-server ),
| which is generally decent, but hasn't been able to keep up
| with the spec, and I doubt they ever will at this point
| (Example: https://github.com/microsoft/vscode/issues/165219).
|
| The newer specs for json-schema (not supported by VSCode)
| allow for a wider range of data requirements to be supported
| by a schema without that schema resorting to costly
| workarounds. VSCode's level of support for this is decent,
| but is still a pain point as it creates a sort of artificial
| restriction on the layout of your data that you're able to
| have without unexpected development costs. This of course can
| lead to missed estimates and reduced morale.
|
| I understand that very few developers are directly producing
| and maintaining schemas. Those schemas do have an impact on
| most developers though. I think this is a problem that is
| being sadly overlooked, and I hope you can consider changing
| the status quo.
|
| Love the company name btw, sounds similar to my own Nuzz
| Industries (not a real company, just a tag I've slapped onto
| some projects occasionally as a homage to Page Industries
| from Deus Ex).
| rbetts wrote:
| I've been using Zed (with Python) for the last few weeks
| (coming from VS Code and Neovim). There's a lot I like about
| Zed. My favorites include the speed and navigation via the
| symbol outline (and vim mode). I'd have a hard time going
| back to vscode. The LSP configuration, though, is not one of
| its best parts, for me. I ended up copy/pasting a few
| different ruff + pyright configs until one mostly worked and
| puzzled through how to map the settings from the linked
| pyright docs into Zed's settings. Some better documentation
| for the configuration stanzas and how they map across the
| different tools' settings would be really helpful.
|
| I still, for example, can't get Zed / LSP to provide
| auto-fix suggestions for missing imports. (Which seems like
| a common stumbling block:
| https://github.com/zed-industries/zed/discussions/13522,
| https://github.com/zed-extensions/java/issues/20,
| https://github.com/zed-industries/zed/discussions/13281)
|
| I'm sure the breadth of LSPs, each with its own config and
| its own project config files, makes it hard to document
| clearly. But it's an area that I hope bubbles up the
| roadmap in due course.
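A working ruff + pyright combination ends up looking something like this in settings.json. This is a sketch of the shape, not rbetts's exact config, and the option names should be verified against Zed's docs:

```json
{
  "languages": {
    "Python": {
      "language_servers": ["pyright", "ruff"],
      "format_on_save": "on",
      "formatter": { "language_server": { "name": "ruff" } }
    }
  }
}
```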
| lionkor wrote:
| Hi, thank you. I specifically meant running multiple LSPs in
| the same file at the same time, akin to vscode.
| ubercore wrote:
| You can definitely run more than one LSP with zed -- can you
| elaborate on the angular case that gives you trouble?
| rootnod3 wrote:
| I think the point is: "per file". Sure, you can run a Rust
| LSP in one file and a JS LSP in another, but you can't run
| both on the same file.
| frizkie wrote:
| I run multiple LSPs on Ruby files with no issues. ruby-lsp
| and StandardRB are both Ruby specific.
| maxbrunsfeld wrote:
| You can. This is fully supported.
| lionkor wrote:
| Sorry, in the same file as the sibling said. In some other
| editors, you can run multiple LSPs on the same exact file at
| the same exact time.
|
| A use case is Angular or other more specialized frameworks,
| that are so abstracted away and dumbed down that each layer
| of abstraction basically has an LSP.
| ubercore wrote:
| Is that the case though?
| https://zed.dev/docs/configuring-languages#choosing-language...
| rafaelmn wrote:
| >Remote editing does not work on Windows (its not implemented
| at all), so if you are on windows, you cannot ssh into anything
| with the editor remote editing feature. This means you cannot
| use your PC as a thin client to the actual chunky big work
| machine like you can with vscode.
|
| Does this work on anything other than VSCode? I have been
| trying to use JetBrains stuff for this but it has been bad for
| years with little improvement. Honestly JetBrains feels like
| they are falling behind further and further in terms of
| adapting to providing a modern workflow - bad remote work, bad
| gen AI integration. I'm using VS code even where I wouldn't
| consider it before because of this, and I would like to see
| what the alternatives have to offer because VSCode is not
| perfect either.
| lionkor wrote:
| Works on JetBrains, vscode, any terminal editor (neovim, vim,
| nano, etc.). Is it any good? It's fine on JetBrains, great on
| vscode, the rest is more or less great. Zed does not have it.
| You want to edit a remote file? You download it, edit it,
| upload it. That's much worse than a half-baked
| implementation.
| spmurrayzzz wrote:
| For clarity, this does work with the macOS version of Zed.
| I use it frequently to work on my GPU nodes. The one
| tradeoff that is a bit of a smell for me is that the
| preview version of the feature requires you talk to a
| centralized broker on Zed's servers rather than fully p2p
| between your local IDE and your own server.
|
| This is supposedly temporary though IIRC (may even be
| changed already in a dev branch, not sure).
| pjmlp wrote:
| As someone that is old enough to have used UNIX development
| servers for the whole company, with PC thin clients, reading
| about remote development as modern workflow is kind of
| hilarious.
| scottlamb wrote:
| The remote development feature implemented in VS Code--and
| I believe also in beta in Zed--is a million times better
| than what you're used to. The UI is local, the storage and
| computation (including the language server) are remote.
| This takes away the lag when connected to a far-away server
| while still allowing things like platform-dependent
| compilation to work correctly and efficiently.
| pjmlp wrote:
| For the last 35 years, X Windows, Citrix, and RDP have
| done the job just fine, as far as I am concerned.
|
| No, they aren't anything better than what I am used to.
|
| Also, as a compromise, a cloud shell, browser-based IDE
| setup, and dev containers also do the job as far as cloud
| deployments are concerned, which should be driven from
| CI/CD and shouldn't have shells on containers anyway.
| scottlamb wrote:
| Good for you, I guess? For me, regressing to having a
| round trip between a keypress and its result would be
| completely unacceptable. The speed of light in a vacuum
| doesn't change; paths are not getting significantly more
| direct; the improved index of refraction from switching
| to hollow-core fiber or low-orbit satellites could in
| theory help but is a one-time, limited improvement that
| has yet to be delivered to my fingertips. Having the
| network boundary in the correct place to account for the
| fundamental physics of the situation is the only real
| answer.
| makapuf wrote:
| WinSCP's open file + copy-back-on-write detection was very
| useful for local-speed remote editing 10 years ago. Only
| issue now would be no LSP.
| IshKebab wrote:
| No they haven't done the job just fine. Remote X has
| _always_ been a pain to set up, and slow. NX was much
| better but not free.
|
| Remote VSCode is far better than any of those options. If
| you don't want to try it that's fine, but don't pretend
| you know better.
| gamedever wrote:
| sounds like an old stubborn person's comment. "We had fax
| in the 70s, why do we need anything more, now get off my
| lawn!"
|
| VSCode's remote service is far beyond your old remote
| experience, an experience I share.
| rafaelmn wrote:
| Except doing it over LAN vs the Internet is a very
| different thing: editing over SSH with >100 ms ping is
| annoying as hell, especially if you have packet drops (like
| on a mobile connection). Using a thin-client editor with a
| remote server is a much smoother experience.
| MobiusHorizons wrote:
| It's not exactly the same paradigm as remote editing, but
| neovim in tmux accessed over mosh is my preferred way of
| accomplishing the same task. I have also gotten a neovim gui
| to connect with a neovim instance over ssh, which worked
| pretty well until the ssh connection broke. But I prefer my
| editor in a terminal rather than terminals in my editor, so I
| switched back to my tmux based workflow.
| kristofferR wrote:
| It's like the recent Samsung phones, where functionality
| fees are waived until 2026. No word about the price yet. [1]
|
| Samsung should be avoided like the plague anyway, I've never
| seen such a malicious and hostile company! On Dec 13 they
| silently announced that they were gonna break Samsung APIs on
| Dec 30 [2]. Yeah, they gave devs the "You gotta spend your
| holidays fixing our mess, otherwise your app will break".
| Due to that, Samsung is still broken in Home Assistant and
| other API integrations. [3]
|
| [1] https://youtu.be/a4NJNdHqs_I?t=418
|
| [2] https://community.smartthings.com/t/changes-to-personal-
| acce...
|
| [3] https://github.com/home-assistant/core/issues/133623
| Larrikin wrote:
| As an Android user and developer, it's always annoying
| reading reviews where Samsung gets 9.5 out of ten and they
| give a terrible review to the competition because they have a
| slightly better camera. They give you a terrible Touch Wiz
| UI/whatever crap interface they switched to, change all the
| fonts, move around menu items and buttons, slightly change
| all the stock apps to be worse, push their garbage store,
| etc. Samsung Galaxy 1 was legitimately a good phone, but all
| the modern reviews just feel like the author grew up with
| all the garbage that Samsung brings and thinks that it is
| actually a good experience.
| JoshTriplett wrote:
| This is exactly my concern with Samsung's upcoming trifold
| phone: I'm excited about the idea of a trifold phone, but I
| definitely don't want to use a phone that has anything
| other than the stock Android experience.
| ewoodrich wrote:
| I absolutely prefer modern OneUI on Samsung phones to the
| Pixel variant or stock AOSP. The Galaxy store is only used
| for updating Samsung native apps and isn't "pushed" at all
| in my experience. I don't use their native apps for the
| most part, and them existing isn't a problem for me; Google
| apps are preinstalled and work as expected once set as the
| default.
| KPGv2 wrote:
| > Samsung should be avoided like the plague anyway
|
| I've avoided them for years. I had a Samsung phone a long
| time ago, and I'd rooted it to run one of those apps that
| could automate tasks (Tasker?), with a killer feature being
| when I turn my phone upside down, it goes on silent mode.
| Standard now, but back then wasn't possible on Android, and
| Tasker enabled it. And also some geofencing stuff. If I got a
| text while going faster than 10mph it would respond back
| "driving right now, will respond later."
|
| Anyway, Samsung released an upgrade that I'd heard would
| eradicate root and make it impossible going forward.
| Something to do with Knox, a corporate way of locking down
| phones for employees.
|
| I repeatedly declined to upgrade.
|
| Finally, one night, with my phone _in another room_, it
| force-installed the update, with Knox, on my phone, wiping
| out my root, making it impossible going forward, and making
| Tasker worthless for me.
|
| I've never given Samsung another cent. No company that will
| disobey me re my own property and will remotely hack my
| device and wipe out my content can be trusted, and that's
| essentially what they did.
|
| For similar reasons, I've never given Sony any money since
| the rootkit scandal. 2025 marks twenty years of no Sony. I've
| probably unknowingly seen a few Sony films, but that's it. No
| electronics, no games, etc.
| underdeserver wrote:
| Eh, I find that to be a stubborn attitude for no benefit. It
| doesn't really cost you anything to try, and if it's too
| expensive, how are you worse off?
| SkiFire13 wrote:
| Even if you know how much it costs, how can you be sure they
| won't increase the price later on?
| autobodie wrote:
| Even if I know I am alive, how can I be sure I won't die
| later on?
| brookst wrote:
| Is there anything worthwhile that costs the same now that it
| did 50 years ago?
| dymk wrote:
| Why is 50 years your timeframe? I'd be more curious about
| cost increases a year, two years down the line.
| choilive wrote:
| I don't think this is a concern. Zed and Zeta are both open
| source. Fork it, self host it, whatever.
| spmurrayzzz wrote:
| Absolutely. This is already what I've been doing myself.
| Forked zed so I could use my local rig (4x 3090) to do FIM
| completions using a finetuned qwen coder 32B.
|
| Only barrier for some folks will be if they're not familiar
| with rust or local LLMs in general, but it really wasn't that
| difficult looking back on it. Amounted to about an
| afternoon's worth of work.
| tuananh wrote:
| can you describe the process? does zed support a custom
| endpoint for tab edit already?
| spmurrayzzz wrote:
| At present it's not possible just via configuration, but
| you can configure a custom endpoint for both the
| assistant and inline assistant in settings.json.
|
| To get custom tab completions working you need to mimic
| one of the completion provider apis (like copilot) [1]
| and direct the requests to your own endpoint. In my case,
| I am running llama.cpp and MITM-proxying to its newer
| /infill endpoint [2].
|
| That's why I mention the rust piece may be a blocker for
| some, you do have to hack apart the src a bit to get
| things working. At the time I started this, the only two
| completion providers available were supermaven [3] and
| copilot. You could mimic either.
|
| [1] https://github.com/zed-
| industries/zed/tree/be830742439f531e8...
|
| [2] https://github.com/ggerganov/llama.cpp/blob/master/ex
| amples/...
|
| [3] https://github.com/zed-
| industries/zed/tree/be830742439f531e8...
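A minimal sketch of the translation step described above: the MITM proxy mainly has to map the editor's FIM (fill-in-the-middle) request onto llama.cpp's /infill schema. The editor-side field names here ("prefix"/"suffix") are assumptions for illustration; "input_prefix", "input_suffix", and "n_predict" are the llama.cpp server's actual /infill fields.

```python
def to_infill_payload(fim_request: dict, n_predict: int = 64) -> dict:
    """Translate a generic FIM completion request into a llama.cpp
    /infill request body. A proxy would POST this JSON to the
    llama.cpp server and forward the returned "content" back to the
    editor in whatever shape its completion provider expects.

    Note: the "prefix"/"suffix" keys on the input side are a
    hypothetical editor-request shape, not Zed's actual API."""
    return {
        "input_prefix": fim_request["prefix"],  # text before the cursor
        "input_suffix": fim_request["suffix"],  # text after the cursor
        "n_predict": n_predict,                 # cap on generated tokens
        "temperature": 0.1,                     # near-greedy sampling for completions
        "stream": False,
    }


if __name__ == "__main__":
    req = {"prefix": "def add(a, b):\n    return ", "suffix": "\n"}
    print(to_infill_payload(req))
```

The rest of the work (mimicking the copilot/supermaven provider interface inside the Zed source) is the Rust piece mentioned above.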
| tuananh wrote:
| that's cool. thanks. any chance you could open the
| changes you made?
| maxbrunsfeld wrote:
| Just to clarify, you can run as many LSPs in a given file type
| as you want.
|
| Common features like completions, diagnostics, and auto-
| formatting will multiplex to all of the LSPs.
|
| Admittedly, there are certain features that currently only use
| one LSP: inlay hints and document highlights are examples. For
| which LSP features is multi-server support important to you? It
| shouldn't be too hard to generalize.
| jermberj wrote:
| That's ... uh ... not what a rug pull is. They're telling you
| plainly from the jump that they're going to eventually charge
| for it. Point taken on your wish to wait, that makes perfect
| sense.
| santoshalper wrote:
| They're just telling you it's not going to be $0.00. It could
| be $5/year, $20/mo or anything else. It's the gentleman's
| rug-pull.
| I_complete_me wrote:
| Since when did gentlemen pull rugs? It seems antithetical
| to the behaviour of what I understand by 'gentleman'.
| lionkor wrote:
| I would love to edit my comment to instead say "having the
| rug pulled from under my feet", which is the feeling I was
| expressing.
| thejazzman wrote:
| don't those mean the same thing?
|
| (not arguing)
|
| recently saw an old alfred hitchcock presents where the
| character does the cheesiest most absurd rug pull ... and
| the person was boom dead. i assumed that was the origin of
| the term
| bastardoperator wrote:
| What kind of mentality is this? I remember before Snapple was
| a widely known name, they used to give out free bottles all over
| Los Angeles. I never thought to myself, I better not taste this
| free drink until I know the actual price.
| santoshalper wrote:
| Totally different. You aren't adjusting your development
| workflow based on a soft drink. If it weren't a rug-pull, why
| wouldn't they charge for it immediately, or at least tell us
| the price now?
|
| They are specifically hoping you'll become dependent on it
| and then feel compelled to pay. This shit works because
| people like you believe it doesn't.
| pixelready wrote:
| I can't speak to the Zed team's motivations for this, but
| unless it's a big corp pulling a move like this it's
| usually not that nefarious. Having been in these kinds of
| product conversations it's more like, "that thing we
| prototyped? I we think it's stable and usable enough for
| beta. We're not sure how to price it yet, but let's give it
| to our customers to play around with and give us some
| feedback while we figure out what to tweak and how much
| it's worth".
| bastardoperator wrote:
| You're going to be adjusting regardless with new tooling,
| so the point is irrelevant. It's not a rug pull because
| they're telling you it's not free. So what if it works? I
| pay for useful services, and you do too. My point is that not
| exploring or being curious just because a company might ask
| you to pay for a product seems outlandish, because that's
| what companies do: they charge people for goods and services.
| dmurray wrote:
| > If it weren't a rug-pull, why wouldn't they charge for it
| immediately, or at least tell us the price now?
|
| Because they haven't decided the price yet, or because they
| don't think the feature is mature enough to justify
| charging for?
|
| Though even if they really have the motivations you
| describe, that's fine too. There's a chance this is such a
| valuable feature you could feel compelled to pay any price
| for it, but you get to try it for free? That's purely to
| your benefit: it's not heroin and you won't really lose by
| trying it and then it being taken away.
| jeremyjh wrote:
| They may not know enough about the cost structure of this
| feature to price it fairly; I could easily imagine this is
| heavily dependent on usage patterns and they'll need a few
| thousand people using it regularly before they'll know
| that.
| 725686 wrote:
| So if they let you try a Ferrari for free, you wouldn't because
| you might like it but you can't afford it? I would, even if I
| know I could never buy one.
| lionkor wrote:
| It's not like that; that comparison feels out of proportion.
| It's like you get to use a new phone for a week that makes
| your life much easier, and maybe has features you need. Then,
| it suddenly costs a little bit more than you can afford.
| That's the issue
| tekacs wrote:
| From their overall FAQ:
|
| > Q: Will Zed be free?
|
| > A: Yes. Zed will be free to use as a standalone editor. We
| will instead charge a subscription for optional features
| targeting teams and collaboration. See "how will you make
| money?".
|
| > Q: How will you make money?
|
| > A: We envision Zed as a free-to-use editor, supplemented by
| subscription-based, optional network features, such as:
| - Channels and calls
| - Chat
| - Channel notes
| We plan to offer our collaboration features to open source
| teams, free of charge.
|
| It seems to me that they're just going to charge for Zeta if
| they do, because it... costs them money to run.
|
| Unlike others (e.g. Cursor), they've opened it (and its fine-
| tuning dataset!), so you can just run it yourself if you want
| to bear the costs...
|
| They did something similar with LLM use, where for simplicity
| they gave you LLM use, but you could use them directly too.
| tuananh wrote:
| For Cursor, if you use an OpenAI API key for example, it's
| kind of crippled because the tab edit model is also
| proprietary.
| clint wrote:
| Not a rug pull. Just use it. If you like it but the price is
| too high, don't pay it. What is the problem? Are you afraid
| that you'll like the feature so much that you'll pay whatever
| the cost?
| 1f60c wrote:
| I wonder what this means for _Support using ollama as an
| inline_completion_provider_ https://github.com/zed-
| industries/zed/issues/15968. ':]
|
| I hadn't heard of Baseten before (it seems to be in a hot niche
| along with Together.ai, OpenRouter, etc.) but I'm glad I did
| because I was actually noodling on something similar and now I
| don't have to do that anymore (though it did teach me a lot about
| Fly.io!). Yay economies of scale!
| greener_grass wrote:
| When developing something I tend to have lots of programs open:
|
| - The editor
|
| - Several terminal windows
|
| - Some docs
|
| - GitHub PRs
|
| - AWS console
|
| - Admin tools like PgAdmin
|
| - Teams, Slack, etc.
|
| When screen-sharing with Zed, do I only get to share the editor?
|
| Because (clunky as they are) video call apps let me share
| everything and this is table-stakes for collaboration.
| tombh wrote:
| I really want to like Zed, and their AI may actually be useful.
| But when I hear things like "new open model" I can only associate
| it with hype, which is more often about pleasing investors, not
| end users.
| fau wrote:
| It's hard to care about AI features when a year later I still
| can't even get acceptable font rendering: https://github.com/zed-
| industries/zed/issues/7992
| coder543 wrote:
| It doesn't seem like the issue has been entirely ignored:
| https://github.com/zed-industries/zed/issues/7992#issuecomme...
|
| Out of the 55,000 people who have starred the repo (and
| countless others who have downloaded Zed without starring the
| repo), only 184 people have upvoted that issue. In any project,
| issues have to be triaged. If someone contributed a fix, the
| Zed team would likely be interested in merging that... the
| current attempt does not seem to have fixed it to the
| satisfaction of the commenters. To put priorities into
| perspective, issue 7992 appears to be in about 20th place on
| the list of most-upvoted open issues on the tracker.
| hu3 wrote:
| If font rendering on a text editor is not a priority I wonder
| what is. It seems to be AI.
| coder543 wrote:
| ¯\_(ツ)_/¯ You can also sort the issues and see for
| yourself what the community thinks should be a priority:
| https://github.com/zed-
| industries/zed/issues?q=is%3Aissue%20...
|
| I think the takeaway here is not that everyone related to
| Zed thinks AI should be prioritized over essential
| features, but that _either_ most developers don't care
| that much about font rendering or (more likely) most
| developers have high DPI monitors these days, so this
| particular bug is just a non-issue for most developers...
| or else more developers would have upvoted this issue.
|
| I have one low-DPI monitor at home, so I am curious to see
| this issue for myself. If it looks bad when I get back from
| vacation in a little over a week, maybe I'll add a thumbs-
| up to that issue, but low-DPI font rendering isn't the
| reason I haven't switched to Zed. I haven't switched to Zed
| because of the reasons mentioned here:
| https://news.ycombinator.com/item?id=42818890
|
| If those issues were resolved, I would probably just use
| Zed on high DPI monitors.
|
| So, yes, for me, certain missing "AI"-related features are
| currently blocking me from switching to Zed. On the other
| hand, the community is upvoting plenty of non-AI things
| more than this particular font rendering bug.
| Unsurprisingly, different people have different priorities.
| deagle50 wrote:
| And hiding the mouse cursor so you can actually see what
| you're editing.
| as-cii wrote:
| Hey fau, Zed founder here.
|
| Apologies if that issue has taken a while to fix: next week is
| "quality week" at Zed and I am personally going to take a look
| at it again.
|
| Thanks for the feedback!
| mikebelanger wrote:
| I've been using Zed for a few months now. One thing I really like
| about Zed is its relatively discrete advertising of new features,
| like this edit prediction one. It's just a banner shown in the
| upper-left, and it doesn't block me from doing other stuff, or
| force me to click "Got it" before using the application more.
|
| This definitely counters the trend of putting speech
| balloons/modals/other nonsense that force a user to confirm a new
| feature. Good job, Zed team!
| barrell wrote:
| I read this wrong initially -- I thought you said one thing you
| __dislike__ about Zed.
|
| I read the whole thing thinking, __oh my god they do exist__
| walthamstow wrote:
| Of all the AI aids, autocomplete is my least favourite, at
| least in my experience with Cursor.
|
| It takes me longer to review the autocomplete (I ain't yoloing it
| in) than it would have done to type the damn thing out. Loving
| Cursor's cmd+k workflow though, very productive.
| hnfong wrote:
| It really depends on what you're using the code for. AI
| generated code is great for quick and dirty stuff you're going
| to throw away.
|
| For example yesterday I wanted to read a small csv and dump the
| contents of the second column of each row into a separate text
| file. I took a quick glance at the AI generated python code and
| just ran it.
|
| Saved me maybe 10 minutes of typing the code myself. Small and
| trivial win for AI, but it's still useful.
|
| I do agree for production code it might turn out to be a net
| negative since AI is pretty good at producing code that looks
| fine but has subtle problems.
| dankobgd wrote:
| ai is boooring, they should fix the core features before they add
| useless ai
| salviati wrote:
| A majority of people think it's not useless.
|
| What core features do you believe need fixing?
| gonational wrote:
| Zed is putting so much focus into AI that their editor is falling
| apart:
|
| https://news.ycombinator.com/item?id=43041923
| rw_panic0_0 wrote:
| nice to see they open sourced the model, and it seems relatively
| small so you can run it locally! Also, please change the video
| preview: it's hard to see the feature itself, and it's not
| really obvious what is shown.
| dakiol wrote:
| Am I the only one who prefers stability instead of a constant
| rush of features in their text editors/IDEs? If it's AI-related I
| like them even less. I know I can stick forever with Vim, but
| damn, I tried Zed and it felt good.
| Arch485 wrote:
| Zed is amazing, and I definitely recommend it. That said, I
| will not be using their AI features, and if the editor turns
| into a slow, bloated monster because of them (like Visual
| Studio and anything made by JetBrains) I will have to ditch it.
| awfulneutral wrote:
| This just seems to be the way for code editors. We just have
| to switch every few years to the next one.
| jswny wrote:
| I agree. I actually use the AI features in Zed a lot, but there
| are things I really wish they would prioritize.
|
| For example this issue that's been open for about a year:
| https://github.com/zed-industries/zed/issues/10122
|
| Editing large files is an incredibly common use case for an
| editor.
| dgacmu wrote:
| As a slight tangent, this prompted me to wonder about one of the
| things I _haven't_ enjoyed in my last two weeks of experimenting
| with zed: It tries to autocomplete comments for me. Hands off -
| that's where I think!
|
| Fortunately, zed somewhat recently added options to disable
| these in settings.json:
|
|     "edit_predictions_disabled_in": ["comment"],
|     "inline_completions_disabled_in": ["comment"]
|
| My life with zed just got a little better. If I switch back to
| vscode I'll have to figure out the same setting there. :-)
| fredoliveira wrote:
| FYI, it looks like inline_completions_disabled_in is no longer
| a thing :-)
| dgacmu wrote:
| Ahh, I see - it looks like in the newer version it is being
| replaced by just edit_predictions_disabled_in.
|
| Thanks!
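Following the exchange above, if `inline_completions_disabled_in` is gone in newer builds, the remaining setting would sit in settings.json roughly like this (a sketch inferred from the comments here; check Zed's settings documentation for the current schema):

```json
{
  "edit_predictions_disabled_in": ["comment"]
}
```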
| billwear wrote:
| hmm. company mode has been doing part of this job for a long time
| now.
| idnty wrote:
| I like Zed as an editor, and I like how they've integrated
| LLMs and support a variety of providers.
|
| But it's mind-boggling that they still don't have a basic file
| diff tool. Just why?
| flkiwi wrote:
| Because they've been too busy chasing the AI fad.
| rs186 wrote:
| As usual, AI has a higher priority than working on very basic
| stuff like creating a Windows build.
|
| If you guys want to compete with VSCode, think again.
| bobuk wrote:
| I agree with your irritation, perhaps this link will help
| https://github.com/deevus/zed-windows-builds
| markus_zhang wrote:
| I think the modern Intellisense has the right amount of
| prediction - offloads enough brain activity without completely
| relying on something else.
|
| AI prediction feels way too much and way too eager to give me
| something. I don't know about you guys, but programming is an
| exercise for me, not just to make it work and call it a day.
|
| However, AI would be useful if it can offer program structural
| and pattern recommendations. One big problem I now face, and I
| believe all hobbyists face too, is that when the program grows
| larger, it is becoming increasingly difficult to make it well
| structured and easy to expand -- on the other hand, pre-mature
| architecturing is also an issue. Reading other people's source is
| not particularly useful, because 1) You don't know whether it is
| suitable or even well written, and 2) Usually it is too tough to
| read other people's source code.
| bennine wrote:
| > but programming is an exercise for me, not just to make it
| work and call it a day.
|
| The problem is that mid, upper management and execs don't much
| care for how we feel about it.
|
| They are literally measuring who is using AI and how much and
| will eventually make it into an excuse for poor performance.
| markus_zhang wrote:
| Yeah I don't really mind using AI coding in work because it's
| boring as hell. And getting things done quicker is almost a
| virtue in the business world.
|
| I should have clarified that my original comment is about
| side projects or serious software engineering.
| minzi wrote:
| Still no debugger. I know there is a branch open, but it's
| surprising to me that there isn't a more concentrated effort on
| getting that over the line. Major props to the folks working on
| it. I just wish they had more resources and help getting it done.
| Verlyn139 wrote:
| that website is one of the most unresponsive ones I've seen in a while
| deagle50 wrote:
| Does the mouse cursor still not hide while typing in Zed?
| flkiwi wrote:
| I'm not a developer, but I use Zed for a lot of things that would
| be ripe for "AI" application in the current bubble. I, however,
| have exactly zero use for AI in those cases, and will reconsider
| any application that pushes AI. It's both that I do not want to
| use AI features but also (a) I am prohibited from doing so and
| (b) the focus on deploying AI solutions raises serious concerns
| about a product's focus on and support of their core features.
| All of which is to say that Zed's AI features would be more
| valuable to me, and would drive quite a lot of goodwill, if they
| were an entirely removable module. No upsell notices, no
| suggested uses, just a complete absence of the functionality at
| the user's choice (like, say, an LSP).
| choilive wrote:
| You can turn all of the AI features off via the settings.
| cameroncooper wrote:
| If the model is open source, I'm hoping for an option to be able
| to run this feature locally for free. They seem to have support
| for running other models locally (e.g. deepseek-r1 through
| Ollama), so I'm hoping they will keep that up with edit
| prediction.
| barnabee wrote:
| Yeah, it's a deal-breaker if I can't run the model locally
| and/or have to sign up for their account.
| daft_pink wrote:
| Is it on-device? GitHub's code completion works so well.
| ekvintroj wrote:
| What a scam is this AI stuff.
| gyre007 wrote:
| Please add vim leader support to vim mode! :)
| fultonb wrote:
| I got beta access to this and love it. It is much more useful
| than copilot by itself and very useful for an edit that is a
| little repetitive. I wish I could run the model locally, and
| given that it is open source and they have support for Ollama
| and other
| OSS tools, I feel like that would be an amazing feature.
| lubitelpospat wrote:
| Dear Zed devs - please, fix the bug with the "Rename symbol"
| functionality! Refactoring is an important feature that many of
| your users need to have to start using Zed as their main daily
| driver. Otherwise - great IDE! Please, help me forever forget the
| VSCode nightmare!
| Alifatisk wrote:
| Are there any plans for Zed to add basic functionality like a task
| runner? A button to run / debug code? Only having autocomplete
| for Java code gives the impression that Zed is only a text editor
| and not an IDE.
| jswny wrote:
| Zed already has built-in task running... you can use the same
| thing to call anything you want, like a command to run your
| project. You can even add custom keybindings to them.
|
| https://zed.dev/docs/tasks
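As a concrete example of the linked tasks feature, a minimal entry in a project's .zed/tasks.json looks roughly like this (field names here are from memory of those docs and should be verified there; the label and command are illustrative):

```json
[
  {
    "label": "Run tests",
    "command": "cargo test"
  }
]
```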
| samcat116 wrote:
| > Only having autocomplete for Java code gives the impression
| that Zed is only a text editor and not an IDE
|
| Where did you get the impression they only have support for
| autocomplete for Java? AFAIK they support any LSP and this new
| feature is language independent.
| zeta0134 wrote:
| I have mixed feelings about the name of this model. :P Though I
| suppose it's my own fault, naming myself after a letter.
|
| I'm admittedly a bit surprised that there's a free/paid scheme
| for a 7b model though, as those are small enough to run locally.
| I suppose revenue streams are enticing and I can't fault the
| company for wanting to make money, but I'm also 100% against
| remote models for privacy reasons, making this a bit of a
| nonstarter for me. Depending on how heavily integrated this is,
| the mere presence of a remote-first prediction engine sorta turns
| me off the idea of the editor as a whole. If there were the
| option to run the model 100% local (sans internet) then I'd be
| more interested.
| vednig wrote:
| Why can't this be a script, as in the old days? It looks like
| overkill if you compare the change against the compute/effort.
| It would be nice to see it evolve, though.
| vijaybritto wrote:
| There begins the downfall. Any great product that jumps on a
| hype train always ends up crashing.
| coder543 wrote:
| If anyone is interested, the release of Zeta inspired me to write
| up a blog post this afternoon about LLM tab completions past and
| future: https://news.ycombinator.com/item?id=43053094
___________________________________________________________________
(page generated 2025-02-14 23:01 UTC)