[HN Gopher] Rust compiler performance
___________________________________________________________________
Rust compiler performance
Author : mellosouls
Score : 142 points
Date : 2025-06-10 08:24 UTC (2 days ago)
(HTM) web link (kobzol.github.io)
(TXT) w3m dump (kobzol.github.io)
| dboreham wrote:
| I'd vote for filesystem space utilization to be worked on before
| performance.
| stevedonovan wrote:
| Is this not more a Cargo thing? Cargo is obsessed with correct
| builds and eventually the file system fills up with old
| artifacts.
|
| (I know, I have to declare Cargo bankruptcy every few weeks and
| do a full clean & rebuild)
| vlovich123 wrote:
| Correct builds != never running garbage cleanup. I would
| settle for it evicting older variants of a build (I also
| dislike the random hash, which makes it impossible to tell
| what specifically differs between two hashes or which one is
| newer).
| mbrubeck wrote:
| Automatic garbage collection of old build artifacts* is
| coming in Rust 1.88 (currently on the beta channel, will
| become the new stable release in two weeks):
|
| https://github.com/rust-lang/cargo/issues/12633
|
| *EDIT: For now this only deletes old cached downloads, not
| build artifacts. Thanks epage for the correction below.
| vlovich123 wrote:
| That's behind an additional flag, and not the default?
| mbrubeck wrote:
| In versions earlier than 1.88, garbage collection
| required the unstable -Zgc flag (and a nightly
| toolchain). But in 1.88 and later, automatic garbage
| collection is enabled by default.
| epage wrote:
| That is not for build artifacts, but for global caches like
| `.crate` files; it's a stepping stone, though.
| mbrubeck wrote:
| Oops, thanks for the correction.
| epage wrote:
| A little of both. Incremental compilation cache is likely the
| single largest item in the target directory but it gets
| cleaned up on each invocation so it doesn't scale in size
| with time.
|
| I believe the next release will have a cache GC but only for
| global caches (e.g. `.crate` files). I'd like us to at least
| clean up the layout of the target directory so it's easier to
| track stuff before GCing it. Work is underway for this. A
| cheap GC we could add earlier is for artifacts specific to
| older cargo versions.
| dralley wrote:
| The problems are largely related. Cut the volume of
| intermediate compilation artifacts in half and you'll have sped
| up the compiler substantially. Monomorphization, iterator
| expansion and the like are significant contributors to both
| issues.
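|
| As a minimal sketch of the effect (illustrative code only, not
| from the compiler itself): the generic function below is
| monomorphized, so one copy is compiled per concrete type it is
| used with, while the `dyn`-based version is compiled exactly once.
|
|     use std::fmt::Display;
|
|     // Monomorphized: one copy per concrete T used below
|     // (i32, f64, &str), each compiled and optimized separately.
|     fn print_generic<T: Display>(x: T) {
|         println!("{x}");
|     }
|
|     // Dynamic dispatch: compiled once, whatever the callers do.
|     fn print_dyn(x: &dyn Display) {
|         println!("{x}");
|     }
|
|     fn main() {
|         print_generic(42_i32);
|         print_generic(3.14_f64);
|         print_generic("hello");
|         print_dyn(&42_i32);
|         print_dyn(&3.14_f64);
|     }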
| scripturial wrote:
| One of the reasons I quit rust is literally because having 4-5
| projects checked out that use serde would fill my laptop drive
| with junk in a few weeks.
| ivanjermakov wrote:
| This is one of the only reasons I disliked Haskell. GHC and lib
| files can easily take over 2GB of storage.
| MeetingsBrowser wrote:
| Why not both?
|
| If I had to choose though, I would choose compilation speed.
| Buying an SSD to double my storage is much more cost effective
| than buying a bulkier processor to halve my compilation times.
| estebank wrote:
| > Why not both?
|
| For better and worse, the Rust project is a "show-up-ocracy":
| the work that gets done is whatever volunteers (or paid
| employees from a company donating their time to the project)
| spend time doing. It's hard in open source projects to tell
| people "this is important and you must work on it", but
| rather you have to _convince_ people that it is important and
| have to _hope_ they will have the time, inclination and skill
| to work on it.
|
| FWIW, people were working on the cargo cache GC feature years
| ago[1], but I am not aware of what the current state of that
| is. I wouldn't be surprised if it wasn't turned on because
| there are unresolved questions.
|
| 1: https://blog.rust-lang.org/2023/12/11/cargo-cache-cleaning/
| kzrdude wrote:
| Performance has been worked on since Rust 1.0 or so; for all
| this time there has been lots of work on compiler performance.
| There's no "before performance" :)
| estebank wrote:
| There are multiple avenues to improve both first and
| incremental compiles. They "just" require large architectural
| changes that may yield marginal improvements, so it's hard to
| get those projects off the ground. But after the last project
| all-hands I do expect _at least one_ of these to be pursued,
| if not more.
|
| There are cases where cargo recompiles crates
| "unnecessarily"; cargo+rustc could invert the compilation
| pyramid to start at the root and then only compile reachable
| items (a similar effect to LTO, but it can still produce a
| binary if an item with a compile error is never evaluated,
| improving clean compiles); better communication with linkers
| and access to incremental linking would be a boon for
| incremental compiles; etc.
| preciousoo wrote:
| What's stopping cargo from storing libraries in one global
| directory (via hash or whatever), to be re-used whenever needed?
| jakkos wrote:
| You can: https://github.com/mozilla/sccache
| preciousoo wrote:
| Oh this is amazing, looks simple to set up too. Thanks!
| epage wrote:
| That work is being tracked in
| https://github.com/rust-lang/cargo/issues/5931
|
| Someone has taken up the work on this though there are some
| foundational steps first.
|
| 1. We need to delineate intermediate and final build
| artifacts so people have a clearer understanding of what in
| `target/` has stability guarantees (implemented, awaiting
| stabilization).
|
| 2. We then need to re-organize the target directory from
| being organized by file type to being organized by crate
| instance.
|
| 3. We need to re-do the file locking for `target/` so when we
| share things, one cargo process won't lock out your entire
| system.
|
| 4. We can then start exploring moving intermediate artifacts
| into a central location.
|
| There are some caveats to this initial implementation:
|
| - To avoid cache poisoning, this will only cover items with
| immutable source and an idempotent build, leaving out
| your local source and stuff that depends on build scripts and
| proc-macros. There is work to reduce the reliance on build
| scripts and proc-macros. We may also need a "trust me, this
| is idempotent" flag for some remaining cases.
|
| - A new instance of a crate will be created in the cache if
| any dependency changes versions, reducing reuse. This becomes
| worse because foundation crates release frequently, and
| because when adding or updating a specific dependency, Cargo
| prefers to keep all existing versions, creating a very
| unpredictable
| dependency tree. Support for remote caches, especially if you
| can use your project's CI as a cache source, would help a lot
| with this.
| jakkos wrote:
| Not that this isn't a problem - it is, target folders currently
| take up ~100 GB on my machine - but...
|
| I'd still, by far, prefer a tiny incremental compile speed
| increase over a substantial storage reduction. I can get a
| bigger SSD, I can't get my time back :'(
| kibwen wrote:
| The previous post on the OP's blog is about exactly this:
| https://kobzol.github.io/rust/rustc/2025/06/02/reduce-cargo-...
| vlovich123 wrote:
| I wonder how much value there is in skipping LLVM in favor of
| having an optimizing JIT linked in instead. For release builds it
| would get you a reasonable proxy if it optimized decently while
| still retaining better debuggability.
|
| I wonder if the JVM as an initial target might be interesting
| given how mature and robust their JIT is.
| dhruvrajvanshi wrote:
| I would love this in modern languages.
|
| For dev builds, I see JIT compilation as a better deal than
| debug builds because it's capable of eventually reaching peak
| performance. For performance sensitive stuff like games, it
| really matters to keep a nice feedback loop without making the
| game unusable by turning off _all_ optimizations.
|
| AOT static binaries are valuable for deployments.
|
| No idea how expensive it would be to develop for an existing
| language like Rust though.
| zozbot234 wrote:
| The JVM is not a very meaningful target for Rust since it does
| not use C-like flat memory addressing and pointer arithmetic.
| It's as if every single Java object and field is sitting in its
| own tiny memory segment/address space. On the one hand, this
| makes it essentially transparent to GC, which is a key property
| for Java; OTOH, it means that compiling C-like languages to the
| JVM is usually done by reimplementing "memory" as a JVM array
| of byte values.
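|
| A tiny sketch of the kind of flat-memory pointer arithmetic meant
| here (plain unsafe Rust, my own illustration): there is no direct
| JVM equivalent of reinterpreting a few bytes in the middle of a
| buffer without modelling "memory" as an explicit byte array.
|
|     fn main() {
|         let buf: [u8; 8] = [1, 2, 3, 4, 5, 6, 7, 8];
|         let p = buf.as_ptr();
|         // Reinterpret bytes 2..6 of the buffer as a u32
|         // (an unaligned read through a raw pointer).
|         let v = unsafe { p.add(2).cast::<u32>().read_unaligned() };
|         println!("{v:#010x}");
|     }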
| bbatha wrote:
| > I wonder how much value there is in skipping LLVM in favor
| of having an optimizing JIT linked in instead. For release builds
| it would get you a reasonable proxy if it optimized decently
| while still retaining better debuggability.
|
| Rust is in the process of building out the cranelift backend.
| Cranelift was originally built to be a JIT compiler. The hope
| is that this can become the debug build compiler.
|
| https://github.com/rust-lang/rustc_codegen_cranelift
| lsuresh wrote:
| LLVM optimizations are the overwhelming majority of the
| compilation bottleneck for us over at Feldera. We blogged about
| some of the challenges we faced here:
| https://www.feldera.com/blog/cutting-down-rust-compile-times...
|
| We almost definitely need to build a JIT in the future to avoid
| this problem.
| jplusequalt wrote:
| Having worked on large-scale C++ codebases and thus being used to
| long compilation times, I'm surprised that this is the hill many
| C++ devs would die on in regard to their dislike of Rust.
| 0cf8612b2e1e wrote:
| It is a quantifiable negative to which you can always point. Of
| course it will be used for justifications.
| logicchains wrote:
| There's a lot of things you can do in C++ to reduce compilation
| time if you care about it, that aren't possible with Rust.
| MeetingsBrowser wrote:
| There are things you can do for Rust if it really is a deal
| breaker.
|
| Dioxus has a hot reload system. Some rust game engines have
| done similar things.
| jplusequalt wrote:
| When in doubt, manually patch your dll's
| kibwen wrote:
| You can absolutely do the same things in Rust; it's just that
| the culture and tooling of Rust encourage much larger
| compilation units than in C or C++, so you don't get the same
| sort of best-case, nontrivial embarrassing parallelism,
| forcing the compiler to do more work to parallelize.
|
| To address the tooling pressure, I would like to see Cargo
| support first-class internal-only crates, thereby
| deconflating the crate as what is today both the unit of
| compilation and the unit of distribution.
| phkahler wrote:
| >> it surprises me that this is the hill many C++ devs would
| die on in regards to their dislike of Rust
|
| I believe people will exaggerate their current issue so it
| sounds like the only thing that matters to them. On another
| project I've had people say "This is the _only thing_ that
| keeps me using commercial alternatives" or the _only thing_
| holding back wider adoption, or the _only thing_ needed for
| blah blah blah. Meanwhile I've got my own _list_ of high
| priority things needed to bring it to what I'd consider a
| basic level of completeness.
|
| When it comes to performance it will never be good enough for
| everyone. There is always a bigger project to consume whatever
| resources are available. There are always people who insist on
| doing things in odd ways (maybe valid, but very atypical).
| These requests to improve are often indistinguishable from the
| regular ones.
| pton_xd wrote:
| Makes sense to me! Everyone with enough C++ experience has
| dealt with that nightmare at one point. Never again, if you can
| help it.
| maccard wrote:
| I work on large c++ code bases day in day out - think 30 minute
| compiles on an i9 with 128GB ram and NVMe drives.
|
| Rust's compile times are still ungodly slow. I contributed to a
| "small to medium" open source project [0] a while back, fixing
| a few issues that we came across when using it. Given that the
| project is approximately 3 orders of magnitude smaller than my
| day to day project, a clean build of a few thousand lines of
| rust took close to 10 minutes. Incremental changes to the
| project were still closer to a minute at the time. I've never
| worked on a 5m+ LOC project in rust, but I can only imagine how
| long it would take.
|
| On the flip side, I also submitted some patches to a golang
| program of a similar size [1] and it was faster to clone,
| install dependencies and clean build that project than a single
| file change to the rust project was.
|
| [0] https://github.com/getsentry/symbolicator
|
| [1] https://github.com/buildkite/agent
| jplusequalt wrote:
| Yes, but Go is a higher level language than Rust. It feels
| unfair to compare the two. That's why I brought up C++ (as
| did the article).
| 9d wrote:
| Just curious, are you still able to get instant feedback and
| development conveniences on that 30 minute compile time
| project, like up to date autocomplete and type hints and
| real-time errors/warnings while developing before compiling?
| Fluorescence wrote:
| > clean build of a few thousand lines of rust took close to
| 10 minutes
|
| That doesn't sound likely. I would expect seconds unless
| something very odd is happening.
|
| Is the example symbolicator?
|
| I can't build the optional "symbolicator-crash" crate because
| it's not Rust but 300k lines of C++/C pulled from a git submodule
| that requires dependencies I am not going to install. Your
| complaint might literally be about C++!
|
| For the rest of the workspace, 60k lines of Rust build in 60
| seconds.
|
| - clean debug build on a 6 year old 3900X (which is under
| load because I am working)
|
| - time includes fetching 650 deps over a poor network and
| building them (the real line count of the build is likely
| 100s of thousands or millions of lines of code)
|
| - subsequent release build took 100s
|
| - I use the mold linker which is advised for faster builds
|
| - modern cpus are so much faster than my machine they might
| not even take 10s
| afdbcreid wrote:
| What are incremental compile times with the C++ codebase?
|
| Also, does the line count you cite include dependencies
| (admittedly, dependencies in Rust are a problem, but that's not
| related to compiler performance)?
| maccard wrote:
| About 15 seconds, because we carve it up into 100 or so
| dlls specifically for this case
| sapiogram wrote:
| Thanks for actually including the slow repo in your comment.
| My results on a Ryzen 5900X:
|
| * Clean debug build: 1m 22s
|
| * Incremental debug build: 13s
|
| * Clean release build: 1m 51s
|
| * Incremental release build: 24s
|
| Incremental builds were done by changing one line in
| crates/symbolicator/src/cli.rs.
|
| It's not great, but it sounds like your experience was much
| worse for some reason.
| maccard wrote:
| Sorry - my clean build was actually including the
| dependency fetching, which is a large part of it. My
| experience was in 2023, which, if we scale by the article's
| compiler performance improvements, would be roughly 5 minutes
| or so today.
| panstromek wrote:
| The original title is:
|
| Why doesn't Rust care more about compiler performance?
| mellosouls wrote:
| (OP) I submitted this some time ago, and am pretty sure I would
| have submitted the title as is, so I'm guessing there was some
| manual or automatic editing by the mods since then, before the
| second chance here.
| norir wrote:
| Compiler performance must be considered up front in language
| design. It is nearly impossible to fix once the language reaches
| a certain size without it being a priority. I recently saw here
| the observation that one can often get a 2x performance
| improvement through optimization, but 10x requires redesigning
| the architecture.
|
| Rust can likely never be rearchitected without causing a
| disastrous schism in the community, so it seems probable that
| compilation will always be slow.
| littlestymaar wrote:
| You're conflating _language design_ and _compiler
| architecture_. It's hard to iterate on a compiler to get a
| massive performance improvement, and a rearchitecture can help,
| but you don't necessarily need to change anything in the
| language itself in that regard.
|
| Roslyn (C#) is the best example of that.
|
| It's a massive endeavor and would need significant funding to
| happen though.
| dist1ll wrote:
| Language design can have massive impact on compiler
| architecture. A language with strict define-before-use and
| DAG modules has the potential to blow every major compiler
| out of the water in terms of compile times. ASTs, type
| checking, code generation, optimization passes, IR design,
| linking can all be significantly impacted by this language
| design choice.
| formerly_proven wrote:
| No, language design decisions absolutely have a massive
| impact on the performance envelope of compilers. Think about
| things like tokenization rules (Zig is designed such that
| every line can be tokenized independently, for example),
| ambiguous grammars (most vexing parse, lexer hack etc.),
| symbol resolution (e.g. explicit imports as in Python, Java
| or Rust versus "just dump eeet" imports as in C#, and also
| whether symbols can be defined after being referenced)
| and that's before we get to the really big one: type solving.
| littlestymaar wrote:
| This kind of comment is funny because it reveals how
| uninformed people can be while having a strong opinion on a
| topic.
|
| Yes, grammar can impact how theoretically fast a compiler
| can be, and yes, the type system adds more or less work
| depending on how it's designed, but none of these are what
| makes the Rust compiler slow. Parsing and lexing are a
| negligible fraction of compile time, and typing isn't
| particularly heavy in most cases (with the exception of niche
| crates that abuse the Turing completeness of the trait
| system).
| You're not going to make big gains by changing these.
|
| The massive gains are to be made later in the pipeline (or
| earlier, by having a way to avoid re-compiling proc macros
| and their dependencies before the actual compilation can
| even start).
| lsuresh wrote:
| Hard agree. Practically all the bottlenecks we run into
| with Rust compilation have to do with the LLVM passes.
| The frontend doesn't even come close. (e.g.
| https://www.feldera.com/blog/cutting-down-rust-compile-
| times...)
| uecker wrote:
| While LLVM is known to be slow, not all LLVM-based
| languages are equally slow.
| kibwen wrote:
| It's certainly possible to think of language features that
| would preclude trivially-achievable high-performance
| compilation. None of those language features that are present
| in Rust (specifically, monomorphized generics) would have ever
| been considered for omission, regardless of their compile-time
| cost, because that would have compromised Rust's other goals.
| panstromek wrote:
| There are many more mundane examples of language design
| choices in rust that are problematic for compile time.
| Polymorphization (which has big potential to speed up compile
| time) has been blocked on pretty obscure problems with
| TypeId. Procedural macros require double parsing. Ability to
| define items in function bodies prevents skipping parsing
| bodies. Those things are not essential, they could pretty
| easily be tweaked to be less problematic for compile time
| without compromising anything.
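|
| For example, this is legal Rust, which is why a parser cannot
| simply skip over function bodies when collecting a crate's
| items (a minimal, self-contained illustration):
|
|     fn main() {
|         // Items (a struct and an impl block) defined inside
|         // a function body.
|         struct Point {
|             x: i32,
|             y: i32,
|         }
|
|         impl Point {
|             fn manhattan(&self) -> i32 {
|                 self.x.abs() + self.y.abs()
|             }
|         }
|
|         let p = Point { x: 3, y: -4 };
|         println!("{}", p.manhattan());
|     }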
| kibwen wrote:
| This is an oversimplification. Automatic polymorphization
| is blocked on several concerns, e.g. dyn safety (and
| redesigning the language to make it possible to paper over
| the difference between dyn and non-dyn safe traits imposes
| costs on the static use case), and/or obscure LLVM
| implementation deficiencies (which was the blocker for the
| last time I proposed a Swift-style ABI to address this).
| Procedural macros don't require double-parsing; many people
| do use syn to parse the token stream, but 1) parsing isn't
| a performance bottleneck, 2) providing a parsed AST rather
| than a token stream freezes the AST, which is something
| that the Rust authors deliberately wanted to avoid, rather
| than being some kind of accident of design, 3) at any point
| in the future the Rust devs could decide to stabilize the
| AST and provide a parsed representation, so this isn't
| anything unfixable that would cause any sort of trauma in
| the community, 4) proc macro expansions are trivially
| cacheable if you know you're not doing arbitrary I/O, which
| is easy to achieve manually today and should absolutely be
| built-in to the compiler (if for no other reason than
| having a sandboxed dev environment), but once again this is
| easy to tack on in future versions. As for allowing item
| definitions in function bodies, I want to reiterate that
| parsing is not a bottleneck.
| zozbot234 wrote:
| AIUI, "Swift-style" ABI mechanisms are heavily dependent
| on alloca (dynamically-sized allocations on the stack)
| which the Rust devs have just proposed backing out of a
| RFC for (i.e. give up on it as an approved feature for
| upcoming versions of Rust) because it's too complex to
| implement, even with existing LLVM support for it.
| WhyNotHugo wrote:
| Macros themselves are a terrible hack to work around the
| lack of proper reflection.
|
| The entire Rust ecosystem would be reshaped in such
| fascinating ways if we had support for reflection. I'd love
| to see this happen one day.
| kibwen wrote:
| _> Macros themselves are a terrible hack to work around the
| lack of proper reflection._
|
| No, I'm not sure where you got this idea. Macros are a
| disjoint feature from reflection. Macros exist to let you
| implement DSLs and abstract over syntax.
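|
| A minimal example (a toy `map!` macro of my own, not from any
| particular crate): a declarative macro accepting its own
| `key => value` syntax, which is abstraction over syntax rather
| than reflection.
|
|     use std::collections::HashMap;
|
|     // A tiny syntax-level DSL: repeated `key => value` pairs
|     // with an optional trailing comma, expanded into ordinary
|     // HashMap calls.
|     macro_rules! map {
|         ( $( $key:expr => $value:expr ),* $(,)? ) => {{
|             let mut m = HashMap::new();
|             $( m.insert($key, $value); )*
|             m
|         }};
|     }
|
|     fn main() {
|         let scores = map! {
|             "alice" => 10,
|             "bob" => 7,
|         };
|         println!("{scores:?}");
|     }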
| swsieber wrote:
| What about crates as the unit of compilation? I am genuinely
| curious because it's not clear to me what trade-offs there
| are around that decision.
| pornel wrote:
| It's a "unit" in the sense of calling `rustc` once, but
| it's not a minimal unit of work. It's not directly
| comparable to what C does.
|
| Rust has incremental compilation within a crate. It also
| splits optimization work into many parallel codegen units.
| The compiler front-end is also becoming parallel within
| crates.
|
| The advantage is that there can be common shared state
| (equivalent of parsing C headers) in RAM, used for the
| entire crate. Otherwise it would need to be collected,
| written out to disk, and reloaded/reparsed by different
| compiler invocations much more often.
| nicoburns wrote:
| > Rust has incremental compilation within a crate. It
| also splits optimization work into many parallel codegen
| units.
|
| Eh, it does, but it's not currently very good at this in
| my experience. Nothing unfixable AFAIK (and the parallel
| frontend can help, though it's currently a significant
| regression on small crates), but currently splitting
| things into smaller crates can often lead to much faster
| compiles.
| kibwen wrote:
| All compilers have compilation units, there's not actually
| much interesting about Rust here other than using the word
| "crate" as a friendlier term for "compilation unit".
|
| What you may be referring to instead is _Cargo's_ decision
| to re-use the notion of a crate as the unit of _package
| distribution_. I don't think this was necessarily a bad
| idea (it certainly made things simpler, which matters when
| you're bootstrapping an ecosystem), but it's true that
| prevailing best practices since then have led to Rust's
| ecosystem having comparatively larger compilation units
| (which itself isn't necessarily a bad thing either; larger
| compilation units do tend to produce faster code). I would
| personally like to see Cargo provide a way to decouple the
| unit of distribution from the unit of compilation, which
| would give us free parallelism (which today rustc
| needs to tease out via parallel codegen units (and the
| forthcoming parallel frontend)) and also assuage some of
| the perpetual hand-wringing about how many crates are in a
| dependency tree (which is exactly as wrong a measure as
| getting upset about how many source files are in your C
| program). This would be a fully backwards-compatible
| change.
| WhyNotHugo wrote:
| One of the reasons compile times are so awful is that all
| dependencies must be compiled for each project.
|
| 20 different projects use the same dependency? They each need
| to recompile it.
|
| This is an effect of the language not having a proper ABI for
| compiling libraries as dynamically loadable modules, which in
| itself presents many other issues, including making
| distribution of software a complete nightmare.
| kibwen wrote:
| _> This is an effect of the language not having a proper ABI
| for compiling libraries as dynamically loadable modules_
|
| No, this is a design decision of Cargo to default to using
| project-local cached artifacts rather than caching them at
| the user or system level. You can configure Cargo to do so if
| you'd like. The reason it doesn't do this by default is
| because Cargo gives crates great latitude to configure
| themselves via compile-time flags, and any difference in
| flags means you get a different compiled artifact anyway. On
| top of that, there's the question of what `cargo clean`
| should do when you have a global cache rather than a local
| one.
| CrendKing wrote:
| Why can't Cargo have a system like PyPI where the library
| author uploads a compiled binary (even with their specific
| flags) for each Rust version/platform combination, and if
| said binary is missing for a certain combination, it falls
| back to a local compile? Imagine `cargo publish` handling the
| compile+upload task, and crates.io being changed to also host
| binaries.
| fngjdflmdflg wrote:
| This was a big reason for dart canceling its previous macros
| attempt (as I understand it). Fast compilation is integral for
| Flutter development - which accounts for a large percentage of
| Dart usage - so after IIRC more than two years of developing it
| they still ended up not going through with that iteration of
| macros because it would make hot reload too slow. That degree
| of level-headedness and consideration is worthy of respect IMO.
| jmyeet wrote:
| I'm a big fan of Rust but there are definitely warts that are
| going to be difficult to cure [1]. This is 5 years old now but I
| believe it's still largely relevant.
|
| It is a weird hill to die on for C/C++ devs though, given that
| header files and templates create massive compile-time issues
| that really can't be solved.
|
| Google is known for having infrastructure for compiling large
| projects. They use Blaze (open-sourced as Bazel) to define
| hermetic builds, then use large systems to cache object graphs
| (for compilation units) and compiled objects, because
| Google has some enormous monoliths that would take a very long
| time to compile from scratch.
|
| I wonder what this kind of infrastructure can do for a large Rust
| project.
|
| [1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...
| 1718627440 wrote:
| I think there is a massive difference in compile times between
| idiomatic C and C++, so it's problematic to be lumping them
| together. But there is also some selection bias since large
| projects tend to migrate from C to C++.
| cozzyd wrote:
| C++ compile times are why I often stick to C, which compiles
| nearly instantly.
| daxfohl wrote:
| Maybe these features already exist, but I'd like a way to: 1)
| Type check without necessarily building the whole thing. 2) Run a
| unit test, only building the dependencies of that test. Do these
| exist or are they remotely feasible?
| lpapez wrote:
| You don't even need to ask AI to get an answer to the first
| question, the first hit on both Google and Bing will tell you
| how to do it - it takes 2 seconds!
| panstromek wrote:
| cargo check exists for option 1. For 2.) it depends on the
| project structure. Either way, they don't help as much as you
| would hope for.
| kibwen wrote:
| cargo check absolutely helps as much as I'd hope for, and
| more. It's the basis of my entire workflow and it's like two
| seconds on my codebase.
| jtrueb wrote:
| A true champion
|
| > when I started contributing to Rust back in 2021, my primary
| interest was compiler performance. So I started doing some
| optimization work. Then I noticed that the compiler benchmark
| suite could use some maintenance, so I started working on that.
| Then I noticed that we don't compile the compiler itself with as
| many optimizations as we could, so I started working on adding
| support for LTO/PGO/BOLT, which further led to improving our CI
| infrastructure. Then I noticed that we wait quite a long time for
| our CI workflows, and started optimizing them. Then I started
| running the Rust Annual Survey, then our GSoC program, then
| improving our bots, then...
| agumonkey wrote:
| Talk about proper `continuous improvement`
| c-cube wrote:
| I'm worried this person is going to experience a Yak overflow,
| honestly.
| mrec wrote:
| Coincidentally, I discovered this glorious page literally
| five minutes ago:
|
| https://github.com/SerenityOS/yaksplained?tab=readme-ov-file...
| jadbox wrote:
| Not related to the article, but after years of using Rust, it
| still is a pain in the ass. While it may be a good choice for OS
| development, high frequency trading, medical devices, vehicle
| firmware, finance software, or working on device drivers, it
| feels way overkill for most other general domains. On the other
| hand, I learned Zig and Go both over a weekend and find they run
| almost as fast and don't suffer from memory issues (as much as
| say Java or C++).
| sfvisser wrote:
| This comment would have been more useful with some
| qualification of why that's the case. The language, tooling,
| library ecosystem? Something else?
| skrtskrt wrote:
| For me the hangup is that async is Still Hard. Just a
| ridiculous amount of internal implementation details exposed
| in order to just write, like, an http middleware.
|
| We looked at proposing Rust as the second blessed language in
| addition to Go where I work, and the conclusion was
| basically... why?
|
| We have skilled Go engineers that can drop down to manual
| memory management and squeeze lots of extra performance out
| of it. And it's still dead simple when you don't need to do
| that or the task is suitable for a junior engineer. And
| channels are simply one of the best concurrency primitives
| out there, and baked into the language, unlike Rust, where
| everything is a library making independent decisions. (To be
| fair I haven't tried Elixir/Erlang message passing, I
| understand people like that too).
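|
| (For reference, the Rust counterpart is a library type rather
| than a language construct; a minimal sketch using the standard
| library's std::sync::mpsc, my own illustration:)
|
|     use std::sync::mpsc;
|     use std::thread;
|
|     fn main() {
|         // The channel comes from the standard library, not
|         // from dedicated language syntax as in Go.
|         let (tx, rx) = mpsc::channel();
|
|         let producer = thread::spawn(move || {
|             for i in 0..5 {
|                 tx.send(i).unwrap();
|             }
|             // tx is dropped here, which closes the channel.
|         });
|
|         for value in rx {
|             println!("got {value}");
|         }
|         producer.join().unwrap();
|     }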
| bobbylarrybobby wrote:
| Not to be that guy who comes to Rust's defense whenever Go is
| mentioned, but... Rust protects from a much larger class of
| errors than just memory safety. For instance, it is impossible
| to invalidate an iterator while iterating over it, refer to an
| unset or invalid value, inadvertently merely shallow copy a
| variable, or forget to lock/unlock a mutex.
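|
| For instance, with `Mutex` the data lives inside the lock, so
| the only way to reach it is through the guard returned by
| `lock()`, and unlocking happens when the guard goes out of
| scope (a minimal sketch, nothing project-specific assumed):
|
|     use std::sync::{Arc, Mutex};
|     use std::thread;
|
|     fn main() {
|         let counter = Arc::new(Mutex::new(0u32));
|
|         let handles: Vec<_> = (0..4)
|             .map(|_| {
|                 let counter = Arc::clone(&counter);
|                 thread::spawn(move || {
|                     // The only way to touch the value is to
|                     // take the lock first.
|                     let mut n = counter.lock().unwrap();
|                     *n += 1;
|                     // Guard dropped here: the mutex unlocks.
|                 })
|             })
|             .collect();
|
|         for handle in handles {
|             handle.join().unwrap();
|         }
|         println!("{}", *counter.lock().unwrap());
|     }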
| codr7 wrote:
| If only these were common problems that were difficult to
| otherwise avoid.
| 90s_dev wrote:
| Rust feels like wearing a giant bubble just to go outside
| safely.
|
| C++ feels like driving a car. Dangerous but doable and
| often necessary and usually safe.
|
| (Forth feels like being drunk?)
| 90s_dev wrote:
| Could you elaborate on the memory issues in all four languages
| that you ran into?
| adrian17 wrote:
| > On this benchmark, the compiler is almost twice as fast than it
| was three years ago.
|
| I think the cause of the public perception issue could be a
| variant of Wirth's law: the size of an average codebase (and its
| dependencies) might be growing faster than the compiler's
| improvements in compiling it?
| IshKebab wrote:
| Yeah definitely when you include dependencies. Also I've
| noticed that when your dependency tree gets above a certain
| size you end up pulling in _every alternative_ crate for a
| certain task, because e.g. one of your dependencies uses
| miniz_oxide and another uses zlib-rs (or whatever).
|
| On the other hand the compile time for _most_ dependencies
| doesn't matter hugely because they are easy to build in
| parallel. It's always the last few crates and linking that
| take half the time.
| littlestymaar wrote:
| In my experience working on medium-sized Rust projects (hundreds
| of thousands of LoCs, not millions), incremental compilation and
| mold pretty much solved the problem in practice. I still
| occasionally code on my 13-year-old laptop when traveling, and
| compilation time is fine even there (for cargo check and debug
| build, that is, I barely ever compile in release mode locally).
|
| What's painful is compiling from scratch, and particularly the
| fact that every other week I need to run _cargo clean_ and do a
| full rebuild to get things working. IMHO this is a much bigger
| annoyance than raw compiler speed.
| kristoff_it wrote:
| > Speaking of DoD, an additional thing to consider is the
| maintainability of the compiler codebase. Imagine that we swung
| our magic wand again, and rewrote everything over the night using
| DoD, SIMD vectorization, hand-rolled assembly, etc. It would
| (possibly) be way faster, yay! However, we do not only care about
| immediate performance, but also about our ability to make long-
| term improvements to it.
|
| This is an unfortunate hyperbole from the author. There's a lot
| of distance between DoD and "hand-rolled assembly" and thinking
| that it's fair to put them in the same bucket to justify the
| argument of maintainability is just going to hurt the Rust
| project's ability to make a better compiler for its users.
|
| You know what helps a lot with making software maintainable? A
| faster development loop. Zig has invested years into this and
| both users
| and the core team itself have started enjoying the fruits of that
| labor.
|
| https://ziglang.org/devlog/2025/#2025-06-08
|
| Of course everybody is free to choose their own priorities, but I
| find the reasoning flawed and I think that it would ultimately be
| in the Rust project's best interest to prioritize compiler
| performance more.
| 90s_dev wrote:
| Yeah but it's _Zig._ Rust is for when you want to write C but
| have it be _easier_. Zig is when you want it to be _harder_
| than C, but with more control over execution and allocation as
| a trade off.
| AndyKelley wrote:
| For anyone who wants to form their own opinion about whether
| this style of programming is easier or harder than it would
| be in other languages:
|
| https://github.com/ziglang/zig/blob/0.14.1/lib/std/zig/token...
| Rusky wrote:
| The tokenizer is not really a good demonstration of the
| differences between these styles. A more representative
| comparison would be the later stages that build, traverse,
| and manipulate tree and graph data structures.
| AndyKelley wrote:
| OK, parser then:
|
| https://github.com/rust-lang/rust/tree/1.87.0/compiler/rustc...
| (main logic seems to be in expr.rs)
|
| vs
|
| https://github.com/ziglang/zig/blob/0.14.1/lib/std/zig/Parse...
|
| Again, for those who wish to form their own opinions.
| Rusky wrote:
| "Hand-rolled assembly" was one item in a list that also
| included DoD. You're reading way more into that sentence than
| they wrote- the claim is that DoD itself also impacts the
| maintainability of the codebase.
| Animats wrote:
| This isn't a huge problem. My big Rust project compiles in about
| a minute in release mode. Failed compiles with errors only take a
| few seconds. That's where most of the debugging takes place. Once
| it compiles, it usually works the first time.
| afdbcreid wrote:
| But how big are your big projects?
| Animats wrote:
| About 40,000 lines of my own code, plus a hundred or so
| crates from crates.io.
| johnfn wrote:
| A minute is pretty bad. I understand it may work for your use
| case, but there are plenty of use cases out there where mistakes
| typically don't show up as compile errors and a one-minute
| iteration time is a deal killer. For instance: UI work - good
| luck catching an
| incorrect color with a compile error. Vite can compile 40,000
| loc and display it on your screen in probably a couple of
| milliseconds.
| kalaksi wrote:
| Compile what language?
| FridgeSeal wrote:
| Probably JS, given they mentioned Vite; not exactly sure I'd
| call it "compiling", nor is it the same order of magnitude of
| complexity though...
| synthos wrote:
| Regarding AVX: could Rust be compiled with different symbols that
| target different x64 instruction sets, then at runtime choose the
| symbol set that is most performant for that architecture?
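|
| The building blocks for that already exist in stable Rust; here
| is a minimal sketch of the usual runtime-dispatch pattern (my
| own illustration, assuming an x86-64 target):
|
|     // Baseline version, built with the crate's default
|     // target features.
|     fn sum_baseline(xs: &[f32]) -> f32 {
|         xs.iter().sum()
|     }
|
|     // Same body, but compiled with AVX2 enabled for this one
|     // function, so LLVM may vectorize it with AVX2.
|     #[target_feature(enable = "avx2")]
|     unsafe fn sum_avx2(xs: &[f32]) -> f32 {
|         xs.iter().sum()
|     }
|
|     fn sum(xs: &[f32]) -> f32 {
|         if is_x86_feature_detected!("avx2") {
|             // Safe: we just checked the CPU supports AVX2.
|             unsafe { sum_avx2(xs) }
|         } else {
|             sum_baseline(xs)
|         }
|     }
|
|     fn main() {
|         println!("{}", sum(&[1.0, 2.0, 3.0, 4.0]));
|     }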
| johnfn wrote:
| > First, let me assure you - yes, we (as in, the Rust Project)
| absolutely do care about the performance of our beloved compiler,
| and we put in a lot of effort to improve it.
|
| I'm probably being ungrateful here, but here goes anyway. Yes,
| Rust cares about performance of the compiler, but it would likely
| be more accurate to say that compiler performance is, like, 15th
| on the list of things they care about, and they'll happily trade
| off slower compile times for one of the other things.
|
| I find posts about Rust like this one, where they say "ah, of
| course we care about perf, look, we got the compile times on a
| somewhat nontrivial project to go from 1m15s to 1m09s" somewhat
| underwhelming - I think they miss the point. For me, I basically
| only care if compile times are virtually instantaneous. e.g. Vite
| scales to a million lines and can hot-swap my code changes in
| instantaneously. This is where the productivity benefits come in.
|
| Don't just trust me on it. Remember this post[1]?
|
| > "I feels like some people realize how much more polish could
| their games have if their compile times were 0.5s instead of 30s.
| Things like GUI are inherently tweak-y, and anyone but users of
| godot-rust are going to be at the mercy of restarting their game
| multiple times in order to make things look good."
|
| [1]: https://loglog.games/blog/leaving-rust-gamedev/#compile-time...
| kalaksi wrote:
| Isn't Vite for javascript though, which is, of course, a
| scripting language?
|
| Btw, I've used QML and Dioxus with rust (not for games). Both
| make hot reloading the GUI parts possible without recompiling
| since that part is basically not rust (Dioxus in a bit more
| limited manner).
___________________________________________________________________
(page generated 2025-06-12 23:00 UTC)