[HN Gopher] Zig builds are getting faster
___________________________________________________________________
Zig builds are getting faster
Author : emschwartz
Score : 400 points
Date : 2025-10-03 22:45 UTC (1 day ago)
(HTM) web link (mitchellh.com)
(TXT) w3m dump (mitchellh.com)
| jtrueb wrote:
| Wow!
| Ygg2 wrote:
| It's a bold strategy - let's see if it works out for them.
|
| But from what I remember, this still uses the LLVM backend,
| right? Sure, you can beat LLVM on compilation speed and number of
| platforms supported, but when it comes to emitting great
| assembly, it is almost unbeatable.
| measurablefunc wrote:
| I don't really get the obsession w/ compiler performance. At
| the end of the day you are trading performance optimizations
| (inlining, dead code elimination, partial evaluation, etc) for
| compile times. If you want fast compile times then use a
| compiler w/ no optimization passes, done. The compile times are
| now linear w/ respect to lines of code & there is provably no
| further improvement you can make on that b/c any further passes
| will add a linear or superlinear amount of overhead, depending
| on the complexity of the optimization.
| tom_ wrote:
| But if those passes (or indeed any other step of the process)
| are inefficiently implemented, they could be improved. The
| compiler output would be the same, but now it would be
| produced more quickly, and the iteration time would be
| shorter.
| dustbunny wrote:
| Any delay between writing code and seeing the result is empty
| space in my mind where distractions can enter.
| pfg_ wrote:
| The reason to care about compile time is because it affects
| your iteration speed. You can iterate much faster on a
| program that takes 1 second to compile vs 1 minute.
|
| Time complexity may be O(lines), but the constant factor still
| varies widely from compiler to compiler. And for incremental
| updates, compilers can do significantly better than O(lines).
|
| In debug mode, Zig uses LLVM with no optimization passes. On
| x86_64 Linux, it instead uses its own native backend. This
| backend can be significantly faster to compile with (2x or
| more) than LLVM.
|
| Zig's own native backend is designed for incremental
| compilation. This means, after the initial build, there will
| be very little work that has to be done for the next emit. It
| needs to rebuild the affected function, potentially rebuild
| other functions which depend on it, and then directly update
| the one part of the output binary that changed. This will be
| significantly faster than O(n) for edits.
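The incremental model described above can be sketched as a toy dependency walk (illustrative Python only, not Zig's actual implementation; the function names and graph shape are invented for the example):

```python
# Toy model of incremental compilation: after an edit, only the
# changed function and its transitive dependents are rebuilt.

def dirty_set(dependents, changed):
    """Return the set of functions needing a rebuild after `changed`
    is edited. `dependents` maps a function name to the functions
    that call it."""
    todo = [changed]
    dirty = set()
    while todo:
        fn = todo.pop()
        if fn in dirty:
            continue
        dirty.add(fn)
        todo.extend(dependents.get(fn, ()))
    return dirty

# main calls helper, helper calls leaf.
dependents = {"leaf": ["helper"], "helper": ["main"], "main": []}

# Editing `leaf` forces a rebuild of everything that depends on it...
print(sorted(dirty_set(dependents, "leaf")))  # ['helper', 'leaf', 'main']
# ...but editing `main` rebuilds only `main`, regardless of project size.
print(sorted(dirty_set(dependents, "main")))  # ['main']
```

The point is that the cost of a rebuild is proportional to the size of the dirty set, not the size of the project, which is how edits can be much cheaper than O(lines).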
| timschmidt wrote:
| > The reason to care about compile time is because it
| affects your iteration speed. You can iterate much faster
| on a program that takes 1 second to compile vs 1 minute.
|
| Color me skeptical. I've only got 30 years of development
| under my belt, but even a 1 minute compile time is dwarfed
| by the time it takes to write and reason about code, run
| tests, work with version control, etc.
|
| Further, using Rust as an example, even a project which
| takes 5 minutes to build cold only takes a second or two on
| a hot build thanks to caching of already-built artifacts.
|
| Which leaves any compile time improvements to the very
| first time the project is cloned and built.
|
| Consequently, faster compile times would not alter my
| development practices, nor allow me to iterate any faster.
| zdragnar wrote:
| > Consequently, faster compile times would not alter my
| development practices, nor allow me to iterate any
| faster.
|
| I think the web frontend space is a really good case for
| fast compile times. It's gotten to the point that you can
| make a change, save a file, the code recompiles and is
| sent to the browser and hot-reloaded (no page refresh)
| and your changes just show up.
|
| The difference between this experience and my last time
| working with Ember, where we had long compile times and
| full page reloads, was incredibly stark.
|
| As you mentioned, the hot build with caching definitely
| does a lot of heavy lifting here, but in some
| environments, such as a CI server, having minutes long
| builds can get annoying as well.
|
| > Consequently, faster compile times would not alter my
| development practices, nor allow me to iterate any
| faster.
|
| Maybe, maybe not, but there's no denying that faster
| feels nicer.
| timschmidt wrote:
| > Maybe, maybe not, but there's no denying that faster
| feels nicer.
|
| Given finite developer time, spending it on improved
| optimization and code generation would have a much larger
| effect on my development. Even if builds took twice as
| long.
| hu3 wrote:
| Can't agree. Iteration speed is magical when really fast.
|
| I'm much more productive when I can see the results
| within 1 or 2 seconds.
| timschmidt wrote:
| > I'm much more productive when I can see the results
| within 1 or 2 seconds.
|
| That's my experience today with all my Rust projects.
| Even though people decry the language for long compile
| times. As I said, hot builds, which is every build while
| I'm hacking, are exactly that fast already. Even on the
| large projects. Even on my 4 year old laptop.
|
| On a hot build, build time is dominated by linking, not
| compilation. And even halving a 1s hot build will not
| result in any noticeable change for me.
| pjmlp wrote:
| Same here, Rust currently isn't the best tooling for UI
| or game development.
| bigstrat2003 wrote:
| Yeah, I agree. Much like how the time you spend thinking
| about the code massively outweighs the time you spend
| writing the code, the time you spend writing the code
| massively outweighs the time you spend compiling the
| code. I think the fascination with compiler performance
| is focusing on by _far_ the most insignificant part of
| development.
| dolmen wrote:
| This underestimates that running the code is part of the
| development process.
|
| With fast compile times, running the test suite (which
| requires recompiling it) is fast too.
|
| Also, if the language itself is designed to make it easy to
| write a fast compiler, that also makes your IDE fast.
|
| And just if you're wondering, yes, Go is my dope.
| eyegor wrote:
| > even a 1 minute compile time is dwarfed by the time it
| takes to write and reason about code, run tests, work
| with version control, etc.
|
| You are far from the embedded world if you think 1 minute
| here or there is long. I have been involved with many
| projects that take hours to build, usually caused by
| hardware generation (fpga hdl builds) or poor cross
| compiling support (custom/complex toolchain
| requirements). These days I can keep most of the custom
| shenanigans in the 1hr ballpark by throwing more compute
| at a very heavy emulator (to fully emulate the
| architecture) but that's still pretty painful. One day
| I'll find a way to use the zig toolchain for cross
| compiles but it gets thrown off by some of the c macro or
| custom resource embedding nonsense.
|
| Edit: missed some context on lazy first read so ignore
| the snark above.
| timschmidt wrote:
| > Edit: missed some context on lazy first read so ignore
| the snark above.
|
| Yeah, 1 minute was the OP's number, not mine.
|
| > fpga hdl builds
|
| These are another thing entirely from software
| compilation. Placing and routing is a Hard Problem(TM)
| which evolutionary algorithms only find OK solutions for
| in reasonable time. Improvements to the algorithms for
| such carry broad benefits. Not just because they could be
| faster, but because being faster allows you to find
| better solutions.
| minitech wrote:
| > Further, using Rust as an example, even a project which
| takes 5 minutes to build cold only takes a second or two
| on a hot build thanks to caching of already-built
| artifacts.
|
| So optimizing compile times isn't worthwhile because we
| already do things to optimize compile times? Interesting
| take.
|
| What about projects for which hot builds take
| significantly longer than a few seconds? That's what I
| assumed everyone was already talking about. It's
| certainly the kind of case that I most care about when it
| comes to iteration speed.
| timschmidt wrote:
| > So optimizing compile times isn't worthwhile because we
| already do things to optimize compile times?
|
| That seems strange to you? If build times constituted a
| significant portion of my development time I might think
| differently. They don't. Seems the compiler developers
| have done an excellent job. No complaints. The Pareto
| principle and law of diminishing returns apply.
|
| > What about projects for which hot builds take
| significantly longer than a few seconds?
|
| A hot build of Servo, one of the larger Rust projects I
| can think of off the top of my head, takes just a couple
| seconds, mostly linking. You're thinking of something
| larger? Which can't be broken up into smaller compilation
| units? That'd be an unusual project. I can think of lots
| of things which are probably more important than
| optimizing for rare projects. Can't you?
| minitech wrote:
| The part that seems strange to me is your evidence that
| multi-minute compile times are acceptable being couple-
| second compile times. It seems like everyone actually
| agrees that couple-second iteration is important.
| timschmidt wrote:
| It seems like you might have missed a few words in this
| comment, I'm honestly having trouble parsing it to figure
| out what you're trying to say.
|
| Just for fun, I kicked off a cold build of Bevy, the
| largest Rust project in my working folder at the moment,
| which has 830 dependencies, and that took 1m 23s. A
| second hot build took 0.22s. Since I only have to do the
| cold build once, right after cloning the repository which
| takes just as long, that seems pretty great to me.
|
| Are you telling me that you need faster build times than
| 0.22s on projects with more than 800 dependencies?
| minitech wrote:
| This is the context:
|
| > > The reason to care about compile time is because it
| affects your iteration speed. You can iterate much faster
| on a program that takes 1 second to compile vs 1 minute.
|
| > Color me skeptical. I've only got 30 years of
| development under my belt, but even a 1 minute compile
| time is dwarfed by the time it takes to write and reason
| about code, run tests, work with version control, etc.
|
| If your counterexample to 1-minute builds being
| disruptive is a 1-second hot build, I think we're just
| talking past each other. Iteration implies hot builds. A
| 1-minute hot build is disruptive. To answer your earlier
| question, I _don't_ experience those in my current Rust
| projects (where I'm usually iterating on `cargo check`
| anyway), but I did in C++ projects (even trivial ones
| that used certain pathological libraries) as well as some
| particularly badly-written Node ones, and build times are
| a serious consideration when I'm making tech decisions.
| (The original context seemed language-agnostic to me.)
| timschmidt wrote:
| I see. Perhaps I wasn't clear. I've never encountered a 1
| minute hot build in Rust, and given my experience with
| large Rust codebases like Bevy I'm not even sure such a
| thing exists in a real Rust codebase. I was pointing out
| that no matter how slow a cold build is, hot builds are
| fast, and are what matters most for iteration. It seems
| we agree on that.
|
| I too have encountered slow builds in C++. I can't think
| of a language with a worse tooling story. Certainly good
| C++ tooling exists, but is not the default, and the
| ecosystem suffers from decades of that situation.
| Thankfully modern langs do not.
| magicalhippo wrote:
| I've worked with Delphi where a recompile takes a few
| seconds, and I've worked with C++ where a similar
| recompile takes a long time, often 10 minutes or more.
|
| I found I work very differently in the two cases. In
| Delphi I use the compiler as a spell checker. With the
| C++ code I spent much more time looking over the code
| before compiling.
|
| Sometimes though you're forced to iterate over small
| changes. Might be some bug hunting where you add some
| debug code that allows you to narrow things a bit more,
| add some more code and so on. Or it might be some UI
| thing where you need to check to see how it looks in
| practice. In those cases the fast iteration really helps.
| I found those cases painful in C++.
|
| For important code, where the details matter, then yeah,
| you're not going to iterate as fast. And sometimes
| forcing a slower pace might be beneficial, I found.
| ben-schaaf wrote:
| clang -O0 is significantly slower than tcc. So the problem is
| clearly not (just) the optimisation passes.
| rudedogg wrote:
| > If you want fast compile times then use a compiler w/ no
| optimization passes, done. The compile times are now linear
| w/ respect to lines of code & there is provably no further
| improvement you can make on that b/c any further passes will
| add a linear or superlinear amount of overhead, depending on
| the complexity of the optimization.
|
| Umm, this is completely wrong. Compiling involves a lot of
| stuff, and both language design and compiler design can make
| or break compile times. Parsing is relatively easy to make
| fast and linear, but the other stuff (semantic analysis) is
| not. Hence the huge range of compile times across programming
| languages that are (mostly) the same.
| OtomotO wrote:
| I think it highly depends on what you're doing.
|
| For some work I tend to take a pen and paper and think about
| a solution, before I write code. For these problems compile
| time isn't an issue.
|
| For UI work on the other hand, it's invaluable to have fast
| iteration cycles to nail the design, because it's such an
| artistic and creative activity.
| loeg wrote:
| People are (reasonably) objecting to Clang's performance even
| in -O0. Still, it is a great platform for sophisticated
| optimizations and code analysis tools.
| WD-42 wrote:
| When you are writing and thinking about code, you are making
| progress. You are building your mental model and increasing
| understanding. Compile time is often an excuse to check
| hacker news while you wait for the artifact to confirm your
| assumptions. Of course faster compile times are important,
| even if dwarfed relatively by writing the code. It's not the
| same kind of time.
| indymike wrote:
| Faster compiler means developers can go faster.
| bsder wrote:
| > but when it comes to emitting great assembly, it is almost
| unbeatable.
|
| "Proebsting's Law: Compiler Advances Double Computing Power
| Every 18 Years"
|
| The implication is that doing the easy, obvious and fast
| optimizations is Good Enough(tm).
|
| Even if LLVM is God Tier(tm) at optimizing, the cost for those
| optimizations swings against them very quickly.
| mitchellh wrote:
| It only uses the LLVM backend for release builds. For debug
| builds Zig now defaults to self-hosted (on supported platforms,
| only x86_64 right now). Debug builds are directly in the hot
| path of testing your software (this includes building test
| binaries), and the number of iterations you do on a debug build
| are multiple orders of magnitude higher than release builds, so
| this is a good tradeoff.
| brabel wrote:
| The post says LLVM is used in all the builds they measured,
| since the self-hosted backend still can't compile Ghostty.
| dundarious wrote:
| Incorrect, the last build is substantially faster and is
| using the "native" debug backend.
| mmastrac wrote:
| LLVM is a trap. You bootstrap extra fast, you get all sorts of
| optimization passes and platforms for free, but you lose out on
| the ability to tune the final optimization passes and performance
| of the linking stages.
|
| I think we'll see Cranelift take off in Rust quite soon, though
| I also think Rust wouldn't be the juggernaut of a language it
| is if they hadn't stuck with LLVM in those early years.
|
| Go seems to have made the choice long ago to eschew outsourcing
| of codegen and linking and done well for it.
| OtomotO wrote:
| Huh, are there plans on using cranelift for release builds?
| veber-alex wrote:
| No
| 01HNNWZ0MV43FF wrote:
| If it's a small overhead I might
| mmastrac wrote:
| "While it does not have the aggressive optimization stance
| (or complexity) of larger compilers such as LLVM or gcc,
| its output is often not too far off: the same paper showed
| Cranelift generating code that ran ~2% slower than V8
| (TurboFan) and ~14% slower than an LLVM-based system."
|
| -- https://cranelift.dev/
| azakai wrote:
| The Cranelift website does have that quote, but the
| linked paper says
|
| > The resulting code performs on average 14% better than
| LLVM -O0, 22% slower than LLVM -O1, 25% slower than LLVM
| -O2, and 24% slower than LLVM -O3
|
| So it is more like 24% slower, not 14%. Perhaps a typo
| (24/14), or they got the direction mixed up (it is +14 vs
| -24), or I'm reading that wrong?
|
| Regardless, those numbers are on a particular set of
| database benchmarks (TPC-H), and I wouldn't read too much
| into them.
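Part of the confusion is that "X% better" and "X% slower" are not inverses, so the two sources can't be compared without normalizing. A back-of-envelope conversion of the quoted figures into runtime ratios (assuming "14% better" is a throughput claim; the numbers are taken from the paper as quoted above):

```python
# Runtime of Cranelift output relative to each LLVM level.
# "Performs 14% better than -O0" (throughput) means Cranelift's
# output takes 1/1.14 of -O0's time; "22% slower than -O1" etc.
# are already runtime ratios.
rel_runtime = {
    "-O0": 1 / 1.14,   # ~0.88x: Cranelift beats unoptimized LLVM
    "-O1": 1.22,
    "-O2": 1.25,
    "-O3": 1.24,
}
for level, ratio in rel_runtime.items():
    print(f"vs LLVM {level}: Cranelift output takes {ratio:.2f}x the time")
```

On this reading, the website's "~14% slower than an LLVM-based system" doesn't match any of the paper's runtime ratios against optimized LLVM, which supports the typo/direction-mix-up hypothesis.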
| mmastrac wrote:
| Interesting. Thanks for those numbers. I'd be interested
| in trying some real-world applications myself.
| dontlaugh wrote:
| Even 14% would be unacceptably slow for a system
| language.
|
| I don't think that means it's not doable, though.
| goku12 wrote:
| The artifact's execution speed doesn't seem to be
| Cranelift's priority. They're instead focusing on
| compilation speed and security. Those are still useful in
| Rust for debug builds at least. That's when we need a
| quick turnaround time and as much verification as
| possible.
| anematode wrote:
| Besides the potential typo noted by another comment, I'd
| also note that the benchmarks are on a WebAssembly
| compiler, which has already gone through the LLVM wringer
| and thus undergone many high-level optimizations,
| including those reliant on alias analysis (which
| Cranelift is quite conservative on, iirc). I'd also
| imagine that the translation from LLVM IR to WASM
| discards some useful information that the LLVM backend
| could otherwise have used to generate better assembly.
|
| That's not to say Cranelift isn't a fantastic piece of
| tech, but I wouldn't take the "14%" or "24%" number at
| face value.
| phplovesong wrote:
| Both Go and OCaml have really, really fast compilers. They did
| the work from the get-go, and now won't have to pay the price.
|
| I can't see myself ever again working on a system with compile
| times that take over a minute or so (prod builds not counting).
|
| I wish more projects would have their own "dev" compiler that
| would not do all the shit LLVM does, and only use LLVM for the
| final prod build.
| pjmlp wrote:
| They show a return to how fast compilers used to be, before C
| and C++ took over the zeitgeist of compilation times.
|
| Eiffel, Common Lisp, Java and .NET (C#, F#, VB) are other
| examples, where we can enjoy fast development loops.
|
| By combining JIT and AOT workflows, you can get the best of
| both worlds.
|
| I think the main reason this isn't as common is the effort
| that it takes to keep everything going.
|
| I am in the same camp regarding build times, as I keep
| looking for my Delphi experience when not using it.
| debugnik wrote:
| As much as I love F#, I wouldn't add it to a list of fast
| compilers; although I really appreciate that it supports
| actual file-scoped optimizations and features for compile-
| time inlining.
| rafaelmn wrote:
| I wouldn't even put C# in there - minute-long builds are
| nothing unusual, and my only experience with Java is
| Android, but that was really bad too.
|
| They aren't C++ levels of bad, but they are slow enough to
| be distracting/flow-breaking. Something like Dart/Flutter,
| or even TS and frontend with hot reload, is much leaner.
| Comparing to fully dynamic languages is kind of unfair in
| that regard.
|
| I have not tried Go yet, but from what I've read (and also
| seeing the language design philosophy) I suspect it's
| faster than C#/Java.
| pjmlp wrote:
| Android is an anti-pattern in the Java world, and having a
| build system based on a relatively slow scripting
| language hardly helps.
|
| Many things that Google uses to sell Java (the language)
| over Kotlin also stem from how badly they approach the
| whole infrastructure.
|
| Try using Java on Eclipse with compilation on save.
| tialaramex wrote:
| Right, the work I get paid to do is often C# and
| literally yesterday I was cursing a slow C# build. Why is
| it slow? Did one of the other engineers screw up
| something about the build, maybe use a feature that's
| known to take longer to compile, or whatever? I don't
| know or care.
|
| This idea that it's all sunshine and lollipops for other
| languages is wrong.
| pjmlp wrote:
| The fun of tracking down slow autoconf builds with
| spaghetti Makefiles in C that take a full hour to do a
| clean build.
|
| Let's not mix build tools with compilers.
| pron wrote:
| Go is slower than Java (Java's (JIT) optimising compiler
| and GCs are much more advanced than Go's).
| pjmlp wrote:
| It never took more than a few seconds for me.
| citrin_ru wrote:
| > be, before C and C++ took over the zeitgeist of
| compilation times.
|
| I wouldn't put them together. C compilation is not the
| fastest, but fast enough to not be a big problem. C++ is a
| completely different story: not only is it orders of
| magnitude slower (10x slower is probably not the limit), on
| some codebases the compiler needs a few GB of RAM (so you
| have to set -j below the number of CPU cores to avoid OOM).
| pjmlp wrote:
| Back in 1999-2003, when I was working on a product
| mixing Tcl and C, the builds took one hour per platform,
| across AIX, Solaris, HP-UX, Red Hat Linux, and Windows
| 2000/NT build servers.
|
| C++ builds can be very slow versus plain old C, yes,
| assuming people make all the mistakes there are to make.
|
| Like overuse of templates, not using binary libraries
| across modules, not using binary caches for object files
| (ClearMake style already available back in 2000), not
| using incremental compilation and incremental linking.
|
| To this day, my toy GTK+/Gtkmm application that I used
| for a C/C++ Users Journal article, and have since ported
| to Rust, compiles faster in C++ than in Rust in a clean
| build, exactly because I don't need to start from the
| world genesis for all dependencies.
| adastra22 wrote:
| That's not really an apples to apples comparison, is it?
| pjmlp wrote:
| I dunno why; the missing apple on the Rust side is not
| embracing binary libraries like C and C++ do.
|
| Granted there are ways around it for similar
| capabilities, however they aren't the default, and
| defaults matter.
| john01dav wrote:
| You only build the world occasionally, like when cloning
| or starting a new project, or when upgrading your rustc
| version. It isn't your development loop.
|
| I do think that dynamic libraries are needed for better
| plugin support, though.
| epage wrote:
| > You only build the world occasionally
|
| Unless a shared dependency gets updated, RUSTFLAGS
| changes, a different feature gets activated in a shared
| dependency, etc.
|
| If Cargo had something like binary packages, they would
| be opaque to the rest of your project, making them less
| sensitive to change. It's also hard to share builds
| between projects because of this sensitivity to
| differences.
| adastra22 wrote:
| Except for RUSTFLAGS changes (which aren't triggered by
| external changes), those only update the affected
| dependencies.
| epage wrote:
| I talked a bit about this at the Rust All Hands back in
| May.
|
| A lot of Rust packages that people use are set up more
| like header-only libraries. We're starting to see more
| large libraries that better fit the model of binary
| libraries, like Bevy and Gitoxide. I'm laying down a
| vague direction for something more binary-library like
| (calling them opaque dependencies) as part of the `build-
| std` effort (allow custom builds of the standard library)
| as that is special cased as a binary library today.
| Bolwin wrote:
| One hour? How did you develop anything?
| jitl wrote:
| That's probably why Tcl is in there: you use uncompiled
| scripting to orchestrate the native code, which is the
| part that takes hours to compile.
| john01dav wrote:
| I have seen projects where ninja, instead of make (both
| generated by CMake), is able to cleverly invoke the
| compiler such that the CPU is saturated and RAM isn't
| exhausted, where make couldn't (it reached OOM).
| pjmlp wrote:
| As I can't update my comment, here is some info out of
| the Turbo Pascal 5.5 marketing brochure:
|
| > Fast! Compiles 34 000 lines of code per minute
|
| This was measured on an IBM PS/2 Model 60.
|
| So let's put this into perspective: Turbo Pascal 5.5 was
| released in 1989.
|
| The IBM PS/2 Model 60 is from 1987, with an 80286 running
| at 10 MHz, limited to 640 KB; with luck one would expand
| it up to 1 MB and use the HMA, in what concerns using it
| with MS-DOS.
|
| Now projecting this to 2025, there is no reason that
| compiled languages, when using a limited set of
| optimizations like TP 5.5's at -O0, can't be flying in
| their compilation times, as seen in good examples like D
| and Delphi, two expressive languages with rich type
| systems.
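The brochure figure can be turned into a rough per-line cycle budget (naive arithmetic that ignores memory and disk, included only for perspective):

```python
# 34,000 lines/minute on a 10 MHz 80286, per the brochure.
lines_per_minute = 34_000
clock_hz = 10_000_000            # 10 MHz

lines_per_second = lines_per_minute / 60
cycles_per_line = clock_hz / lines_per_second

print(f"{lines_per_second:.0f} lines/s")      # 567 lines/s
print(f"{cycles_per_line:.0f} cycles/line")   # 17647 cycles/line
```

At that same cycles-per-line budget, a single modern core in the GHz range would exceed 200,000 lines per second; naive scaling, but it gives a sense of the headroom the comment is pointing at.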
| 1313ed01 wrote:
| Reminds me of this old series of posts on the Turbo
| Pascal compiler (they have been shared a few times on HN
| in the past):
|
| A Personal History of Compilation Speed (2 parts):
| https://prog21.dadgum.com/45.html
|
| "Full rebuilds were about as fast as saying the name of
| each file in the project aloud. And zero link time.
| Again, this was on an 8MHz 8088."
|
| Things That Turbo Pascal is Smaller Than:
| https://prog21.dadgum.com/116.html
|
| Old versions of Turbo Pascal running in FreeDOS on the
| bare metal of a 21st century PC is how fast and
| responsive I wish all software could be, but never is.
| Press a key and before you have time to release it the
| operation you started has already completed.
| thechao wrote:
| I recently saw an article about someone improving the
| machine code generation time of an assembler, here; I
| idly noticed that the scale was the same number of
| instructions we had in the budget to compile whole lines
| of code (expressions & all) "back in the day". It was
| weird. Of course, we're fighting bandwidth laws, so if
| you looked at the wall clock time, the machine code
| generation time was very good in an absolute sense.
| hyghjiyhu wrote:
| A problem is how people have started depending on the
| optimizations. "This tower of abstractions is fine, the
| optimizer will remove it all." The result is that some
| modern idioms run slow as molasses without optimization,
| and you can't really use -O0 at all.
| Someone wrote:
| Indeed. Also,
|
| - Turbo Pascal was compiling at -O1, at best. For
| example, did it ever inline function calls?
|
| - it's harder to generate halfway decent code for modern
| CPUs with deep pipelines, caches, and branch predictors
| than it was for the CPUs of the time.
| epolanski wrote:
| > its harder to generate halfway decent code for modern
| CPUs with deep pipelines
|
| Shouldn't be the case for an -O0 build.
| nine_k wrote:
| Can you give an example? AFAICT monomorphization takes
| the major portion of time, and it's not even a result of
| some complicated abstraction.
| aidenn0 wrote:
| Turbo Pascal was an outlier in 1989 though. The funny
| thing is that I remember Turbo C++ being an outlier in
| the opposite direction.
|
| In my computer science class (which used Turbo C++),
| people would try to get there early in order to get one
| of the two 486 machines, as the compilation times were a
| huge headache (and this was without STL, which was new at
| the time).
| magicalhippo wrote:
| Our Delphi codebase at work is 1.7 million lines of code,
| takes about 40 seconds on my not very spicy laptop to do
| a full release build.
|
| That's with optimizations turned on, including automatic
| inlining, as well as a lot of generics and such jazz.
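The same back-of-envelope arithmetic applies to this Delphi figure (using only the numbers stated above):

```python
# 1.7 million lines in a roughly 40-second full release build:
lines = 1_700_000
seconds = 40
throughput = lines / seconds
print(f"{throughput:,.0f} lines/s")   # 42,500 lines/s
```

That is roughly 75x the Turbo Pascal brochure's 567 lines/s, with optimizations on, which illustrates the earlier point about how fast compilers for expressive languages can be.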
| epolanski wrote:
| Modern compilers do an insane amount of work compilers
| didn't do just a decade or two ago, let alone 35 years
| ago.
|
| But I somewhat agree that for -O0 the current times are
| not satisfactory, at all.
| foobarbaz33 wrote:
| I have first-hand experience of painfully slow C# compile
| times. Sprinkle in a few extra slow things like
| EDMX-generated files (not C# but part of the MS ecosystem)
| and it has no business being in a list of fast-compiling
| languages.
| pjmlp wrote:
| I can do the same with some C code I worked on at
| enterprise scale.
|
| Let's apply the same rules then.
| anymouse123456 wrote:
| This.
|
| The C# compiler is brutally slow and the language idioms
| encourage enormous amounts of boilerplate garbage, which
| slows builds even further.
| mrsmrtss wrote:
| Painfully slow? On a modern machine a reasonably sized
| solution will compile almost instantly.
| swiftcoder wrote:
| > Both Go and OCaml have really, really fast compilers. They
| did the work from the get-go, and now won't have to pay the
| price.
|
| People tend to forget that LLVM was pretty much that for the
| C/C++ world. Clang was worlds ahead of GCC when first
| released (both in speed and in quality of error messages),
| and Clang was explicitly built from the ground up to take
| advantage of LLVM.
| baby wrote:
| Yet OCaml is unusable. Focus on user experience before
| focusing on speed. Golang did both, tbh.
| EasyMark wrote:
| I can always see myself working wherever the money leads me.
| I see languages as tools and not a "this not that" or "ewww".
| Sure, I have my preferences, but they all keep a Rivian over
| my head and a nice Italian leather couch to lay my head on.
| dbtc wrote:
| How is it a trap if languages are able to move away?
| pjmlp wrote:
| Not all do; some stick with their decision to live with LLVM,
| e.g. Swift, Julia, Mojo, ...
| akkad33 wrote:
| But isn't there a reason why so many start with LLVM? It's
| highly optimised. I think Mojo is not LLVM but MLIR.
| pjmlp wrote:
| It is the modern version of a compiler construction
| toolkit, which by the way isn't a new idea, even though
| so many think LLVM was the first one.
|
| One example from the 1980s (there are others to pick
| from):
|
| https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit
|
| So it is naturally a way to quickly reach the stage where
| a compiler is available for a brand new language, without
| having to write all the compiler stages by hand.
|
| A kind of middleware for writing compilers, if you wish.
|
| MLIR is part of the LLVM tooling, and is the evolution of
| LLVM IR.
| melodyogonna wrote:
| Mojo uses MLIR for its high-level IR and most
| optimizations, but codegen is still done with LLVM, albeit
| in an unconventional manner.
| pjmlp wrote:
| Additionally, any language that wants to be a C++ replacement
| keeps a dependency on C++ as long as LLVM is part of its
| tooling infrastructure.
|
| For me, beyond the initial adoption hump, programming languages
| should bootstrap themselves; if nothing else, it reduces the
| urban myth that C or C++ always have to be part of the equation.
|
| Not at all; in most cases it is a convenience. Writing usable
| compilers takes time, and it is an easy way to get everything
| else going, especially when it comes to porting across multiple
| platforms.
|
| However, that doesn't make them magical tools without which it
| is impossible to write compilers.
|
| That is one area where I am fully on board with Go team's
| decisions.
| goodpoint wrote:
| GCC is the way.
| nikic wrote:
| > LLVM is a trap.
|
| Is it? I think Rust is a great showcase for why it isn't. Of
| course it depends somewhat on your compiler implementation
| approach, but actual codegen-to-LLVM tends to only be a tiny
| part of the compiler, and it is not particularly hard to
| replace it with codegen-to-something-else if you so desire.
| Which is why there is now codegen_cranelift, codegen_gcc, etc.
|
| The main "vendor lock-in" LLVM has is if you are exposing the
| tens of thousands of vendor SIMD intrinsics, but I think that's
| inherent to the problem space.
|
| Of course, whether you're going to find another codegen backend
| (or are willing to write one yourself) that provides similar
| capabilities to LLVM is another question...
|
| > You bootstrap extra fast, you get all sorts of optimization
| passes and platforms for free, but you lose out on the ability
| to tune the final optimization passes and performance of the
| linking stages.
|
| You can tune the pass pipeline when using LLVM. If your
| language is C/C++ "like", the default pipeline is good enough
| that many such users of LLVM don't bother, but languages that
| differ more substantially will usually use fully custom
| pipelines.
|
| > I think we'll see cranelift take off in Rust quite soon,
| though I also think it wouldn't be the juggernaut of a language
| if they hadn't stuck with LLVM those early years.
|
| I'd expect that most (compiled) languages do well to start with
| an LLVM backend. Having a tightly integrated custom backend can
| certainly be worthwhile (and Go is a great success story in
| that space), but it's usually not the defining feature of
| the language, and there is a great opportunity cost to
| implementing one.
| crawshaw wrote:
| It is a "trap" in the sense that it limits the design space
| for your language by forcing some of the choices implicit in
| C++. Rust went with slow build times like C++ and so was not
| held back in this dimension by LLVM.
|
| Nothing about LLVM is a trap for C++ as that is what it was
| designed for.
| bfrog wrote:
| I hope cranelift continues to grow too. LLVM tragically has
| become fork hell. Every cpu vendor now seems to have a fork
| with enhancements for their cpu arch in some closed source
| package. It's hellish as soon as you want to use another
| language frontend or find the inevitable compiler bug.
| pjmlp wrote:
| That is the magic of MIT like licenses.
|
| How much do you think the BSDs get back from Apple and Sony?
| whatshisface wrote:
| A trap? New language projects would _never_ be able to develop
| the architecture support and optimizer. There's nothing new in
| the linking stage you couldn't submit as a patch to LLVM
| upstream.
| EasyMark wrote:
| Seems more like a trade-off than a trap. Everything has a
| price. I think LLVM is well worth the tradeoffs for all the
| stuff that it gives you. I'm sure that some better tech will
| replace it but calling it a trap seems like an exaggeration, it
| seems to work well enough for even the biggest companies and
| countries in the world.
| stephc_int13 wrote:
| The gold standard for compiler speed so far is TCC. (Fabrice
| Bellard)
|
| It is not even super optimized (single thread, no fancy tricks)
| but it is so far unbeaten by a large margin. Of course I use
| Clang for releases, but the codegen of tcc is not even awful.
| Rochus wrote:
| The Go compiler is also pretty fast.
| klysm wrote:
| Maybe typescript will catch up
| satvikpendem wrote:
| It's being rewritten in Go so yes it will
| serial_dev wrote:
| The compiler might be rewritten in Go, but the Typescript
| and Go languages themselves are very different.
|
| Go's main objectives were fast builds and a simple
| language.
|
| Typescript is tacked onto another language that doesn't
| really care about TS with three decades of warts, cruft
| and hacks thrown together.
| veber-alex wrote:
| It's hard to say.
|
| On the one hand the go type system is a joke compared to
| typescript so the typescript compiler has a much harder
| job of type checking. On the other hand once type
| checking is done typescript just needs to strip the types
| and it's done while go needs to optimize and generate
| assembly.
| levodelellis wrote:
| I'd call it not terribly slow. IIRC it didn't hit 100k lines
| per second on an M2, while I saw over two million with my own
| compiler (I
| used multiple instances of tcc since I didn't want to deal
| with implementing debug info on mac)
| c0balt wrote:
| What specifically are those 100k lines though? Golang code
| that, e.g., uses generics is significantly heavier than
| code that doesn't.
| burnt-resistor wrote:
| There is no singular "gold standard".
|
| vlang is really fast, recompiling itself entirely within a
| couple of seconds.
|
| And Go's compiler is pretty fast for what it does too.
|
| No one platform has a unique monopoly on build efficiency.
|
| And also there are build caching tools and techniques that
| obviate the need for recompilation altogether.
| von_lohengramm wrote:
| > vlang is really fast, recompiling itself entirely within a
| couple of seconds.
|
| Does V still just output C and use TCC under the hood?
| shakna wrote:
| Yes. [0]
|
| https://github.com/vlang/v/blob/master/Makefile#L18
| giancarlostoro wrote:
| So in theory, I can make Nim do the same?
| cb321 wrote:
| I use tcc with Nim all the time and it yields almost
| interpreted REPL-like code-iteration speed (like
| 100-500ms per compile-link - _almost_ ENTER-see-results).
|
| As a warning, you need to be sure --lineDir=off in your
| nim.cfg or config.nims (to side-step the infinite loop
| mentioned in https://github.com/nim-lang/Nim/pull/23488).
| You may also need to --tlsEmulation=on if you do threads
| and you may want to --passL=-lm.
|
| tcc itself is quite fast. Faster for me than a Perl5
| interpreter start-up (with both on trivial files).
| E.g., on an i7-1370P:
|             tim "tcc -run /t/true.c" "/bin/perl</n"
|             58.7 +- 1.3 ms  (AlreadySubtracted)Overhead
|             437 +- 14 ms    tcc -run /t/true.c
|             589 +- 10 ms    /bin/perl</n
|
| { EDIT: /n -> /dev/null and true.c is just "int main(int
| ac,char*av){return 0;}". }
| summarity wrote:
| Ah nice, I've been meaning to try TCC with Nim.
| stephc_int13 wrote:
| We are talking about compilation speed, not the harness that
| you could use around it.
|
| I never tried vlang but I would say this is pretty niche,
| while C is a standard.
|
| AFAIK tcc is unbeaten, and if we want to be serious about
| compilation speed comparison, I'd say tcc is a good baseline.
| xnickb wrote:
| Build caching is nice until you have to disable it because it
| broke something somewhere. At which point it becomes very
| hard to turn it back on.
|
| In other words, one needs to have absolute trust in such
| tools to be able to rely on them.
| gertop wrote:
| I've never had an issue with ccache and I've been using it
| forever. Are you using it over a networked filesystem or
| something like that for it to become confused about what
| needs recompiling?
| Kranar wrote:
| Vlang uses tcc though...
| pkphilip wrote:
| V Lang is underappreciated. If you want to really see its
| simplicity and expressiveness, one look at its ORM will be
| enough. It is a thing of beauty:
|
| https://modules.vlang.io/orm.html
| codr7 wrote:
| There they go, making the same stupid mistakes as most
| other ORMs out there: 1:1-mapping structs to tables as
| well as inventing their own SQL dialect.
| teo_zero wrote:
| > The gold standard for compiler speed so far is TCC.
|
| I only wish it supported C23...
| oguz-ismail wrote:
| Any big projects that don't compile without C23 support?
| flohofwoe wrote:
| Not yet AFAIK, but unlike the 'inbetween' versions since
| C99, C23 actually has a number of really useful features,
| so I expect that a couple of projects will start using C23
| features. The main problem is as always MSVC, but then
| maybe it's time to ditch MSVC support since Microsoft also
| seems to have abandoned it (not just the C compiler, but
| also the C++ compiler isn't seeing lots of updates since
| everybody working on Visual Studio seems to have been
| reassigned to work in the AI salt mines).
| pjmlp wrote:
| And Rust, as the official system programming language for
| Azure.
|
| However there are several C++23 goodies on latest VC++,
| finally.
|
| Also lets not forget Apple and Google no longer are that
| invested into clang, rather LLVM.
|
| It is up for others to bring clang up to date regarding
| ISO.
| flohofwoe wrote:
| Clang's support for C23 looks actually pretty good:
|
| https://en.cppreference.com/w/c/compiler_support/23.html
|
| (not sure how much Apple/Google even cared about the C
| frontend before though, but at least keeping the C
| frontend and stdlib up to date by far doesn't require as
| much effort as C++).
| pjmlp wrote:
| Apple used to care before Swift, because Objective-C,
| unlike C++, is a full superset of C.
|
| Most of the work going into LLVM ecosystem is directly
| into LLVM tooling itself, clang was started by Apple, and
| Google picked up on it.
|
| Nowadays they aren't as interested, given Swift, C++ on
| Apple platforms is mostly for MSL (C++14 baseline) and
| driver frameworks (also a C++ subset), Google went their
| own way after the ABI drama, and they care about what
| fits into their C++ style guide.
|
| I know Intel is one of the companies that picked up some
| of the work, yet other compiler vendors that replaced
| their proprietary forks with clang don't seem that eager
| to contribute upstream, other than LLVM backends for
| their platforms.
|
| Such is the wonders of Apache 2.0 license.
| 1718627440 wrote:
| Ladybird.
| Narishma wrote:
| Isn't that C++?
| brabel wrote:
| I thought DMD which compiles both C and D was the gold
| standard!?
| pjmlp wrote:
| I would rather vouch for Delphi, as a much more expressive, and
| safer language, while having blazing fast compile times.
| pkphilip wrote:
| True. Incremental compilation in Delphi is quite something
| else.
| levodelellis wrote:
| Yes. I once was thinking I should write a x86-64 codegen for my
| own compiler, but using tcc I was able to reach 3 million lines
| in a second (using multi-threading and multiple instances of
| tcc) so I called it a day. I never did check out if the linking
| was in parallel
| dcsommer wrote:
| Is there a straightforward path to building Zig with polyglot
| build systems like Bazel and Buck2? I'm worried Zig's reliance on
| Turing complete build scripts will make building (and caching)
| such code difficult in those deterministic systems. In Rust,
| libraries that eschew build.rs are far preferable for this
| reason. Do Zig libraries typically have a lot of custom build
| setup?
| rockwotj wrote:
| For bazel:
|
| https://github.com/aherrmann/rules_zig
|
| Real world projects like ZML uses it:
|
| https://github.com/zml/zml
| esjeon wrote:
| FYI, build scripts are completely optional. Zig can build and
| run individual source code files regardless of build scripts
| (`build.zig`). You may need to decipher the build script to
| extract flags, but that's pretty much it. You can integrate Zig
| into any workflow that accepts GCC and Clang. (Note: `zig` is
| also a drop-in replacement C compiler[1])
|
| [1]: https://andrewkelley.me/post/zig-cc-powerful-drop-in-
| replace...
| koeng wrote:
| I've really appreciated the incremental builds with zig - I
| really like having a single static binary, and I have an
| application which uses a few different libraries like SQLite and
| luau and it compiles at nearly Go speeds.
|
| That said, there are definitely still bugs in their self-hosted
| compiler. For example, for SQLite I have to use llvm -
| https://github.com/vrischmann/zig-sqlite/issues/195 - which kinda
| sucks.
| teo_zero wrote:
| Anybody know what exactly prevents Ghostty from being built
| using the self-hosted x86_64 backend?
| anonymoushn wrote:
| well, most of the users are on aarch64
| mitchellh wrote:
| Bugs. The Zig compiler crashes. It's new and complex, I'm sure
| they'll work it out soon.
| commandersaki wrote:
| Tangential: I've been writing a Java library that is pretty much
| a JNI shim to a C library. Looking at building a dynamic library
| for each of the platform / architecture combo is looking like a
| huge pain in the arse; I'm toying with the idea of using the zig
| compiler to do the compile / cross compiles and simply be done
| with it.
| mbrock wrote:
| Do yourself a favor and try it. Zig ships with headers and libc
| sources and its own LLVM distribution, and makes cross
| compilation so easy it's just ridiculous; you literally don't
| have to do anything.
| pjmlp wrote:
| If it is a plain shim, and you can use modern Java, I would
| rather make use of Panama and jextract.
| metaltyphoon wrote:
| I've been building a shared lib for both Android and iOS/iPadOS,
| and using the jni and jni_fn crates in Rust has been a godsend.
| Using cargo-ndk simplifies the building process so much.
| mischief6 wrote:
| my only wish for ghostty is that there was openbsd support. some
| work needs to be done in the io layer to support openbsd kqueue
| to make it work.
| loeg wrote:
| I'm surprised that whatever IO primitive is used on Mac doesn't
| work on OpenBSD. Mac has select and some variant of kqueue,
| right? What is ghostty doing there that doesn't work on
| OpenBSD?
|
| E.g., here it is kqueue-aware on FreeBSD:
| https://github.com/mitchellh/libxev/blob/34fa50878aec6e5fa8f...
|
| Might not be that different to add OpenBSD. Someone would begin
| here:
| https://github.com/mitchellh/libxev/blob/main/src/backend/kq...
| It's about 1/3 tests and 2/3 mostly-designed-to-be-portable
| code. Some existing gaps for FreeBSD, but fixing those (and
| adding OpenBSD to some switch/enums) should get you most of the
| way there.
| geokon wrote:
| Trying to understand the Zig goals a bit better..
|
| If you care about compilation speed b/c it's slowing down
| development - wouldn't it make sense to work on an interpreter?
| Maybe I'm naive, but it seems the simpler option.
|
| Compiling for executable-speed seems inherently orthogonal to
| compilation time
|
| With an interpreter you have the potential of lots of extra
| development tools as you can instrument the code easily and you
| control the runtime.
|
| Sure, in some corner cases people need to only be debugging their
| full-optimization-RELEASE binary, and for them working on an
| interpreter, or even a DEBUG build, just doesn't make sense. But
| that's a tiny minority. Even there, you're usually optimizing a
| hot loop that's going to compile instantly anyway.
| delusional wrote:
| > Compiling for executable-speed seems inherently orthogonal to
| compilation time
|
| That's true at the limit. As is often the case there's a vast
| space in the middle, between the extremes of ultimate
| executable speed and ultimate compiler speed, where you can
| make optimizations that don't trade off against each other.
| Where you can make the compiler faster while producing the
| exact same bytecode.
| Ygg2 wrote:
| Limits are more important than implied. Language benchmarks,
| computer game performance.
| IshKebab wrote:
| I wouldn't say games are a corner case.
| dontlaugh wrote:
| For games you have to compile release builds almost all the
| time anyway, otherwise performance wouldn't be playable.
| flohofwoe wrote:
| That's only true for badly written game code though ;)
|
| Even with C++ and heavy stdlib usage it's possible to have
| debug builds that are only around 3..5 times slower than
| release builds in C++. And you need that wiggle room anyway
| to account for lower end CPUs which your game should better
| be able to run on.
| dontlaugh wrote:
| If you target consoles, you're already on the lowest end
| hardware you'll run on. It's extremely rare to have much
| (if any) headroom for graphically competitive games. 20%
| is a stretch, 3x is unthinkable.
| flohofwoe wrote:
| Then I would do most of the development work that doesn't
| directly touch console APIs on a PC version of the game
| and do development and debugging on a high-end PC. Some
| compilers also have debug options which apply
| optimizations while the code still remains debuggable
| (e.g. on MSVC: https://learn.microsoft.com/en-
| us/visualstudio/debugger/how-...) - don't know about
| Sony's toolchain though. Another option might be to only
| optimize the most performance sensitive parts while
| keeping 90% of the code in debug mode. In any case, I
| would try _everything_ before having to give up
| debuggable builds.
| dontlaugh wrote:
| It doesn't matter. You will hit (for example) rendering
| bugs that only happen on a specific version of a console
| with sufficient frequency that you'll rarely use debug
| builds.
|
| Debug builds are used sometimes, just not most of the
| time.
| pjmlp wrote:
| That only kind of works when not using console specific
| hardware.
|
| Naturally you can abstract them away as middleware does;
| however, that is usually where the performance bugs lie.
| geokon wrote:
| Just curious - is this from personal experience?
|
| I've never done it, but I just find it hard to believe
| the slow down would be that large. Most of the
| computation is on GPU, and you can set your build up such
| that you link to libraries built at different compilation
| optimizations.. and they're likely the ones doing most of
| the heavy lifting. You're not rebuilding all of the
| underlying libs b/c you're not typically debugging them.
|
| EDIT:
|
| if you're targeting a console.. why would you not debug
| using higher end hardware? If anything it's an argument
| in favor of running on an interpreter with a very high
| end computer for the majority of development..
| flohofwoe wrote:
| Yeah, around 3x slower is what I've seen when I was still
| working on reasonably big (around a million lines of
| 'orthodox C++' code) PC games until around 2020 with MSVC
| and without stdlib usage.
|
| This was building the _entire_ game code with the same
| build options though; we didn't bother to build parts of
| the code in release mode and parts in debug mode, since
| debug performance was fast enough for the game to still
| run in realtime. We also didn't use a big integrated
| engine like UE, only some specialized middleware
| libraries that were integrated into the build as source
| code.
|
| We _did_ spend quite some effort to keep both build time
| and debug performance under control. The few cases where
| debug performance became unacceptable were usually caused
| by 'accidentally exponential' problems.
|
| > Most of the computation is on GPU
|
| Not in our case, the GPU was only used for rendering. All
| game logic was running on the CPU. IIRC the biggest chunk
| was pathfinding and visibility/collision/hit checks (e.g.
| all sorts of NxM situations in the lower levels of the
| mob AI).
| geokon wrote:
| Ah okay, then that makes sense. It really depends on your
| build system. My natural inclination would be to not have
| a pathfinding system in DEBUG unless I was actively
| working on it. But it's not always very easy to set
| things up to be that way
| TinkersW wrote:
| The slowdown can be enormous if you use SIMD, I believe
| MSVC emits a write to memory after every single SIMD op
| in debug, thus the code is incredibly slow (more than
| 10x). If you have any SIMD kernels you will suffer, and
| likely switch to release, with manual markings used to
| disable optimizations per function/file.
| CooCooCaCha wrote:
| I was working on a 2d game in Rust, and no joke the debug
| build was 10x slower than release.
| pjmlp wrote:
| And if using Visual Studio, it is even possible to debug
| release builds, with dynamic debugging feature.
| chongli wrote:
| This is why games ship interpreters and include a lot of
| code scripting the gameplay (e.g in Lua) rather than having
| it all written in C++. You get fast release builds with a
| decent framerate and the ability to iterate on a lot of the
| "business logic" from within the game itself.
| weinzierl wrote:
| _" Trying to understand the Zig goals a bit better.. If you
| care about compilation speed b/c it's slowing down development
| - wouldn't it make sense to work on an interpreter?"_
|
| I've heard Rust classified as Safety, Performance, Usability in
| that order and Zig as Performance, Usability, Safety. From that
| perspective the build speed-ups make sense while an interpreter
| would only fit a triple where usability comes first.
| ModernMech wrote:
| I don't think it's right to say Zig is more performance-
| focused than Rust. Based on their designs, they should both
| be able to generate similar machine code. Indeed, benchmarks
| show that Rust and Zig perform very similarly:
| https://programming-language-benchmarks.vercel.app/rust-
| vs-z...
|
| This is why the value proposition of Zig is a question for
| people, because if you got measurably better performance out
| of Zig in exchange for worse safety that would be one thing.
| But Rust is just as performant, so with Zig you're exchanging
| safety for expressivity, which is a much less appealing
| tradeoff; the learning from C is that expressivity without
| safety encourages programmers to create a minefield of
| exploitable bugs.
| logicchains wrote:
| It's much easier to write ultra low latency code in Zig
| because doing so requires using local bump-a-pointer
| allocators, and the whole Zig standard library is built
| around that (everything that allocates must take the
| allocator as a param). Rust on the other hand doesn't make
| custom allocators easy to use, partially because proving
| they're safe to the borrow checker is tricky.
| toshinoriyagi wrote:
| I don't think Zig is about expressivity over safety. Many
| people liken Zig to C, as they do Rust to C++.
|
| Zig is a lot safer than C by nature. It does not aim to be
| as safe as Rust by default. But the language is a lot
| simpler. It also has excellent cross-compiling, much faster
| compilation (which will improve drastically as incremental
| compilation is finished), and a better experience
| interfacing with/using C code.
|
| So there are situations where one may prefer Zig, but if
| memory-safe performance is the most important thing, Rust
| is the better tool since that is what it was designed for.
| LexiMax wrote:
| The value proposition in Zig is that it has far and away
| the best C interoperability I've seen out of a language
| that wasn't also a near-superset of it. This makes
| migrations away from C towards Zig a lot more palatable
| than towards Rust unless security is your primary concern.
| quotemstr wrote:
| With Zig, you're not so much migrating away from C as
| migrating to a better syntax _for_ C.
| LexiMax wrote:
| On the contrary I would not call Zig very C-like at all,
| and that's what makes the interoperability so impressive
| to me.
|
| Zig feels like a small yet modern language that lacks
| many of C's footguns. It has far fewer implicit type
| conversions. There is no preprocessor. Importing native
| zig functionality is done via modules with no assumed
| stable ABI. Comptime allows for type and reflection
| chicanery that even C++ would be jealous of.
|
| Yet I can just include a C header file and link a C
| library and 24 times out of 25, it just works. Same thing
| if you run translate-c on a C file, and as a bonus, it
| reveals the hoops that the tooling has to go through to
| preserve the fiddly edge cases while still remaining
| somewhat readable.
| quotemstr wrote:
| Excellent way to put it.
|
| > expressivity without safety encourages programmers to
| create a minefield of exploitable bugs.
|
| Also, psychologically, the combination creates a powerful
| and dangerously false sensation of mastery in
| practitioners, especially less experienced ones. Part of
| Zig's allure, like C's bizarre continued relevance, is the
| extreme-sport nature of the user experience. It's
| exhilarating to do something dangerous and get away with
| it.
| khamidou wrote:
| Please -- rust advocates need to realize that you're not
| winning anyone over to Rust with this kind of outlandish
| comment.
|
| Zig is not an extreme sports experience. It has much
| better memory guarantees than C. It's clearly not as good
| as rust if that's your main concern but rust people need
| to also think long and hard about why rust has trouble
| competing with languages like Go, Zig and C++ nowadays.
| quotemstr wrote:
| I'm not a Rust advocate. I'm a "the whole debate is
| silly: 99% of the time you want a high level language
| with a garbage collector" advocate.
|
| What's the use case for Zig? You're in that 1% of
| projects in which you need something beyond what a
| garbage collector can deliver _and_, what, you're in the
| 1% of the 1% in which Rust's language design hurts the
| vibe or something?
|
| You can also get that "safer, but not Safe" feeling from
| modern C++. So what? It doesn't really matter whether a
| language is "safer". Either a language lets you _prove_
| the absence of certain classes of bug or it doesn't.
| It's not like C, Zig, and Rust form a continuum of
| safety. Two of these are different from the third in a
| qualitative sense.
| khamidou wrote:
| > What's the use case for Zig? You're in that 1% of
| projects in which you need something beyond what a
| garbage collector can deliver and, what, you're in the 1%
| of the 1% in which Rust's language design hurts the vibe
| or something?
|
| You want a modern language that has package management,
| bounds checks and pointer checks out of the box. Zig is a
| language you can pick up quickly whereas rust takes years
| to master. It's a good replacement to C++ if you're
| building a game engine for example.
|
| > Either a language lets you prove the absence of certain
| classes of bug or it doesn't. It's not like C, Zig, and
| Rust form a continuum of safety. Two of these are
| different from the third in a qualitative sense.
|
| Again repeating my critique from the previous comment -
| yes Zig brings in additional safety compared to C.
| Dismissing all of that out of hand does not convince
| anyone to use rust.
| epolanski wrote:
| Zig is a modern, safer, and more approachable C; that's the
| value proposition.
|
| It's aiming for the next generation of system programmers
| who aren't interested in Rust-like languages but C-like
| ones.
|
| Current system devs working on C won't find Zig or Odin's
| value propositions to matter enough or, indeed, as you point
| out, to offer enough of a different solution the way Rust
| does.
|
| But I'm 100% positive that Zig will be a huge competitor
| for Rust in the next decade because it's very appealing to
| people willing to get into system programming, but not
| interested in Rust.
| Quitschquat wrote:
| > wouldn't it make sense to work on an interpreter
|
| I like the Julia approach. Full on interactivity, the
| "interpreter" just compiles and runs your code.
|
| SBCL, my favorite Common Lisp, works the same way, I think.
| self_awareness wrote:
| For me, Zig the language is "meh", but Zig the toolchain is very
| interesting. It's really ambitious, to the point I think it can't
| work in reality in the long term, but I want it to succeed.
| sgt wrote:
| Honestly a bit surprised that he waits 16 seconds after every
| minor change to Ghostty while developing. That sounds painful.
| Glad Zig build times are getting faster though, and I understand
| he is still on LLVM.
| loeg wrote:
| I work in a large C++ codebase where incremental builds can
| take minutes. It's flow-interrupting, but doesn't make it
| impossible to be productive. You just can't rely too heavily on
| tight printf debugging cycles or you waste a lot of time
| waiting.
| Simran-B wrote:
| How does Zig's compilation and codegen compare to TPDE? The claim
| is that it's 10-20x faster than LLVM with -O0 but with
| limitations.
| onetom wrote:
| Great.
|
| It's finally as snappy as recompiling the "Borland Pascal version
| of Turbo Vision for DOS" was on an Intel 486 in 1995, when I
| graduated from high school...
|
| The C version of Turbo Vision was 5-10x slower to compile at
| that time too.
|
| Turbo Vision is a TUI windowing framework, which was used for
| developing the Borland Pascal and C++ IDEs. Kinda like a
| character mode JetBrains IDE in 10 MB instead of 1000 MB...
|
| https://en.m.wikipedia.org/wiki/Turbo_Vision
| ComputerGuru wrote:
| Are their ghostty builds statically linked by default or does zig
| follow in the footsteps of the C/C++ world and dynamically link
| everything? If statically linking (like rust) then a huge part of
| the incremental build times for apps with external dependencies
| is the lack of incremental linking in any of the major linkers.
| Wild linker is working on becoming the answer to this:
| https://github.com/davidlattimore/wild
| tredre3 wrote:
| ghostty statically links all its zig dependencies. The only
| dynamic libraries I can see are the usual suspects (libc, X11,
| Wayland, OpenGL, GTK, glib, libadwaita, etc).
___________________________________________________________________
(page generated 2025-10-04 23:00 UTC)