[HN Gopher] Zig self hosted compiler is now capable of building ...
___________________________________________________________________
Zig self hosted compiler is now capable of building itself
Author : marcthe12
Score : 576 points
Date : 2022-04-16 13:11 UTC (9 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| vonwoodson wrote:
| For great justice
| einpoklum wrote:
| Zig move zig...
| orblivion wrote:
| Probably a stupid question, but is LLVM a sort of cheat for "self
| compiling"? Shouldn't you need to compile LLVM as well?
| dundarious wrote:
| They're working towards having native backends for x86-64,
| aarch64, etc., that are all written in Zig, making LLVM an
| optional dependency, and eliminating literally all uses of non-
| Zig code (including no mandatory use of C standard library, for
| example) for those builds.
| https://news.ycombinator.com/item?id=31052234
|
| I'm assuming these will primarily be for super fast debug
| builds (at least to start), and LLVM (and maybe C backend too)
| will still be the favored backend(s) for release builds.
| okareaman wrote:
| It used to be the case that a compiler compiled itself to bare
| metal machine instructions, but now with so many CPU
| instruction set targets that's no longer the case. LLVM uses an
| intermediate language, a bit like an assembly language. Another
| way to look at it: the TypeScript compiler, which is written in
| TypeScript, compiles itself, but the intermediate language is
| JavaScript. V8 handles the translation of JavaScript to machine
| code, similar to LLVM translating LLVM IR to machine code.
| orblivion wrote:
| Okay, so we're treating LLVM's intermediate language or
| JavaScript as the "assembly" of the bare metal days. Makes
| sense.
| chrisseaton wrote:
| > It used to be the case that a compiler compiled itself to
| bare metal machine instructions, but now with so many cpu
| instruction set targets that's no longer the case.
|
| Tons of compilers emit machine code.
| okareaman wrote:
| Sure, I didn't say they didn't. Are you refuting what I
| said and implying that a compiler is not self-hosted if it
| doesn't compile to machine code?
| chrisseaton wrote:
| > I didn't say they didn't
|
| You said 'that's no longer the case'. Isn't that the same
| as saying they don't?
| okareaman wrote:
| In the context of the question, it is no longer the case
| that a compiler has to compile to machine code to be
| considered self-hosting. I thought this was clear, but I
| guess it wasn't
| fouronnes3 wrote:
| Are there any languages out there that can _only_ be compiled by
| a compiler written in their own language? Presumably because the
| original pre-dogfood compiler stopped being maintained years ago.
| So that if we somehow lost all binaries of the current compiler,
| that language would effectively be lost?
| amelius wrote:
| If your language (call it X) can only be compiled by a compiler
| written in X, then you can always create an X-to-C transpiler
| (it doesn't need to be efficient, and it can even leak memory,
| as long as it can complete the bootstrapping process).
| dottedmag wrote:
| C?
| coder543 wrote:
| A lot of popular C compilers are actually written (at least
| partially) in C++ these days.
| viraptor wrote:
| You're commenting on an article about Zig which became self-
| hosted and can compile C. (There's also lots of other C
| compilers available)
| dundarious wrote:
| Zig compiles C/C++ by deferring the vast majority of the
| work to libclang, which is written in C++. Also note Zig is
| self-hosted _when using the LLVM backend_, which means
| deferring to C++ for much of the code generation. There is
| no "end-to-end Zig" self-hosted compiler yet, because the
| Zig native backends are not as near to completion. See the
| creator's comment about the breakdown:
| https://news.ycombinator.com/item?id=31052234. (I'm excited
| about this progress, so this is not meant as any kind of
| knock on Zig, which I think is quite impressive)
|
| But you're right that C is not a good example.
| [deleted]
| richardfey wrote:
| Most modern languages cannot be compiled anymore with their
| original pre-dogfood compiler, but we have the sources of the
| older versions so you can bootstrap them in a sequence.
| skrebbel wrote:
| TypeScript
| retrac wrote:
| The Haskell GHC compiler is written mostly in Haskell. Rust's
| compiler is also written in Rust, these days. TypeScript's
| compiler is in TypeScript.
|
| It's a pretty common state of affairs, actually. Often arises
| out of the second or third implementation of the compiler being
| _much_ better than the first attempts, probably coupled with
| the momentum of people using the language who can contribute to
| the tooling because it's in the same language.
| fasquoika wrote:
| GHC is nearly impossible to bootstrap if you don't consider the
| vendored transpiled C code to be source. Versions of GHC not
| dependent on GHC were never public AFAIK.
|
| https://elephly.net/posts/2017-01-09-bootstrapping-haskell-p...
| jonpalmisc wrote:
| I think Zig's compatibility with C is such a valuable feature.
|
| I also wish we could rewrite everything in a modern language, but
| the reality is that we can't and that if we could, it would take
| a LONG time. The ability to start new projects, or extend
| existing ones, with a modern and more ergonomic language--Zig--
| and be able to seamlessly work with C is incredible.
|
| I look forward to the self-hosted compiler being complete, and
| hopefully a package manager in the future. I'd really like to
| start using Zig for more projects.
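|
| To make that concrete, here is a minimal sketch of what the C
| interop looks like (file name and build command are just
| examples; you link libc with -lc):
|
|     const c = @cImport({
|         @cInclude("stdio.h");
|     });
|
|     pub fn main() void {
|         // Call libc directly; no bindings or wrapper layer needed.
|         _ = c.printf("hello from C, via Zig\n");
|     }
|
| Built with something like `zig build-exe hello.zig -lc`.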
| jitl wrote:
| Zig as the "Kotlin of C" makes it very appealing. Kotlin has
| seen fantastic adoption in JVM projects because you can convert
| files one at a time from .java to .kt, with only a modicum of
| one-time build system shenanigans up front. Then your team can
| gain experience gradually, fill in missing pieces like linters
| over time, all without redesigning your software.
|
| What Zig offers is even better - because Zig includes a C
| compiler, you can actually reduce complexity with Zig by using
| a single compiler version for all platforms, rather than a
| fixed Zig + each platform's cc. And with it, trivially easy
| cross-platform builds - even with existing C code. That's cool!
| Go has excellent cross-compilation, but Go with C, not so much.
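|
| For illustration, cross-compiling an existing C file looks
| roughly like this (the target triples here are just examples):
|
|     zig cc -o hello hello.c -target aarch64-linux-gnu
|     zig cc -o hello.exe hello.c -target x86_64-windows-gnu
|
| The same `zig cc` binary handles every target, which is what
| makes the single-compiler-for-all-platforms point work in
| practice.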
|
| Rust is a powerful tool, but it's a complex ecosystem unto
| itself - both conceptually and practically. It has great
| interoperability frameworks, but the whole set of systems comes
| at a substantial learning cost. Plus, porting an existing
| software design to Rust can be a challenge. It's more like the
| "scala of C" if we're trying to stretch the analogy past the
| breaking point.
| richardfey wrote:
| The "Kotlin of C", what a beautiful metaphor!
| pjmlp wrote:
| Kotlin has seen fantastic adoption in Android projects,
| because of the way Google pushes it, while stagnating Android
| Java on purpose.
|
| On the JVM world not really.
|
| https://www.infoq.com/news/2022/03/jrebel-report-2022/
| exikyut wrote:
| Huh. I read about this the other day as well -
| https://news.ycombinator.com/item?id=30842602:
|
| > _If only [Oracle] hadn't sued Google, Java would still
| have been the pre-eminent language for Android development.
| Sadly Android is stuck at legacy Java 8 permanently now.
| So, modern Java is stuck as a server-side language with
| dozens of competitors._
|
| A reply argued that Android is on Java 11 now, and then you
| noted (hi!) that it's "a subset". Huh.
|
| I'm trying to get a handle on understanding the
| ramifications of the legal/licensing situation, and the
| actual concrete impact on Java's use in Android. The
| subject seems somewhat murky and opaque. Is there possibly
| a high-level disambiguation about what's going on published
| anywhere?
| pjmlp wrote:
| The set of Java 11 LTS features and standard library
| missing from Android 13 is left as an exercise for the
| reader; they are relatively easy to find out, hence
| "subset".
|
| You can check the Javadoc for the standard library, and
| the JVM specification. Then compare with DEX/ART and the
| Android documentation for Java APIs.
| holoduke wrote:
| Are they pushing? Most documentation for Android dev is
| still Java. Or by default Java. It's only the IntelliJ guys
| pushing for Kotlin by creating a lock-in in their IDE. One
| reason I refuse to use it.
| carstenhag wrote:
| As an Android Dev: none of us wants to do Java projects
| anymore. If we had support for some recent versions,
| maybe, but as it is, there's no going back.
|
| 0 of our recent or current projects still use java.
|
| Google is either moving/extending libs to natively
| integrate with kotlin (numerous -ktx libs) or they are
| kotlin-only (Compose) anyway.
|
| I don't really see the Jetbrains lock-in thing, because:
| Android Studio is free, you can use any other IDE with
| syntax highlighting and the terminal to run tests & to
| compile.
|
| If you want to blame someone for locking Android devs
| into Android Studio, it would be Google, because they
| build the previews into Android Studio afaik. But you
| could level the same criticism at Apple/Xcode. Supporting
| one IDE is already tough I guess.
| pjmlp wrote:
| You have missed all the Kotlin only Jetpack libraries,
| NDK documentation now using Kotlin, Jetpack libraries
| originally released in Java now rewritten in Kotlin.
|
| They are still using Android Java on the system layers,
| because they aren't rewriting the whole stack.
|
| Even the update to the Java 11 LTS subset on Android 13 is
| most likely caused by the Java ecosystem moving forward,
| rather than any willingness of the Android team to support
| anything other than what can be desugared into Java 8 /
| DEX somehow.
|
| You are right in relation to JetBrains, they even
| acknowledge it on their blog post introducing Kotlin.
|
| https://blog.jetbrains.com/kotlin/2011/08/why-jetbrains-
| need...
|
| "And while the development tools for Kotlin itself are
| going to be free and open-source, the support for the
| enterprise development frameworks and tools will remain
| part of IntelliJ IDEA Ultimate, the commercial version of
| the IDE. And of course the framework support will be
| fully integrated with Kotlin."
| sedatk wrote:
| > You have missed all the Kotlin only Jetpack libraries,
| NDK documentation now using Kotlin, Jetpack libraries
| originally released in Java now rewritten in Kotlin.
|
| Also, Oracle's lawsuit against Google for copying Java
| APIs.
| pjmlp wrote:
| This doesn't compute, because Kotlin is heavily dependent
| on the Java ecosystem, regardless of how Google screwed over
| Sun and the Java community with their Android Java.
| sedatk wrote:
| I still see it as a passive-aggressive move for Google to
| get back at Oracle. They can't back away from the Java API
| completely, but they can hurt Oracle by discrediting the
| Java language.
| Grimburger wrote:
| As someone not familiar with android development, this is
| somewhat confusing because I always thought Google was
| pushing dart/flutter for mobile these days?
|
| Or is it both at once?
| jitl wrote:
| Dart/Flutter is their React Native competitor. It's in
| the same space as Xamarin - a secondary language and UI
| toolkit whose selling point is rapid development for
| multiple platforms.
|
| Kotlin is to Java (on Android) as Swift is to Objective-C
| (on Apple) - the successor primary platform language.
| Grimburger wrote:
| Thanks, that's a solid summary.
|
| Feels like an environment that moves so quickly (to
| someone like me anyway). Can barely keep up.
| hota_mazi wrote:
| Not sure where you got that impression from, the Android
| team has never pushed, let alone mentioned, Dart/Flutter,
| ever. All the Flutter advertising you hear is from the
| Flutter team.
|
| Kotlin and Java are the main languages supported on
| Android.
| charcircuit wrote:
| From talking to a Googler the reason why Google devotes
| resources to Kotlin is that there was and still is a
| large external demand from the Android development
| community for Kotlin.
| pjmlp wrote:
| Kotlin adoption was triggered from inside, with some
| anti-Java attitude.
|
| https://talkingkotlin.com/the-first-kotlin-commit-in-
| android...
| charcircuit wrote:
| My information was from someone from the team working on
| jetpack compose. So maybe the answer I was given comes
| from a different context.
| jitl wrote:
| That poll pours a cold bucket of water on "fantastic
| adoption" among all the respondents, but compare adoption
| of Kotlin and Java releases after Kotlin's release. 1 in 6
| respondents using language versions after 2016 are using
| Kotlin. I don't think that's too shabby.
| pjmlp wrote:
| Java is still Java, regardless of the version, otherwise
| we should add Kotlin versions to the discussion as well.
| Tozen wrote:
| Zig is not the only one. Other newer languages like Vlang,
| Odin, Rust, Nim... offer strong C interop.
| jonpalmisc wrote:
| Can't speak about the others, but Rust's C interop is nothing
| like Zig's, not to mention that Zig can also compile C.
| kristoff_it wrote:
| > I also wish we could rewrite everything in a modern language,
| but the reality is that we can't and that if we could, it would
| take a LONG time. The ability to start new projects, or extend
| existing ones, with a modern and more ergonomic language--Zig--
| and be able to seamlessly work with C is incredible.
|
| That's the Maintain it With Zig approach :^)
|
| https://kristoff.it/blog/maintain-it-with-zig/
| mlinksva wrote:
| Sounds compelling. Is there a list of projects following this
| advice anywhere?
| randyrand wrote:
| Andrew, how will the self-hosted compiler maintain a known chain
| of trust back to assembly?
|
| i.e. how can someone look at a self-hosted Zig compiler and build
| it themselves from source, never needing to download blobs from
| the internet?
|
| Otherwise you lose the ability to trust anything you build.
| kristoff_it wrote:
| Check out the video Andrew linked in this thread. He talks about
| this in detail.
| amscanne wrote:
| Zig specifics aside... build it from source using _what_?
| Another compiler. Okay, so you can compile that one from
| source. But what does that? _Another_ compiler.
|
| Unless you're recording memory contents and executing
| instructions by hand, you've just discovered the Ken Thompson
| hack. At some point the pragmatic thing is to trust some bits
| from a trusted source (e.g. downloaded from an official repo w/
| a known cert, etc.).
| chakkepolja wrote:
| See https://dwheeler.com/trusting-trust/
| MichaelBurge wrote:
| Rice's Theorem makes the Ken Thompson hack impossible in
| general. He only executed his hack in a single short-term
| demonstration against a specific target, but it's not
| possible to make a "long-con" against an open-source project
| with lots of activity and lots of people building it even if
| you find a way to infect nearly all of those people.
| randyrand wrote:
| You just need 1 compiler written in assembly, and work your
| way up from there.
|
| C17 -> assembly of your choice, written in assembly
|
| Zig -> C17 "transpiler", written in C17
| drfuchs wrote:
| Why do you trust the assembler you got from somewhere any
| more than a compiler?
| randyrand wrote:
| Because hand written assembly is readable!
| pjmlp wrote:
| It might be, but have you also validated the microcode
| executing it?
|
| Or the Verilog/VHDL for the logic gates used by the CPU,
| for that matter?
| chongli wrote:
| I believe GP used assembler to refer to the program which
| reads your hand-written assembly and produces a binary.
| That program was presumably given to you so you need to
| trust it.
|
| No, the only way to solve this problem is to start with a
| computer entirely devoid of code and bit-bang your
| assembler into the machine with switches, the way the
| first users of the MITS Altair did it.
| ByteJockey wrote:
| You can also bootstrap the way Lisp did: write the
| first compiler in the language and get a bunch of grad
| students to hand-compile it.
|
| But, yeah if you don't have a bunch of grad students at
| the ready, an assembler hand-written in machine code is
| the only option if you want to trust the entire stack.
| Though I'm not sure what that would get you. I don't know
| of any higher language compilers that are written
| directly in assembly these days, so you'd never be able
| to compile your C/C++ compiler.
| chongli wrote:
| There was a C compiler written in assembly posted on HN 4
| years ago [1]!
|
| So yeah, if you wanted to, you could bit-bang an
| assembler capable of assembling a simple C compiler, then
| in your simple subset of C you could implement a full C
| compiler, and from there you can do anything you want!
|
| The grad student approach also sounds interesting. A
| basic Lisp interpreter can easily fit on a single letter-
| sized sheet of paper. A single person could hand-compile
| that, it would just take longer. But, if you're living
| alone in a cabin in the woods with your own hand-built
| computer and a personal library of computer books in hard
| copy, that would be a totally feasible project.
|
| [1] https://news.ycombinator.com/item?id=17851311
| ByteJockey wrote:
| I wasn't aware that there were any existing C compilers
| made in asm for modern architectures. I guess I shouldn't
| be too surprised.
|
| That's cool, thanks for showing me.
| mhh__ wrote:
| D has a C compiler built in now; I wonder if we could bootstrap
| the whole toolchain now.
| ByteJockey wrote:
| Don't most toolchains have a rather large C++ component?
|
| Just parsing C++ is a rather large undertaking (my
| understanding is that its syntax is Turing complete).
| mhh__ wrote:
| So you start with a C++ compiler written in C. Hence
| bootstrap, rather than compile from source.
| ByteJockey wrote:
| Oh, gotcha. Sorry, brain wasn't parsing right before the
| coffee today.
| cercatrova wrote:
| I was thinking of learning Rust but it seems a bit overkill due
| to manual memory management as compared to languages with similar
| speed like Nim, Zig, and Crystal. How would one compare these
| languages?
|
| Is it worth learning Rust or Zig and dealing with the borrow
| checker or manual memory management in general, or are GC
| languages like Nim or Crystal good enough? I'm not doing any
| embedded programming by the way, just interested in command line
| apps and their speed as compared to, say, TypeScript, which is
| what I usually write command line apps in.
| sanxiyn wrote:
| Zig uses manual memory management too (even more manual than
| Rust), so that's a bit of a strange question.
| cercatrova wrote:
| Interesting, based on their code samples it looked to me like
| a GC language since (at least from what I saw) I didn't see
| anything regarding memory management.
| sanxiyn wrote:
| I am not sure what code samples you looked at, but
| https://ziglearn.org/chapter-2/ should give you an idea.
| elcritch wrote:
| It's easy, IMHO, to mistake Zig as a GC'ed language or more
| broadly as a memory safe systems language. It's neither but
| it is a nicer C.
| zigger69 wrote:
| It's really easy in Zig to be honest. Just put `defer
| thing.deinit()` in the right scope and you're done. You gain
| explicitness and know exactly what's going on in your code.
| Everything is obvious. That's the reason Zig is so incredibly
| simple and easy to read. Zig also has a GPA (general purpose
| allocator) that will tell you about memory leaks and the like.
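|
| A minimal sketch of that pattern (std names as of the 0.9-era
| standard library; the GPA reports the leak on deinit if you
| forget the defer):
|
|     const std = @import("std");
|
|     pub fn main() !void {
|         var gpa = std.heap.GeneralPurposeAllocator(.{}){};
|         // Reports any leaked allocations when it is torn down.
|         defer _ = gpa.deinit();
|         const allocator = gpa.allocator();
|
|         var list = std.ArrayList(u8).init(allocator);
|         // Freed when this scope exits, however we leave it.
|         defer list.deinit();
|
|         try list.appendSlice("hello");
|         std.debug.print("{s}\n", .{list.items});
|     }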
| pizza234 wrote:
| > or are GC languages like Nim or Crystal good enough?
|
| Any programming language is good enough for its own use cases
| :) It's a matter of understanding what those use cases are.
|
| I'm a big Rust fan, but I nonetheless believe that the use
| cases for programming languages with manual memory management
| are comparatively small, in particular, since GC has been
| improved a lot in the last decade.
|
| For undecided people, I conventionally suggest Golang. Those
| who at some point need deep control, will recognize it and
| change accordingly.
| fyzix wrote:
| Nim strikes a great balance. No need for a low level language
| for cli and general software. I liked crystal but the lack of
| support on windows and lackluster dev experience made me stick
| to Nim.
|
| Nim also can double as a web language by transpiling to JS.
| nicoburns wrote:
| If you're doing things like command-line apps you'll probably
| find Rust much easier to work with than Zig. You can write
| high-level code that looks basically like TypeScript in Rust,
| whereas Zig is more manual. Nim and Crystal are easier from a
| language perspective, but they have much smaller ecosystems.
| lijogdfljk wrote:
| It's funny i came to Rust from Go, Python, NodeJS, etc after a
| combined .. 15 years or so. I've been using Rust full time
| (work & home) for ~2 years now.
|
| Obviously i'm biased, but i quite enjoy it. I find i am more
| efficient now than before, because it manages to give me the
| ease of the "easier" languages quite often with a rich set of
| tooling when i need to go deeper.
|
| Personally i feel the concern over the borrow checker is way
| overblown. 95% of the time the borrow checker is very simple.
| The common worst case i experience is "oh, i'm borrowing all of
| `foo` when i really want `foo.bar` - which is quite easily
| solved by borrowing just the `.bar` in a higher scope.
|
| The lifetime jazz is rarely even worth using when compared to
| "easier" languages. Throw a reference count around the data and
| you get similar behavior to that of them and you never had to
| worry about a single lifetime. Same goes for clones/etc.
|
| I often advocate this. For beginners to use the language like
| it was a GC'd language. A clone or reference count is often not
| a big deal and can significantly simplify your life. Then, when
| you're deeper into the language you can test the waters of
| lifetimes.
|
| So yea. Your mileage will vary i'm sure. But i find Rust to be
| closer to GC'd languages than actual manual languages, in UX at
| least. You won't screw up and leak or introduce undefined
| behavior, which is quite a big UX concern imo.
| melony wrote:
| The problem is that when lifetimes cause you problems, it can
| force the entire feature development to stop until the
| problem is fixed. There is no reasonable escape hatch or
| alternative (clone doesn't always work).
| klabb3 wrote:
| +1. My programming style normally consists of trial-and-
| error style prototyping for a couple of iterations, and
| then later refining that into something that's solid and
| robust. I find Rust's unofficial "prototyping mode"
| difficult to combine with its regular "production grade
| mode" for practical purposes.
| klabb3 wrote:
| > Personally i feel the concern over the borrow checker is
| way overblown. 95% of the time the borrow checker is very
| simple.
|
| I have been using Rust professionally as well and had a
| different experience. For anything singlethreaded I agree
| with you. For any asynchronous behavior, whether it's threads
| or async or callbacks, the borrow checker gets very stubborn.
| As a response, people just Arc-Mutex everything, at which
| point you might as well have used a GC language and saved
| yourself the headache.
|
| However, enums with pattern matching and the trait system is
| still unbeatable and I miss it in other languages.
| capableweb wrote:
| The perspective of someone who is learning Rust (but not
| professionally) during the last few months. :
|
| - The borrow checker is one of the easier parts of Rust to
| grok, it's just as you say, not that complicated in the end.
|
| - Traits are more annoying to understand and find in source
| code when they can get added from anywhere, and suddenly your
| code gets extra functionality, or it's missing the right one
| unless you import the right crate but there is no "back-
| reference" so you're not clear what crate the code actually
| comes from.
|
| - Crates/libraries are harder to grok with their
| mod.rs/lib.rs files and whatnot, in order to structure your
| application over many files.
|
| - Macros are truly horrible in Rust, both to write and debug,
| but then my only experience with macros are with Clojure,
| where writing macros is basically just writing normal code
| and works basically the same way
|
| - Compilation times when you start using it for larger
| projects tend to be kind of awful. Some crates make this
| even worse (like tauri, bevy) and you need to be careful about
| what you add as some can have a dramatic impact on
| compilation speed
|
| - The async ecosystem is not as mature as it seems on first
| look. I'm really looking forward to it all stabilizing in the
| future. Some libraries support only sync code, others only
| async but only via tokio, others using other async libraries.
| Read somewhere that eventually it'll be possible to interop
| between them, time will tell.
|
| - Control flow with Option/Result and everything that comes
| with it is surprisingly nice and something I'm replicating
| when using other languages (mainly a Clojure developer by
| day) now.
|
| My development journey was PHP -> JavaScript -> Ruby ->
| Golang -> Clojure with doing them all the capacity of
| backend/frontend/infrastructure/everything in-between,
| freelancing/consulting/working full-time at long term
| companies, so take my perspective with that in mind. Rust
| would be my first "low-level" language that I've used in
| larger capacity.
| bobajeff wrote:
| That sounds like good advice. I'll keep that in mind next
| time I attempt to use Rust on something.
| pyjarrett wrote:
| Ada is another option without a GC. I wrote a search tool for
| large codebases with it (https://github.com/pyjarrett/septum),
| and the easy and built-in tasking and pinning to CPUs allows
| you to easily go wide if the problem you're solving supports
| it.
|
| There's very little allocation since it supports returning VLAs
| (like strings) from functions via a secondary stack. Its Alire
| tool does the toolchain install and provides package
| management, so trying the language out is super easy. I've done
| a few bindings to things in C with it, which is ridiculously
| easy.
| throwaway239i3j wrote:
| Of the ones you mentioned, Zig is the only one that has
| explicit memory management.
|
| > Is it worth learning
|
| Languages are easy to pick up once you understand fundamentals.
| The borrow checker is intuitive if you have an understanding of
| stack frames, heap/data segment, references, moved types,
| shared memory.
|
| You then should be asking "Is it worth using?", then evaluate
| use cases.. pros/cons.. etc.
|
| For CLI, Rust is likely the easiest given its macros, but if
| you struggle with the borrow checker then it won't be. You will
| be fighting the compiler instead of developing something.
|
| Depending what your CLI program is doing, you might want to
| evaluate what libraries are available, how they handle I/O, and
| parallelism.
|
| JavaScript has incredibly easy and fast concurrent I/O thanks
| to libuv and v8.
| travisgriggs wrote:
| >> Is it worth learning
|
| > Languages are easy to pick up once you understand
| fundamentals. The borrow checker is intuitive if you have an
| understanding of stack frames, heap/data segment, references,
| moved types, shared memory.
|
| I see this sentiment often. In the last 10 years, I have come
| up a level in raw C, learned Kotlin, Swift, Python,
| Elixir/Erlang, and a smattering of JavaScript, all coming
| from a background that included Fortran and Smalltalk.
|
| My problem with the dialogue is what is meant by "learn." I
| have architected, implemented, and maintain different
| components of our products in all these languages currently.
| I think that demonstrates I have "learned" these languages,
| at least at this level of "picked up." But I can't write
| Python the way Brett Canon does. Or Elixir the way Jose
| Valium does. Or any of their peers. And in that regard I
| still very much feel I have not "learned".
|
| I spent a couple days playing with Zig a month or so ago. I
| became familiar with the way it worked. I could spend another
| month or so in that phase, and then could probably
| comfortably accomplish things with it. But I don't think I'd
| feel like I'd "learned" it.
|
| It reminds me of my experience learning Norwegian. I lived in
| Norway for 2 years and did my best to speak as much Norwegian
| as I could. At six months I could definitely get by. At 13
| months, as I embraced the northern dialect, I was beginning
| to surprise Norwegians that I was from the states. I started
| dreaming in Norwegian at some point after that. But even at
| 24 months, able to carry on a fluid conversation, I realized
| I still could "learn" the language better than I currently
| knew it.
|
| So I guess, it always seems there needs to be more context,
| from both the asker and the answerer, when this "should I
| learn X" discussion is had. Learning is not a merit badge.
| mlindner wrote:
| Memory has to be managed by something. The more decisions that
| are made for you in how that happens the less flexibility there
| is for certain situations.
| cercatrova wrote:
| Sure but my use cases would be stuff I'd normally write in
| TypeScript or Python that already have garbage collection.
| Like I said I'm not doing embedded programming so I don't
| have too much of a need to manage memory.
|
| My question could be further constrained then to be, is
| learning Rust or Zig despite its manual memory management
| worth it for applications that are normally already garbage
| collected in their current implementations? Or are languages
| like Nim and Crystal enough? Do Rust and Zig have other
| benefits despite manual memory management?
| adgjlsfhk1 wrote:
| imo, most code can do just fine with GC. modern GCs can be
| relatively low overhead even with guaranteed small pauses
| (10ms). furthermore, most code that can't handle pauses can
| be written to not allocate any memory (so GC can be
| temporarily turned off). as such, the only two places where
| you need manual allocation are for OS development, and hard
| real time requirements.
| Comevius wrote:
| When tail latency (high-percentile latency) is important
| GC is not a good choice. Wait-free (threads progress
| independently) concurrent algorithms also need wait-free
| memory reclamation with bounded memory usage to be able
| to guarantee progress.
|
| But most software are throughput-oriented.
| pjmlp wrote:
| Additionally, not all GCs are made alike, and languages
| like D, F#, C#, Nim, Swift, among others, also offer
| value types and manual memory management, if desired.
| elcritch wrote:
| Also Swift and Nim w/ ARC use reference counting, which
| generally give much better latency and lower memory
| overhead. Reference counting is part of the reason iOS
| devices don't need as much RAM.
|
| Nim's ARC system also doesn't use atomics or locks, which
| means its runtime overhead is very low. I use it
| successfully on embedded devices for microsecond timed
| events with no large tail latencies.
| pjmlp wrote:
| Reference counting is a GC algorithm.
|
| I wouldn't buy into much Apple marketing regarding its
| performance though,
|
| https://github.com/ixy-languages/ixy-languages
|
| It makes sense in the context of tracing GC having been a
| failure in Objective-C due to its C semantics, while
| automating Cocoa's retain/release calls was a much safer
| approach. Swift naturally built on top of that due to
| interoperability with Objective-C frameworks.
|
| Nim has taken other optimizations into consideration,
| however only in the new ORC implementation.
|
| Still, all of them are much better than managing memory
| manually.
| elcritch wrote:
| > I wouldn't buy into much Apple marketing regarding its
| performance though,
|
| I wouldn't make claims on Swift's overall performance, but
| just its memory usage (really Obj-C's) and particularly
| for GUI development. Java's GCs have always been very
| memory hungry, usually to the tune of 2x. Same with .Net.
| Though to be fair Go's and Erlang's GCs have much better
| memory footprints. Erlang's actor model benefits it
| there.
|
| Agreed, they're all better than manual memory management.
| MarcusE1W wrote:
| The way you describe your use case I think you are fine
| with a language with garbage collection like Nim (which has
| a syntax a bit like Python) or Crystal. I would also
| throw Go in the ring or if you are interested to learn a
| bit of functional programming then you also could look at
| Ocaml.
|
| Zig has no garbage collection btw, but makes it easier than
| C to handle that. Another language without garbage
| collection that helps a lot to avoid memory issues is Ada
| (Looks a bit like Pascal). So there are alternatives to
| Rust.
| mlindner wrote:
| Rust's memory management is "manual" but it feels automatic
| for most uses.
| Starlevel001 wrote:
| [deleted]
| jgillich wrote:
| I've found that Go is not elegant enough for me and Rust is too
| difficult to write (I started using Rust in 2015 and after
| years of trying I eventually realized Rust doesn't make sense
| for most apps), so I'm all in on Crystal. Despite not having
| much prior Ruby experience, I absolutely love the language.
| zozbot234 wrote:
| Modern Rust is much more straightforward than it was in 2015.
| It's effectively two different languages, albeit maintaining
| backward compatibility (i.e. code written for Rust 1.0 should
| still compile today, with proper edition settings).
| pizza234 wrote:
| Crystal doesn't have built-in support for parallelism, let
| alone production-grade support. This is a significant lack
| for a modern language.
|
| For a language that is around 8 years old, this may be a
| serious problem, since the surrounding ecosystem has been
| probably written without parallelism in mind, and it may take
| a very long time to be updated (if ever).
| npn wrote:
| > Crystal doesn't have built-in support for parallelism
|
| It does, but it is hidden behind a compiler flag: if you
| compile your project with `-Dpreview_mt` then it will come
| with multi-threaded support. This has been an experimental
| feature for a few years though, and there has not been much
| improvement since it was first introduced.
|
| Personally I don't use Crystal for this kind of feature,
| and it runs stably enough on the rare occasions I use it
| for some CPU-intensive task.
|
| Crystal really shines when you need something that you
| would usually write a Python/Ruby script to do, especially
| for tasks that run for hours. Converting a script from Ruby
| to Crystal and running it in production mode typically
| reduces the time consumed to 1/5 or even 1/10 of the
| original, depending on the job. As someone who has to read
| gigabytes of text files regularly, Crystal is currently the
| best one for the task.
|
| The compilation time for release binaries is something that
| needs much improvement though. And I'm not sure if they can
| even achieve incremental compilation.
| sanxiyn wrote:
| I use PyPy for such cases. Is Crystal better than PyPy?
| npn wrote:
| I think Crystal is better than Python in terms of language
| design. Unlike Ruby and Python, which are way older,
| Crystal is relatively new, so they learned from other
| languages' mistakes and tried to improve on them, resulting
| in a cleaner language.
|
| For the cases mentioned, I think Crystal is immensely
| helpful:
|
| - Reading/writing files is easy; usually a single method
| will give you the result you want.
|
| - Working with directories is nice; things like
| `Dir.mkdir_p`, `Dir.each_child`, `File.exists?`... all
| exist to make your life easier.
|
| - Like Ruby, you can invoke shell commands easily using
| backticks.
|
| - There are some useful libraries for console apps, like
| `colorize` or `option_parser`. Crystal is a batteries-
| included language, so the standard library is filled with
| useful libraries.
|
| - Working with lists and hashmaps is a breeze, since the
| Enumerable and Iterable modules are filled with useful
| methods, mostly inspired by Ruby land.
|
| - Concurrency is built in, so you can trivially write
| performant IO-bound tasks like web crawlers.
|
| For a project made by a handful of people, I just
| can't praise the dev team enough for making a language
| this practical.
| lijogdfljk wrote:
| I second zozbot234's statement about it being far better than
| it was in those days.
|
| The language team has done a great job rounding rough edges,
| and this next roadmap is slated for even more polishing. They
| heavily prioritize dev experience which is why i think people
| like myself (a GC'd language person historically) use and
| love Rust so much.
| phendrenad2 wrote:
| I recommend C. Once you get the hang of it, it's as fast as
| writing Rust code (and you don't have to think about borrow
| checks).
| cercatrova wrote:
| Yeah, I sure _love_ debugging segfaults. I've used C before
| and it was cool to learn but I haven't used it since.
| phendrenad2 wrote:
| Once you've used C enough, you get the skill to avoid
| writing segfaults, and when a segfault happens, they're a
| lot easier to debug.
|
| However, since you're just interested in writing a fast
| command-line app, what about the JVM or .NET? Those have a
| startup time issue, but once they're running they're very
| fast, less than an order of magnitude slower than
| C/Rust/Zig/etc.
| zigger69 wrote:
| Crystal: compilation speed is just too slow, sadly. Nim and
| Zig: I'd definitely just go with Zig. It's an extremely simple
| language, has no macros (but something much better than
| macros), is explicit, and in the long run it's just going to be
| worth it much more than Nim.
| Tozen wrote:
| Besides Rust and Zig, you might want to check out Vlang and
| Odin, who are in the same category.
| cercatrova wrote:
| I remember reading a lot of controversy about V back when I
| first heard about it so I decided not to look into it further
| [0], but I'll take a look again. Odin looks interesting, Pony
| does as well.
|
| [0] https://news.ycombinator.com/item?id=25511556
| ledgerdev wrote:
| I have been thinking about creating a little higher level
| language that targets server side web assembly. Zig looks very
| attractive, but I'm concerned about how to handle things like strings
| and datetimes.
|
| How suitable is zig for such a task compared to say rust?
| pacaro wrote:
| I may be misunderstanding you, but it feels like strings and
| datetimes are library related more than language related.
|
| I know that higher level languages will have primitives that
| aim to represent strings, but if you need to get into the weeds
| with Unicode then you'll be leaning on a library regardless
| ledgerdev wrote:
| > I may be misunderstanding you, but it feels like strings
| and datetimes are library related more than language related.
|
| Yes would agree and see them as platform related. It's just
| too large a task to create from scratch. Like say on JVM you
| can compile to bytecode and have strings already built into
| platform, and java.time, and ability to access an ecosystem
| of libraries.
|
| With Zig, could one use C or Rust libraries?
| Shadonototra wrote:
| Best alternative to C, built on a solid foundation to ensure fast
| compile speed, fast iteration, and easy-to-maintain code.
| mlindner wrote:
| I don't understand why people want fast compile speeds so much.
| My day job is/was writing C on something that requires 30+
| minutes for one compilation run. It feels like people want fast
| compilation speeds because they need to keep running their
| software for some reason.
|
| Fast compilation speeds are a nice to have but basically
| irrelevant for normal developing of software in my experience.
| Comevius wrote:
| Hot code reloading is nice to have, but incremental
| compilation is essential. Your compiler has forsaken you if
| you have to wait 30 minutes for changes to compile.
| hu3 wrote:
| Sorry I may have missed the sarcasm.
|
| Your day job project takes half an hour to compile and you
| don't see the point in speeding that up?
| mlindner wrote:
| It wasn't sarcasm. My "irrelevant" point meant "it's
| irrelevant for a factor in picking a language", as in "it
| doesn't significantly contribute to whether a project is
| possible in the long term".
|
| If compile times are longer you just compile less often and
| write larger portions of code at a time and you generally
| keep writing code while it compiles.
| sanxiyn wrote:
| GP did say it is a nice to have.
| hu3 wrote:
| But also said that it's basically irrelevant. Hence my
| confusion.
| kortex wrote:
| The longer the delay between writing some chunk of code, and
| executing it, the more valuable mental context is lost. I
| find it way more productive to work in small chunks
| iteratively, testing as I go along, rather than writing for a
| long chunk of time, then running tests and figuring out what
| bugs are making tests fail.
| mlindner wrote:
| I only compile a couple of times per day max usually.
| Sometimes I won't compile at all in a day.
| chrisseaton wrote:
| Developers have to wait while your compiler runs. You're
| paying your compiler at your engineer hourly rate, if you
| like. That's very expensive!
| jgod wrote:
| Faster feedback loop is a much better development experience.
| diosito wrote:
| gautamcgoel wrote:
| Quick question, which maybe someone here can answer: I noticed
| that GitHub supports syntax highlighting in the Zig repo. How
| does that work for a new language such as Zig? Can you somehow
| upload a file which tells GitHub how you'd like programs written
| in your language to be displayed?
| aeldidi wrote:
| I believe this is the syntax highlighter's repo:
| https://github.com/github/linguist/blob/master/CONTRIBUTING....
| gautamcgoel wrote:
| Thanks, this is what I was wondering! Seems a new language
| needs to have 200 repos which use it before GitHub will
| consider adding syntax highlighting for it..
| [deleted]
| diosito wrote:
| spicybright wrote:
| A big achievement for any decently sized language. Nice work devs
| :)
| Shoop wrote:
| For reference:
|
| Tracking issue for overall progress on the self-hosted compiler:
| https://github.com/ziglang/zig/issues/89
|
| Zig's New Relationship with LLVM:
| https://kristoff.it/blog/zig-new-relationship-llvm/
| AndyKelley wrote:
| Hello HN! Here is some context to decorate this announcement:
|
| The Zig self-hosted compiler codebase consists of 197,549 lines
| of code.
|
| There are several different backends, each at varying levels of
| completion. Here is how many behavior tests are passing:
|     LLVM:     1101/1138 (97%)
|     WASM:      919/1138 (81%)
|     C:         740/1138 (65%)
|     x86_64:    725/1138 (64%)
|     arm:       490/1138 (43%)
|     aarch64:   411/1138 (36%)
|
| As you might guess, the one that this milestone is celebrating is
| the LLVM backend, which is now able to compile the compiler
| itself, despite 3% of the behavior tests not yet passing.
|
| The new compiler codebase, which is written in Zig instead of
| C++, uses significantly less memory, and represents a modest
| performance improvement. There are 5 more upcoming compiler
| milestones which will have a more significant impact on
| compilation speed. I talked about this in detail last weekend at
| the Zig meetup in Milan, Italy[1].
|
| There are 3 main things needed before we can ship this new
| compiler to everyone.
|
| 1. bug fixes
|
| 2. improved compile errors
|
| 3. implement the remaining runtime safety checks
|
| If you're looking forward to giving it a spin, subscribe to [this
| issue](https://github.com/ziglang/zig/issues/89) to be notified
| when it is landed in master branch.
|
| Edit: The talk recording about upcoming compiler milestones is
| uploaded now [1]
|
| [1]: https://www.youtube.com/watch?v=AqDdWEiSwMM
| dleslie wrote:
| Happy to see the C backend coming along. LLVM is a major
| barrier to use on esoteric embedded devices.
| sitkack wrote:
| You can also target C through Wasm.
|
| https://github.com/WebAssembly/wabt/tree/main/wasm2c
| exikyut wrote:
| My genuine question is what sort of code-size and/or
| performance impact the translation imposes.
|
| The simple example in the README.md seems straightforward
| enough, but I wonder if there are any pathological
| explosions in practice.
| geophertz wrote:
| > written in Zig instead of C++, uses significantly less
| memory, and represents a modest performance improvement
|
| That's particularly interesting considering the rust compiler
| in rust has never been as fast as the original OCaml one
| sanxiyn wrote:
| Well, OCaml Rust compiler also didn't use LLVM and used its
| own lightweight code generator and I think self-hosted Rust
| compiler frontend was in fact faster than OCaml Rust compiler
| frontend.
| parentheses wrote:
| With both projects, how much of the improvement is simply
| building for the second time?
| sanxiyn wrote:
| For Rust, I think improvement was almost entirely due to
| LLVM producing faster code. That's not applicable to Zig
| case, since both old and new compiler use LLVM. I don't
| know enough about Zig to answer.
| kibwen wrote:
| The original OCaml compiler didn't have essentially any of
| the static analysis that Rust would eventually be known for.
| Rust in 2011 (when rustc bootstrapped) was dramatically
| different from what would later stabilize in 2015.
| est31 wrote:
| I wonder how much this statement still holds. I've never used
| the OCaml bootstrap compiler but performance wise, the rust
| compiler has improved incredibly since the 1.0 release.
| sanxiyn wrote:
| An apples-to-apples comparison is impossible because rustboot
| compiled a very different language. But I suspect suitably
| updated rustboot would be still faster because compilation
| time is dominated by LLVM.
| pcwalton wrote:
| Rustboot's code generator was generally slower than LLVM.
| I think in some small test cases it might have been
| faster, but when implementing stuff like structure copies
| rustboot's codegen was horrendously slow because it would
| overload the register allocator.
| pcwalton wrote:
| Huh? That's not true at all. It took over 30 minutes to
| compile the self-hosted Rust compiler with the OCaml
| compiler, when rustc was far smaller than it is today.
| rustboot was agonizingly slow, and one of the main reasons
| why I was so anxious to switch to rustc back in those days
| was compilation speed.
|
| I was there and had to suffer through this more than
| virtually anyone else :)
| einpoklum wrote:
| * What kind of tests are these "behavior test"?
|
| * Is that a list of compilation targets?
|
| * If not all behavior tests pass, does that not mean that the
| compiler fails to compile programs correctly?
|
| Please indulge those of us who are not familiar with self-
| hosting compiler engineering.
| Spex_guy wrote:
| > What kind of tests are these "behavior test"?
|
| Snippets of zig code that use language features and then make
| sure those features did the right thing. You can find them
| here:
| https://github.com/ziglang/zig/tree/master/test/behavior
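|
| For a feel of what they look like, a typical one is a tiny
| `test` block run with `zig test` (this is an illustrative
| sketch, not copied from the repo):
|
|     const expect = @import("std").testing.expect;
|
|     test "while loop counts up" {
|         var i: u32 = 0;
|         while (i < 3) : (i += 1) {}
|         try expect(i == 3);
|     }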
|
| > Is that a list of compilation targets?
|
| Mostly. Pedantically, it's a list of code generation
| backends, each of which may have multiple compilation
| targets. So for example the LLVM backend can target many
| architectures. The ones that are architecture specific are
| currently debug-only and cannot do optimization.
|
| > If not all behavior tests pass, does that not mean that the
| compiler fails to compile programs correctly?
|
| Some tests are not passing because they cause an incorrect
| compile error, others compile but have incorrect behavior
| (miscompilation). Don't use Zig in production yet ;)
|
| (edit: fix formatting)
| icsa wrote:
| > uses significantly less memory, and represents a modest
| performance improvement.
|
| The reduced memory has significant value. Being able to do the
| same build on less expensive hardware or do more with the same
| hardware is a significant financial performance improvement
| joshbaptiste wrote:
| yup 18x memory reduction improvement 8.5GB -> 0.5GB according
| to the vid..
| riffraff wrote:
| it's so surprising to hear there was a Zig Meetup in Milan, I'd
| not expect a large enough community to exist there, pretty
| cool!
| vanderZwan wrote:
| Probably a significant chunk of the larger European community
| was represented there too - don't forget that traveling to EU
| countries is relatively easy for EU citizens
| erwincoumans wrote:
| Congratulations on the milestone!
|
| Does using Zig over C++ lead to "less memory, and represents a
| modest performance"? Or was the C++ implementation a bit
| sloppy? (lacking data oriented design for instance)
|
| Also, what specifically are you most excited using Zig for?
| AndyKelley wrote:
| Thanks :)
|
| The new Zig implementation is certainly more well designed
| than the C++ implementation, for several reasons:
|
| * It's the second implementation of the language
|
| * It did not have to survive as much evolution and language
| churn
|
| * I leveled up as a programmer over the last 7 years
|
| * The Zig language safety and debugging features make it
| possible to do things I would never dream of attempting in
| C++ for fear of footguns. A lot of the data-oriented design
| stuff, for example, makes use of untagged unions, which are
| nightmarish to debug in C++ but trivial in Zig. In C++,
| accessing the wrong union field means you later end up trying
| to figure out where your memory became corrupted; in Zig it
| immediately crashes with a stack trace. This is just one
| example.
|
| * Zig makes certain data structures comfortable and ergonomic
| such as MultiArrayList. In C++ it's too hard, you end up
| making performance compromises to keep your sanity.
|
| Generally, I would say that C++ and Zig are in the same
| performance ballpark, but my (obviously biased) position is
| that the Zig language guides you away from bad habits
| whereas C++ encourages them (such as smart pointers and
| reference counting).
|
| As for less memory, I think this is simply a clear win for
| Zig. No other modern languages compete with the tiny memory
| footprints of Zig software.
|
| Some of the projects I am excited to use Zig for:
|
| * rewriting Groove Basin (a music player server) in zig and
| adding more features
|
| * a local multiplayer arcade game that runs bare metal on a
| raspberry pi
|
| * a Digital Audio Workstation
| Karliss wrote:
| Can you clarify the union part? Did you mean that
| Zig allowed you to replace what would be an untagged union in
| C++ with a tagged union in Zig? Or does the Zig compiler have
| some kind of debug sanitizer mode which automatically turns
| untagged unions into tagged unions with checks?
| dralley wrote:
| In the past they've talked about the speed and memory
| improvements they've gotten from using MultiArrayList in
| the compiler, storing tags separately from the unions
| themselves. If you have a union with a size of 16 bytes
| and you add a tag to that which is 1 byte, a lot of space
| is wasted due to padding. If you keep the tags in a
| separate array, both arrays are individually densely
| packed. Less memory wasted due to padding == less memory
| use overall and better utilization of cache.
|
| But in terms of the implementation, this means working
| with untagged unions, because the tags are maintained
| externally.
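|
| A rough sketch of that layout (type and field names here are
| made up for illustration, not the compiler's actual ones):
|
|     const std = @import("std");
|
|     const Inst = struct {
|         tag: Tag,   // 1-byte tag
|         data: Data, // 8 bytes of payload
|
|         const Tag = enum(u8) { add, constant };
|         const Data = union {
|             bin: struct { lhs: u32, rhs: u32 },
|             value: u64,
|         };
|     };
|
|     pub fn main() !void {
|         var gpa = std.heap.GeneralPurposeAllocator(.{}){};
|         defer _ = gpa.deinit();
|         const allocator = gpa.allocator();
|
|         // Struct-of-arrays: tags and payloads live in separate,
|         // densely packed arrays instead of one padded array.
|         var insts: std.MultiArrayList(Inst) = .{};
|         defer insts.deinit(allocator);
|
|         try insts.append(allocator, .{
|             .tag = .constant,
|             .data = .{ .value = 42 },
|         });
|
|         // Scan just the tags without touching the payloads.
|         const tags = insts.items(.tag);
|         std.debug.print("{s}\n", .{@tagName(tags[0])});
|     }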
| abbeyj wrote:
| If the union is untagged, how can it be determined (at
| runtime) that you've accessed the wrong field?
| Spex_guy wrote:
| The compiler adds a tag in debug modes, but not in
| release modes.
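|
| A tiny illustration (the exact panic message varies between
| Zig versions, but the mechanism is the hidden safety tag
| described above):
|
|     const std = @import("std");
|
|     const Value = union { int: u32, float: f32 };
|
|     pub fn main() void {
|         var v = Value{ .int = 1 };
|         // Debug/ReleaseSafe: panics with a stack trace because
|         // .float is not the active field. Unsafe release
|         // modes: undefined behavior, as in C.
|         std.debug.print("{d}\n", .{v.float});
|     }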
| elcritch wrote:
| Impressive work getting Zig self-hosting. However:
|
| > As for less memory, I think this is simply a clear win
| for Zig. No other modern languages compete with the tiny
| memory footprints of Zig software.
|
| Is not true at all. There are several other modern
| languages that compete with Zig for small memory
| footprints.
| geodel wrote:
| Would you tell us about those languages, or are they left as
| an exercise for the reader?
| elcritch wrote:
| Sort of left it to the readers and hopefully encourage
| people to investigate for themselves. It's far too easy
| to get into "this benchmark vs this benchmark", etc.
| Memory usage is overall a complicated topic, which I
| think Andrew's comment doesn't do justice to. Granted the
| Zig team has done some impressive work, much of the
| memory usage & bloat in the C++ world comes from real
| world programs and libraries and the inevitable drift in
| program architecture over time.
|
| However, I could believe Zig's stdlib and culture
| encourages low memory footprints, but there isn't
| anything novel in the language that makes it inherently
| lower in memory footprint. Though to name a few, languages
| I'd say are directly comparable are Rust, D (esp.
| BetterC mode), and Nim. Even Julia & Go in the right
| context. Though honestly I often prefer wasting a few
| hundred bytes of RAM here and there, even on a
| microcontroller, for pure convenience.
|
| edit: forgot Odin. Another comment mentioned it, though
| I've never used it. It looks like it's used in
| production.
| maleldil wrote:
| Thanks for the detailed answer! I have more questions, if
| you'll indulge me:
|
| If I understood it correctly, you think smart pointers and
| reference counting are bad habits. Why? Especially the
| smart pointers bit.
|
| Why does Zig use less memory than other languages? Is it
| inherent to Zig, or can it be reproduced in other
| languages?
| lupire wrote:
| I think Zig prefers explicit memory management, because
| allocations may fail and should be handled explicitly,
| and because automatic deallocations lead to hard-to-
| predict lifetimes (excess memory usage, and bugs for
| resource handles that are destructed at hard to predict
| moments).
|
| These are things that a "systems language" programmer
| should put in the work to do correctly/near-optimally,
| and not ask the compiler to just do something "good
| enough", like Python would.
| pjmlp wrote:
| And as we know from C, every developer is quite capable
| of taking care of possible use-after-free bugs.
| messe wrote:
| The general purpose allocator in the zig standard library
| has protections against use after free bugs.
| pcwalton wrote:
| By quarantining all memory forever. This is not a
| scalable solution because keeping one allocation in a 4kB
| page alive will leak the whole rest of the page. And if
| you don't quarantine all memory forever then use after
| free comes back. If it were that easy to solve UAF then
| C++ would have solved it by now.
|
| There _is_ a scalable solution for UAF that doesn't
| involve introducing a lifetime/region system: garbage
| collection. Of course, that comes with its own set of
| tradeoffs.
| AndyKelley wrote:
| The problem of a single allocation keeping a 4 KB page
| alive is something that you might commonly find with the
| C++ or Rust way of programming that encourages allocating
| many individual objects on the heap, but in Zig land this
| is pretty rare. In fact, compare a Zig implementation of
| a given application with an equivalent in any other
| language and you will find there is no contest with
| respect to the size of the memory footprint.
|
| There are many cool possibilities that C++ has never
| explored and frankly I find your argument unimaginative.
| pjmlp wrote:
| Just like C and C++ debugging allocators, so what is the
| improvement here?
| Comevius wrote:
| The point of C and Zig is that you can solve any kind of
| problems with them. In particular there is much more to
| memory reclamation than garbage collectors or Rust's
| borrow checker. Some of the reclamation schemes offer
| wait-freedom, or efficiently support linearizable
| operations.
| maleldil wrote:
| What can C and Zig do that Rust can't?
| politician wrote:
| This is kind of a trick question because they are all
| Turing Complete, but so is Assembly. The best way to
| interpret your question, then, is not "what is possible"
| but rather "what is simple to express correctly" and
| "what is simple to get correct eventually". Those are
| questions about how tricky it is to get something to
| compile in the language and which tools exist to help
| determine whether a compiled program does what's expected
| and does not do what's not expected.
| Comevius wrote:
| Nothing if unsafe Rust is considered, though Zig does not
| require you to fight the language nearly as much. This is
| especially obvious with embedded. Zig's philosophy of no
| hidden control flow or allocation makes things simple.
| Simplicity is power.
|
| For certain problems you would want a TLA+ specification
| for safety and especially liveness either way. It's not
| like Rust absolutely guarantees correctness in all cases.
|
| Rust sits in a sweet spot between C/Zig and languages
| like Java, but it's not an appropriate replacement for
| either of them.
| pjmlp wrote:
| Any kind of problem?
|
| So how do you solve SIMD with C, without language
| extensions?
|
| C is not a special snowflake.
| turminal wrote:
| Are there languages that "solve" SIMD?
| pjmlp wrote:
| ISPC for one.
|
| https://ispc.github.io/
| Comevius wrote:
| SIMD is not a problem, it's hardware. Compilers tend to
| take advantage of the CPU-specific SIMD registers and
| instructions, or you can use them explicitly.
|
| Writing generic SIMD code is more portable. C has
| libraries, and Zig also has vector primitives available
| through built-in functions.
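|
| For reference, a small sketch of those vector primitives
| in Zig (builtin names and signatures assumed as of roughly
| Zig 0.9): element-wise operators work directly on @Vector
| values, and @splat/@reduce cover broadcasting and
| horizontal reductions.
|
|     const std = @import("std");
|
|     pub fn main() void {
|         const Vec4 = @Vector(4, f32);
|         // Fixed-length arrays coerce to vectors of the
|         // same length and element type.
|         const a: Vec4 = [4]f32{ 1.0, 2.0, 3.0, 4.0 };
|         const b: Vec4 = [4]f32{ 5.0, 6.0, 7.0, 8.0 };
|
|         // Element-wise ops lower to SIMD instructions where
|         // the target supports them, scalar code otherwise.
|         const sum = a + b;
|         const scaled = sum * @splat(4, @as(f32, 2.0));
|
|         // Horizontal reduction across the vector lanes.
|         const total = @reduce(.Add, scaled);
|         std.debug.print("total = {d}\n", .{total});
|     }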
| pjmlp wrote:
| Any language can have libraries, C is not special here,
| nor is Zig.
|
| In fact, a few GC enabled languages have explicit SIMD
| support.
| coder543 wrote:
| > because automatic deallocations lead to hard-to-predict
| lifetimes (excess memory usage, and bugs for resource
| handles that are destructed at hard to predict moments).
|
| I don't really feel this is the case in Rust.
| throwawayzr wrote:
| varajelle wrote:
| But it is. If you have a String on the stack, its memory
| is only reclaimed at the end of the scope, while it often
| could be freed earlier. This is especially bad in async
| code around await points, since it means the memory needs
| to be kept alive longer than necessary.
| coder543 wrote:
| I disagree. It is substantially and unequivocally better
| to hold onto memory until the end of scope than to leak
| memory by default. How can anyone argue that leaking
| memory is a better default? That's a ticking time bomb.
| Maybe someone is so optimistic as to believe they'll
| catch every leak before shipping new code?
|
| You can easily add a manual "drop" call in Rust at any
| point if you want to force an allocation to be freed
| sooner, but I speak from years of experience using Rust
| for work when I say that Rust's RAII model is not
| problematic in practice. I'm not simply speculating or
| theorizing, and I have professional experience with a
| variety of languages at all levels of the stack. I
| personally don't mind garbage collectors most of the
| time, but Rust is great when you need more control.
|
| In C++, RAII can absolutely be problematic because you
| are able to easily do things that cause undefined
| behavior _by accident_, which is arguably worse than
| either leaking by default or the mere act of holding onto
| memory for the duration of a scope.
|
| If you can propose a system which cannot be contrived to
| have any downside, that would be fantastic! In the real
| world, Rust's approach to memory management is extremely
| pragmatic and beneficial. I'm sure someone will
| eventually improve on Rust's approach, but "leak by
| default" isn't it.
|
| I honestly do enjoy following Zig... it is a fascinating
| language taking a really interesting approach to solving
| many problems, but its memory safety story is not where I
| want it to be yet. Leaking memory by default is
| technically safe, but it's not exactly endearing.
| varajelle wrote:
| Of course you can call drop() manually, but almost nobody
| does or even thinks about it, because that's not the way
| you program in a language with RAII.
|
| Don't get me wrong. I do think that rust and c++ RAII is
| much more convenient and safe than the C or Zig way.
|
| (I'd even prefer it if you could annotate a given struct
| in Rust so the compiler could drop it as soon as it's no
| longer used, but that's not that simple.)
| coder543 wrote:
| I definitely wish that Non-Lexical Lifetimes would
| eagerly drop anything that doesn't implement Drop.
|
| It would probably be a breaking change to automatically
| call an explicit Drop implementation anywhere other than
| the end of the current scope, so I think that would have
| to be left as-is. String doesn't implement Drop, so it
| could easily be dropped eagerly within the scope as soon
| as it won't be referenced again. Such a change would be
| roughly equivalent to any of the compiler optimizations
| that reorder statements in ways that should be
| unobservable.
| guidoism wrote:
| I've recently been writing personal code that leaks like
| a sieve. It's just not worth my time to find every leak
| when the lifetime of the process is finite and short and
| it will only ever run on a machine with gigs of memory. I
| haven't thought through your question enough, but maybe a
| situation where memory usage would be super high if you
| waited until the end of scope? I'm probably trying too
| hard to come up with a situation, but I have a gut feeling
| that freeing mid-scope is important under certain
| circumstances to keep the code simple and understandable.
| charcircuit wrote:
| The lifetime of the memory a smart pointer manages is
| very predictable. It will have the same lifetime as the
| smart pointer itself (this is ignoring moves).
| RustyConsul wrote:
| I think what he's saying is that there is a way to
| carelessly use smart pointers in Rust:
|
| pub enum List {
|     Empty,
|     Elem(i32, Box<List>),
| }
|
| instead of:
|
| pub struct List {
|     head: Link,
| }
|
| enum Link {
|     Empty,
|     Some(Box<Node>),
| }
|
| struct Node {
|     elem: i32,
|     next: Link,
| }
| erwincoumans wrote:
| Great, thanks for those details! I primarily develop using
| C++ and avoiding pitfalls (smart pointers, exceptions,
| unintended memory allocations) takes a lot of effort.
|
| I enjoy synthesizers (including Eurorack) and am looking
| forward to playing with a Zig DAW!
| jchw wrote:
| > * a Digital Audio Workstation
|
| Woah.
|
| A little while ago I went to write a little tool that
| mucked with the FL Studio FLP format. It was easy enough
| to work out the bits that I cared about, so I pretty much
| just did that using a couple of quick projects with
| specific things in them. However, I did check to see if
| anyone else had mucked around with the FLP format, and
| couldn't help but notice your name. I was pretty
| surprised, as a very curious onlooker to the Zig
| programming language. You certainly seem to get around :)
|
| That's a little tangential, but I guess I mention it
| because I was actually wondering if this was ever
| something you planned on doing, given that it was clear
| you had dabbled with DAW stuff (forgive me for not knowing
| if you have a richer connection to music production than
| just that; I never bothered to check).
|
| A DAW in Zig sounds like a kick-ass idea. I tried to write
| a sort-of DAW toy with friends in Rust and it was a lot of
| fun even if it mostly made me realize how challenging it
| could be. (And also how bad at math I am. It took me so
| much effort to feel like I could understand FFTs enough to
| actually _implement_ them.) It makes me wonder if your Zig
| DAW would be open source? It would be a fun project to
| attempt to contribute to, if or when it ever came to
| fruition.
|
| Exciting stuff. Congrats on the Zig milestone.
| jcpst wrote:
| Thanks for all the context. Curious to know more about the
| concept/design of the DAW.
| aaaaaaaaata wrote:
| Seconded.
| carapace wrote:
| Congratulations!
| [deleted]
| riffic wrote:
| Take off every 'ZIG'!!
|
| edit: all your base are belong to us
| mindwok wrote:
| I've spent the last few weeks building an 8080 emulator in Zig to
| learn both emulator programming and the language. Gotta say, it's
| been a pretty pleasant experience. My only issue was with
| dynamic dispatch, which led me down quite a rabbit hole
| that I never fully came out of. It seems the current
| approach is to build your own using compiler built-in
| functions and pointer casts.
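|
| For anyone who hits the same rabbit hole, the hand-rolled
| pattern usually looks roughly like the sketch below: a
| type-erased context pointer plus a function pointer,
| recovered on the callee side with @ptrCast/@alignCast.
| Builtin signatures and *anyopaque are assumed as of the
| 0.9/0.10-dev compilers of this era, and the names here are
| illustrative rather than from any particular library.
|
|     const std = @import("std");
|
|     // A hand-rolled "interface": an erased context pointer
|     // plus a function pointer acting as a one-entry vtable.
|     const Writer = struct {
|         ctx: *anyopaque,
|         writeFn: fn (ctx: *anyopaque, byte: u8) void,
|
|         fn write(self: Writer, byte: u8) void {
|             self.writeFn(self.ctx, byte);
|         }
|     };
|
|     const Counter = struct {
|         count: usize = 0,
|
|         fn writeImpl(ctx: *anyopaque, byte: u8) void {
|             _ = byte;
|             // Recover the concrete type from the erased
|             // pointer.
|             const self = @ptrCast(*Counter,
|                 @alignCast(@alignOf(Counter), ctx));
|             self.count += 1;
|         }
|
|         fn writer(self: *Counter) Writer {
|             return .{ .ctx = self, .writeFn = writeImpl };
|         }
|     };
|
|     pub fn main() void {
|         var counter = Counter{};
|         const w = counter.writer();
|         w.write('a');
|         w.write('b');
|         std.debug.print("wrote {} bytes\n", .{counter.count});
|     }
|
| This is roughly the same shape std.mem.Allocator itself
| took after the allocgate change: a pointer plus a vtable
| of function pointers.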
___________________________________________________________________
(page generated 2022-04-16 23:00 UTC)