[HN Gopher] Things Zig comptime won't do
___________________________________________________________________
Things Zig comptime won't do
Author : JadedBlueEyes
Score : 448 points
Date : 2025-04-20 15:57 UTC (1 day ago)
(HTM) web link (matklad.github.io)
(TXT) w3m dump (matklad.github.io)
| gitroom wrote:
| Cool!
| pyrolistical wrote:
| What makes comptime really interesting is how fluid it is as you
| work.
|
| At some point you realize you need type information, so you just
| add it to your func params.
|
| That bubbles all the way up and you are done. Or you realize in
| certain situations it is not possible to provide the type and you
| need to solve an arch/design issue.
| Zambyte wrote:
| If the type that you're passing as an argument is the type of
| another argument, you can keep the API simpler by just using
| @TypeOf(arg) internally in the function instead.
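|
| Roughly, the difference looks like this (a minimal sketch; the
| function names are made up, not from the thread):
|
|     // Explicit type parameter, which every caller has to pass:
|     fn maxExplicit(comptime T: type, a: T, b: T) T {
|         return if (a > b) a else b;
|     }
|
|     // Same behaviour, but the type is recovered from the first
|     // argument, so the API stays smaller and the compiler still
|     // checks everything at the call site:
|     fn max(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
|         return if (a > b) a else b;
|     }
|
| maxExplicit(u32, 3, 5) and max(@as(u32, 3), 5) both return 5.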
| no_wizard wrote:
| I like the Zig language and tooling. I do wish there was a safety
| mode that gives the same guarantees as Rust, but it's a huge step
| above C/C++. I am also extremely impressed with the Zig compiler.
|
| Perhaps the safety is the tradeoff with the comparative ease of
| using the language compared to Rust, but I'd love the best of
| both worlds if it were possible
| hermanradtke wrote:
| I wish for "strict" mode as well. My current thinking:
|
| TypeScript is to JavaScript
|
| as
|
| Zig is to C
|
| I am a huge TS fan.
| rc00 wrote:
| Is Zig aiming to extend C or extinguish it? The embrace story
| is well-established at this point but the remainder is often
| unclear in the messaging from the community.
| dooglius wrote:
| Zig is open source, so the analogy to Microsoft's EEE [0]
| seems misplaced.
|
| [0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_ex
| tingu...
| rc00 wrote:
| Open source or not isn't the point. The point is the
| mission and the ecosystem. Some of the Zig proponents
| laud the C compatibility. Others are seeking out the
| "pure Zig" ecosystem. Curious onlookers want to know if
| the Zig ecosystem and community will be as hostile to the
| decades of C libraries as the Rust zealots have been.
|
| To be fair, I don't believe there is a centralized and
| stated mission with Zig but it does feel like the story
| has moved beyond the "Incrementally improve your
| C/C++/Zig codebase" moniker.
| Zambyte wrote:
| > Curious onlookers want to know if the Zig ecosystem and
| community will be as hostile to the decades of C
| libraries as the Rust zealots have been.
|
| Definitely not the case in Zig. From my experience, the
| relationship with C libraries amounts to "if it works,
| use it".
| rc00 wrote:
| Are you referring to static linking? Dynamic linking?
| Importing/inclusion? How does this translate (no pun
| intended) when the LLVM backend work is completed? Does
| this extend to reproducible builds? Hermetic builds?
|
| And the relationship with C libraries certainly feels
| like a placeholder, akin to before the compiler was self-
| hosted. While I have seen some novel projects in Zig,
| there are certainly more than a few "pure Zig" rewrites
| of C libraries. Ultimately, this is free will. I just
| wonder if the Zig community is teeing up for a repeat of
| Rust's actix-web drama but rather than being because of
| the use of unsafe, it would be due to the use of C
| libraries instead of the all-Zig counterparts (assuming
| some level of maturity with the latter). While Zig's
| community appears healthier and more pragmatic, hype and
| ego have a way of ruining everything.
| Zambyte wrote:
| > static linking?
|
| Yes
|
| > Dynamic linking?
|
| Yes
|
| > Importing/inclusion?
|
| Yes
|
| > How does this translate (no pun intended) when the LLVM
| backend work is completed?
|
| I'm not sure what you mean. It sounds like you think
| they're working on being able to use LLVM as a backend,
| but that has already been supported, and now they're
| working on _not_ depending on LLVM as a requirement.
|
| > Does this extend to reproducible builds?
|
| My hunch would be yes, but I'm not certain.
|
| > Hermetic builds?
|
| I have never heard of this, but I would guess the same as
| reproducible.
|
| > While I have seen some novel projects in Zig, there are
| certainly more than a few "pure Zig" rewrites of C
| libraries.
|
| It's a nice exercise, especially considering how close C
| and Zig are semantically. It's helpful for learning to
| see how C things are done in Zig, and rewriting things
| lets you isolate that experience without also being
| troubled with creating something novel.
|
| For more than a few _not_ rewrites, check out
| https://github.com/allyourcodebase, which is a group that
| repackages existing C libraries with the Zig package
| manager / build system.
| ephaeton wrote:
| zig's C compat is being lowered from 'comptime'
| equivalent status to 'zig build'-time equivalent status.
| When you'll need to put 'extern "C"' annotations on any
| import/export to C, it'll have gone full-circle to C++ C
| compat, and thus be none the wiser.
|
| andrewrk's wording towards C and its main ecosystem
| (POSIX) is very hostile, if that is something you'd like
| to go by.
| PaulRobinson wrote:
| It's improved C.
|
| C interop is very important, and very valuable. However, by
| removing undefined behaviours, replacing macros that do
| weird things with well thought-through comptime, and making
| sure that the Zig compiler is also a C compiler, you get a
| nice balance across lots of factors.
|
| It's a great language, I encourage people to dig into it.
| yellowapple wrote:
| The goal rather explicitly seems to be to extinguish it -
| the idea being that if you've got Zig, there should be no
| reason to need to write new code in C, because literally
| anything possible in C should be possible (and ideally done
| better) in Zig.
|
| Whether that ends up happening is obviously yet to be seen;
| as it stands there are plenty of Zig codebases with C in
| the mix. The idea, though, is that there shouldn't be
| anything stopping a programmer from replacing that C with
| Zig, and the two languages only coexist for the purpose of
| allowing that replacement to be gradual.
| xedrac wrote:
| I like Zig as a replacement for C, but not C++ due to its lack
| of RAII. Rust on the other hand is a great replacement for C++.
| I see Zig as filling a small niche where handling allocation
| failures is paramount - very constrained embedded devices, etc...
| Otherwise, I think you just get a lot more with Rust.
| xmorse wrote:
| Even better than RAII would be linear types, but it would
| require a borrow checker to track the lifetimes of objects.
| Then you would get a compiler error if you forget to call a
| .destroy() method
| throwawaymaths wrote:
| No, you just need analysis with a dependent type system
| (which linear types are a subset of). It doesn't have to be
| in the compiler. There was a proof of concept here a few
| months ago:
|
| https://news.ycombinator.com/item?id=42923829
|
| https://news.ycombinator.com/item?id=43199265
| rastignack wrote:
| Compile times and painful-to-refactor codebases are Rust's
| main drawbacks for me though.
|
| It's totally subjective but I find the language boring to
| use. For side projects I like having fun thus I picked zig.
|
| To each his own of course.
| nicce wrote:
| > refactor codebase are rust's main drawbacks
|
| Hard disagree about refactoring. Rust is one of the few
| languages where you can actually do refactoring rather
| safely without having tons of tests that just exist to
| catch issues if code changes.
| rastignack wrote:
| Lifetimes and generics tend to leak, so you have to modify
| your code all over the place when you touch them, though.
| ksec wrote:
| >but I'd love the best of both worlds if it were possible
|
| I am just going to quote what pcwalton said the other day that
| perhaps answer your question.
|
| >> I'd be much more excited about that promise [memory safety
| in Rust] if the compiler provided that safety, rather than
| asking the programmer to do an extraordinary amount of extra
| work to conform to syntactically enforced safety rules. Put the
| complexity in the compiler, dudes.
|
| > That exists; it's called garbage collection.
|
| >If you don't want the performance characteristics of garbage
| collection, something has to give. Either you sacrifice memory
| safety or you accept a more restrictive paradigm than GC'd
| languages give you. For some reason, programming language
| enthusiasts think that if you think really hard, every issue
| has some solution out there without any drawbacks at all just
| waiting to be found. But in fact, creating a system that has
| zero runtime overhead and unlimited aliasing with a mutable
| heap is as impossible as _finding two even numbers whose sum is
| odd._
|
| [1] https://news.ycombinator.com/item?id=43726315
| skybrian wrote:
| Yes, but I'm not hoping for that. I'm hoping for something
| like a scripting language with simpler lifetime annotations.
| Is Rust going to be the last popular language to be invented
| that explores that space? I hope not.
| hyperbrainer wrote:
| I was quite impressed with Austral[0], which used Linear
| Types and avoids the whole Rust-like implementation in
| favour of a more easily understandable system, albeit
| slightly more verbose.
|
| [0]https://borretti.me/article/introducing-austral
| renox wrote:
| Austral's concepts are interesting but the introduction
| doesn't show how to correctly handle errors in this
| language.
| hyperbrainer wrote:
| Austral's specification is one of the most beautiful and
| well-written pieces of documentation I have ever found.
| Its section on error handling in Austral[0] covers
| everything from rationale and alternatives to concrete
| examples of how exceptions should be handled in
| conjunction with linear types.
|
| [0] https://austral-lang.org/spec/spec.html#rationale-errors
| Philpax wrote:
| You may be interested in https://dada-lang.org/, which is
| not ready for public consumption, but is a language by one
| of Rust's designers that aims to be higher-level while
| still keeping much of the goodness from Rust.
| skybrian wrote:
| The first and last blog post was in 2021. Looks like it's
| still active on Github, though?
| Ygg2 wrote:
| > Is Rust going to be the last popular language to be
| invented that explores that space? I hope not.
|
| Seeing how most people hate the lifetime annotations, yes.
| For the foreseeable future.
|
| People want unlimited freedom. Unlimited freedom rhymes
| with unlimited footguns.
| xmorse wrote:
| There is Mojo and Vale (which was created by a now Mojo
| core contributor)
| the__alchemist wrote:
| Maybe this is a bad place to ask, but: Those experienced in
| manual-memory langs: What in particular do you find
| cumbersome about the borrow system? I've hit some annoyances
| like when splitting up struct fields into params where more
| than one is mutable, but that's the only friction point that
| comes to mind.
|
| I ask because I am obviously blind to other cases - that's what
| I'm curious about! I generally find the &s to be a net help
| even without mem safety ... They make it easier to reason
| about structure, and when things mutate.
| rc00 wrote:
| > What in particular do you find cumbersome about the
| borrow system?
|
| The refusal to accept code that the developer knows is
| correct, simply because it does not fit how the borrow
| checker wants to see it implemented. That kind of heavy-
| handed and opinionated supervision is overhead to
| productivity. (In recent times, others have taken to saying
| that Rust is less "fun.")
|
| When the purpose of writing code is to solve a problem and
| not engage in some pedantic or academic exercise, there are
| much better tools for the job. There are also times when
| memory safety is not a paramount concern. That makes the
| overhead of Rust not only unnecessary but also unwelcome.
| the__alchemist wrote:
| Thank you for the answer! Do you have an example? I'm
| having a fish-doesn't-know-water problem.
| int_19h wrote:
| Basically anything that involves objects mutually
| referencing each other.
| the__alchemist wrote:
| Oh, that does sound tough in rust! I'm not even sure how
| to approach it; good to know it's a useful pattern in
| other langs.
| int_19h wrote:
| Well, one can always write unsafe Rust.
|
| Although the more usual pattern here is to ditch pointers
| and instead have a giant array of objects referring to
| each other via indices into said array. But this is
| effectively working around the borrow checker - those
| indices are semantically unchecked references, and
| although out-of-bounds checks will prevent memory
| corruption, it is possible to store index to some object
| only for that object to be replaced with something else
| entirely later.
| estebank wrote:
| > it is possible to store index to some object only for
| that object to be replaced with something else entirely
| later.
|
| That's what generational arenas are for, at the cost of
| having to check for index validity on every access. But
| that cost is only in comparison to "keep a pointer in a
| field" with no additional logic, which is bug-prone.
| Cloudef wrote:
| > unsafe rust
|
| Which is worse than C
| Ygg2 wrote:
| > The refusal to accept code that the developer knows is
| correct,
|
| How do you know it is correct? Did you prove it with pre-
| conditions, invariants and post-conditions? Or did you
| assume based on prior experience?
| yohannesk wrote:
| Writing correct code did not start after the introduction
| of the rust programming language
| Ygg2 wrote:
| Nope, but claims of knowing to write correct code
| (especially C code) without borrow checker sure did spike
| with its introduction. Hence, my question.
|
| How do you know you haven't been writing unsafe code for
| years, when C's unsafe guidelines have like 200 entries[1]?
|
| [1]https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_d
| efinit... (Annex J.2 page 490)
| int_19h wrote:
| It's not difficult to write a provably correct
| implementation of doubly linked list in C, but it is very
| painful to do in Rust because the borrow checker really
| hates this kind of mutually referential objects.
| Ygg2 wrote:
| Hard part of writing actually provable code isn't the
| code. It's the proof. What are invariants of double
| linked list that guarantee safety?
|
| Writing provable anything is hard because it forces you
| to think carefully about that. You can no longer reason
| by going into flow mode, letting fast and incorrect part
| of the brain take over.
| edflsafoiewq wrote:
| One example is a function call that doesn't compile, but
| will if you inline the function body. Compilation is
| prevented only by the insufficient expressiveness of the
| function signature.
| lelanthran wrote:
| Rust prevents classes of bugs by preventing specific
| patterns.
|
| This means it rejects, by definition alone, bug-free code
| because that bug free code uses a pattern that is not
| acceptable.
|
| IOW, while Rust rejects code with bugs, it also rejects
| code without bugs.
|
| It's part of the deal when choosing Rust, and people who
| choose Rust know this upfront and are okay with it.
| bigstrat2003 wrote:
| > This means it rejects, by definition alone, bug-free
| code because that bug free code uses a pattern that is
| not acceptable.
|
| That is not true by definition alone. It is only true if
| you add the corollary that the patterns which rustc
| prevents are sometimes bug-free code.
| lelanthran wrote:
| > That is not true by definition alone. It is only true
| if you add the corollary that the patterns which rustc
| prevents are sometimes bug-free code.
|
| That corollary is only required in the cases that a
| pattern is unable to produce bug-free code.
|
| In practice, there _isn't_ a pattern that reliably, 100%
| of the time and deterministically produces a bug.
| charlotte-fyi wrote:
| Isn't the persistent failure of developers to "know" that
| their code is correct the entire point? Unless you have
| mechanical proof, in the aggregate and working on any
| project of non-trivial size "knowing" is really just
| "assuming." This isn't academic or pedantic, it's a basic
| epistemological claim with regard to what writing
| software actually looks like in practice. You, in fact,
| do not know, and your insistence that you do is precisely
| the reason that you are at greater risk of creating
| memory safety vulnerabilities.
| sgeisenh wrote:
| Lifetime annotations can be burdensome when trying to avoid
| extraneous copies and they feel contagious (when you add a
| lifetime annotation to a frequently used type, it bubbles
| out to anything that uses that type unless you're willing
| to use unsafe to extend lifetimes). The solutions to this
| problem (tracking indices instead of references) lose a lot
| of benefits that the borrow checker provides.
|
| The aliasing rules in Rust are also pretty strict. There
| are plenty of single-threaded programs where I want to be
| able to occasionally read a piece of information through an
| immutable reference, but that information can be modified
| by a different piece of code. This usually indicates a
| design issue in your program but sometimes you just want to
| throw together some code to solve an immediate problem. The
| extra friction from the borrow checker makes it less
| attractive to use Rust for these kinds of programs.
| bogdanoff_2 wrote:
| >There are plenty of single-threaded programs where I
| want to be able to occasionally read a piece of
| information through an immutable reference, but that
| information can be modified by a different piece of code.
|
| You could do that using Cell or RefCell. I agree that it
| makes it more cumbersome.
| Starlevel004 wrote:
| Lifetimes add an impending sense of doom to writing any
| sort of deeply nested code. You get this deep without
| writing a lifetime... uh oh, this struct needs a reference,
| and now you need to add a generic parameter to everything
| everywhere you've ever written and it feels miserable.
| Doubly so when you've accidentally omitted a lifetime
| generic somewhere and it compiles now but then you do some
| refactoring and it _won't_ work anymore and you need to go
| back and re-add the generic parameter everywhere.
| the__alchemist wrote:
| I guess the dodge on this one is not using refs in
| structs. This opens you up to index errors though because
| it presumably means indexing arrays etc. Is this the
| tradeoff? (I write loads of Rust in a variety of
| domains, and rarely need a manual lifetime.)
| quotemstr wrote:
| And those index values are just pointers by another name!
| estebank wrote:
| It's not "just pointers", because they can have
| additional semantics and assurances beyond "give me the
| bits at this address". The index value can be tied to a
| specific container (using new types for indexing so that
| you can't make the mistake of getting value 1 from
| container A when it represents an index from container
| B), can prevent use after free (by embedding data about
| the value's "generation" in the key), and makes the index
| resistant to relocation of the values (because of the
| additional level of indirection of the index to the
| value's location).
| quotemstr wrote:
| Yes, but like raw pointers, they lack lifetime guarantees
| and invite use after free vulnerabilities
| pornel wrote:
| There is a stark contrast in usability of self-
| contained/owning types vs types that are temporary views
| bound by a lifetime of the place they are borrowing from.
| But this is an inherent problem for all non-GC languages
| that allow saving pointers to data on the stack (Rust
| doesn't need lifetimes for by-reference heap types). In
| languages without lifetimes you just don't get any
| compiler help in finding places that may be affected by
| dangling pointers.
|
| This is similar to creating a broadly-used data structure
| and realizing that some field has to be optional.
| Option<T> will require you to change everything touching
| it, and virally spread through all the code that wanted
| to use that field unconditionally. However, that's not
| the fault of the Option syntax, it's the fault of
| semantics of optionality. In languages that don't make
| this "miserable" at compile time, this problem manifests
| with a whack-a-mole of NullPointerExceptions at run time.
|
| With experience, I don't get this "oh no, now there's a
| lifetime popping up everywhere" surprise in Rust any
| more. Whether something is going to be a temporary view
| or permanent storage can be known ahead of time, and if
| it can be both, it can be designed with Cow-like types.
|
| I also got a sense for when using a temporary loan is a
| premature optimization. All data has to be stored
| somewhere (you can't have a reference to data that hasn't
| been stored). Designs that try to be ultra-efficient by
| allowing only temporary references often force data to be
| stored in a temporary location first, and then borrowed,
| which doesn't avoid any allocations, only adds
| dependencies on external storage. Instead, the design can
| support moving or collecting data into owned (non-
| temporary) storage directly. It can then keep it for an
| arbitrary lifetime without lifetime annotations, and hand
| out temporary references to it whenever needed. The run-
| time cost can be the same, but the semantics are much
| easier to work with.
| dzaima wrote:
| I imagine a large part is just how one is used to doing
| stuff. Not being forced to be explicit about mutability and
| lifetimes allows a bunch of neat stuff that does not
| translate well to Rust, even if the desired thing in
| question might not be hard to do in another way. (but that
| other way might involve more copies / indirections, which
| users of manually-memory langs would (perhaps rightfully,
| perhaps pointlessly) desire to avoid if possible, but Rust
| users might just be comfortable with)
|
| This separation is also why it is basically impossible to
| make apples-to-apples comparisons between languages.
|
| Messy things I've hit (from ~5KLoC of Rust; I'm a Rust
| beginner, I primarily do C) are: cyclical references; a
| large structure that needs efficient single-threaded
| mutation while referenced from multiple places (i.e. must
| use some form of cell) at first, but needs to be sharable
| multithreaded after all the mutating is done; self-
| referential structures are roughly impossible to move
| around (namely, an object holding &-s to objects allocated
| by a bump allocator, movable around as a pair, but that's
| not a thing (without libraries that I couldn't figure out
| at least)); and refactoring mutability/lifetimes is also
| rather messy.
| spullara wrote:
| With Java ZGC the performance aspect has been fixed (<1ms
| pause times and real world throughput improvement). Memory
| usage though will always be strictly worse with no obvious
| way to improve it without sacrificing the performance gained.
| estebank wrote:
| IMO the best chance Java has to close the gap on memory
| utilisation is Project Valhalla[1] which brings value types
| to the JVM, but the specifics will matter. If it requires
| backwards incompatible opt-in ceremony, the adoption in the
| Java ecosystem is going to be an uphill battle, so the wins
| will remain theoretical and be unrealised. If it is
| transparent, then it might reduce the memory pressure of
| Java applications overnight. Last I heard was that the
| project was ongoing, but production readiness remained far
| in the future. I hope they pull it off.
|
| 1: https://openjdk.org/projects/valhalla/
| spullara wrote:
| Agree, been waiting for it for almost a decade.
| no_wizard wrote:
| I have zero issue with needing runtime GC or equivalent like
| ARC.
|
| My issue is with ergonomics and performance. In my experience
| with a range of languages, the most performant way of writing
| the code is not the way you would idiomatically write it.
| They make good performance more complicated than it should
| be.
|
| This holds true to me for my work with Java, Python, C# and
| JavaScript.
|
| What I suppose I'm looking for is a better compromise between
| having some form of managed runtime vs non managed
|
| And yes, I've also tried Go, and its DX is its own type of
| pain for me. I should try it again now that it has generics
| neonsunset wrote:
| Using spans, structs, object and array pools is considered
| fairly idiomatic C# if you care about performance (and many
| methods now default to just spans even outside that).
|
| What kind of idiomatic or unidiomatic C# do you have in
| mind?
|
| I'd say if you are okay with GC side effects, achieving
| good performance targets is way easier than if you care
| about P99/999.
| throwawaymaths wrote:
| In principle it should be doable, possibly not in the
| language/compiler itself; there was this POC a few months ago:
|
| https://github.com/ityonemo/clr
| pjmlp wrote:
| Most of Zig's safety was already available in 1978's Modula-2,
| but apparently languages have to come in curly brackets for
| adoption.
| chongli wrote:
| _languages have to come in curly brackets for adoption_
|
| Python and Ruby are two very popular counterexamples.
| pjmlp wrote:
| Not really, Ruby has plenty of curly brackets, e.g. 5.times
| { puts "hello!" }.
|
| In both cases, while it wasn't curly brackets that drove
| their adoption, it was unavoidable frameworks.
|
| Most people only use Ruby when they have Rails projects,
| and what made Python originally interesting was Zope CMS.
|
| And nowadays AI/ML frameworks, that are actually written in
| C, C++ and Fortran, making Python relevant because
| scientists decided on picking Python for their library
| bindings, it could have been Tcl just as well, as choices
| go.
|
| So yeah, maybe not always curly brackets, but definitely
| something that makes it unavoidable, sadly Modula-2 lacked
| that, an OS vendor pushing it no matter what, FAANG style.
| pklausler wrote:
| Which AI/ML frameworks are written in Fortran?
| pjmlp wrote:
| Probably none; it was more a figure of speech, given
| the tradition of "Python" libraries that are actually
| bindings to C, C++, or Fortran libraries.
| ashvardanian wrote:
| Previous submission:
| https://news.ycombinator.com/item?id=43738703
| karmakaze wrote:
| > Zig's comptime feature is most famous for what it can do:
| generics!, conditional compilation!, subtyping!, serialization!,
| ORM! That's fascinating, but, to be fair, there's a bunch of
| languages with quite powerful compile time evaluation
| capabilities that can do equivalent things.
|
| I'm curious what are these other languages that can do these
| things? I read HN regularly but don't recall them. Or maybe
| that's including things like Java's annotation processing which
| is so clunky that I wouldn't classify them to be equivalent.
| awestroke wrote:
| Rust, D, Nim, Crystal, Julia
| elcritch wrote:
| Definitely, you can do most of those things in Nim without
| macros using templates and compile time stuff. It's
| preferable to macros when possible. Julia has fantastic
| compile time abilities as well.
|
| It's beautiful to implement an incredibly fast serde in like
| 10 lines without requiring other devs to annotate their
| packages.
|
| I wouldn't include Rust on that list if we're speaking of
| compile time and compile time type abilities.
|
| Last time I tried it, Rust's const expression system is pretty
| limited. Rust's macro system likewise is very weak.
|
| Primarily you can only get type info by directly passing the
| type definition to a macro, which is how derive and all work.
| tialaramex wrote:
| Rust has _two_ macro systems; the proc macros are allowed
| to do absolutely whatever they please because they're
| actually executing in the compiler.
|
| Now, _should_ they do anything they please? Definitely not,
| but they can. That's why there's a (serious) macro which
| runs your Python code, and a (joke, in the sense that you
| should never use it, not that it wouldn't work) macro which
| replaces your running compiler with a different one so that
| code which is otherwise invalid will compile anyway...
| int_19h wrote:
| > Rust's macro system likewise is also very weak.
|
| How so? Rust procedural macros operate at the token stream
| level while being able to tap into the parser, so I
| struggle to think of what they _can't_ do, aside from
| limitations on the syntax of the macro.
| Nullabillity wrote:
| Rust macros don't really understand the types involved.
|
| If you have a derive macro for
|
|     #[derive(MyTrait)]
|     struct Foo {
|         bar: Bar,
|         baz: Baz,
|     }
|
| then your macro _can_ see that it references Bar and Baz,
| but it can't know _anything_ about how those types are
| defined. Usually, the way to get around it is to define
| some trait on both Bar and Baz, which your Foo struct
| depends on, but that still only gives you access to that
| information at runtime, not when evaluating your macro.
|
| Another case would be something like
|
|     #[my_macro]
|     fn do_stuff() -> Bar {
|         let x = foo();
|         x.bar()
|     }
|
| Your macro would be able to see that you call the
| functions foo() and Something::bar(), but it wouldn't
| have the context to know the type of x.
|
| And even if you did have the context to be able to see
| the scope, you probably still aren't going to reimplement
| rustc's type inference rules just for your one macro.
|
| Scala (for example) is different: any AST node is tagged
| with its corresponding type that you can just ask for,
| along with any context to expand on that (what fields
| does it have? does it implement this supertype? are there
| any relevant implicit conversions in scope?). There are
| both up- and downsides to that (personally, I do quite
| like the locality that Rust macros enforce, for example),
| but Rust macros are unquestionably _weaker_.
| elcritch wrote:
| Thanks, that's exactly what I was referencing. In lisp
| the type doesn't matter as much, just the structure, as
| maps or other dynamic pieces will be used. However in
| typed languages it matters a lot.
| forrestthewoods wrote:
| Rust macros are a mutant foreign language.
|
| A much much better system would be one that lets you
| write vanilla Rust code to manipulate either the token
| stream or the parsed AST.
| dwattttt wrote:
| ...? Proc macros _are_ vanilla Rust code written to
| manipulate a token stream.
| forrestthewoods wrote:
| You're right. I should have said I want vanilla Rust code
| for vanilla macros and I want to manipulate the AST not
| token streams.
|
| Token manipulation code is frequently full of syn! macro
| hell. So even token manipulation is only kind of normal
| Rust code.
| dhruvrajvanshi wrote:
| It doesn't have access to the type system, for example.
| It just sees its input as what you typed in the code. It
| wouldn't be able to see through aliases.
| rurban wrote:
| Perl BEGIN blocks
| tmtvl wrote:
| PPR + keyword::declare (shame that Damian didn't actually
| call it keyword::keyword).
| foobazgt wrote:
| Yeah, I'm not a big fan of annotation processing either. It's
| simultaneously heavyweight and unwieldy, and yet doesn't do
| enough. You get all the annoyance of working with a full-blown
| AST, and none of the power that comes with being able to
| manipulate an AST.
|
| Annotations themselves are pretty great, and AFAIK, they are
| most widely used with reflection or bytecode rewriting instead.
| I get that the maintainers dislike macro-like capabilities, but
| the reality is that many of the nice libraries/facilities Java
| has (e.g. transparent spans), just aren't possible without AST-
| like modifications. So, the maintainers don't provide 1st class
| support for rewriting, and they hold their noses as popular
| libraries do it.
|
| Closely related, I'm pretty excited to muck with the new class
| file API that just went GA in 24
| (https://openjdk.org/jeps/484). I don't have experience with it
| yet, but I have high hopes.
| pron wrote:
| Java's annotation processing is intentionally limited so that
| compiling with them cannot change the semantics of the Java
| language as defined by the Java Language Specification (JLS).
|
| Note that more intrusive changes -- including not only
| bytecode-rewriting agents, but also the use of those AST-
| modifying "libraries" (really, languages) -- require command-
| line flags that tell you that the semantics of code may be
| impacted by some other code that is identified in those
| flags. This is part of "integrity by default":
| https://openjdk.org/jeps/8305968
| foobazgt wrote:
| Just because something mucks with a program's AST doesn't
| mean that it's introducing a new "language". You wouldn't
| call using reflection, "creating a new language", either,
| and many of these libraries can be implemented either way.
| (Usually a choice between adding an additional build step,
| runtime overhead, and ease of implementation). It just
| really depends upon the details of the transform.
|
| The integrity by default JEPs are really about trying to
| reduce developers depending upon JDK/JRE implementation
| details, for example, sun.misc.Unsafe. From the JEP:
|
| "In short: The use of JDK-internal APIs caused serious
| migration issues, there was no practical mechanism that
| enabled robust security in the current landscape, and new
| requirements could not be met. Despite the value that the
| unsafe APIs offer to libraries, frameworks, and tools, the
| ongoing lack of integrity is untenable. Strong
| encapsulation and the restriction of the unsafe APIs -- by
| default -- are the solution."
|
| If you're dependent on something like ClassFileTransformer,
| -javaagent, or setAccessible, you'll just set a command-
| line flag. If you're not, it's because you're already doing
| this through other means like a custom ClassLoader or a
| build step.
| pron wrote:
| > Just because something mucks with a program's AST
| doesn't mean that it's introducing a new "language".
|
| That depends on the language specification. The Java spec
| dictates what code a Java compiler must accept _and must
| reject_. Any "mucking with AST" that changes that is, by
| definition, not Java. For example, many Lombok programs
| are clearly not written in Java because the Java spec
| dictates that a Java compiler (with or without annotation
| processors) _must_ reject them.
|
| In Scheme or Clojure, user-defined AST transformations
| are very much part of the language.
|
| > The integrity by default JEPs are really about trying
| to reduce developers depending upon JDK/JRE
| implementation details
|
| I'm one of the JEP's authors, and it concerns multiple
| things. In general, it concerns being able to make
| guarantees about certain invariants.
|
| > If you're not, it's because you're already doing this
| through other means like a custom ClassLoader or a build
| step.
|
| Custom class loaders fall within integrity by default, as
| their impact is localised. Build step transforms also
| require an explicit run of some executable. The point of
| integrity by default is that any possibility of breaking
| invariants that the spec wishes to enforce must require
| some visible, auditable step. This is to specifically
| exclude invariant-breaking operations by code that
| appears to be a regular library.
| ephaeton wrote:
| well, the lisp family of languages surely can do all of that,
| and more. Check out, for example, clojure's version of zig's
| dropped 'async'. It's a macro.
| hiccuphippo wrote:
| The quote in Spanish about a Norse god is from a story by Jorge
| Luis Borges, here's an English translation:
| https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...
| _emacsomancer_ wrote:
| And in Spanish here: https://www.poeticous.com/borges/el-
| disco?locale=es
|
| (Not having much Spanish, I at first thought "Odin's
| disco(teque)" and then "no, that doesn't make sense about
| sides", but then, surely primed by English "disco", thought "it
| must mean Odin's record/lp/album".)
| wiml wrote:
| Odin's records have no B-sides, because everything Odin
| writes is fire!
| tialaramex wrote:
| Back when things really had A and B sides, it was
| moderately common for big artists to release a "Double A"
| in which both titles were heavily promoted, e.g. Nirvana's
| "All Apologies" and "Rape Me" are a double A, the Beatles
| "Penny Lane" and "Strawberry Fields Forever" likewise.
| kruuuder wrote:
| If you have read the story and, like me, are still wondering
| which part of the story is the quote at the top of the post:
|
| "It's Odin's Disc. It has only one side. Nothing else on Earth
| has only one side."
| tines wrote:
| A mobius strip does!
| bibanez wrote:
| A mobius strip made out of paper has 2 sides, the usual one
| and the edge.
| WJW wrote:
| How about things like Klein bottles that have no edges?
| (Although I guess that unlike a Mobius strip it's not
| possible to make a real one here on Earth so the quote
| from OP still holds)
| Validark wrote:
| The story is indeed very short, but hits hard. Odin reveals
| himself and his mystical disc that he states makes him king as
| long as he holds it. The Christian hermit (by circumstance) who
| had previously received him told him he didn't worship Him,
| that he worshiped Christ instead, and then murdered him for the
| disc in the hopes he could sell it for a bunch of money. He
| dumped Odin's body in the river and never found the disc. The
| man hated Odin to this day for not just handing over the disc
| to him.
|
| I wonder if there's some message in here. As a modern American
| reader, if I believed the story was contemporary, I'd think
| it's making a point about Christianity substituting honor for
| destructive greed. That a descendant of the wolves of Odin
| would worship a Hebrew instead and kill him for a bit of money
| is quite sad, but I don't think it an inaccurate
| characterization. There's also the element of resentment
| towards Odin for not just handing over monetary blessings.
| That's sad to me as well. Part of me hopes that one day Odin
| isn't held in such contempt.
| ww520 wrote:
| This is a very educational blog post. I knew 'comptime for' and
| 'inline for' were comptime related, but didn't know the
| difference. The post explains the inline version only knows the
| length at comptime. I guess it's for loop unrolling.
| hansvm wrote:
| The normal use case for `inline for` is when you have to close
| over something only known at compile time (like when iterating
| over the fields of a struct), but when your behavior depends on
| runtime information (like conditionally assigning data to those
| fields).
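|
| A minimal sketch of that shape (the type and function here are
| made up, not from the article):
|
|     const std = @import("std");
|
|     const Point = struct { x: f32, y: f32, z: f32 };
|
|     // The field list is only known at compile time, so the loop
|     // must be `inline for`, but whether a field is overwritten
|     // depends on runtime data.
|     fn resetNegative(p: *Point) void {
|         inline for (std.meta.fields(Point)) |field| {
|             if (@field(p, field.name) < 0) { // runtime condition
|                 @field(p, field.name) = 0;
|             }
|         }
|     }
|
| Each unrolled iteration closes over a different comptime
| field.name, while the branch itself is ordinary runtime code.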
|
| Unrolling as a performance optimization is usually slightly
| different, typically working in batches rather than unrolling
| the entire thing, even when the length is known at compile
| time.
|
| The docs suggest not using `inline` for performance without
| evidence it helps in your specific usage, largely because the
| bloated binary is likely to be slower unless you have a good
| reason to believe your case is special, and also because
| `inline` _removes_ optimization potential from the compiler
| rather than adding it (its inlining passes are very, very good,
| and despite having an extremely good grasp on which things
| should be inlined I rarely outperform the compiler -- I'm never
| worse, but the ability to not have to even think about it
| unless/until I get to the microoptimization phase of a project
| is liberating).
| pron wrote:
| Yes!
|
| To me, the uniqueness of Zig's comptime is a combination of two
| things:
|
| 1. comptime _replaces_ many other features that would be
| specialised in other languages with or without rich compile-time
| (or runtime) metaprogramming, _and_
|
| 2. comptime is referentially transparent [1], which makes it
| strictly "weaker" than AST macros, but simpler to understand;
| what's surprising is just how capable you can be with a comptime
| mechanism with access to introspection yet without the
| referentially opaque power of macros.
|
| These two give Zig a unique combination of simplicity and power.
| We're used to seeing things like that in Scheme and other Lisps,
| but the approach in Zig is very different. The outcome isn't as
| general as in Lisp, but it's powerful enough while keeping code
| easier to understand.
|
| You can like it or not, but it is very interesting and very novel
| (the novelty isn't in the feature itself, but in the place it has
| in the language). Languages with a novel design and approach that
| you can learn in a couple of days are quite rare.
|
| [1]: In short, this means that you get no access to names or
| expressions, only the values they yield.
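|
| A tiny sketch of what that restriction means in practice (the
| function name is made up):
|
|     fn double(comptime x: u32) u32 {
|         // `x` is just the value 2 here: whether the caller wrote
|         // `2`, `1 + 1`, or a named constant is invisible.
|         return x * 2;
|     }
|
|     const a = double(2);
|     const b = double(1 + 1); // same value, same result: a == b == 4
|
| A macro system could have distinguished the two call sites;
| comptime cannot.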
| User23 wrote:
| Has anyone grafted Zig style macros into Common Lisp?
| toxik wrote:
| Isn't this kind of thing sort of the default thing in Lisp?
| Code is data so you can transform it.
| fn-mote wrote:
| There are no limitations on the transformations in lisp.
| That can make macros very hard to understand. And hard for
| later program transformers to deal with.
|
| The innovation in Zig is the restrictions that limit the
| power of macros.
| TinkersW wrote:
| Lisp is so powerful, but without static types you can't
| even do basic stuff like overloading, and have to invent a
| way to even check the type(for custom types) so you can
| branch on type.
| dokyun wrote:
| > Lisp is so powerful, but <tired old shit from someone
| who's never used Lisp>.
|
| You use defmethod for overloading. Types check
| themselves.
| User23 wrote:
| And a modern compiler will jmp past the type checks if
| the inferencer OKs it!
| wild_egg wrote:
| > but without static types
|
| So add static types.
|
| https://github.com/coalton-lang/coalton
| pjmlp wrote:
| No need for overloading when you have CLOS and multi-
| method dispatch.
| Zambyte wrote:
| There isn't really as clear of a distinction between
| "runtime" and "compile time" in Lisp. The comptime keyword is
| essentially just the opposite of quote in Lisp. Instead of
| using comptime to say what should be evaluated early, you use
| quote to say what should be evaluated later. Adding comptime
| to Lisp would be weird (though obviously not impossible,
| because it's Lisp), because that is essentially the default
| for expressions.
| Conscat wrote:
| The truth of this varies between Lisp based languages.
| Conscat wrote:
| The Scopes language might be similar to what you're asking
| about. Its notion of "spices" which complement the "sugars"
| feature is a similar kind of constant evaluation. It's not a
| Common Lisp dialect, though, but it is sexp based.
| pron wrote:
| That wouldn't be very meaningful. The semantics of Zig's
| comptime is more like that of subroutines in a dynamic
| language - say, JavaScript functions - than that of macros.
| The point is that it's executed, and yields errors, at a
| different phase, i.e. compile time.
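|
| For instance (a minimal sketch; the helper function is made up):
|
|     fn kib(comptime n: u64) u64 {
|         if (n > 1 << 20) @compileError("suspiciously large buffer");
|         return n * 1024;
|     }
|
|     var buffer: [kib(4)]u8 = undefined; // evaluated while compiling
|
| kib runs like an ordinary function, but a call such as
| kib(1 << 30) fails the build instead of panicking at run time.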
| paldepind2 wrote:
| I was a bit confused by the remark that comptime is
| referentially transparent. I'm familiar with the term as it's
| used in functional programming to mean that an expression can
| be replaced by its value (stemming from it having no side-
| effects). However, from a quick search I found an old related
| comment by you [1] that clarified this for me.
|
| If I understand correctly you're using the term in a different
| (perhaps more correct/original?) sense where it roughly means
| that two expressions with the same meaning/denotation can be
| substituted for each other without changing the
| meaning/denotation of the surrounding program. This property is
| broken by macros. A macro in Rust, for instance, can
| distinguish between `1 + 1` and `2`. The comptime system in Zig
| in contrast does not break this property as it only allows one
| to inspect values and not un-evaluated ASTs.
|
| [1]: https://news.ycombinator.com/item?id=36154447
| deredede wrote:
| Those are equivalent, I think. If you can replace an
| expression by its value, any two expressions with the same
| value are indistinguishable (and conversely a value is an
| expression which is its own value).
| pron wrote:
| Yes, I am using the term more correctly (or at least more
| generally), although the way it's used in functional
| programming is a special case. A referentially transparent
| term is one whose sub-terms can be replaced by their
| references without changing the reference of the term as a
| whole. A functional programming language is simply one where
| all references are values or "objects" _in the programming
| language itself_.
|
| The expression `i++` in C is not a value _in C_ (although it
| is a "value" in some semantic descriptions of C), yet a C
| expression that contains `i++` and cannot distinguish between
| `i++` and any other C operation that increments i by 1, is
| referentially transparent, which is pretty much all C
| expressions except for those involving C macros.
|
| Macros are not referentially transparent because they can
| distinguish between, say, a variable whose _name_ is `foo`
| and is equal to 3 and a variable whose name is `bar` and is
| equal to 3. In other words, their outcome may differ not just
| by what is being referenced (3) but also by _how_ it's
| referenced (`foo` or `bar`), hence they're referentially
| opaque.
| cannabis_sam wrote:
| Regarding 2. How are comptime values restricted to total
| computations? Is it just by the fact that the compiler actually
| finished, or are there any restrictions on comptime
| evaluations?
| pron wrote:
| They don't need to be restricted to total computation to be
| referentially transparent. Non-termination is also a
| reference.
| mppm wrote:
| Yes, comptime evaluation is restricted to a configurable
| number of back-branches. 1000 by default, I think.
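|
| A minimal sketch of raising that limit when a comptime loop
| needs it:
|
|     comptime {
|         @setEvalBranchQuota(100_000); // default quota is 1000
|         var i: u32 = 0;
|         while (i < 50_000) : (i += 1) {}
|     }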
| keybored wrote:
| I've never managed to understand your year-long[1] manic praise
| over this feature. Given that you're a language implementer.
|
| It's very cool to be able to just say "Y is just X". You know
| in a museum. Or at a distance. Not necessarily as something you
| have to work with daily. Because I would rather take something
| ranging from Java's interface to Haskell's typeclasses since
| once implemented, they'll just work. With comptime types,
| according to what I've read, you'll have to bring your T to the
| comptime and find out right then and there if it will work.
| Without enough foresight it might not.
|
| That's not something I want. I just want generics or parametric
| polymorphism or whatever it is to work once it compiles. If
| there's a <T> I want to slot in T without any surprises. And
| whether Y is just X is a very distant priority at that point.
| Another distant priority is if generics and whatever else _is
| all just X undernea_... I mean just let me use the language
| declaratively.
|
| I felt like I was on the idealistic end of the spectrum when I
| saw you criticizing other languages that are not installed on 3
| billion devices as too academic.[2] Now I'm not so sure?
|
| [1] https://news.ycombinator.com/item?id=24292760
|
| [2] But does Scala technically count since it's on the JVM
| though?
| ww520 wrote:
| I'm sorry but I don't understand what you're complaining
| about with comptime. All the stuff you said you wanted to work
| (generics, parametric polymorphism, slotting in <T>, etc.) just
| works with comptime. People are praising comptime
| because it's a single, simple mechanism that replaces what other
| languages need separate language features for.
| Comptime is very simple and natural to use. It can just float
| along with your day-to-day programming without much fuss.
| keybored wrote:
| comptime can't outright replace many language features
| because it chooses different tradeoffs to get to where it
| wants. You get a "one thing to rule all" at the expense of
| less declarative use.
|
| Which I already said in my original comment. But here's a
| source that I didn't find last time: https://strongly-
| typed-thoughts.net/blog/zig-2025#comptime-i...
|
| Academics have thought about evaluating things at compile
| time (or any _time_ ) for decades. No, you can't just slot
| in eval at a weird place that no one ever thought of (they
| did) and immediately solve a suite of problems that other
| languages use multiple discrete features for (there's a
| reason they do that).
| throwawaymaths wrote:
| > comptime can't outright replace many language features
| because it chooses different tradeoffs to get to where it
| wants.
|
| You're missing the point. I don't have any theory to
| qualify this, but:
|
| I've worked in a language with lisp-ey macros, and I
| absolutely hate hate hate when people build too-clever
| DSLs that hide a lot of weird shit like creating variable
| names or pluralizing database tables for me, swapping
| camel-case and snake case, creating a ton of logic under
| the hood that's hard to chase.
|
| Zig's comptime for the most part shies you away from those
| sorts of things. So yes, it doesn't have full feature parity in
| the language theory sense, but it really blocks you or
| discourages you away from _shit you don't need to do,
| please for the love of god don't_. Hard to justify
| theoretically. It's real though.
|
| It's just something you notice after working with it for a
| while.
| keybored wrote:
| No, you are clearly missing the point because I laid out
| concrete critiques about how Zig doesn't replace certain
| concrete language features with One Thing to Rule Them
| All. All in reply to someone complimenting Zig on that
| same subject.
|
| That you want to make a completely different point about
| macros gone wild is not my problem.
| hitekker wrote:
| Do you have a source for "criticizing other languages not
| installed on 3 billion devices as too academic" ?
|
| Without more context, this comment sounds like rehashing old
| (personal?) drama.
| keybored wrote:
| pron has been posting about programming languages for years
| and years, here, in public, for all to see. I guess reading
| them makes it personal? (We don't know each other)
|
| The usual persona is the hard-nosed pragmatist[1] who
| thinks language choice doesn't matter and that PL
| preference is mostly about "programmer enjoyment".
|
| [1] https://news.ycombinator.com/item?id=16889706
|
| Edit: The original claim might have been skewed. Due to
| occupation the PL discussions often end up being about Java
| related things, and the JVM language which is criticized
| has often been Scala specifically. Here he recommends
| Kotlin over Scala (not Java):
| https://news.ycombinator.com/item?id=9948798
| pron wrote:
| My "manic praise" extends to the novelty of the feature as
| Zig's design is revolutionary. It is exciting because it's
| very rare to see completely novel designs in programming
| languages, especially in a language that is both easy to
| learn _and_ intended for low-level programming.
|
| I wait 10-15 years before judging if a feature is "good";
| determining that a feature is bad is usually quicker.
|
| > With comptime types, according to what I've read, you'll
| have to bring your T to the comptime and find out right then
| and there if it will work. Without enough foresight it might
| not.
|
| But the point is that all that is done _at compile time_,
| which is also the time when all more specialised features are
| checked.
|
| > That's not something I want. I just want generics or
| parametric polymorphism or whatever it is to work once it
| compiles.
|
| Again, everything is checked at compile-time. Once it
| compiles it will work just like generics.
|
| > I mean just let me use the language declaratively.
|
| That's fine and expected. I believe that most language
| preferences are aesthetic, and there have been few objective
| reasons to prefer some designs over others, and usually it's
| a matter of personal preference or "extra-linguistic"
| concerns, such as availability of developers and libraries,
| maturity, etc..
|
| > Now I'm not so sure?
|
| Personally, I wouldn't dream of using Zig or Rust for
| important software because they're so unproven. But I do find
| novel designs _fascinating_. Some even match my own aesthetic
| preferences.
| keybored wrote:
| > But the point is that all that is done at compile time,
| which is also the time when all more specialised features
| are checked.
|
| > ...
|
| > Again, everything is checked at compile-time. Once it
| compiles it will work just like generics.
|
| No. My compile-time when using a library with a comptime
| type in Zig is not guaranteed to work because my user
| experience could depend on if the library writer tested
| with the types (or compile-time input) that I am using.[1]
| That's not a problem in Java or Haskell: if the library
| works for Mary it will work for John no matter what the
| type-inputs are.
|
| > That's fine and expected. I believe that most language
| preferences are aesthetic, and there have been few
| objective reasons to prefer some designs over others, and
| usually it's a matter of personal preference or "extra-
| linguistic" concerns, such as availability of developers
| and libraries, maturity, etc..
|
| Please don't retreat to aesthetics. What I brought up is a
| concrete and objective user experience tradeoff.
|
| [1] based on https://strongly-typed-
| thoughts.net/blog/zig-2025#comptime-i...
| pron wrote:
| > No. My compile-time when using a library with a
| comptime type in Zig is not guaranteed to work because my
| user experience could depend on if the library writer
| tested with the types (or compile-time input) that I am
| using.[1] That's not a problem in Java or Haskell: if the
| library works for Mary it will work for John no matter
| what the type-inputs are.
|
| What you're saying isn't very meaningful. Even generics
| may impose restrictions on their type parameters (e.g.
| typeclasses in Haskell or type bounds in Java) and don't
| necessarily work for all types. In both cases you know at
| compile-time whether your types fit the bounds or not.
|
| It is true that the restrictions in Haskell/Java are more
| declarative, but the distinction is more a matter of
| personal aesthetic preference, which is exactly what's
| expressed in that blog post (although comptime is about
| as different from C++ templates as it is from
| Haskell/Java generics). Like anything, and especially
| truly novel approaches, it's not for everyone's tastes,
| but neither are Java, Haskell, or Rust, for that matter.
| That doesn't make Zig's approach any less novel or
| interesting, even if you don't like it. I find Rust's
| design unpalatable, but that doesn't mean it's not
| interesting or impressive, and Zig's approach -- again,
| like it or not -- is even more novel.
| keybored wrote:
| > What you're saying isn't very meaningful. Even generics
| may impose restrictions on their type parameters (e.g.
| typeclasses in Haskell or type bounds in Java) and don't
| necessarily work for all types. In both cases you know at
| compile-time whether your types fit the bounds or not.
|
| Java type-bounds is what I mean with declarative. The
| library author wrote them, I know them, I have to follow
| them. It's all spelled out. According to the link that's
| not the case with the Zig comptime machinery. It's
| effectively duck-typed from the point of view of the
| client (declaration).
|
| I _also_ had another source in mind which explicitly
| described how Zig comptime is "duck typed" but I can't
| seem to find it. Really annoying.
|
| > It is true that the restrictions in Haskell/Java are
| more declarative, but the distinction is more a matter of
| personal aesthetic preference, which is exactly what's
| expressed in that blog post (although comptime is about
| as different from C++ templates as it is from
| Haskell/Java generics).
|
| It's about as aesthetic as having spelled out reasons
| (usability) for preferring static typing over dynamic
| typing or vice versa. It's really not. At all.
|
| > , but that doesn't mean it's not interesting or
| impressive, and Zig's approach -- again, like it or not
| -- is even more novel.
|
| I prefer meaningful leaps forward in programming language
| usability over supposed most-streamlined and clever
| approaches (comptime all the way down). I guess I'm just
| a pragmatist in that very narrow area.
| pron wrote:
| > According to the link that's not the case with the Zig
| comptime machinery. It's effectively duck-typed from the
| point of view of the client (declaration).
|
| It is "duck-typed", but it is _checked at compile time_.
| Unlike ducktyping in JS, you know whether or not your
| type is a valid argument just as you would for Java type
| bounds -- the compiler lets you know. Everything is also
| all spelled out, just in a different way.
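|
| A minimal sketch of that point (the function and Item type
| below are invented for illustration): the call is duck-typed
| on its argument, yet a bad argument is reported by the
| compiler rather than at run time.
|     const std = @import("std");
|
|     const Item = struct { cost: u64 };
|
|     // "Duck-typed": anything iterable whose elements have a
|     // `cost` field works; the check happens per call site,
|     // at compile time.
|     fn total(items: anytype) u64 {
|         var sum: u64 = 0;
|         for (items) |item| sum += item.cost;
|         return sum;
|     }
|
|     pub fn main() void {
|         const goods = [_]Item{ .{ .cost = 3 }, .{ .cost = 4 } };
|         std.debug.print("{d}\n", .{total(&goods)});
|         // total(&[_]u8{ 1, 2 }); // rejected at compile time:
|         //                        // u8 has no 'cost' field
|     }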
|
| > It's about as aesthetic as having spelled out reasons
| (usability) for preferring static typing over dynamic
| typing or vice versa. It's really not. At all.
|
| But everything is checked statically, so all the
| arguments for failing fast apply here, too.
|
| > I prefer meaningful leaps forward in programming
| language usability over supposed most-streamlined and
| clever approaches (comptime all the way down). I guess
| I'm just a pragmatist in that very narrow area.
|
| We haven't had "meaningful leaps forward in programming
| language usability" in a very long time (and there are
| fundamental reasons for that, and indeed the situation
| was predicted decades ago). But if we were to have a
| meaningful leap forward, first we'd need some leap
| forward and then we could try learning how meaningful it
| is (which usually takes a very long time). I don't know
| whether Zig's comptime is a meaningful leap forward or not,
| but as one of the most novel innovations in programming
| languages in a very long time, at least it's something
| that's worth a look.
| keybored wrote:
| > Because I would rather take something ranging from Java's
| interface to Haskell's typeclasses since once implemented,
| they'll just work. With comptime types, according to what
| I've read, you'll have to bring your T to the comptime and
| find out right then and there if it will work. Without enough
| foresight it might not.
|
| This was perhaps a bad comparison and I should have compared
| e.g. Java generics to Zig's comptime T.
| WalterBright wrote:
| It's not novel. D pioneered compile time function execution
| (CTFE) back around 2007. The idea has since been adopted in
| many other languages, like C++.
|
| One thing it is used for is generating string literals, which
| then can be fed to the compiler. This takes the place of
| macros.
|
| CTFE is one of D's most popular and loved features.
| az09mugen wrote:
| A little bit out of context, I just want to thank you and all
| the contributors for the D programming language.
| WalterBright wrote:
| That means a lot to us. Thanks!
| msteffen wrote:
| If I understand TFA correctly, the author claims that D's
| approach is actually different:
| https://matklad.github.io/2025/04/19/things-zig-comptime-
| won...
|
| "In contrast, there's absolutely no facility for dynamic
| source code generation in Zig. You just can't do that, the
| feature isn't! [sic]
|
| Zig has a completely different feature, partial
| evaluation/specialization, which, none the less, is enough to
| cover most of use-cases for dynamic code generation."
| WalterBright wrote:
| The partial evaluation/specialization is accomplished in D
| using a template. The example from the link:
|     fn f(comptime x: u32, y: u32) u32 {
|         if (x == 0) return y + 1;
|         if (x == 1) return y * 2;
|         return y;
|     }
|
| and in D:
|     uint f(uint x)(uint y) {
|         if (x == 0) return y + 1;
|         if (x == 1) return y * 2;
|         return y;
|     }
|
| The two parameter lists make it a function template: the
| first set of parameters are the template parameters, which
| are compile time. The second set are the runtime
| parameters. The compile time parameters can also be types,
| and aliased symbols.
| msteffen wrote:
| Here is, I think, an interesting example of the kind of
| thing TFA is talking about. In case you're not already
| familiar, there's an issue that game devs sometimes
| struggle with, where, in C/C++, an array of structs (AoS)
| has a nice syntactic representation in the language and
| is easy to work with/avoid leaks, but a struct of arrays
| (SoA) has a more compact layout in memory and better
| performance.
|
| Zig has a library that allows you to have an AoS that
| is laid out in memory like a SoA:
| https://zig.news/kristoff/struct-of-arrays-soa-in-zig-
| easy-i... . If you read the implementation (https://githu
| b.com/ziglang/zig/blob/master/lib/std/multi_arr...) the
| SoA is an elaborately specialized type, parameterized on
| a struct type that it introspects at compile time.
|
| It's neat because one might reach for macros for this
| sort of thing (and I'd expect the implementation to
| be quite complex, if it's even possible) but the details
| of Zig's comptime--you can inspect the fields of the type
| parameter struct, and the SoA can be highly flexible
| about its own fields--mean that you don't need a macro
| system, and the Zig implementation is actually simpler
| than a macro approach probably would be.
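|
| A rough sketch of the usage side (the Monster fields are
| invented; std.MultiArrayList is the real stdlib type, though
| init/allocator details vary a bit across Zig versions):
|     const std = @import("std");
|
|     const Monster = struct { hp: u32, x: f32, y: f32 };
|
|     test "AoS-style API over SoA storage" {
|         const gpa = std.testing.allocator;
|         var list = std.MultiArrayList(Monster){};
|         defer list.deinit(gpa);
|         try list.append(gpa, .{ .hp = 10, .x = 1, .y = 2 });
|         try list.append(gpa, .{ .hp = 20, .x = 3, .y = 4 });
|         // all hp values are stored contiguously:
|         for (list.items(.hp)) |hp| {
|             std.debug.print("{d}\n", .{hp});
|         }
|     }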
| WalterBright wrote:
| D doesn't have a macro system, either, so I don't
| understand what you mean.
| msteffen wrote:
| IIUC, it does have code generation--the ability to
| generate strings at compile-time and feed them back into
| the compiler.
|
| The argument that the author of TFA is making is that
| Zig's comptime is a very limited feature (which, they
| argue, is good. It restricts users from introducing
| architecture dependencies/cross-compilation bugs, is more
| amenable to optimization, etc), and yet it allows users
| to do most of the things that more general alternatives
| (such as code generation or a macro system) are often
| used for.
|
| In other words, while Zig of course didn't invent
| compile-time functions (see lisp macros), it is notable
| and useful from a PL perspective if Zig users are doing
| things that seem to require macros or code generation
| without actually having those features. D users, in
| contrast, do have code generation.
|
| Or, alternatively, while many languages support
| metaprogramming of some kind, Zig's metaprogramming
| language is at a unique maximum of safety (which macros
| and code generation lack) and utility (which e.g. Java/Go
| runtime reflection, which couldn't do the AoS/SoA thing,
| lack).
| naasking wrote:
| Using a different type vs. a different syntax can be an
| important usability consideration, particularly since D
| also has templates and other features, where Zig provides
| only the comptime type for all of them. Homogeneity can
| also be a nice usability win, though there are downsides
| as well.
| WalterBright wrote:
| Zig's use of comptime in a function argument makes it a
| template :-/
|
| I bet if you use such a function with different comptime
| arguments, compile it, and dump the assembler you'll see
| that function appearing multiple times, each with
| somewhat different code generated for it.
| naasking wrote:
| > Zig's use of comptime in a function argument makes it a
| template :-/
|
| That you can draw an isomorphism between two things does
| not mean they are ergonomically identical.
| pcwalton wrote:
| When we're responding to quite valid points about other
| languages having essentially the same features as Zig
| with subjective claims about ergonomics, the idea that
| Zig comptime is "revolutionary" is looking awfully
| flimsy. I agree with Walter: Zig isn't doing anything
| novel. Picking some features while leaving others out is
| something that every language does; if doing that is
| enough to make a language "revolutionary", then every
| language is revolutionary. The reality is a lot simpler
| and more boring: for Zig enthusiasts, the set of features
| that Zig has appeals to them. Just like enthusiasts of
| every programming language.
| pron wrote:
| I'm sorry, but not being able to see that a design that
| uses a touchscreen _to eliminate the keyboard_ is novel
| despite the touchscreen itself having been used elsewhere
| alongside a keyboard, shows a misunderstanding of what
| design is.
|
| Show me the language that used a general purpose compile-
| time mechanism _to avoid specialised features_ such as
| generics /templates, interfaces/typeclasses, macros, and
| conditional compilation before Zig, then I'll say that
| language was revolutionary.
|
| I also find it hard to believe that you can't see how
| replacing all these features with a single one (that
| isn't AST macros) is novel. I'm not saying you have to
| think it's a good idea -- that's a matter of personal
| taste (at least until we can collect more objective data)
| -- but it's clearly novel.
|
| I don't know all the languages in the world and it's
| possible there was a language that did that before Zig,
| but none of the languages mentioned here did. Of course,
| it's possible that no other language did that because
| it's stupid, but that doesn't mean it's not novel
| (especially as the outcome does not appear stupid on the
| face of it).
| pcwalton wrote:
| But Zig's comptime only approximates the features you
| mentioned; it doesn't fully implement them. Which is what
| the original article is saying. To use your analogy,
| using a touchscreen to eliminate a keyboard isn't very
| impressive if your touchscreen keyboard is missing keys.
|
| If you say that incomplete implementations count, then I
| could argue that the C preprocessor subsumes
| generics/templates, interfaces/typeclasses+, macros, and
| conditional compilation.
|
| +Exercise for the reader: build a generics system in the
| C preprocessor that #error's out if the wrong type is
| passed using the trick in [1].
|
| [1]: https://stackoverflow.com/a/45450646
| pron wrote:
| > But Zig's comptime only approximates the features you
| mentioned; it doesn't fully implement them
|
| That's like saying that a touchscreen device without a
| keyboard only approximates a keyboard but doesn't fully
| implement one. The important thing is that the feature
| _performs the duty_ of those other features.
|
| > If you say that incomplete implementations count, then
| I could argue that the C preprocessor subsumes
| generics/templates, interfaces/typeclasses+, macros, and
| conditional compilation.
|
| There are two problems with this, even if we assumed that
| the power of C's preprocessor is completely equivalent to
| Zig's comptime:
|
| First, C's preprocessor is a distinct meta-language; one
| major point of Zig's comptime is that the metalanguage is
| the same language as the object language.
|
| Second, it's unsurprising that macros -- whether they're
| more sophisticated or less -- can fill the role of all
| those other features. As I wrote in my original comment
| (https://news.ycombinator.com/item?id=43745438) one of
| the exciting things about Zig is that a feature that
| _isn't_ macros (and is strictly weaker than macros, as it's
| referentially transparent) can replace them for the most
| part, while enjoying a greater ease of understanding.
|
| I remember that one of my first impressions of Zig was
| that it evoked the magic of Lisp (at least that was my
| gut feeling), but in a completely different way, one that
| doesn't involve AST manipulation, and doesn't suffer from
| many of the problems that make Lisp macros problematic
| (i.e. creating DSLs with their own rules). I'm not saying
| it may not have _other_ problems, but that is very novel.
|
| I hadn't seen any such fresh designs in well over a
| decade. Now, it could be that I simply don't know enough
| languages, but you also haven't named other languages
| that work on this design principle, so I think my
| excitement was warranted. I'll let you know if I think
| that's not only a fresh and exciting design but also a
| _good_ one in ten years.
|
| BTW, I have no problem with you finding Zig's comptime
| unappealing to your tastes or even believing it suffers
| from fundamental issues that may prove problematic in
| practice (although personally I think that, when
| considering both pros and cons of this design versus the
| alternatives, there's some promise here). I just don't
| understand how you can say that the design isn't novel
| while not naming one other language with a similar core
| design: a mechanism for partial evaluation of the object
| language (with access to additional reflective
| operations) that replaces those other features I mentioned
| (by performing their duty, if not exactly their mode of
| operation).
|
| For example, I've looked at Terra, but it makes a
| distinction between the meta language and the object (or
| "runtime") language.
| matklad wrote:
| >I'm not saying it may not have other problems, but that
| is very novel.
|
| Just to explicitly acknowledge this, it inherits the C++
| problem that you don't get type errors inside a function
| until you call the function and, when that happens, its
| not always immediately obvious whether the problem is in
| the caller or in the callee.
| matklad wrote:
| >for Zig enthusiasts, the set of features that Zig has
| appeals to them. Just like enthusiasts of every
| programming language.
|
| I find it rather amusing that it's a Java and a Rust
| enthusiast who are extolling Zig's approach here! I am not
| particularly well read with respect to programming
| languages, but I don't recall many languages which define
| generic pair as
|     fn Pair(A: type, B: type) type {
|         return struct { fst: A, snd: B };
|     }
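|
| And using it is just an ordinary call (a small usage
| sketch): the call runs at compile time and yields a plain
| struct type.
|     const P = Pair(u32, bool);
|     const p = P{ .fst = 1, .snd = true };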
|
| The only one that comes to mind is 1ML, and I'd argue
| that it is also revolutionary.
| pcwalton wrote:
| Well, if you strip away the curly braces and return
| statement, that's just a regular type definition.
| Modeling generic types as functions from types to types
| is just System F, which goes back to 1975. Turing-
| complete type-level programming is common in tons of
| languages, from TypeScript to Scala to Haskell.
|
| I think the innovation here is _imperative_ type-level
| programming--languages that support type-level
| programming are typically functional languages, or
| functional languages at the type level. Certainly
| interesting, but not revolutionary IMO.
| matklad wrote:
| The thing is, this is not type-level programming, this is
| term-level programming. That there's no separate language
| of types is the feature. Functional/imperative is
| orthogonal. You can imagine functional Zig which writes
|     Pair :: type -> type -> type
|     let Pair a b = product a b
|
| This is one half of the innovation, dependent-types lite.
|
| The second half is how every other major feature is
| expressed _directly_ via comptime/partial evaluation, not
| even syntax sugar is necessary. Generics, macros, and
| conditional compilation are the three big ones.
| pcwalton wrote:
| > This is one half of the innovation, dependent-types
| lite.
|
| But that's not dependent types. Dependent types are types
| that depend on values. If all the arguments to a function
| are either types or values, then you don't have dependent
| types: you have kind polymorphism, as implemented for
| example in GHC extensions [1].
|
| > The second half is how every other major feature is
| expressed _directly_ via comptime/partial evaluation, not
| even syntax sugar is necessary. Generics, macros, and
| conditional compilation are the three big ones.
|
| I'd argue that not having syntactic sugar is pretty
| minor, but reasonable people can differ I suppose.
|
| [1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/e
| xts/poly...
| matklad wrote:
| > Dependent types are types that depend on values.
|
| Like this?
|     fn f(comptime x: bool) if (x) u32 else bool {
|         return if (x) 0 else false;
|     }
| edflsafoiewq wrote:
| No, dependent types depend on runtime values.
| matklad wrote:
| Yeah, that one Zig can not do, hence "-lite".
| pcwalton wrote:
| The point is that comptime isn't dependent types at all.
| If your types can't depend on runtime values, they aren't
| dependent types. It's something more like kind
| polymorphism in GHC (except more dynamically typed),
| something which GHC explicitly calls out as not dependent
| types. (Also it's 12 years old [1]).
|
| [1]:
| https://www.seas.upenn.edu/~sweirich/papers/fckinds.pdf
| pcwalton wrote:
| That's still just a function of type ∀K. ∀L. K → L with
| a bound on K. From a type theory perspective, a comptime
| argument, when the function is used in such a way as to
| return a type, is not a value, even though it looks like
| one. Rather, true or false in this context is a type.
| (Yes, really. This is a good example of why Zig reusing
| the keyword "comptime" obscures the semantics.) If
| comptime true or comptime false were actually values,
| then you could put runtime values in there too.
| WalterBright wrote:
| I might be misunderstanding something, but this is how it
| works in D:
|     struct Pair(A, B) {
|         A fst;
|         B snd;
|     }
|     Pair!(int, float) p; // declaration of p as instance of Pair
|
| It's just a struct with the addition of type parameters.
| baazaa wrote:
| that's a comically archaic way of using the verb 'to be',
| not a grammatical error. you see it in phrases like "to be
| or not to be", or "i think, therefore i am". "the feature
| isn't" just means it doesn't exist.
| CRConrad wrote:
| Damn, beat me by half a day.
| sixthDot wrote:
| Sure, CTFE can be used to generate strings, then later
| "mixed-in" as source code, but also can be used to execute
| normal functions and then the result can be stored in a
| compile-time constant (in D that's the `enum` storage
| class), for example generating an array using a function
| literal called at compile-time:
|     enum arr = { return iota(5).map!(i => i * 10).array; }();
|     static assert(arr == [0,10,20,30,40]);
| CRConrad wrote:
| > the feature isn't! [sic]
|
| To be, or not to be... The feature is not.
|
| (IOW, English may not be the author's native language. I'm
| fairly sure it means "The feature doesn't exist".)
| Someone wrote:
| > D pioneered compile time function execution (CTFE) back
| around 2007
|
| Pioneered? Forth had that in the 1970s, Lisp somewhere in the
| 1960s (I'm not sure whether the first versions of either had
| it, so I won't say 1970 respectively 1960), and there may be
| other or even older examples.
| WalterBright wrote:
| True, but consider that Forth and Lisp started out as
| interpreted languages, meaning the whole thing can be done
| at compile time. I haven't seen this feature before in a
| language that was designed to be compiled to machine code,
| such as C, Pascal, Fortran, etc.
|
| BTW, D's ImportC C compiler does CTFE, too!! CTFE is a
| natural fit for C, and works like a champ. Standard C
| should embrace it.
| Someone wrote:
| Nitpick: Lisp didn't start out as an interpreted
| language. It started as an idea from a theoretical
| computer scientist, and wasn't supposed to be
| implemented. https://en.wikipedia.org/wiki/Lisp_(programm
| ing_language)#Hi...:
|
| _" Steve Russell said, look, why don't I program this
| eval ... and I said to him, ho, ho, you're confusing
| theory with practice, this eval is intended for reading,
| not for computing. But he went ahead and did it. That is,
| he compiled the eval in my paper into IBM 704 machine
| code, fixing bugs, and then advertised this as a Lisp
| interpreter, which it certainly was. So at that point
| Lisp had essentially the form that it has today"_
| throwawaymaths wrote:
| You're missing the point. If anything D is littered with
| features and feature bloat (CTFE included). Zig (as the
| author of the blog mentions) is more than somewhat defined by
| what it _can't_ do.
| WalterBright wrote:
| I fully agree that the difference is a matter of taste.
|
| All living languages accrete features over time. D started
| out as a much more modest language. It originally eschewed
| templates and operator overloading, for example.
|
| Some features were abandoned, too, like complex numbers and
| the "bit" data type.
| pron wrote:
| It is novel to the point of being revolutionary. As I wrote
| in my comment, "the novelty isn't in the feature itself, but
| in the place it has in the language". It's one thing to come
| up with a feature. It's a whole other thing to position it
| within the language. Various compile-time evaluations are not
| even remotely positioned in D, Nim, or C++ as they are in
| Zig. The point of Zig's comptime is not that it _allows_ you
| to do certain computations at compile-time, but that it
| _replaces_ more specialised features such as templates
| /generics, interfaces, macros, and conditional compilation.
| That creates a completely novel simplicity/power balance.
|
| If the presence of features is how we judge design, then the
| product with the most features would be considered the best
| design. Of course, often the opposite is the case. The
| absence of features is just as crucial for design as their
| presence. It's like saying that a device with a touchscreen
| _and_ a physical keyboard has essentially the same properties
| as a device with only a touchscreen.
|
| If a language has a mechanism that can do exactly what Zig's
| comptime does but it _also_ has generics or templates,
| macros, and /or conditional compilation, then it doesn't have
| anything resembling Zig's comptime.
| WalterBright wrote:
| > Various compile-time evaluations are not even remotely
| positioned in D, Nim, or C++ as they are in Zig.
|
| See my other reply. I don't understand your comment.
|
| https://news.ycombinator.com/item?id=43748490
| pron wrote:
| The revolution in Zig isn't in what the comptime
| mechanism is able to do, but how it allows the language
| to not have other features, which is what gives that
| language its power to simplicity ratio.
|
| Let me put it like this: Zig's comptime is a general
| compilation time computation mechanism that has
| introspection capabilities _and replaces_ generics
| /templates, interfaces/typeclasses, macros, and
| conditional compilation.
|
| It's as if the main design feature of some devices is
| that they have a touchscreen _but not a keyboard_. The
| novelty isn't the touchscreen; it's in the touchscreen
| _eliminating the keyboard_. The touchscreen itself
| doesn't have to be novel; the novelty is how it's used to
| eliminate the keyboard. If your device has a touchscreen
| _and_ a keyboard, then it does not have the same design
| feature.
|
| Zig's novel comptime is a mechanism that _eliminates_
| other specialised features, and if these features are
| still present, then your language doesn't have Zig's
| comptime. It has a touchscreen and a keyboard, whereas
| Zig's novelty is a touchscreen _without_ a keyboard.
| WalterBright wrote:
| The example of a comptime parameter to a function is a
| template, whether you call it that or not :-/ A function
| template is a function with compile time parameters.
|
| The irony here is back in the 2000's, many programmers
| were put off by C++ templates, and found them to be
| confusing. Myself included. But when I (belatedly)
| realized that function templates were functions with
| compile time parameters, I had an epiphany:
|
| Don't call them templates! Call them functions with
| compile time parameters. The people who were confused by
| templates understood that immediately. Then later, after
| realizing that they had been using templates all along,
| became comfortable with templates.
|
| BTW, I wholeheartedly agree that it is better to have a
| small set of features that can do the same thing as a
| larger set of features. But I'm not seeing how comptime
| is accomplishing that.
| pron wrote:
| > But I'm not seeing how comptime is accomplishing that.
|
| Because Zig does the work of C++'s templates, macros,
| conditional compilation, constexprs, and concepts with
| one relatively simple feature.
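|
| A small illustrative sketch (not from the thread): the same
| plain comptime machinery covers what C++ splits across
| constexpr and the preprocessor.
|     const builtin = @import("builtin");
|
|     // constexpr-style constant, computed at compile time
|     const table_len = 1 << 10;
|
|     // conditional compilation is just an `if` on a value
|     // that is known at compile time
|     pub fn pathSep() u8 {
|         return if (builtin.os.tag == .windows) '\\' else '/';
|     }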
| WalterBright wrote:
| From the article:
|     fn print(comptime T: type, value: T) void {
|
| That's a template. In D it looks like:
|     void print(T)(T value) {
|
| which is also a template.
| pcwalton wrote:
| I think another way to put it is that the fact that Zig
| reuses the keyword "comptime" to denote type-level
| parameters and to denote compile-time evaluation doesn't
| mean that there's only one feature. There are still two
| features (templates and CTFE), just two features that
| happen to use the same keyword.
| pron wrote:
| Maybe you can insist that these are two features
| (although I disagree), but calling one of them templates
| really misses the mark. That's because, at least in C++,
| templates have their own template-level language (of
| "metafunctions"), whereas that's not the case in Zig.
| E.g. that C++'s `std::enable_if` is just the regular `if`
| in Zig makes all the difference (and also shows why there
| may not really be two features here, only one).
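|
| A sketch of the difference (the function below is made up):
| where C++ would reach for std::enable_if or a concept, Zig
| just branches inside the function that returns the type.
|     fn Storage(comptime T: type) type {
|         // a plain `if` where C++ would use enable_if/SFINAE
|         if (@sizeOf(T) <= @sizeOf(usize)) {
|             return struct { value: T };
|         } else {
|             return struct { ptr: *const T };
|         }
|     }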
| sekao wrote:
| Agreed. Zig's approach re-uses the existing machinery of
| the language far more than C++ templates do. Another
| example of this is that Zig has almost no restrictions on
| what kinds of values can be `comptime` parameters. In
| C++, "non-type template parameters" are restricted to a
| small subset of types (integers, enums, and a few
| others). Rust's "const generics" are even more
| restrictive: only integers for now.
|
| In Zig I can pass an entire struct instance full of
| config values as a single comptime parameter and thread
| it anywhere in my program. The big difference here is
| that when you treat compile-time programming as a
| "special" thing that is supported completely differently
| in the language, you need to add these features in a
| painfully piecemeal way. Whereas if it's just re-using
| the machinery already in place in your language, these
| restrictions don't exist and your users don't need to
| look up what values can be comptime values...they're just
| another kind of thing I pass to functions, so "of course"
| I can pass a struct instance.
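|
| A sketch of that (Config, Server, and their fields are
| invented):
|     const Config = struct {
|         max_clients: usize = 64,
|         verbose: bool = false,
|     };
|
|     fn Server(comptime cfg: Config) type {
|         return struct {
|             // the whole Config instance is one comptime value
|             slots: [cfg.max_clients]?u32,
|             pub const is_verbose = cfg.verbose;
|         };
|     }
|
|     const Small = Server(.{ .max_clients = 8, .verbose = true });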
| edflsafoiewq wrote:
| std::enable_if exists to disable certain overloads during
| overload resolution. Zig has no overloading, so it has no
| equivalent.
| matklad wrote:
| I'd flip it over and say that C++ has overloading&SFINAE
| to enable polymorphism which it otherwise can't express.
| edflsafoiewq wrote:
| Such as? The basic property of overloading is that it's open.
| Any closed set of overloads can be converted to a single
| function which does the same dispatch logic with ifs and
| type traits (it may not be very readable).
| edflsafoiewq wrote:
| They are the same thing though. Conceptually there's a
| partial evaluation pass whose goal is to eliminate all
| the comptimes by lowering them to regular runtime values.
| The apparent different "features" just arise from its
| operation on the different kinds of program constructs.
| To eliminate a expression, it evaluates the expression
| and replaces it with its value. To eliminate a loop, it
| unrolls it. To eliminate a call to a function with
| comptime arguments, it generates a specialized function
| for those arguments and replaces it with a call to the
| specialized function.
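|
| Concretely (a sketch, the function is invented): each
| distinct comptime argument below produces its own
| specialized function with the loop unrolled.
|     fn pow(comptime n: u32, x: u64) u64 {
|         var result: u64 = 1;
|         // `n` is baked in, so this unrolls at compile time
|         inline for (0..n) |_| result *= x;
|         return result;
|     }
|
|     // pow(3, x) and pow(4, x) compile to two distinct
|     // functions, as if written by hand.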
| baranul wrote:
| Comptime is often pushed as being something extraordinarily
| special, when it's not. Many other languages have something
| similar: Jai, Vlang, Dlang, etc.
|
| What could be argued is whether Zig's version of it is
| comparatively better, but that is a very difficult argument to
| make. Not only in terms of how different languages are used,
| but something like an overall comparison of features looks to
| be needed in order to make any kind of convincing case, beyond
| hyping a particular feature.
| cassepipe wrote:
| You didn't read the article because that's the argument being
| made (whether or not you think these points have merit):
|
| > My understanding is that Jai, for example, doesn't do this,
| and runs comptime code on the host.
|
| > Many powerful compile-time meta programming systems work by
| allowing you to inject arbitrary strings into compilation,
| sort of like #include whose argument is a shell-script that
| generates the text to include dynamically. For example, D
| mixins work that way:
|
| > And Rust macros, while technically producing a token-tree
| rather than a string, are more or less the same
| ephaeton wrote:
| zig's comptime has some (objectively: debatable? subjectively:
| definite) shortcomings that the zig community then overcomes with
| zig build to generate code-as-strings to be later @imported and
| compiled.
|
| Practically, "zig build"-time-eval. As such there's another
| 'comptime' stage with more freedom, unlimited run-time (no
| @setEvalBranchQuota), can do IO (DB schema, network lookups,
| etc.) but you lose the freedom to generate zig types as values in
| the current compilation; instead of that you of course have the
| freedom to reduce->project from target compiled semantic back to
| input syntax down to string to enter your future compilation
| context again.
|
| Back in the day, when I had to glue Perl and Tcl via C at one
| point, passing strings for Perl generated through Tcl is
| what this whole thing reminds me of. Sure it works. I'm not happy
| about it. There's _another_ "macro" stage that you can't even see
| in your code (it's just @import).
|
| The zig community bewilders me at times with their love for
| lashing themselves. The sort of discussions which new sort of
| self-harm they'd love to enforce on everybody is borderline
| disturbing.
| User23 wrote:
| Learning XS (maybe with Swig?) was a great way to actually
| understand Perl.
| bsder wrote:
| > The zig community bewilders me at times with their love for
| lashing themselves. The sort of discussions which new sort of
| self-harm they'd love to enforce on everybody is borderline
| disturbing.
|
| Personally, I find the idea that a _compiler_ might be able to
| reach outside itself completely terrifying (Access the network
| or a database? Are you nuts?).
|
| That should be 100% the job of a build system.
|
| Now, you can certainly argue that generating a text file may or
| may not be the best way to reify the result back into the
| compiler. However, what the compiler gets and generates should
| be completely deterministic.
| bmacho wrote:
| They are not advocating for IO in the compiler, but
| everything else that other languages can do with macros: run
| commands comptime, generate code, read code, modify code.
| It's proven to be very useful.
| bsder wrote:
| I'm going to make you defend that statement that they are
| "useful". I would counter that macros are "powerful".
|
| However, "macros" are a disaster to debug in every language
| that they appear. "comptime" sidesteps that because you can
| generally force it to run at runtime where your normal
| debugging mechanisms work just fine (returning a type being
| an exception).
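|
| A sketch of that workflow (the function is invented): the
| same function runs under the compiler at comptime and under
| a debugger at runtime; nothing about it is macro-specific.
|     fn sum(xs: []const u32) u32 {
|         var t: u32 = 0;
|         for (xs) |x| t += x;
|         return t;
|     }
|
|     // evaluated by the compiler:
|     const at_comptime = sum(&[_]u32{ 1, 2, 3 });
|     // the very same code can be called at run time and
|     // stepped through with an ordinary debugger.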
|
| "Macros" generally impose extremely large cognitive
| overhead and making them hygienic has spawned the careers
| of countless CS professors. In addition, macros often
| impose significant _compiler_ overhead (how many crates do
| Rust's proc-macros pull in?).
|
| It is not at all clear that the full power of general
| macros is worth the downstream grief that they cause (I
| also hold this position for a lot of compiler
| optimizations, but that's a rant for a different day).
| disentanglement wrote:
| > However, "macros" are a disaster to debug in every
| language that they appear.
|
| I have only used proper macros in Common Lisp, but at
| least there they are developed and debugged just like any
| other function. You call `macroexpand` in the repl to see
| the output of the macro and if there's an error you
| automatically get thrown in the same debugger that you
| use to debug other functions.
| bsder wrote:
| So, for debugging, we're already in the REPL--which means
| an interactive environment and the very significant
| amount of overhead baggage that goes with that (heap
| allocation, garbage collection, tty, interactive prompt,
| overhead of macroexpand, etc.).
|
| At the very least, that places you outside the boundary
| of a lot of the types of system programming that
| languages like C, C++, Rust, and Zig are meant to do.
| ephaeton wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| What is "itself" here, please? Access a static 'external'
| source? Access a dynamically generated 'external' source? If
| that file is generated in the build system / build process as
| derived information, would you put it under version control?
| If not, are you as nuts as I am?
|
| Some processes require sharp tools, and you can't always be
| afraid to handle one. If all you have is a blunt tool, well,
| you know how the saying goes for C++.
|
| > However, what the compiler gets and generates should be
| completely deterministic.
|
| The zig community treats 'zig build' as "the compile step",
| ergo what "the compiler" gets ultimately is decided "at
| compile, er, zig build time". What the compiler gets, i.e.,
| what zig build generates within the same user-facing process,
| is not deterministic.
|
| Why would it be? Generating an interface is something that
| you want to be part of a streamlined process. Appeasing C
| interfaces will be moving to a zig build-time multi-step
| process involving zig's 'translate-c' whose output you then
| import into your zig file. You think anybody is going to
| treat that output differently than from what you'd get from
| doing this invisibly at comptime (which, btw, is what
| practically happens now)?
| bsder wrote:
| > The zig community treats 'zig build' as "the compile
| step", ergo what "the compiler" gets ultimately is decided
| "at compile, er, zig build time". What the compiler gets,
| i.e., what zig build generates within the same user-facing
| process, is not deterministic.
|
| I know of no build system that is completely deterministic
| unless you go through the process of _very_ explicitly
| pinning things. Whereas practically _every_ compiler is
| deterministic (gcc, for example, would rebuild itself 3
| times and compare the last two to make sure they were byte
| identical). Perhaps there needs to be "zigmeson" (work out
| and generate dependencies) and "zigninja" (just call
| compiler on static resources) to set things apart, but it
| doesn't change the fact that "zig build" dispatches to a
| "build system" and "zig"/"zig cc" dispatches to a
| "compiler".
|
| > Appeasing C interfaces will be moving to a zig build-time
| multi-step process involving zig's 'translate-c' whose
| output you then import into your zig file. You think
| anybody is going to treat that output differently than from
| what you'd get from doing this invisibly at comptime
| (which, btw, is what practically happens now)?
|
| That's a completely different issue, but it illustrates the
| problem _perfectly_.
|
| The problem is that @cImport() can be called from two
| different modules on the same file. What about if there are
| three? What about if they need different versions? What
| happens when a previous @cImport modifies how that file
| translates? How do you do link-time optimization on that?
|
| This is _exactly_ why your compiler needs to run on static
| resources that have already been resolved. I'm fine with
| my build system calling a SAT solver to work out a Gordian
| Knot of dependencies. I am _not_ fine with my compiler
| needing to do that resolution.
| throwawaymaths wrote:
| > What is "itself"
|
| If I understand correctly the zig compiler is sandboxed to
| the local directory of the project's build file. Except for
| possibly c headers.
|
| The builder and linker can reach out a bit.
| ephaeton wrote:
| at "build time", the default language's build tool, a zig
| program, can reach anywhere and everywhere. To build a
| zig project, you'd use a zig program to create
| dependencies and invoke the compiler, cache the results,
| create output binaries, link them, etc.
|
| Distinguishing between `comptime` and `build time` is a
| distinction from the ivory tower. 'zig build' can happily
| reach anywhere, and generate anything.
| throwawaymaths wrote:
| It's not just academic, because if you try to @include
| something from out of path in your code you'll not be
| happy. Moreover, 'zig build' is not the only tool in the
| zig suite, there's individual compilation commands too.
| So there are real implications to this.
|
| It is also helpful for code/security review to have a
| one-stop place to look to see if anything outside of the
| git tree/submodule system can affect what's run.
| panzi wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| Yeah, although so can build.rs or whatever you call in your
| Makefile. If something like cargo would have built-in
| sandboxing, that would be interesting.
| jenadine wrote:
| You can run cargo in a sandbox.
| panzi wrote:
| Yeah, but I want cargo to do that for me. And tell me if
| any build.rs does something it shouldn't.
| forrestthewoods wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| It's not the compiler per se.
|
| Let's say you want a build system that is capable of
| generating code. Ok we can all agree that's super common and
| not crazy.
|
| Wouldn't it be great if the code that generated Zig code
| could also be written in Zig? Why should codegen code be
| written in some
| completely unrelated language? Why should developers have to
| learn a brand new language to do compile time code Gen? Why
| yes Rust macros I'm staring angrily at you!
| eddythompson80 wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| Why though? F# has this feature called TypeProviders where
| you can emit types to the compiler. For example, you can do
| the following:
|     type DbSchema =
|         PostgresTypeProvider<"postgresql://postgres:...">
|     type WikipediaArticle =
|         WikipediaTypeProvider<"https://wikipedia.org/wiki/Hello">
|
| and now you have a type that references that Article or that
| DB. You can treat it as if you had manually written all those
| types. You can fully inspect it in the IDE, debugger or
| logger. It's a full type that's autogenerated in a temp
| directory.
|
| When I first saw it, I thought it was really strange. Then
| thought about it a bit, played with it, and thought it was
| brilliant. Literally one of the smartest ideas ever. It's
| a first-class codegen framework. There were some limitations,
| but still.
|
| After using it in a real project, you figure out why it
| didn't catch on. It's so close, but it's missing something.
| Just one thing is out of place there. The interaction is
| painful for anything that's not a file source, like
| CsvTypeProvider or a public internet URL. It also creates
| an odd dependency for your code that can't be source
| controlled or reproduced. There were hacks and workarounds,
| but nothing felt right for me.
|
| It was, however, the best attempt at a statically typed
| language trying to imitate Python or JavaScript scripting
| syntax, where you just put in a DB URI and start assuming
| types.
| SleepyMyroslav wrote:
| >Personally, I find the idea that a compiler might be able to
| reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| In gamedev, code is a small part of the end product. "Data-
| driven" is the term if you want to look it up. Doing an
| optimization pass that will partially evaluate data+code
| together as part of the build is normal. Code has a
| 'development version' that supports data modifications and
| a 'shipping version' that can assume that data is known.
|
| The more traditional example of PGO+LTO is just another
| example of how code can be specialized for existing data. I
| don't know a toolchain that survives change of PGO profiling
| data between builds without drastic changes in the resulting
| binary.
| bsder wrote:
| Is the PGO data not a static file which is then fed into
| the compiler? That still gives you a deterministic
| compiler, no?
| pjmlp wrote:
| It does share a lot of it with other communities like
| Odin, Go, Jai,...
|
| Don't really get it; it's a 'let's go back to the old days
| because it is cool' kind of vibe.
|
| Ironically none of this matters in the long term, as eventually
| LLMs will be producing binaries directly.
| Cloudef wrote:
| The zig community cares about compilation speed. Unrestricted
| comptime would be quite disastrous for that.
| ephaeton wrote:
| I feel that's such a red herring.
|
| You can @setEvalBranchQuota essentially as big as you want,
| @embedFile an XML file, comptime parse it and generate types
| based on that (BTDT). You can slow down compilation as much
| as you want to already. Unrestricting the expressiveness of
| comptime has as much to do with compile times, as much as the
| restricted amount, and perceived entanglement of zig build
| and build.zig has to do with compile times.
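|
| (For reference, a minimal sketch of the @embedFile-plus-
| comptime pattern mentioned above; "schema.txt" is a made-up
| file sitting next to the source:)
|     // counts lines of an embedded file at compile time;
|     // heavier parsing may need @setEvalBranchQuota
|     const schema = @embedFile("schema.txt");
|
|     const line_count = blk: {
|         var n: usize = 1;
|         for (schema) |c| {
|             if (c == '\n') n += 1;
|         }
|         break :blk n;
|     };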
|
| The knife about unrestricted / restricted comptime cuts both
| ways. Have you considered stopping using comptime and
| generating strings for cacheable consumption of portable zig
| code for all the currently supported comptime use-cases right
| now? Why wouldn't you? What is it that you feel is more apt
| to be done at comptime? Can you accept that others see other
| use-cases that don't align with andrewrk's (current) vision?
| If I need to update a slow generation at 'project buildtime'
| your 'compilation speed' argument tanks as well. It's the
| problem space that dictates the minimal/optimal solution, not
| the programming language designer's headspace.
| fxtentacle wrote:
| I actually like build-time code generation MUCH MORE than,
| let's say, run-time JVM bytecode patching. Using an ORM in Java
| is like playing with magic, you never know what works or how.
| Using an ORM with code generation is much nicer, suddenly my
| IDE can show me what each function does, I can debug them and
| reason about them.
| rk06 wrote:
| I consider it a feature, as the similar feature in C# requires
| me to dabble in MSBuild props and targets, which are very
| unfriendly. Moreover, this kind of support is what makes JS
| special and the JS ecosystem innovative.
| jmull wrote:
| You're complaining about generating code...
|
| While I agree that's typically a bad idea, this seems to have
| nothing to do specifically with zig.
|
| I get how you start with the idea that there's something
| deficient in zig's comptime causing this, but... what?
|
| I also have some doubts about how commonly used free-form code
| generation is with zig.
| paldepind2 wrote:
| This is honestly really cool! I've heard praises about Zig's
| comptime without really understanding what makes it tick. It
| initially sounds like Rust's constant evaluation which is not
| particularly capable. The ability to have types represented as
| values at compilation time, and _only_ at compile time, is
| clearly very powerful. It approximates dynamic languages or run-
| time reflection without any of the run-time overhead and without
| opening the Pandora's box that is full blown macros as in Lisp or
| Rust's procedural macros.
| forrestthewoods wrote:
| > When you execute code at compile time, on which machine does it
| execute? The natural answer is "on your machine", but it is
| wrong!
|
| I don't understand this.
|
| If I am cross-compiling a program is it not true that comptime
| code literally executes on my local host machine? Like, isn't
| that literally the definition of "compile-time"?
|
| If there is an endian architecture change I could see Zig
| choosing to _emulate_ the target machine on the host machine.
|
| This feels so wrong to me. HostPlatform and TargetPlatform can be
| different. That's fine! Hiding the host platform seems wrong. Can
| someone explain why you want to hide this seemingly critical
| fact?
|
| Don't get me wrong, I'm 100% on board the cross-compile train.
| And Zig does it literally better than any other compiled language
| that I know. So what am I missing?
|
| Or wait. I guess the key is that, unlike Jai, comptime Zig code
| does NOT run at compile time. It merely refers to things that are
| KNOWN at compile time? Wait that's not right either. I'm
| confused.
| int_19h wrote:
| The point is that something like sizeof(pointer) should have
| the same value in comptime code that it has at runtime for a
| given app. Which, yes, means that the comptime interpreter
| emulates the target machine.
|
| The reason is fairly simple: you want comptime code to be able
| to compute correct values for use at runtime. At the same time,
| there's zero benefit to _not_ hiding the host platform in
| comptime, because, well, what use case is there for knowing
| e.g. the size of pointer in the arch on which the compiler is
| running?
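|
| A small sketch of the observable effect: the same source
| gives different comptime answers per target, never per host.
|     const std = @import("std");
|     const builtin = @import("builtin");
|
|     // evaluated at compile time against the *target*
|     const ptr_bits = @bitSizeOf(usize);
|
|     pub fn main() void {
|         // prints 32 when cross-compiling for a 32-bit
|         // target, even on a 64-bit host machine
|         std.debug.print("{d}-bit {s}\n",
|             .{ ptr_bits, @tagName(builtin.cpu.arch) });
|     }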
| forrestthewoods wrote:
| > Which, yes, means that the comptime interpreter emulates
| the target machine.
|
| Reasonable if that's how it works. I had absolutely no idea
| that Zig comptime worked this way!
|
| > there's zero benefit to not hiding the host platform in
| comptime
|
| I don't think this is clear. It is possibly good to hide host
| platform given Zig's more limited comptime capabilities.
|
| However in my $DayJob an _extremely_ common and painful
| source of issues is trying to hide the host platform when
| it cannot in fact be hidden.
| int_19h wrote:
| Can you give an example of a use case where you _wouldn't_
| want comptime behavior to match runtime, but instead expose
| host/target differences?
| forrestthewoods wrote:
| Let's pretend I was writing some compile-time code that
| generates code. For example maybe I'm generating serde
| code. Or maybe I'm generating bindings for C, Python,
| etc.
|
| My generation code is probably going to allocate some
| memory and have some pointers and do some stuff. Why on
| earth would I want this compile-time code to run on an
| emulated version of the target platform? If I'm on a
| 64-bit platform then pointers are 8-bytes why would I
| pretend they aren't? Even if the target is 32-bit?
|
| Does that make sense? If the compiletime code ONLY runs
| on the host platform then you plausibly need to expose
| both host and target.
|
| I'm pretty sure I'm thinking about zig comptime all
| wrong. Something isn't clicking.
| von_lohengramm wrote:
| It sounds like the sort of compile-time code that you're
| talking about is closer to "buildtime" code in Zig, that
| is Zig code compiled for the host platform and executed
| by the build system to generate code (or data) to be used
| when compiling for the target system. As it stands now,
| there's absolutely nothing special about buildtime code
| in Zig other than Zig's build system providing good
| integration.
|
| On the other hand, "comptime" is actually executed within
| the compiler similar to C++'s `consteval`. There's no
| actual "emulation" going on. The "emulation" is just
| ensuring that any observable characteristic of the
| platform matches the target, but it's all smoke and
| mirrors. You can create pointers to memory locations, but
| these memory locations and pointers are not real. They're
| all implemented using the same internal mechanisms that
| power the rest of the compilation process. The compiler's
| logic to calculate the value of a global constant (`const
| a: i32 = 1 + 2;`) is the "comptime" that allows generic
| functions, ORMs, and all these other neat use cases.
| bunderbunder wrote:
| _Zig has a completely different feature, partial evaluation
| /specialization, which, none the less, is enough to cover most of
| use-cases for dynamic code generation._
|
| These kinds of insights are what I love about Zig. Andrew Kelley
| just might be the patron saint of the KISS principle.
|
| A long time ago I had an enlightenment experience where I was
| doing something clever with macros in F#, and it wasn't until I
| had more-or-less finished the whole thing that I realized I could
| implement it in a lot less (and more readable) code by doing some
| really basic stuff with partial application and higher order
| functions. And it would still be performant because the compiler
| would take care of the clever bits for me.
|
| Not too long after that, macros largely disappeared from my Lisp
| code, too.
| minetest2048 wrote:
| Fortunately it's not just you; in the Julia community there's
| a thread that discusses why you shouldn't use metaprogramming as
| a first solution as multiple dispatch and higher order
| functions are cleaner and faster:
| https://discourse.julialang.org/t/how-to-warn-new-users-away...
___________________________________________________________________
(page generated 2025-04-21 23:01 UTC)