[HN Gopher] Things Zig comptime won't do
___________________________________________________________________
Things Zig comptime won't do
Author : JadedBlueEyes
Score : 244 points
Date : 2025-04-20 15:57 UTC (7 hours ago)
(HTM) web link (matklad.github.io)
(TXT) w3m dump (matklad.github.io)
| gitroom wrote:
| Cool!
| pyrolistical wrote:
| What makes comptime really interesting is how fluid it is as you
| work.
|
| At some point you realize you need type information, so you just
| add it to your func params.
|
| That bubbles all the way up and you are done. Or you realize in
| certain situations it is not possible to provide the type and you
| need to solve an arch/design issue.
| Zambyte wrote:
| If the type that you're passing as an argument is the type of
| another argument, you can keep the API simpler by just using
| @TypeOf(arg) internally in the function instead.
| no_wizard wrote:
| I like the Zig language and tooling. I do wish there were a safety
| mode that gave the same guarantees as Rust, but it's a huge step
| above C/C++. I am also extremely impressed with the Zig compiler.
|
| Perhaps the safety is the tradeoff with the comparative ease of
| using the language compared to Rust, but I'd love the best of
| both worlds if it were possible
| hermanradtke wrote:
| I wish for "strict" mode as well. My current thinking:
|
| TypeScript is to JavaScript
|
| as
|
| Zig is to C
|
| I am a huge TS fan.
| rc00 wrote:
| Is Zig aiming to extend C or extinguish it? The embrace story
| is well-established at this point but the remainder is often
| unclear in the messaging from the community.
| dooglius wrote:
| Zig is open source, so the analogy to Microsoft's EEE [0]
| seems misplaced.
|
| [0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_ex
| tingu...
| rc00 wrote:
| Open source or not isn't the point. The point is the
| mission and the ecosystem. Some of the Zig proponents
| laud the C compatibility. Others are seeking out the
| "pure Zig" ecosystem. Curious onlookers want to know if
| the Zig ecosystem and community will be as hostile to the
| decades of C libraries as the Rust zealots have been.
|
| To be fair, I don't believe there is a centralized and
| stated mission with Zig but it does feel like the story
| has moved beyond the "Incrementally improve your
| C/C++/Zig codebase" moniker.
| Zambyte wrote:
| > Curious onlookers want to know if the Zig ecosystem and
| community will be as hostile to the decades of C
| libraries as the Rust zealots have been.
|
| Definitely not the case in Zig. From my experience, the
| relationship with C libraries amounts to "if it works,
| use it".
| rc00 wrote:
| Are you referring to static linking? Dynamic linking?
| Importing/inclusion? How does this translate (no pun
| intended) when the LLVM backend work is completed? Does
| this extend to reproducible builds? Hermetic builds?
|
| And the relationship with C libraries certainly feels
| like a placeholder, akin to before the compiler was self-
| hosted. While I have seen some novel projects in Zig,
| there are certainly more than a few "pure Zig" rewrites
| of C libraries. Ultimately, this is free will. I just
| wonder if the Zig community is teeing up for a repeat of
| Rust's actix-web drama but rather than being because of
| the use of unsafe, it would be due to the use of C
| libraries instead of the all-Zig counterparts (assuming
| some level of maturity with the latter). While Zig's
| community appears healthier and more pragmatic, hype and
| ego have a way of ruining everything.
| Zambyte wrote:
| > static linking?
|
| Yes
|
| > Dynamic linking?
|
| Yes
|
| > Importing/inclusion?
|
| Yes
|
| > How does this translate (no pun intended) when the LLVM
| backend work is completed?
|
| I'm not sure what you mean. It sounds like you think
| they're working on being able to use LLVM as a backend,
| but that has already been supported, and now they're
| working on _not_ depending on LLVM as a requirement.
|
| > Does this extend to reproducible builds?
|
| My hunch would be yes, but I'm not certain.
|
| > Hermetic builds?
|
| I have never heard of this, but I would guess the same as
| reproducible.
|
| > While I have seen some novel projects in Zig, there are
| certainly more than a few "pure Zig" rewrites of C
| libraries.
|
| It's a nice exercise, especially considering how close C
| and Zig are semantically. It's helpful for learning to
| see how C things are done in Zig, and rewriting things
| lets you isolate that experience without also being
| troubled with creating something novel.
|
| For more than a few _not_ rewrites, check out
| https://github.com/allyourcodebase, which is a group that
| repackages existing C libraries with the Zig package
| manager / build system.
| ephaeton wrote:
| zig's C compat is being lowered from 'comptime'
| equivalent status to 'zig build'-time equivalent status.
| When you'll need to put 'extern "C"' annotations on any
| import/export to C, it'll have gone full-circle to C++ C
| compat, and thus be none the wiser.
|
| andrewrk's wording towards C and its main ecosystem
| (POSIX) is very hostile, if that is something you'd like
| to go by.
| PaulRobinson wrote:
| It's improved C.
|
| C interop is very important, and very valuable. However, by
| removing undefined behaviours, replacing macros that do
| weird things with well thought-through comptime, and making
| sure that the zig compiler is also a c compiler, you get a
| nice balance across lots of factors.
|
| It's a great language, I encourage people to dig into it.
| yellowapple wrote:
| The goal rather explicitly seems to be to extinguish it -
| the idea being that if you've got Zig, there should be no
| reason to need to write new code in C, because literally
| anything possible in C should be possible (and ideally done
| better) in Zig.
|
| Whether that ends up happening is obviously yet to be seen;
| as it stands there are plenty of Zig codebases with C in
| the mix. The idea, though, is that there shouldn't be
| anything stopping a programmer from replacing that C with
| Zig, and the two languages only coexist for the purpose of
| allowing that replacement to be gradual.
| xedrac wrote:
| I like Zig as a replacement for C, but not C++ due to its lack
| of RAII. Rust on the other hand is a great replacement for C++.
| I see Zig as filling a small niche where handling allocation
| failures is paramount - very constrained embedded devices, etc...
| Otherwise, I think you just get a lot more with Rust.
| xmorse wrote:
| Even better than RAII would be linear types, but it would
| require a borrow checker to track the lifetimes of objects.
| Then you would get a compiler error if you forget to call a
| .destroy() method
| throwawaymaths wrote:
| no you just need analysis with a dependent type system
| (which linear types are a subset of). it doesn't have to be
| in the compiler. there was a proof of concept here a few
| months ago:
|
| https://news.ycombinator.com/item?id=42923829
|
| https://news.ycombinator.com/item?id=43199265
| rastignack wrote:
| Compile times and painful-to-refactor codebases are Rust's main
| drawbacks for me though.
|
| It's totally subjective but I find the language boring to
| use. For side projects I like having fun thus I picked zig.
|
| To each his own of course.
| nicce wrote:
| > refactor codebase are rust's main drawbacks
|
| Hard disagree about refactoring. Rust is one of the few
| languages where you can actually do refactoring rather
| safely without having tons of tests that just exist to
| catch issues if code changes.
| rastignack wrote:
| Lifetimes and generics tend to leak, so when you touch them
| you have to modify your code all over the place.
| ksec wrote:
| >but I'd love the best of both worlds if it were possible
|
| I am just going to quote what pcwalton said the other day that
| perhaps answer your question.
|
| >> I'd be much more excited about that promise [memory safety
| in Rust] if the compiler provided that safety, rather than
| asking the programmer to do an extraordinary amount of extra
| work to conform to syntactically enforced safety rules. Put the
| complexity in the compiler, dudes.
|
| > That exists; it's called garbage collection.
|
| >If you don't want the performance characteristics of garbage
| collection, something has to give. Either you sacrifice memory
| safety or you accept a more restrictive paradigm than GC'd
| languages give you. For some reason, programming language
| enthusiasts think that if you think really hard, every issue
| has some solution out there without any drawbacks at all just
| waiting to be found. But in fact, creating a system that has
| zero runtime overhead and unlimited aliasing with a mutable
| heap is as impossible as _finding two even numbers whose sum is
| odd._
|
| [1] https://news.ycombinator.com/item?id=43726315
| skybrian wrote:
| Yes, but I'm not hoping for that. I'm hoping for something
| like a scripting language with simpler lifetime annotations.
| Is Rust going to be the last popular language to be invented
| that explores that space? I hope not.
| hyperbrainer wrote:
| I was quite impressed with Austral[0], which used Linear
| Types and avoids the whole Rust-like implementation in
| favour of a more easily understandable system, albeit
| slightly more verbose.
|
| [0]https://borretti.me/article/introducing-austral
| renox wrote:
| Austral's concepts are interesting, but the introduction
| doesn't show how to handle errors correctly in this
| language.
| Philpax wrote:
| You may be interested in https://dada-lang.org/, which is
| not ready for public consumption, but is a language by one
| of Rust's designers that aims to be higher-level while
| still keeping much of the goodness from Rust.
| skybrian wrote:
| The first and last blog post was in 2021. Looks like it's
| still active on Github, though?
| Ygg2 wrote:
| > Is Rust going to be the last popular language to be
| invented that explores that space? I hope not.
|
| Seeing how most people hate the lifetime annotations, yes.
| For the foreseeable future.
|
| People want unlimited freedom. Unlimited freedom rhymes
| with unlimited footguns.
| xmorse wrote:
| There is Mojo and Vale (which was created by a now Mojo
| core contributor)
| the__alchemist wrote:
| Maybe this is a bad place to ask, but: Those experienced in
| manual-memory langs: What in particular do you find
| cumbersome about the borrow system? I've hit some annoyances
| like when splitting up struct fields into params where more
| than one is mutable, but that's the only friction point that
| comes to mind.
|
| I ask because I am obviously blind to other cases - that's what
| I'm curious about! I generally find the &s to be a net help
| even without mem safety ... They make it easier to reason
| about structure, and when things mutate.
| rc00 wrote:
| > What in particular do you find cumbersome about the
| borrow system?
|
| The refusal to accept code that the developer knows is
| correct, simply because it does not fit how the borrow
| checker wants to see it implemented. That kind of heavy-
| handed and opinionated supervision is overhead to
| productivity. (In recent times, others have taken to saying
| that Rust is less "fun.")
|
| When the purpose of writing code is to solve a problem and
| not engage in some pedantic or academic exercise, there are
| much better tools for the job. There are also times when
| memory safety is not a paramount concern. That makes the
| overhead of Rust not only unnecessary but also unwelcome.
| the__alchemist wrote:
| Thank you for the answer! Do you have an example? I'm
| having a fish-doesn't-know-water problem.
| int_19h wrote:
| Basically anything that involves objects mutually
| referencing each other.
| the__alchemist wrote:
| Oh, that does sound tough in rust! I'm not even sure how
| to approach it; good to know it's a useful pattern in
| other langs.
| Ygg2 wrote:
| > The refusal to accept code that the developer knows is
| correct,
|
| How do you know it is correct? Did you prove it with
| preconditions, invariants and postconditions? Or did you
| assume so based on prior experience?
| yohannesk wrote:
| Writing correct code did not start after the introduction
| of the rust programming language
| Ygg2 wrote:
| Nope, but claims of knowing to write correct code
| (especially C code) without borrow checker sure did spike
| with its introduction. Hence, my question.
|
| How do you know you haven't been writing unsafe code for
| years, when C unsafe guidelines have like 200 entries[1].
|
| [1]https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_d
| efinit... (Annex J.2 page 490)
| int_19h wrote:
| It's not difficult to write a provably correct
| implementation of doubly linked list in C, but it is very
| painful to do in Rust because the borrow checker really
| hates this kind of mutually referential objects.
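For illustration, a hedged sketch of the usual safe-Rust workaround for such mutual references (the `Node` type and its fields are invented for the example): shared ownership via Rc, interior mutability via RefCell, and a Weak back-link so the cycle doesn't leak.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // strong forward link
    prev: Option<Weak<RefCell<Node>>>, // weak back link breaks the cycle
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // Walk forward, then back through the weak link.
    let fwd = first.borrow().next.as_ref().map(|n| n.borrow().value);
    assert_eq!(fwd, Some(2));

    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```

The borrow checking moves to runtime (RefCell panics on conflicting borrows), which is part of why people call this painful compared to the C version.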
| edflsafoiewq wrote:
| One example is a function call that doesn't compile, but
| will if you inline the function body. Compilation is
| prevented only by the insufficient expressiveness of the
| function signature.
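A minimal sketch of such a case, with invented names: the call through the method is rejected because its signature can only claim a borrow of all of `self`, while the inlined body makes visible that the borrows touch disjoint fields.

```rust
struct S {
    a: Vec<i32>,
    b: Vec<i32>,
}

impl S {
    // The signature can only say "borrows all of `self` mutably".
    #[allow(dead_code)]
    fn first_a(&mut self) -> &mut i32 {
        &mut self.a[0]
    }
}

fn main() {
    let mut s = S { a: vec![1], b: vec![2] };

    // Through the method, the returned borrow locks all of `s`:
    //   let x = s.first_a();
    //   s.b.push(3); // error[E0499]: cannot borrow `s.b` as mutable
    //   *x += 1;

    // Inlining the body exposes the field-level disjointness:
    let x = &mut s.a[0];
    s.b.push(3); // fine: `s.a` and `s.b` are disjoint borrows
    *x += 1;

    assert_eq!((s.a[0], s.b.as_slice()), (2, &[2, 3][..]));
}
```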
| sgeisenh wrote:
| Lifetime annotations can be burdensome when trying to avoid
| extraneous copies and they feel contagious (when you add a
| lifetime annotation to a frequently used type, it bubbles
| out to anything that uses that type unless you're willing
| to use unsafe to extend lifetimes). The solutions to this
| problem (tracking indices instead of references) lose a lot
| of benefits that the borrow checker provides.
|
| The aliasing rules in Rust are also pretty strict. There
| are plenty of single-threaded programs where I want to be
| able to occasionally read a piece of information through an
| immutable reference, but that information can be modified
| by a different piece of code. This usually indicates a
| design issue in your program but sometimes you just want to
| throw together some code to solve an immediate problem. The
| extra friction from the borrow checker makes it less
| attractive to use Rust for these kinds of programs.
| bogdanoff_2 wrote:
| >There are plenty of single-threaded programs where I
| want to be able to occasionally read a piece of
| information through an immutable reference, but that
| information can be modified by a different piece of code.
|
| You could do that using Cell or RefCell. I agree that it
| makes it more cumbersome.
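The Cell route mentioned here can be sketched as follows (the `Stats` type and `record` function are invented for the example): mutation goes through a shared `&` reference, trading compile-time exclusivity for single-threaded interior mutability.

```rust
use std::cell::Cell;

struct Stats {
    hits: Cell<u64>, // mutable even behind a shared reference
}

fn record(s: &Stats) {
    // No &mut needed: Cell moves the mutation inside the type.
    s.hits.set(s.hits.get() + 1);
}

fn main() {
    let s = Stats { hits: Cell::new(0) };
    record(&s);
    record(&s);
    assert_eq!(s.hits.get(), 2);
}
```

RefCell does the same for non-Copy data, checking the aliasing rules at runtime instead of compile time.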
| Starlevel004 wrote:
| Lifetimes add an impending sense of doom to writing any
| sort of deeply nested code. You get this deep without
| writing a lifetime... uh oh, this struct needs a reference,
| and now you need to add a generic parameter to everything
| everywhere you've ever written and it feels miserable.
| Doubly so when you've accidentally omitted a lifetime
| generic somewhere and it compiles now but then you do some
| refactoring and it _won't_ work anymore and you need to go
| back and re-add the generic parameter everywhere.
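A minimal sketch of that bubbling effect (all type names here are invented): one reference field at the bottom forces a lifetime parameter on every type that transitively contains it.

```rust
// One borrowed field at the bottom...
struct Parser<'a> {
    input: &'a str,
}

// ...forces every containing type to declare and thread the parameter.
struct Session<'a> {
    parser: Parser<'a>,
}

struct App<'a> {
    session: Session<'a>,
}

fn main() {
    let source = String::from("fn main() {}");
    let app = App { session: Session { parser: Parser { input: &source } } };
    assert_eq!(app.session.parser.input.len(), 12);
}
```

Removing or adding that one `&'a str` later means editing every type and impl in the chain, which is the refactoring pain described above.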
| the__alchemist wrote:
| I guess the dodge on this one is not using refs in
| structs. This opens you up to index errors though because
| it presumably means indexing arrays etc. Is this the
| tradeoff. (I write loads of rusts in a variety of
| domains, and rarely need a manual lifetime)
| quotemstr wrote:
| And those index values are just pointers by another name!
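A hedged sketch of the index-instead-of-reference dodge (names like `Arena` and `Node` are invented): nodes live in a Vec and refer to each other by position, so no struct stores a borrow and no lifetime parameter spreads through the code.

```rust
struct Node {
    value: i32,
    parent: Option<usize>, // index into the arena, not a reference
}

struct Arena {
    nodes: Vec<Node>,
}

fn main() {
    let mut arena = Arena { nodes: Vec::new() };
    arena.nodes.push(Node { value: 10, parent: None });    // index 0
    arena.nodes.push(Node { value: 20, parent: Some(0) }); // index 1

    let child = &arena.nodes[1];
    let parent = child.parent.map(|i| &arena.nodes[i]);
    assert_eq!(parent.unwrap().value, 10);
}
```

The cost is the one noted in the reply: a stale or wrong index fails only at runtime, so the indices behave like unchecked pointers.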
| pornel wrote:
| There is a stark contrast in usability of self-
| contained/owning types vs types that are temporary views
| bound by a lifetime of the place they are borrowing from.
| But this is an inherent problem for all non-GC languages
| that allow saving pointers to data on the stack (Rust
| doesn't need lifetimes for by-reference heap types). In
| languages without lifetimes you just don't get any
| compiler help in finding places that may be affected by
| dangling pointers.
|
| This is similar to creating a broadly-used data structure
| and realizing that some field has to be optional.
| Option<T> will require you to change everything touching
| it, and virally spread through all the code that wanted
| to use that field unconditionally. However, that's not
| the fault of the Option syntax, it's the fault of
| semantics of optionality. In languages that don't make
| this "miserable" at compile time, this problem manifests
| with a whack-a-mole of NullPointerExceptions at run time.
|
| With experience, I don't get this "oh no, now there's a
| lifetime popping up everywhere" surprise in Rust any
| more. Whether something is going to be a temporary view
| or permanent storage can be known ahead of time, and if
| it can be both, it can be designed with Cow-like types.
|
| I also got a sense for when using a temporary loan is a
| premature optimization. All data has to be stored
| somewhere (you can't have a reference to data that hasn't
| been stored). Designs that try to be ultra-efficient by
| allowing only temporary references often force data to be
| stored in a temporary location first, and then borrowed,
| which doesn't avoid any allocations, only adds
| dependencies on external storage. Instead, the design can
| support moving or collecting data into owned (non-
| temporary) storage directly. It can then keep it for an
| arbitrary lifetime without lifetime annotations, and hand
| out temporary references to it whenever needed. The run-
| time cost can be the same, but the semantics are much
| easier to work with.
| spullara wrote:
| With Java ZGC the performance aspect has been fixed (<1ms
| pause times and real world throughput improvement). Memory
| usage though will always be strictly worse with no obvious
| way to improve it without sacrificing the performance gained.
| no_wizard wrote:
| I have zero issue with needing runtime GC or equivalent like
| ARC.
|
| My issue is with ergonomics and performance. In my experience
| with a range of languages, the most performant way of writing
| the code is not the way you would idiomatically write it.
| They make good performance more complicated than it should
| be.
|
| This holds true to me for my work with Java, Python, C# and
| JavaScript.
|
| What I suppose I'm looking for is a better compromise between
| having some form of managed runtime vs non managed
|
| And yes, I've also tried Go, and its DX is its own type of
| pain for me. I should try it again now that it has generics.
| throwawaymaths wrote:
| in principle it should be doable, possibly not in the
| language/compiler itself, there was this POC a few months ago:
|
| https://github.com/ityonemo/clr
| ashvardanian wrote:
| Previous submission:
| https://news.ycombinator.com/item?id=43738703
| karmakaze wrote:
| > Zig's comptime feature is most famous for what it can do:
| generics!, conditional compilation!, subtyping!, serialization!,
| ORM! That's fascinating, but, to be fair, there's a bunch of
| languages with quite powerful compile time evaluation
| capabilities that can do equivalent things.
|
| I'm curious what are these other languages that can do these
| things? I read HN regularly but don't recall them. Or maybe
| that's including things like Java's annotation processing which
| is so clunky that I wouldn't classify them to be equivalent.
| awestroke wrote:
| Rust, D, Nim, Crystal, Julia
| elcritch wrote:
| Definitely, you can do most of those things in Nim without
| macros using templates and compile time stuff. It's
| preferable to macros when possible. Julia has fantastic
| compile time abilities as well.
|
| It's beautiful to implement an incredibly fast serde in like
| 10 lines without requiring other devs to annotate their
| packages.
|
| I wouldn't include Rust on that list if we're speaking of
| compile time and compile time type abilities.
|
| Last time I tried it, Rust's const expression system was pretty
| limited. Rust's macro system likewise is also very weak.
|
| Primarily you can only get type info by directly passing the
| type definition to a macro, which is how derive and all work.
| tialaramex wrote:
| Rust has _two_ macro systems, the proc macros are allowed
| to do absolutely whatever they please because they're
| actually executing in the compiler.
|
| Now, _should_ they do anything they please? Definitely not,
| but they can. That's why there's a (serious) macro which
| runs your Python code, and a (joke, in the sense that you
| should never use it, not that it wouldn't work) macro which
| replaces your running compiler with a different one so that
| code which is otherwise invalid will compile anyway...
| int_19h wrote:
| > Rust's macro system likewise is also very weak.
|
| How so? Rust procedural macros operate on token stream
| level while being able to tap into the parser, so I
| struggle to think of what they _can't_ do, aside from
| limitations on the syntax of the macro.
| Nullabillity wrote:
| Rust macros don't really understand the types involved.
|
| If you have a derive macro for
|
|     #[derive(MyTrait)]
|     struct Foo {
|         bar: Bar,
|         baz: Baz,
|     }
|
| then your macro _can_ see that it references Bar and Baz,
| but it can't know _anything_ about how those types are
| defined. Usually, the way to get around it is to define
| some trait on both Bar and Baz, which your Foo struct
| depends on, but that still only gives you access to that
| information at runtime, not when evaluating your macro.
|
| Another case would be something like
|
|     #[my_macro]
|     fn do_stuff() -> Bar {
|         let x = foo();
|         x.bar()
|     }
|
| Your macro would be able to see that you call the
| functions foo() and Something::bar(), but it wouldn't
| have the context to know the type of x.
|
| And even if you did have the context to be able to see
| the scope, you probably still aren't going to reimplement
| rustc's type inference rules just for your one macro.
|
| Scala (for example) is different: any AST node is tagged
| with its corresponding type that you can just ask for,
| along with any context to expand on that (what fields
| does it have? does it implement this supertype? are there
| any relevant implicit conversions in scope?). There are
| both up- and downsides to that (personally, I do quite
| like the locality that Rust macros enforce, for example),
| but Rust macros are unquestionably _weaker_.
| rurban wrote:
| Perl BEGIN blocks
| foobazgt wrote:
| Yeah, I'm not a big fan of annotation processing either. It's
| simultaneously heavyweight and unwieldy, and yet doesn't do
| enough. You get all the annoyance of working with a full-blown
| AST, and none of the power that comes with being able to
| manipulate an AST.
|
| Annotations themselves are pretty great, and AFAIK, they are
| most widely used with reflection or bytecode rewriting instead.
| I get that the maintainers dislike macro-like capabilities, but
| the reality is that many of the nice libraries/facilities Java
| has (e.g. transparent spans), just aren't possible without AST-
| like modifications. So, the maintainers don't provide 1st class
| support for rewriting, and they hold their noses as popular
| libraries do it.
|
| Closely related, I'm pretty excited to muck with the new class
| file API that just went GA in 24
| (https://openjdk.org/jeps/484). I don't have experience with it
| yet, but I have high hopes.
| pron wrote:
| Java's annotation processing is intentionally limited so that
| compiling with them cannot change the semantics of the Java
| language as defined by the Java Language Specification (JLS).
|
| Note that more intrusive changes -- including not only
| bytecode-rewriting agents, but also the use of those AST-
| modifying "libraries" (really, languages) -- require command-
| line flags that tell you that the semantics of code may be
| impacted by some other code that is identified in those
| flags. This is part of "integrity by default":
| https://openjdk.org/jeps/8305968
| ephaeton wrote:
| well, the lisp family of languages surely can do all of that,
| and more. Check out, for example, clojure's version of zig's
| dropped 'async'. It's a macro.
| hiccuphippo wrote:
| The quote in Spanish about a Norse god is from a story by Jorge
| Luis Borges, here's an English translation:
| https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...
| _emacsomancer_ wrote:
| And in Spanish here: https://www.poeticous.com/borges/el-
| disco?locale=es
|
| (Not having much Spanish, I at first thought "Odin's
| disco(teque)" and then "no, that doesn't make sense about
| sides", but then, surely primed by English "disco", thought "it
| must mean Odin's record/lp/album".)
| wiml wrote:
| Odin's records have no B-sides, because everything Odin
| writes is fire!
| tialaramex wrote:
| Back when things really had A and B sides, it was
| moderately common for big artists to release a "Double A"
| in which both titles were heavily promoted, e.g. Nirvana's
| "All Apologies" and "Rape Me" are a double A, the Beatles
| "Penny Lane" and "Strawberry Fields Forever" likewise.
| kruuuder wrote:
| If you have read the story and, like me, are still wondering
| which part of the story is the quote at the top of the post:
|
| "It's Odin's Disc. It has only one side. Nothing else on Earth
| has only one side."
| tines wrote:
| A Möbius strip does!
| ww520 wrote:
| This is a very educational blog post. I knew 'comptime for' and
| 'inline for' were comptime related, but didn't know the
| difference. The post explains the inline version only knows the
| length at comptime. I guess it's for loop unrolling.
| hansvm wrote:
| The normal use case for `inline for` is when you have to close
| over something only known at compile time (like when iterating
| over the fields of a struct), but when your behavior depends on
| runtime information (like conditionally assigning data to those
| fields).
|
| Unrolling as a performance optimization is usually slightly
| different, typically working in batches rather than unrolling
| the entire thing, even when the length is known at compile
| time.
|
| The docs suggest not using `inline` for performance without
| evidence it helps in your specific usage, largely because the
| bloated binary is likely to be slower unless you have a good
| reason to believe your case is special, and also because
| `inline` _removes_ optimization potential from the compiler
| rather than adding it (its inlining passes are very, very good,
| and despite having an extremely good grasp on which things
| should be inlined I rarely outperform the compiler -- I'm never
| worse, but the ability to not have to even think about it
| unless/until I get to the microoptimization phase of a project
| is liberating).
| pron wrote:
| Yes!
|
| To me, the uniqueness of Zig's comptime is a combination of two
| things:
|
| 1. comptime _replaces_ many other features that would be
| specialised in other languages with or without rich compile-time
| (or runtime) metaprogramming, _and_
|
| 2. comptime is referentially transparent [1], which makes it
| strictly "weaker" than AST macros, but simpler to understand;
| what's surprising is just how capable you can be with a comptime
| mechanism with access to introspection yet without the
| referentially opaque power of macros.
|
| These two give Zig a unique combination of simplicity and power.
| We're used to seeing things like that in Scheme and other Lisps,
| but the approach in Zig is very different. The outcome isn't as
| general as in Lisp, but it's powerful enough while keeping code
| easier to understand.
|
| You can like it or not, but it is very interesting and very novel
| (the novelty isn't in the feature itself, but in the place it has
| in the language). Languages with a novel design and approach that
| you can learn in a couple of days are quite rare.
|
| [1]: In short, this means that you get no access to names or
| expressions, only the values they yield.
| User23 wrote:
| Has anyone grafted Zig style macros into Common Lisp?
| toxik wrote:
| Isn't this kind of thing sort of the default thing in Lisp?
| Code is data so you can transform it.
| fn-mote wrote:
| There are no limitations on the transformations in lisp.
| That can make macros very hard to understand. And hard for
| later program transformers to deal with.
|
| The innovation in Zig is the restrictions that limit the
| power of macros.
| Zambyte wrote:
| There isn't really as clear of a distinction between
| "runtime" and "compile time" in Lisp. The comptime keyword is
| essentially just the opposite of quote in Lisp. Instead of
| using comptime to say what should be evaluated early, you use
| quote to say what should be evaluated later. Adding comptime
| to Lisp would be weird (though obviously not impossible,
| because it's Lisp), because that is essentially the default
| for expressions.
| Conscat wrote:
| The truth of this varies between Lisp based languages.
| Conscat wrote:
| The Scopes language might be similar to what you're asking
| about. Its notion of "spices" which complement the "sugars"
| feature is a similar kind of constant evaluation. It's not a
| Common Lisp dialect, though, but it is sexp based.
| paldepind2 wrote:
| I was a bit confused by the remark that comptime is
| referentially transparent. I'm familiar with the term as it's
| used in functional programming to mean that an expression can
| be replaced by its value (stemming from it having no side-
| effects). However, from a quick search I found an old related
| comment by you [1] that clarified this for me.
|
| If I understand correctly you're using the term in a different
| (perhaps more correct/original?) sense where it roughly means
| that two expressions with the same meaning/denotation can be
| substituted for each other without changing the
| meaning/denotation of the surrounding program. This property is
| broken by macros. A macro in Rust, for instance, can
| distinguish between `1 + 1` and `2`. The comptime system in Zig
| in contrast does not break this property as it only allows one
| to inspect values and not un-evaluated ASTs.
|
| [1]: https://news.ycombinator.com/item?id=36154447
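A small illustration of that distinction in Rust terms (the `spelled!` macro is invented for the example): a macro receives the spelling of its argument, so `1 + 1` and `2` are distinguishable even though they denote the same value - exactly the power comptime gives up.

```rust
// A macro sees tokens (the "how"), not just the value (the "what").
macro_rules! spelled {
    ($e:expr) => {
        (stringify!($e), $e)
    };
}

fn main() {
    let (text_a, val_a) = spelled!(1 + 1);
    let (text_b, val_b) = spelled!(2);

    assert_eq!(val_a, val_b); // same denotation: both are 2
    assert_ne!(text_a, text_b); // different spelling: "1 + 1" vs "2"
}
```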
| deredede wrote:
| Those are equivalent, I think. If you can replace an
| expression by its value, any two expressions with the same
| value are indistinguishable (and conversely a value is an
| expression which is its own value).
| pron wrote:
| Yes, I am using the term more correctly (or at least more
| generally), although the way it's used in functional
| programming is a special case. A referentially transparent
| term is one whose sub-terms can be replaced by their
| references without changing the reference of the term as a
| whole. A functional programming language is simply one where
| all references are values or "objects" _in the programming
| language itself_.
|
| The expression `i++` in C is not a value _in C_ (although it
| is a "value" in some semantic descriptions of C), yet a C
| expression that contains `i++` and cannot distinguish between
| `i++` and any other C operation that increments i by 1, is
| referentially transparent, which is pretty much all C
| expressions except for those involving C macros.
|
| Macros are not referentially transparent because they can
| distinguish between, say, a variable whose _name_ is `foo`
| and is equal to 3 and a variable whose name is `bar` and is
| equal to 3. In other words, their outcome may differ not just
| by what is being referenced (3) but also by _how_ it's
| referenced (`foo` or `bar`), hence they're referentially
| opaque.
| cannabis_sam wrote:
| Regarding 2. How are comptime values restricted to total
| computations? Is it just by the fact that the compiler actually
| finished, or are there any restrictions on comptime
| evaluations?
| pron wrote:
| They don't need to be restricted to total computation to be
| referentially transparent. Non-termination is also a
| reference.
| keybored wrote:
| I've never managed to understand your years-long[1] manic praise
| of this feature. Given that you're a language implementer.
|
| It's very cool to be able to just say "Y is just X". You know
| in a museum. Or at a distance. Not necessarily as something you
| have to work with daily. Because I would rather take something
| ranging from Java's interface to Haskell's typeclasses since
| once implemented, they'll just work. With comptime types,
| according to what I've read, you'll have to bring your T to the
| comptime and find out right then and there if it will work.
| Without enough foresight it might not.
|
| That's not something I want. I just want generics or parametric
| polymorphism or whatever it is to work once it compiles. If
| there's a <T> I want to slot in T without any surprises. And
| whether Y is just X is a very distant priority at that point.
| Another distant priority is if generics and whatever else _is
| all just X underneath_... I mean just let me use the language
| declaratively.
|
| I felt like I was on the idealistic end of the spectrum when I
| saw you criticizing other languages that are not installed on 3
| billion devices as too academic.[2] Now I'm not so sure?
|
| [1] https://news.ycombinator.com/item?id=24292760
|
| [2] But does Scala technically count since it's on the JVM
| though?
| ephaeton wrote:
| zig's comptime has some (objectively: debatable? subjectively:
| definite) shortcomings that the zig community then overcomes with
| zig build to generate code-as-strings that are later @imported
| and compiled.
|
| Practically, "zig build"-time-eval. As such there's another
| 'comptime' stage with more freedom, unlimited run-time (no
| @setEvalBranchQuota), can do IO (DB schema, network lookups,
| etc.) but you lose the freedom to generate zig types as values in
| the current compilation; instead you of course have the
| freedom to reduce->project from the target's compiled semantics
| back to input syntax, down to strings, to re-enter your future
| compilation context.
|
| Back in the day, where I had to glue perl and tcl via C at one
| point in time, passing strings for perl generated through tcl is
| what this whole thing reminds me of. Sure it works. I'm not happy
| about it. There's _another_ "macro" stage that you can't even see
| in your code (it's just @import).
|
| The zig community bewilders me at times with their love for
| lashing themselves. The discussions about which new sort of
| self-harm they'd love to enforce on everybody are borderline
| disturbing.
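| The generate-then-@import pattern being described can be
| sketched in a build.zig. This is a hedged sketch against the
| std.Build API of recent Zig releases (addWriteFiles,
| addAnonymousImport); the exact names have shifted between
| versions.

```zig
// build.zig -- sketch of "zig build"-time code generation.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });

    // The generated code is just a string; nothing stops it from
    // being assembled from a DB schema or a network lookup, which
    // comptime itself cannot do.
    const wf = b.addWriteFiles();
    const generated = wf.add("generated.zig",
        "pub const answer: u32 = 42;\n");

    // The program then sees it as an ordinary module:
    //     const gen = @import("generated");
    exe.root_module.addAnonymousImport("generated", .{
        .root_source_file = generated,
    });

    b.installArtifact(exe);
}
```

| This is the second "macro" stage the comment is pointing at:
| invisible at the use site, because the use site is just an
| @import.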
| User23 wrote:
| Learning XS (maybe with Swig?) was a great way to actually
| understand Perl.
| bsder wrote:
| > The zig community bewilders me at times with their love for
| lashing themselves. The discussions about which new sort of
| self-harm they'd love to enforce on everybody are borderline
| disturbing.
|
| Personally, I find the idea that a _compiler_ might be able to
| reach outside itself completely terrifying (Access the network
| or a database? Are you nuts?).
|
| That should be 100% the job of a build system.
|
| Now, you can certainly argue that generating a text file may or
| may not be the best way to reify the result back into the
| compiler. However, what the compiler gets and generates should
| be completely deterministic.
| bmacho wrote:
| They are not advocating for IO in the compiler, but
| everything else that other languages can do with macros: run
| commands at comptime, generate code, read code, modify code.
| It's proven to be very useful.
| ephaeton wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| What is "itself" here, please? Access a static 'external'
| source? Access a dynamically generated 'external' source? If
| that file is generated in the build system / build process as
| derived information, would you put it under version control?
| If not, are you as nuts as I am?
|
| Some processes require sharp tools, and you can't always be
| afraid to handle one. If all you have is a blunt tool, well,
| you know how the saying goes for C++.
|
| > However, what the compiler gets and generates should be
| completely deterministic.
|
| The zig community treats 'zig build' as "the compile step",
| ergo what "the compiler" gets ultimately is decided "at
| compile, er, zig build time". What the compiler gets, i.e.,
| what zig build generates within the same user-facing process,
| is not deterministic.
|
| Why would it be? Generating an interface is something that
| you want to be part of a streamlined process. Appeasing C
| interfaces will be moving to a zig build-time multi-step
| process involving zig's 'translate-c' whose output you then
| import into your zig file. You think anybody is going to
| treat that output differently than from what you'd get from
| doing this invisibly at comptime (which, btw, is what
| practically happens now)?
| panzi wrote:
| > Personally, I find the idea that a compiler might be able
| to reach outside itself completely terrifying (Access the
| network or a database? Are you nuts?).
|
| Yeah, although so can build.rs or whatever you call in your
| Makefile. If something like cargo had built-in
| sandboxing, that would be interesting.
| paldepind2 wrote:
| This is honestly really cool! I've heard praises about Zig's
| comptime without really understanding what makes it tick. It
| initially sounds like Rust's constant evaluation, which is not
| particularly capable. The ability to have types represented as
| values at compilation time, and _only_ at compile time, is
| clearly very powerful. It approximates dynamic languages or run-
| time reflection without any of the run-time overhead and without
| opening the Pandora's box that is full blown macros as in Lisp or
| Rust's procedural macros.
| forrestthewoods wrote:
| > When you execute code at compile time, on which machine does it
| execute? The natural answer is "on your machine", but it is
| wrong!
|
| I don't understand this.
|
| If I am cross-compiling a program, is it not true that comptime
| code literally executes on my local host machine? Like, isn't
| that literally the definition of "compile-time"?
|
| If host and target differ in endianness, I could see Zig
| choosing to _emulate_ the target machine on the host machine.
|
| This feels so wrong to me. HostPlatform and TargetPlatform can be
| different. That's fine! Hiding the host platform seems wrong. Can
| someone explain why you want to hide this seemingly critical
| fact?
|
| Don't get me wrong, I'm 100% on board the cross-compile train.
| And Zig does it literally better than any other compiled language
| that I know. So what am I missing?
|
| Or wait. I guess the key is that, unlike Jai, comptime Zig code
| does NOT run at compile time. It merely refers to things that are
| KNOWN at compile time? Wait that's not right either. I'm
| confused.
| int_19h wrote:
| The point is that something like sizeof(pointer) should have
| the same value in comptime code that it has at runtime for a
| given app. Which, yes, means that the comptime interpreter
| emulates the target machine.
|
| The reason is fairly simple: you want comptime code to be able
| to compute correct values for use at runtime. At the same time,
| there's zero benefit to _not_ hiding the host platform in
| comptime, because, well, what use case is there for knowing
| e.g. the size of pointer in the arch on which the compiler is
| running?
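| What that looks like concretely (a sketch; cross-compile it
| from a 64-bit host with e.g. `-target riscv32-freestanding`
| and the comptime code still sees the 32-bit target):

```zig
const builtin = @import("builtin");

// @sizeOf is evaluated by the comptime interpreter against the
// *target*, not the host running the compiler: compiling for a
// 32-bit target yields 4 here even on a 64-bit host.
const ptr_bytes = @sizeOf(*u8);

comptime {
    // builtin.target describes the compilation target, which is
    // also the machine model that comptime code observes.
    if (ptr_bytes != @sizeOf(usize))
        @compileError("pointer and usize sizes diverge on this target");
    _ = builtin.target.cpu.arch;
}
```

| Any runtime value precomputed from ptr_bytes is therefore
| correct for the binary being produced, which is the whole point
| of emulating the target.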
| forrestthewoods wrote:
| > Which, yes, means that the comptime interpreter emulates
| the target machine.
|
| Reasonable if that's how it works. I had absolutely no idea
| that Zig comptime worked this way!
|
| > there's zero benefit to not hiding the host platform in
| comptime
|
| I don't think this is clear. It is possibly good to hide host
| platform given Zig's more limited comptime capabilities.
|
| However in my $DayJob an _extremely_ common and painful
| source of issues is trying to hide host platform when it can
| not in fact be hidden.
___________________________________________________________________
(page generated 2025-04-20 23:00 UTC)