[HN Gopher] Zig's New Async I/O
___________________________________________________________________
Zig's New Async I/O
https://www.youtube.com/watch?v=mdOxIc0HM04
Author : todsacerdoti
Score : 214 points
Date : 2025-10-29 12:35 UTC (1 day ago)
(HTM) web link (andrewkelley.me)
(TXT) w3m dump (andrewkelley.me)
| sanity wrote:
| Andrew Kelley is one of my all-time favorite technical speakers,
| and zig is packed full of great ideas. He also seems to be a
| great model of an open source project leader.
| throwawaymaths wrote:
| I would say zig is not just packed full of great ideas, there's
| a whole graveyard of ideas that were thrown out (and a universe
| of ideas that were rejected out of hand). Zig is full of great
| ideas that _compose well together_.
| bcardarella wrote:
| One that was tossed in 0.15 was `usingnamespace` which was a
| bit rough to refactor away from.
| ancarda wrote:
| I really hope this is going to be entirely optional, but I know
| realistically it just won't be. If Rust, a language with optional
| async support, is any example, async will permeate the whole
| ecosystem. That's to be expected with colored functions. The stdlib
| isn't too bad, but last time I checked a lot of crates.io is filled
| with async functions for stuff that doesn't actually block.
|
| Async clearly works for many people; I do fully understand people
| who can't get their heads around threads and prefer async. It's
| wonderful that there's a pattern people can use to be productive!
|
| For whatever reason, async just doesn't work for me. I don't feel
| comfortable using it and at this point I've been trying on and
| off for probably 10+ years now. Maybe it's never going to happen.
| I'm much more comfortable with threads, mutex locks, channels,
| Erlang style concurrency, nurseries -- literally ANYTHING but
| async. All of those are very understandable to me and I've built
| production systems with all of those.
|
| I hope when Zig reaches 1.0 I'll be able to use it. I started
| learning it earlier this month and it's been really enjoyable to
| use.
| eddd-ddde wrote:
| > I do fully understand people who can't get their heads around
| threads and prefer async.
|
| Those are independent of each other. You can have async with
| and without threads. You can have threads with and without
| async.
| rowanG077 wrote:
| Yeah, it always mystifies me when people talk about async
| vs threads when they are completely orthogonal concepts. It
| doesn't give me the feeling they understand what they are
| talking about.
| speed_spread wrote:
| I agree, async is way more popular than it should be. At least
| (AFAIU) Zig doesn't color functions so there won't be a big
| ecosystem rift between blocking and async libraries.
| flohofwoe wrote:
| > I'm much more comfortable with threads
|
| The example code shown in the first few minutes of the video is
| actually using regular OS threads for running the async code ;)
|
| The whole thing is quite similar to the Zig allocator
| philosophy. Just like an application already picks a root
| allocator to pass down into libraries, it now also picks an IO
| implementation and passes it down. A library in turn doesn't
| care about how async is implemented by the IO system, it just
| calls into the IO implementation it got handed from the
| application.
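|
| Roughly, from memory of the talk's example (the exact std.Io method
| names and signatures may differ in current Zig, and the `Threaded`
| implementation name below is an assumption):
|
|     const std = @import("std");
|
|     // Library code: it neither knows nor cares how the Io it was handed
|     // is implemented (OS threads, green threads, stackless coroutines...).
|     fn saveBoth(io: std.Io, data: []const u8) !void {
|         var a = io.async(saveFile, .{ io, data, "a.txt" });
|         var b = io.async(saveFile, .{ io, data, "b.txt" });
|         try a.await(io);
|         try b.await(io);
|     }
|
|     fn saveFile(io: std.Io, data: []const u8, name: []const u8) !void {
|         // In the talk this writes `data` to `name` through the Io
|         // interface; stubbed out here since the file API is still in flux.
|         _ = io;
|         _ = data;
|         _ = name;
|     }
|
|     // Application code: picks a concrete implementation and passes it
|     // down, exactly like picking a root allocator.
|     pub fn main() !void {
|         var threaded: std.Io.Threaded = .init(std.heap.page_allocator);
|         defer threaded.deinit();
|         try saveBoth(threaded.io(), "hello");
|     }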
| throwawaymaths wrote:
| Or you can pick one manually. Or you can pick more than one
| and use them as needed in different parts of your application
| (probably less of a thing for IO than for allocators).
| hugs wrote:
| same. i don't like async. i don't like having to prepend
| "await" to every line of code. instead, lately (in js), i've
| been playing more with worker threads, message passing, and the
| "atomics" api. i get the benefits of concurrency without the
| extra async/await-everywhere baggage.
| iknowstuff wrote:
| lol it's just a very different tradeoff. Especially in js
| those approaches are far more of a "baggage" than async/await
| hugs wrote:
| probably, but i'm petty like that. i just really don't like
| async/await. i'm looking forward to eventually being
| punished for that opinion!
| 01HNNWZ0MV43FF wrote:
| I understand threads but I like using async for certain things.
|
| If I had a web service using threads, would I map each request
| to one thread in a thread pool? It seems like a waste of OS
| resources when the IO multiplexing can be done without OS
| threads.
|
| > last time I checked a lot of crates.io is filled with async
| functions for stuff that doesn't actually block.
|
| Like what? Even file I/O blocks for large files on slow
| devices, so something like async tarball handling has a use
| case.
|
| It's best to write in the sans-IO style and then your threading
| or async can be a thin layer on top that drives a dumb state
| machine. But in practice I find that passable sans-IO code is
| harder to write than passable async. It makes a lot of sense
| for a deep indirect dependency like an HTTP library, but less
| sense for an app.
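|
| For anyone unfamiliar, a sans-IO component in Zig might look roughly
| like this (an illustrative toy, not from any real library): the type
| owns no sockets or files, the caller does all the reads and just
| feeds bytes in, so threads or async can be layered on top without
| touching it.
|
|     const std = @import("std");
|
|     const LineScanner = struct {
|         lines_seen: usize = 0,
|
|         // Pure state machine step: consumes bytes, performs no I/O.
|         fn feed(self: *LineScanner, bytes: []const u8) void {
|             for (bytes) |b| {
|                 if (b == '\n') self.lines_seen += 1;
|             }
|         }
|     };
|
|     test "driven from plain blocking code" {
|         var scanner: LineScanner = .{};
|         scanner.feed("hello\nworld\n");
|         try std.testing.expectEqual(@as(usize, 2), scanner.lines_seen);
|     }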
| tomck wrote:
| > I do fully understand people who can't get their heads around
| threads and prefer async
|
| This is a bizarre remark
|
| Async/await isn't "for when you can't get your head around
| threads"; it's a completely orthogonal concept.
|
| Case in point: javascript has async/await, but everything is
| single-threaded; there is no parallelism.
|
| Async/await is basically just coroutines/generators underneath.
|
| Phrasing async as 'for people who can't get their heads around
| threads' makes it sound like you're just insecure that you
| never learned how async works yet, and instead of just sitting
| down + learning it you would rather compensate
|
| Async is probably a more complex model than threads/fibers for
| expressing concurrency. It's fine to say that, it's fine to not
| have learned it if that works for you, but it's silly to put
| one above the other as if understanding threads makes
| async/await irrelevant.
|
| > The stdlib isn't too bad but last time I checked a lot of
| crates.io is filled with async functions for stuff that doesn't
| actually block
|
| Can you provide an example? I haven't found that to be the case
| last time I used rust, but I don't use rust a great deal
| anymore.
| vjerancrnjak wrote:
| It's more about how the code ends up evolving.
|
| Async-await in JS is sometimes used to swallow exceptions.
| It's very often used to do 1 thing at a time when N things
| could be done instead. It serializes the execution a lot when
| it could be concurrent.
|
|     if (await is_something_true()) {
|         // here is_something_true() can be false
|     }
|
| And above, the most common mistake.
|
| Similar side-effects happen in other languages that have
| async-await sugar.
|
| It smells as bad as the Zig file interface with intermediate
| buffers reading/writing to OS buffers until everything is a
| buffer 10 steps below.
|
| It's fun for small programs but you really have to be very
| strict to not have it go wrong (performance, correctness).
| tomck wrote:
| I think you replied to the wrong person.
|
| That being said, I don't understand your
| `is_something_true` example.
|
| > It's very often used to do 1 thing at a time when N
| things could be done instead
|
| That's true, but I don't think e.g. fibres fare any better
| here. I would say that expressing that type of parallel
| execution is much more convenient with async/await and
| Promise.all() or whatever alternative, compared to e.g. raw
| promises or fibres.
| ksec wrote:
| >Case in point: javascript has async/await, but everything is
| singlethreaded, there is no parallelism, Async/await is
| basically just coroutines/generators underneath.
|
| Maybe I just wish Zig didn't call it async and used a different
| name.
| dang wrote:
| > _This is a bizarre remark_
|
| > _makes it sound like you're just insecure_
|
| > _instead of just sitting down + learning it you would
| rather compensate_
|
| Can you please edit out swipes like these from your HN posts?
| This is in the site guidelines:
| https://news.ycombinator.com/newsguidelines.html.
|
| Your comment would be fine without those bits.
| duped wrote:
| It sounds like your trouble with async is mistaking concurrency
| for parallelism.
| Maxatar wrote:
| Weird claim since threads were originally introduced as a
| concurrency primitive, basically a way to make user facing
| programs more responsive while sharing the same address space
| and CPU.
|
| The idea of generalizing threads for use in parallel
| computing/SMP didn't come until at least a decade after the
| introduction of threads for use as a concurrency tool.
| metaltyphoon wrote:
| > Weird claim since threads were originally introduced as a
| concurrency primitive
|
| Wasn't this only true when CPUs were single-core? Only
| when multi-core CPUs came along could true parallelism happen
| (outside of using multiple CPUs).
| audunw wrote:
| You don't have to hope. Avoiding function colours and being
| able to write libraries that are agnostic to whether the IO is
| async or not is one of the top priorities of this new IO
| implementation.
|
| If you don't want to use async/await just don't call functions
| through io.async.
| pkulak wrote:
| > can't get their heads around threads and prefer async
|
| Wow. Do you expect anyone to continue reading after a comment
| like that?
| lukaslalinsky wrote:
| It's optional to the point that you can write a single-threaded
| version without any io.async/io.concurrent, but you will need
| to pass the io parameter around if you want to do I/O. You are
| mistaking what is called "async" here for what other languages
| call async/await. It's a very different concept. Async in this
| context means just "spawn this function in the background, but if
| you can't, just run it right now".
| gethly wrote:
| I almost fell out of my chair at 22:06 from laughter :D
| yohbho wrote:
| due to broken audio?
| hinkley wrote:
| Not broken, that's Charlie Brown Teacher.
|
| I guess they didn't get a release from the question asker and
| so they edited it out?
|
| The problem here though is that the presenter didn't repeat
| the question for the audience which is a rookie mistake.
| synergy20 wrote:
| have not played with zig for a while, remain in the C world.
|
| with the cleanup attribute (a cheap "defer" for C), the
| sanitizers, static analysis tools, the memory tagging extension (MTE)
| for memory safety at the hardware level, etc., and a zig 1.0 still
| probably years away, what's the strong selling point that I need to
| spend time with zig these days? Asking because I'm unsure if I
| should re-try it.
| flohofwoe wrote:
| My 2ct: A much more useful stdlib (that's also currently the
| most unstable part though, and I don't agree with all the
| design decisions in the stdlib - but mostly still better than a
| nearly useless stdlib like C provides - and don't even get me
| started about the C++ stdlib heh), integrated build system and
| package manager (even great for pure C/C++ projects), and
| comptime with all the things it enables (like straightforward
| reflection and generics).
|
| It also fixes a shitton of tiny design warts that we've become
| accustomed to in C (also very slowly happening in the C
| standard, but that will take decades while Zig has those fixes
| now).
|
| Also, probably the best integration with C code you can get
| outside of C++. E.g. hybrid C/Zig projects are a regular use
| case and have close to no friction.
|
| C won't go away for me, but tinkering with Zig is just more fun
| :)
| cassepipe wrote:
| Could you expand on which design decisions you
| disapprove of? You've got me curious.
| flohofwoe wrote:
| Many parts of the stdlib don't separate between public
| interface and internal implementation details. It's
| possible to accidentally mess with data items which are
| clearly meant to be implementation-private.
|
| I think this is because many parts of the stdlib are too
| object-oriented, for instance the containers are more or
| less C++ style objects, but this style of programming
| really needs RAII and restricted visibility rules like
| public/private (now Zig shouldn't get those, but IMHO the
| stdlib shouldn't pretend that those language features
| exist).
|
| As a sister comment says, Zig is a great programming
| language, but the stdlib needs some sort of basic and
| consistent design philosophy which matches the language
| capabilities.
|
| Tbf though, C gets around this problem by simply not
| providing a useful stdlib and delegating all the tricky
| design questions to library authors ;)
| smj-edison wrote:
| Honestly getting to mess with internal implementation
| details is my favorite part of using Zig's standard
| library. I'm working on a multithreaded interpreter right
| now, where each object has a unique index, so I need
| effectively a 4GB array to store them all. It's generally
| considered rude to allocate that all at once, so I'm
| using 4GB of virtual memory. I can literally just swap
| out std.MultiArrayList's backing array with vmem, set its
| capacity, and use all of its features, except now with
| virtual memory[1].
|
| [1] https://github.com/smj-
| edison/zicl/blob/bacb08153305d5ba97fc...
| flohofwoe wrote:
| I would at least like to see some sort of naming
| convention for implementation-private properties, maybe
| like:
|
|     const Bla = struct {
|         // public access intended
|         bla: i32,
|         blub: i32,
|         // here be dragons
|         _private: struct {
|             x: i32,
|             y: i32,
|         },
|     };
|
| ...that way you can still access and mess up those
| 'private' items, but at least it's clear now which of the
| struct items are part of the 'public API contract' and
| which are considered internal implementation details
| which may change on a whim.
| smj-edison wrote:
| Aah, that's a good point. I suppose you could argue that
| the doc notes indicate whether something is meant to be
| accessed directly, like when ArrayList mentions "This
| field is intended to be accessed directly.", but that's
| really hard to determine at a glance.
| KallDrexx wrote:
| This is my main frustration with the push back against
| visibility modifiers. It's treated as an all or nothing
| approach, that any support for visibility modifiers locks
| anyone out from touching those fields.
|
| It could just be a compiler error/warning that has to be
| explicitly opted into to touch those fields. This allows
| you to say "I know this is normally a footgun to modify
| these fields, and I might be violating an invariant
| condition, but I know what I'm doing".
| bsder wrote:
| "Visibility" _invariably_ bites you in the ass at some
| point. See: Rust, newtypes and the Orphan rule, for
| example.
|
| As such, I'm happy to not have visibility modifiers at
| all.
|
| I do absolutely agree that "std" needs a good design pass
| to make it consistent--groveling in ".len" fields instead
| of ".len()" functions is definitely a bad idea. However,
| the nice part about Zig is that replacement _doesn't
| need extra compiler support_. Anyone can do that pass and
| everyone can then import and use it.
|
| > This allows you to say "I know this is normally a
| footgun to modify these fields, and I might be violating
| an invariant condition, but I know what I'm doing".
|
| Welcome to "Zig will not have warnings." That's why it's
| all or nothing.
|
| It's the single thing that absolutely grinds my gears
| about Zig. However, it's also probably the single thing
| that can be relaxed at a later date and not completely
| change the language. Consequently, I'm willing to put up
| with it given the rest of the goodness I get.
| pixelpoet wrote:
| How do you cope with the lack of vector operators? I am
| basically only writing vector code all day and night, and
| the lack of infix operators for vectors is just as
| unacceptable as not having them for normal ints and
| floats would be.
|
| Nobody is asking for Pizza * Weather && (Lizard + Sleep);
| that strawman argument to justify ints and floats as the
| only algebraic types is infuriating :(
|
| I'd love to read a blog post about your Zig setup and
| workflow BTW.
| lukaslalinsky wrote:
| I really like Zig as a language, but I think the standard
| library is a huge weakness. I find the design extremely
| inconsistent. It feels like a collection of small packages,
| not a coherent unit. And I personally think this std.Io
| interface is on a similar path. The idea of abstracting our
| all I/O calls is great, but the actual interface is sketchy,
| in my opinion.
| throwawaymaths wrote:
| https://github.com/ziglang/zig/issues/1629
| pron wrote:
| I don't think any language that's not well-established, let
| alone one that isn't stabilised yet, would have a _strong_
| selling point if what you're looking for right now is to write
| production code that you'll maintain for a decade or more to
| come (I mean, companies do use languages that aren't as
| established as C in production apps, including Zig, but that's
| certainly not for everyone).
|
| But if you're open to learning languages for
| tinkering/education purposes, I would say that Zig has several
| significant "intrinsic" advantages compared to C.
|
| * It's _much_ more expressive (it's at least as expressive as
| C++), while still being a very simple language (you can learn
| it fully in a few days).
|
| * Its cross-compilation tooling is something of a marvel.
|
| * It offers not only spatial memory safety, but protection from
| other kinds of undefined behaviour, in the form of things like
| tagged unions.
| AndyKelley wrote:
| text version: https://andrewkelley.me/post/zig-new-async-io-text-
| version.h...
| dang wrote:
| Thanks - we'll switch the URL to that above and re-up the
| thread.
| pragmatic wrote:
| https://andrewkelley.me/post/zig-new-async-io-text-version.h...
|
| Text version.
|
| The desynced video makes the video a bit painful to watch.
| dang wrote:
| Yup, we'll use the text version and keep the video link in the
| toptext.
| PaulHoule wrote:
| It seems to me that async io struggles whenever people try it.
|
| For instance it is where Rust goes to die because it subverts the
| stack-based paradigm behind ownership. I used to find it was fun
| to write little applications like web servers in aio Python,
| particularly if message queues and websockets were involved, but
| for ordinary work you're better off using gunicorn. The trouble
| is that conventional async I/O solutions are all single-threaded,
| and in an age where it's common to have a 16-core machine on your
| desktop that makes no sense. It would be like starting a chess game
| by dumping out all your pieces except for your King.
|
| Unfashionable languages like Java and .NET that have quality
| multithreaded runtimes are the way to go because they provide a
| single paradigm to manage both concurrency and parallelism.
| erichocean wrote:
| Project Loom makes Java in particular really nice: virtual
| threads can "block" without blocking the underlying OS thread.
| No callbacks at all, and you can even use Structured
| Concurrency to implement all sorts of Go- and Erlang-like
| patterns.
|
| (I use it from Clojure, where it pairs great with the "thread"
| version of core.async (i.e. Go-style) channels.)
| vlovich123 wrote:
| > Unfashionable languages like Java and .NET that have quality
| multithreaded runtimes are the way to go because they provide a
| single paradigm to manage both concurrency and parallelism.
|
| At the cost of not being able to actually provide the same
| throughput, latency, or memory usage that lower-level languages
| that don't enforce the same performance-pessimizing
| abstractions on everything can. Engineering is about tradeoffs,
| but pretending like Java or .NET have solved this is naive.
| brandonbloom wrote:
| If you watched the video closely, you'll have noticed that this
| design parameterizes the code by an `io` interface, which
| enables pluggable implementations. Correctly written code in
| this style can work transparently with evented or threaded
| runtimes.
| PaulHoule wrote:
| Really? Ordinary synchronous code calls an I/O routine which
| returns. Asynchronous code calls an I/O routine and then gets
| called back. That's a fundamental difference and you can only
| square it by making the synchronous code look like
| asynchronous code (callback gets called right away) or
| asynchronous code look like synchronous code (something like
| async python, which breaks up a subroutine into multiple
| subroutines and has an event loop manage who calls whom).
| andrewmcwatters wrote:
| I haven't actually seen it in the wild yet, only heard it talked
| about in technical talks from engineers at different studios, but
| I'm interested in designs in which there isn't a traditional main
| thread anymore.
|
| Instead, everything is a job, and even what is considered the
| main thread is no longer an orchestration thread, but just
| another worker after some nominal setup that scaffolds
| enough threads, usually minus one, to all serve as lockless,
| work-stealing worker threads.
|
| Conventional async programming relies too heavily on a critical
| main thread.
|
| I think it's been so successful though, that unfortunately
| we'll be stuck with it for much longer than some of us will
| want.
|
| It reminds me of how many years of inefficient programming we
| have been stuck with because cache-unfriendly traditional
| object-oriented programming was so successful.
| hinkley wrote:
| I know that it depends on how much you disentangle your network
| code from your business logic. The question is the degree. Is
| it enough, or does it just dull the pain?
|
| If you give your business logic the complete message or send it
| a stream, then the flow of ownership stays much cleaner. And
| the unit tests stay substantially easier to write and more
| importantly, to maintain.
|
| I know too many devs who don't see when they bias their
| decisions to avoid making changes that will lead to conflict
| with bad unit tests and declare that our testing strategy is
| Just Fine. It's easier to show than to debate, but it still
| takes an open mind to accept the demonstration.
| MRson wrote:
| I don't understand, why is allocation not part of IO? This seems
| like effect oriented programming with a kinda strange grouping:
| allocation, and the rest (io).
| andyferris wrote:
| Most effect-tracking languages have a GC and
| allocations/deallocations are handled implicitly.
|
| What we normally call "pure" code requires an
| allocator but not the `io` object. Code which accepts neither
| is also allocation-free - something which is a bit of a
| challenge to enforce in many languages, but just falls out here
| (from the lack of closures or globally scoped
| allocating/effectful functions - though I'm not sure whether
| `io` or an allocator is now required to call arbitrary extern
| (i.e. C) functions or not, which you'd need for a 100%
| "sandbox").
| thefaux wrote:
| I find the direction of zig confusing. Is it supposed to be a
| simple language or a complex one? Low level or high level? This
| feature is to me a strange mix of high and low level
| functionality and quite complex.
|
| The io interface looks like OO but violates the Liskov
| substitution principle. For me, this does not solve the function
| color problem, but instead hides it. Every function with an IO
| interface cannot be reasoned about locally because of unexpected
| interactions with the io parameter input. This is particularly
| nasty when IO objects are shared across library boundaries. I now
| need to understand how the library internally manages io if I
| share that object with my internal code. Code that worked in one
| context may surprisingly not work in another context. As a
| library author, how do I handle an io object that doesn't behave
| as I expect?
|
| Trying to solve this problem at the language level fundamentally
| feels like a mistake to me because you can't anticipate in
| advance all of the potential use cases for something as broad as
| io. That's not to say that this direction shouldn't be explored,
| but if it were my project, I would separate this into another
| package that I would not call standard.
| mordnis wrote:
| Couldn't the same thing be said about functions that accept
| allocators?
| metaltyphoon wrote:
| I don't think so, because the result of calling an allocator
| is either you get memory or you don't, while the IO here will
| be "it depends".
| nrds wrote:
| TL;DR. Zig now has an effect system with multiple handlers for
| alloc and io effects. But the designer doesn't know anything
| about effect systems and it's not algebraic, probably. Will be
| interesting to see how this develops.
| comex wrote:
| It's worth noting that this is _not_ async/await in the sense of
| essentially every other language that uses those terms.
|
| In other languages, when the compiler sees an async function, it
| compiles it into a state machine or 'coroutine', where the
| function can suspend itself at designated points marked with
| `await`, and be resumed later.
|
| In Zig, the compiler used to support coroutines but this was
| removed. In the new design, `async` and `await` are just
| functions. In the threaded implementation used in the demo,
| `await` just blocks the thread until the operation is done.
|
| To be fair, the bottom of the post explains that there are two
| other Io implementations being planned.
|
| One of them is "stackless coroutines", which would be similar to
| traditional async/await. However, from the discussion so far this
| seems a bit like vaporware. As discussed in [1], andrewrk
| explicitly rejected the idea of just (re-)adding normal
| async/await keywords, and instead wants a different design, as
| tracked in issue 23446. But in issue 23446 there seems to be zero
| agreement on how the feature would work, how it would improve on
| traditional async/await, or how it would avoid function coloring.
|
| The other implementation being planned is "stackful coroutines".
| From what I can tell, this has more of a plan and is more
| promising, but there are significant unknowns.
|
| The basis of the design is similar to green threads or fibers.
| Low-level code generation would be identical to normal
| synchronous code, with no state machine transform. Instead, a
| library would implement suspension by swapping out the native
| register state and stack, just like the OS kernel does when
| switching between OS threads. By itself, this has been
| implemented many times before, in libraries for C and in the
| runtimes of languages like Go. But it has the key limitation that
| you don't know how much stack to allocate. If you allocate too
| much stack in advance, you end up being not much cheaper than OS
| threads; but if you allocate too little stack, you can easily hit
| stack overflow. Go addresses this by allocating chunks of stack
| on demand, but that still imposes a cost and a dependency on
| dynamic allocation.
|
| andrewrk proposes [2] to instead have the compiler calculate the
| maximum amount of native stack needed by a function and all its
| callees. In this case, the stack could be sized exactly to fit.
| In some sense this is similar to async in Rust, where the
| compiler calculates the size of async function objects based on
| the amount of state the function and its callees need to store
| during suspension. But the Zig approach would apply to all
| function calls rather than treating async as a separate case. As
| a result, the benefits would extend beyond memory usage in async
| code. The compiler would statically guarantee the absence of
| stack overflow, which benefits reliability in all code that uses
| the feature. This would be particularly useful in embedded where,
| typically, reliability demands are high and memory available is
| low. Right now in embedded, people sometimes use a GCC feature
| ("-fstack-usage") that does a similar calculation, but it's messy
| enough that people often don't bother. So it would be cool to
| have this as a first-class feature in Zig.
|
| But.
|
| There's a reason that stack usage calculators are uncommon. If
| you want to statically bound stack usage:
|
| First, you have to ban recursion, or else add some kind of
| language mechanism for tracking how many times a function can
| possibly recurse. Banning recursion is common in embedded code
| but would be rather annoying for most codebases. Tracking
| recursion is definitely possible, as shown by proof languages
| like Agda or Coq that make you prove termination of recursive
| functions - but those languages have a lot of tools that 'normal'
| languages don't, so it's unclear how ergonomic such a feature
| could be in Zig. The issue [2] doesn't have much concrete
| discussion on how it would work.
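|
| To make the recursion problem concrete (a trivial illustration, not
| taken from the proposal):
|
|     // The worst-case stack depth of `count` depends on a runtime value,
|     // so no static analysis can bound its stack usage without either
|     // banning this pattern or requiring an annotated recursion limit.
|     fn count(n: u32) u32 {
|         if (n == 0) return 0;
|         return 1 + count(n - 1);
|     }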
|
| Second, you have to ban dynamic calls (i.e. calls to function
| pointers), because if you don't know what function you're
| calling, you don't know how much stack it will use. This has been
| the subject of more concrete design in [3] which proposes a
| "restricted" function pointer type that can only refer to a
| statically known set of functions. However, it remains to be seen
| how ergonomic and composable this will be.
|
| Zooming back out:
|
| Personally, I'm glad that Zig is willing to experiment with these
| things rather than just copying the same async/await feature as
| every other language. There is real untapped potential out there.
| On the other hand, it seems a little early to claim victory, when
| all that works today is a thread-based I/O library that happens
| to have "async" and "await" in its function names.
|
| Heck, it seems early to finalize an I/O library design if you
| don't even know how the fancy high-performance implementations
| will work. Though to be fair, many applications will get away
| just fine with threaded I/O, and it's nice to see a modern I/O
| library design that embraces that as a serious option.
|
| [1]
| https://github.com/ziglang/zig/issues/6025#issuecomment-3072...
|
| [2] https://github.com/ziglang/zig/issues/157
|
| [3] https://github.com/ziglang/zig/issues/23367
| SkiFire13 wrote:
| > tracking how many times a function can possibly recurse.
|
| > Tracking recursion is definitely possible, as shown by proof
| languages like Agda or Coq that make you prove termination of
| recursive functions
|
| Proof languages don't really track how many times a function
| can possibly recurse; they only care that it will eventually
| terminate. The number of recursive steps can even easily depend
| on the inputs, making it unknown at the moment a function is
| defined.
| pyrolistical wrote:
| Oh boy. Example 7 is a bit of a mindfuck. You get the returned
| string in both await and cancel.
|
| Feels like this violates Zig's "no hidden control flow" principle.
| I kinda see how it doesn't. But it sure feels like a violation.
| But I also don't see how they can retain the spirit of the
| principle with async code.
| rdtsc wrote:
| > Feels like this violates zig "no hidden control flow"
| principle.
|
| A hot take here is that the whole async thing is a hidden
| control flow. Some people noticed that ever since plain
| callbacks were touted as a "webscale" way to do concurrency.
| The sequence of callbacks being executed or canceled forms a
| hidden, implicit control flow running concurrently with the
| main control logic. It can be harder to debug and manage than
| threads.
|
| But that said, unless Zig adds a runtime with its own
| scheduler and turns into a bytecode VM, there is not much it
| can do. Coroutines and green threads have been done before in
| C and C-like languages, but I'm not sure how easily they would fit
| with Zig and its philosophy.
| andyferris wrote:
| Personally I find this really cool.
|
| One thing I like about the design is it locks in some of the
| "platforms" concepts seen in other languages (e.g. Roc), but in a
| way that goes with Zig's "no hidden control flow" mantra.
|
| The downstream effect is that it will be normal to create your
| own non-posix analogue of `io` for wherever you want code to hook
| into. Writing a game engine? Let users interact with a set of
| effectful functions you inject into their scripts.
|
| As a "platform" writer (like the game engine), essentially you
| get to create a sandbox. The missing piece may be controlling
| access to calling arbitrary extern C functions - possibly that
| capability would need to be provided by `io` to create fool-proof
| guarantees about what some code you call does. (The debug printing
| is another uncontrolled effect.)
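|
| A hypothetical sketch of the shape that could take (not an actual
| Zig API, just the general pattern): the engine hands scripts a
| narrow table of effectful functions instead of the real `io`, so a
| script can only do what the engine explicitly provides.
|
|     const ScriptIo = struct {
|         ctx: *anyopaque,
|         log: *const fn (ctx: *anyopaque, msg: []const u8) void,
|         loadAsset: *const fn (ctx: *anyopaque, name: []const u8) anyerror![]const u8,
|     };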
___________________________________________________________________
(page generated 2025-10-30 23:00 UTC)