[HN Gopher] Asynchrony is not concurrency
       ___________________________________________________________________
        
       Asynchrony is not concurrency
        
       Author : kristoff_it
       Score  : 115 points
       Date   : 2025-07-18 19:21 UTC (3 hours ago)
        
 (HTM) web link (kristoff.it)
 (TXT) w3m dump (kristoff.it)
        
       | threatofrain wrote:
       | IMO the author is mixed up on his definitions for concurrency.
       | 
       | https://lamport.azurewebsites.net/pubs/time-clocks.pdf
        
         | tines wrote:
         | Can you explain more instead of linking a paper? I felt like
         | the definitions were alright.
         | 
         | > Asynchrony: the possibility for tasks to run out of order and
         | still be correct.
         | 
         | > Concurrency: the ability of a system to progress multiple
         | tasks at a time, be it via parallelism or task switching.
         | 
         | > Parallelism: the ability of a system to execute more than one
         | task simultaneously at the physical level.
        
           | threatofrain wrote:
           | Concurrency is the property of a program to be divided into
           | partially ordered or completely unordered units of execution.
           | It does not describe how you actually end up executing the
           | program in the end, such as if you wish to exploit these
           | properties for parallel execution or task switching. Or maybe
           | you're running on a single thread and not doing any task
           | switching or parallelism.
           | 
           | For more I'd look up Rob Pike's discussions for Go
           | concurrency.
        
             | tines wrote:
             | The article understands this.
        
               | threatofrain wrote:
               | > Concurrency: the ability of a system to progress
               | multiple tasks at a time, be it via parallelism or task
               | switching.
               | 
               | Okay, but don't go with this definition.
        
               | michaelsbradley wrote:
                | Is the hang-up on _"at a time"_? What if that were
                | changed to something like _"in a given amount of time"_?
               | 
               | For single threaded programs, whether it is JS's event
               | loop, or Racket's cooperative threads, or something
                | similar, if Δt is small enough then only one task will be
               | seen to progress.
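                | 
                | A minimal sketch of that reading in Python's asyncio
                | (single-threaded): at any instant only one task runs,
                | but over a large enough window both progress.
                | 
                |     import asyncio
                | 
                |     async def worker(name):
                |         for i in range(3):
                |             print(name, i)          # one task at any instant
                |             await asyncio.sleep(0)  # yield to the event loop
                | 
                |     async def main():
                |         await asyncio.gather(worker("A"), worker("B"))
                | 
                |     asyncio.run(main())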
        
           | andsoitis wrote:
           | > Asynchrony: the possibility for tasks to run out of order
           | and still be correct.
           | 
           | Asynchrony is when things don't happen at the same time or in
            | the same phase, i.e. it is the opposite of synchronous. It can
           | describe a lack of coordination or concurrence in time, often
           | with one event or process occurring independently of another.
           | 
            | The correctness statement is not helpful. When things happen
           | asynchronously, you do not have guarantees about order, which
           | may be relevant to "correctness of your program".
        
             | w10-1 wrote:
             | > The correctness statement is not helpful
             | 
             | But... that's everything, and why it's included.
             | 
             | Undefined behavior from asynchronous computing is not worth
             | study or investment, except to avoid it.
             | 
             | Virtually all of the effort for the last few decades (from
             | super-scalar processors through map/reduce algorithms and
             | Nvidia fabrics) involves enabling non-SSE operations that
             | are correct.
             | 
             | So yes, as an abstract term outside the context of
             | computing today, asynchrony does not guarantee correctness
             | - that's the difficulty. But the only asynchronous
             | computing we care about offers correctness guarantees of
             | some sort (often a new type, e.g., "eventually
             | consistent").
        
           | Lichtso wrote:
           | Concurrency is parallelism and/or asynchrony, simply the
           | superset of the other two.
           | 
           | Asynchrony means things happen out of order, interleaved,
           | interrupted, preempted, etc. but could still be just one
           | thing at a time sequentially.
           | 
            | Parallelism means the physical time spent is less than the
           | sum of the total time spent because things happen
           | simultaneously.
        
             | jrvieira wrote:
             | careful: in many programming contexts parallelism and
             | concurrency are exclusive concepts, and sometimes under the
             | umbrella of async, which is a term that applies to a
             | different domain.
             | 
             | in other contexts these words don't describe disjoint sets
             | of things so it's important to clearly define your terms
             | when talking about software.
        
               | merb wrote:
               | Yeah concurrency is not parallelism.
               | https://go.dev/blog/waza-talk
        
               | Lichtso wrote:
               | That is compatible with my statement: Not all concurrency
               | is parallelism, but all parallelism is concurrency.
        
               | OkayPhysicist wrote:
               | What people mean by "concurrency is not parallelism" is
               | that they are different problems. The concurrency problem
               | is defining an application such that it has parts that
               | are not causally linked with some other parts of the
               | program. The parallelism problem is the logistics of
               | actually running multiple parts of your program at the
               | same time. If I write a well-formed concurrent system, I
               | shouldn't have to know or care if two specific parts of
               | my system are actually being executed in parallel.
               | 
               | In ecosystems with good distributed system stories, what
               | this looks like in practice is that concurrency is your
               | (the application developers') problem, and parallelism is
               | the scheduler designer's problem.
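                | 
                | A sketch of that split in Python (the `work` unit is
                | hypothetical): the concurrent structure is written
                | once; whether it actually runs in parallel depends on
                | which executor the "scheduler" side provides.
                | 
                |     from concurrent.futures import (
                |         ThreadPoolExecutor, ProcessPoolExecutor)
                | 
                |     def work(n):        # a causally independent unit
                |         return n * n
                | 
                |     def run(executor):  # the concurrent structure
                |         with executor as ex:
                |             return list(ex.map(work, range(8)))
                | 
                |     if __name__ == "__main__":
                |         run(ThreadPoolExecutor())   # may just interleave
                |         run(ProcessPoolExecutor())  # may run in parallel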
        
               | Lichtso wrote:
               | yes, some people swap the meaning of concurrency and
               | asynchrony. But, almost all implementations of async use
               | main event loops, global interpreter lock, co-routines
               | etc. and thus at the end of the day only do one thing at
               | a time.
               | 
               | Therefore I think this definition makes the most sense in
               | practical terms. Defining concurrency as the superset is
               | a useful construct because you have to deal with the same
               | issues in both cases. And differentiating asynchrony and
               | parallelism makes sense because it changes the trade-off
               | of latency and energy consumption (if the bandwidth is
               | fixed).
        
             | michaelsbradley wrote:
             | Asynchrony also practically connotes nondeterminism, but a
             | single-threaded concurrent program doesn't _have_ to
             | exhibit nondeterministic behavior.
        
           | jkcxn wrote:
           | Not the OP, but in formal definitions like Communicating
           | Sequential Processes, concurrency means the possibility for
           | tasks to run out of order and still be correct, as long as
           | other synchronisation events happen
        
             | gowld wrote:
              | Concurrency implies asynchrony (two systems potentially
              | doing work at the same time without waiting for each other),
             | but the converse is not true.
             | 
             | A single process can do work in an unordered (asynchronous)
             | way.
        
               | Zambyte wrote:
                | Parallelism implies concurrency but does not imply
                | asynchrony.
        
           | ryandv wrote:
           | They're just different from what Lamport originally proposed.
            | _Asynchrony_ as given is roughly equivalent to Lamport's
            | characterization of distributed systems as _partially
            | ordered_, where some pairs of events can't be said to have
           | occurred before or after one another.
           | 
           | One issue with the definition for concurrency given in the
           | article would seem to be that no concurrent systems can
           | deadlock, since as defined all concurrent systems can
           | progress tasks. Lamport uses the word concurrency for
           | something else: "Two events are concurrent if neither can
           | causally affect the other."
           | 
           | Probably the notion of (a)causality is what the author was
           | alluding to in the "Two files" example: saving two files
           | where order does not matter. If the code had instead been
           | "save file A; read contents of file A;" then, similarly to
           | the client connect/server accept example, the "save"
           | statement and the "read" statement would not be concurrent
           | under Lamport's terminology, as the "save" causally affects
           | the "read."
           | 
           | It's just that the causal relationship between two tasks is a
           | different concept than how those tasks are composed together
           | in a software model, which is a different concept from how
           | those tasks are physically orchestrated on bare metal, and
            | also different from the ordering of events.
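            | 
            | A sketch of that distinction in Python asyncio (file
            | names and helpers hypothetical): the two saves are
            | concurrent in Lamport's sense; the save/read pair is
            | not, because one causally affects the other.
            | 
            |     import asyncio
            | 
            |     async def save(path, data):  # stand-in for real I/O
            |         await asyncio.sleep(0)
            | 
            |     async def read(path):
            |         await asyncio.sleep(0)
            |         return b"contents"
            | 
            |     async def main():
            |         # concurrent: neither save can causally affect
            |         # the other, so any interleaving is correct
            |         await asyncio.gather(save("a", b"x"), save("b", b"y"))
            |         # not concurrent: the read causally depends on
            |         # the save, so their order is fixed
            |         await save("a", b"x")
            |         data = await read("a")
            | 
            |     asyncio.run(main())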
        
             | kazinator wrote:
             | The definition of asynchrony is bad. It's possible for
             | asynchronous requests to guarantee ordering, such that if a
             | thread makes two requests A and B in that order,
             | asynchronously, they will happen in that order.
             | 
             | Asynchrony means that the requesting agent is not blocked
             | while submitting a request in order to wait for the result
             | of that request.
             | 
              | Asynchronous abstractions may provide a synchronous way to
              | wait for the asynchronously submitted result.
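              | 
              | A sketch of that shape in Python's concurrent.futures
              | (the request function is hypothetical): submission
              | doesn't block, result() is the synchronous wait, and a
              | single worker still guarantees submission order.
              | 
              |     from concurrent.futures import ThreadPoolExecutor
              | 
              |     def request(x):   # stand-in for a slow operation
              |         return x + 1
              | 
              |     # one worker: requests are asynchronous to the
              |     # caller, yet still execute in order (A before B)
              |     with ThreadPoolExecutor(max_workers=1) as ex:
              |         fut_a = ex.submit(request, 1)  # not blocked
              |         fut_b = ex.submit(request, 2)  # not blocked
              |         # ... caller does other work here ...
              |         a = fut_a.result()  # synchronous wait
              |         b = fut_b.result()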
        
               | ryandv wrote:
               | > The definition of asynchrony is bad. It's possible for
               | asynchronous requests to guarantee ordering, such that if
               | a thread makes two requests A and B in that order,
               | asynchronously, they will happen in that order.
               | 
               | It's true that it's possible - two async tasks can be
               | bound together in sequence, just as with `Promise.then()`
               | et al.
               | 
                | ... but it's not _necessarily_ the case, hence the
                | _partial_ order, and the "_possibility_ for tasks to
                | run out of order".
        
               | kazinator wrote:
               | [delayed]
        
           | amelius wrote:
           | > Asynchrony: the possibility for tasks to run out of order
           | and still be correct.
           | 
           | Can't we just call that "independent"?
        
             | skydhash wrote:
             | Not really. There may be some causal relations.
        
               | amelius wrote:
               | Can you give an example?
        
           | ninetyninenine wrote:
            | Doesn't progressing multiple tasks at the same time make
            | it simultaneous?
           | 
           | I think there needs to be a stricter definition here.
           | 
           | Concurrency is the ability of a system to chop a task into
           | many tiny tasks. A side effect of this is that if the system
           | chops all tasks into tiny tasks and runs them all in a sort
           | of shuffled way it looks like parallelism.
        
         | sapiogram wrote:
         | This is why I've completely stopped using the term, literally
         | everyone I talk to seems to have a different understanding. It
         | no longer serves any purpose for communication.
        
         | carodgers wrote:
         | The author is aware that definitions exist for the terms he
         | uses in his blog post. He is proposing revised definitions. As
         | long as he is precise with his new definitions, this is fine.
         | It is left to the reader to decide whether to adopt them.
        
           | WhitneyLand wrote:
            | He's repurposing asynchrony in a way that differs from how
            | most literature and many developers use it, and that shift is
           | doing rhetorical work to justify a particular Zig API split.
           | 
           | No thanks.
        
       | ioasuncvinvaer wrote:
       | Is there anything new in this article?
        
         | butterisgood wrote:
         | Perhaps not, but sometimes the description from a different
         | angle helps somebody understand the concepts better.
         | 
         | I don't know how many "monad tutorials" I had to read before it
         | all clicked, and whether it ever fully clicked!
        
         | gowld wrote:
         | Most gen ed articles on HN are not new ideas, just articles
         | that could be pages from a textbook.
        
       | ltbarcly3 wrote:
       | The argument about concurrency != parallelism mentioned in this
        | article as being "not useful" is often quoted and rarely useful
        | or informative, and it also fails to model actual systems with
       | enough fidelity to even be true in practice.
       | 
       | Example: python allows concurrency but not parallelism. Well not
       | really though, because there are lots of examples of parallelism
        | in python. Numpy both releases the GIL and internally uses
        | OpenMP and other strategies to parallelize work. There are a thousand
       | other examples, far too many nuances and examples to cover here,
       | which is my point.
       | 
       | Example: gambit/mit-scheme allows parallelism via parallel
        | execution. Well, kind of, but really it's more like python's
        | multiprocessing library pooling where it forks and then marshals the
       | results back.
       | 
       | Besides this, often parallel execution is just a way to manage
       | concurrent calls. Using threads to do http requests is a simple
        | example: while the threads are able to execute in parallel
        | (depending on a lot of details), they don't; they spend almost
       | 100% of their time blocking on some socket.read() call. So is
       | this parallelism or concurrency? It's what it is, it's threads
       | mostly blocking on system calls, parallelism vs concurrency gives
       | literally no insights or information here because it's a
       | pointless distinction in practice.
       | 
       | What about using async calls to execute processes? Is that
       | concurrency or parallelism? It's using concurrency to allow
       | parallel work to be done. Again, it's both but not really and you
       | just need to talk about it directly and not try to simplify it
       | via some broken dichotomy that isn't even a dichotomy.
       | 
       | You really have to get into more details here, concurrency vs
       | parallelism is the wrong way to think about it, doesn't cover the
       | things that are actually important in an implementation, and is
       | generally quoted by people who are trying to avoid details or
       | seem smart in some online debate rather than genuinely problem
       | solving.
        
         | throwawaymaths wrote:
         | edit: fixed by the author.
        
           | ltbarcly3 wrote:
           | yes I'm agreeing with the article completely
        
             | throwawaymaths wrote:
             | maybe change 'this' to a non-pronoun? like "rob pikes
             | argument"
        
               | ltbarcly3 wrote:
               | updated, thanks for the feedback
        
               | throwawaymaths wrote:
               | thanks for clarifying!
        
         | chowells wrote:
         | The difference is quite useful and informative. In fact, most
         | places don't seem to state it strongly enough: Concurrency is a
         | programming model. Parallelism is an execution model.
         | 
         | Concurrency is writing code with the appearance of multiple
         | linear threads that can be interleaved. Notably, it's about
         | _writing code_. Any concurrent system could be written as a
          | state machine tracking everything at once. But that's really
         | hard, so we define models that allow single-purpose chunks of
         | linear code to interleave and then allow the language,
         | libraries, and operating system to handle the details. Yes,
         | even the operating system. How do you think multitasking worked
         | before multi-core CPUs? The kernel had a fancy state machine
         | tracking execution of multiple threads that were allowed to
         | interleave. (It still does, really. Adding multiple cores only
         | made it more complicated.)
         | 
         | Parallelism is running code on multiple execution units. That
          | is _execution_. It doesn't matter how it was written; it
         | matters how it executes. If what you're doing can make use of
         | multiple execution units, it can be parallel.
         | 
         | Code can be concurrent without being parallel (see async/await
         | in javascript). Code can be parallel without being concurrent
         | (see data-parallel array programming). Code can be both, and
         | often is intended to be. That's because they're describing
         | entirely different things. There's no rule stating code must be
         | one or the other.
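          | 
          | Concrete versions of those two corners in Python (a
          | sketch; whether the matrix multiply actually fans out
          | across cores depends on the BLAS build):
          | 
          |     import asyncio
          |     import numpy as np
          | 
          |     # concurrent, not parallel: two interleaved linear
          |     # flows on one OS thread
          |     async def flow(name):
          |         await asyncio.sleep(0.1)
          |         return name
          | 
          |     async def main():
          |         return await asyncio.gather(flow("a"), flow("b"))
          | 
          |     asyncio.run(main())
          | 
          |     # parallel, not concurrent: one linear program; the
          |     # library may use multiple cores internally
          |     a = np.ones((1000, 1000))
          |     b = a @ a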
        
           | ltbarcly3 wrote:
           | When you define some concepts, those definitions and concepts
           | should help you better understand and simplify the
           | descriptions of things. That's the point of definitions and
            | terminology. You are not achieving this goal - quite the
            | opposite, in fact: your description is confusing and would
           | never actually be useful in understanding, debugging, or
           | writing software.
           | 
           | Stated another way: if we just didn't talk about concurrent
           | vs parallel we would have exactly the same level of
           | understanding of the actual details of what code is doing,
           | and we would have exactly the same level of understanding
           | about the theory of what is going on. It's trying to impose
           | two categories that just don't cleanly line up with any real
           | system, and it's trying to create definitions that just
           | aren't natural in any real system.
           | 
           | Parallel vs concurrent is a bad and useless thing to talk
           | about. It's a waste of time. It's much more useful to talk
           | about what operations in a system can overlap each other in
           | time and which operations cannot overlap each other in time.
           | The ability to overlap in time might be due to technical
           | limitations (python GIL), system limitations (single core
           | processor) or it might be intentional (explicit locking), but
           | that is the actual thing you need to understand, and parallel
           | vs concurrent just gives absolutely no information or
           | insights whatsoever.
           | 
           | Here's how I know I'm right about this: Take any actual
           | existing software or programming language or library or
           | whatever, and describe it as parallel or concurrent, and then
            | give the extra details about it that aren't captured in
           | "parallel" and "concurrent". Then go back and remove any
           | mention of "parallel" and "concurrent" and you will see that
           | everything you need to know is still there, removing those
           | terms didn't actually remove any information content.
        
             | chowells wrote:
             | Addition vs multiplication is a bad and useless thing to
             | talk about. It's a waste of time. It's much more useful to
             | talk about what number you get at the end. You might get
             | that number from adding once, or twice, or even more times,
             | but that final number is the actual thing you need to
             | understand and "addition vs multiplication" just gives
             | absolutely no information or insights whatsoever.
             | 
             | They're just different names for different things. Not
             | caring that they're different things makes communication
             | difficult. Why do that to people you intend to communicate
             | with?
        
       | Retr0id wrote:
       | I don't get it - the "problem" with the client/server example in
       | particular (which seems pivotal in the explanation). But I am
       | also unfamiliar with zig, maybe that's a prerequisite. (I am
       | however familiar with async, concurrency, and parallelism)
        
         | koakuma-chan wrote:
         | Example 1
         | 
         | You can write to one file, wait, and then write to the second
         | file.
         | 
         | Concurrency not required.
         | 
         | Example 2
         | 
         | You can NOT do Server.accept, wait, and then do Client.connect,
         | because Server.accept would block forever.
         | 
         | Concurrency required.
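          | 
          | A runnable Python asyncio analogue (a sketch; queues
          | stand in for the sockets): awaiting the server alone
          | would block forever, so the two tasks must be in flight
          | together.
          | 
          |     import asyncio
          | 
          |     async def server(q_in, q_out):
          |         msg = await q_in.get()   # waits for the client
          |         await q_out.put("ack:" + msg)
          | 
          |     async def client(q_in, q_out):
          |         await q_out.put("hello")
          |         return await q_in.get()  # waits for the server
          | 
          |     async def main():
          |         a, b = asyncio.Queue(), asyncio.Queue()
          |         # sequential await would deadlock; concurrency
          |         # is required here
          |         await asyncio.gather(server(a, b), client(b, a))
          | 
          |     asyncio.run(main())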
        
           | tines wrote:
           | Oh, I see. The article is saying that async is required. I
           | thought it was saying that parallelism is required. The way
           | it's written makes it seem like there's a problem with the
           | code sample, not that the code sample is correct.
        
             | Retr0id wrote:
             | The article later says (about the server/client example)
             | 
             | > Unfortunately this code doesn't express this requirement
             | [of concurrency], which is why I called it a programming
             | error
             | 
             | I gather that this is a quirk of the way async works in
             | zig, because it would be correct in all the async runtimes
             | I'm familiar with (e.g. python, js, golang).
             | 
             | My existing mental model is that "async" is just a
             | syntactic tool to express concurrent programs. I think I'll
             | have to learn more about how async works in zig.
        
               | sunshowers wrote:
               | I think a key distinction is that in many application-
               | level languages, each thing you await exists autonomously
               | and keeps doing things in the background whether you
               | await it or not. In system-level languages like Rust (and
               | presumably Zig) the things you await are generally
               | passive, and only make forward progress if the caller
               | awaits them.
               | 
               | This is an artifact of wanting to write async code in
               | environments where "threads" and "malloc" aren't
               | meaningful concepts.
               | 
               | Rust does have a notion of autonomous existence: tasks.
        
               | Retr0id wrote:
               | Thanks, this clears things up for me.
               | 
               | I suppose I conflated "asynchrony" (as defined in the
               | article) and "async" as a syntax feature in languages I'm
               | familiar with.
        
               | m11a wrote:
               | I think that notion is very specific to Rust's design.
               | 
               | Golang for example doesn't have that trait, where the
               | user (or their runtime) must drive a future towards
               | completion by polling.
        
               | sunshowers wrote:
               | Right, but Go is an application-level language and
               | doesn't target environments where threads aren't a
               | meaningful concept. It's more an artifact of wanting to
               | target embedded environments than something specific to
               | Rust.
        
               | jayd16 wrote:
               | Thank you so much for the added context. This explains
               | the article very well.
        
           | jayd16 wrote:
           | But why is this a novel concept? The idea of starvation is
            | well known and you don't need parallelism for it to affect
           | you already. What does zig actually do to solve this?
           | 
           | Many other languages could already use async/await in a
           | single threaded context with an extremely dumb scheduler that
           | never switches but no one wants that.
           | 
           | I'm trying to understand but I need it spelled out why this
           | is interesting.
        
             | shikon7 wrote:
             | The novel concept is to make it explicit where a non-
             | concurrent scheduler is enough (async), and where it is not
             | (async-concurrent). As a benefit, you can call async
             | functions directly in a synchronous context, which is not
             | possible for the usual async/await, therefore avoiding the
             | need to have both sync and async versions of every
             | function.
        
       | danaugrs wrote:
       | Excellent article. I'm looking forward to Zig's upcoming async
       | I/O.
        
       | butterisgood wrote:
       | It's kind of true...
       | 
       | I can do a lot of things asynchronously. Like, I'm running the
       | dishwasher AND the washing machine for laundry at the same time.
       | I consider those things not occurring at "the same time" as
       | they're independent of one another. If I stood and watched one
       | finish before starting the other, they'd be a kind of synchronous
       | situation.
       | 
       | But, I also "don't care". I think of things being organized
       | concurrently by the fact that I've got an outermost orchestration
       | of asynchronous tasks. There's a kind of governance of
       | independent processes, and my outermost thread is what turns the
       | asynchronous into the concurrent.
       | 
       | Put another way. I don't give a hoot what's going on with your
       | appliances in your house. In a sense they're not synchronized
       | with my schedule, so they're asynchronous, but not so much
       | "concurrent".
       | 
       | So I think of "concurrency" as "organized asynchronous
       | processes".
       | 
       | Does that make sense?
       | 
       | Ah, also neither asynchronous nor concurrent mean they're
       | happening at the same time... That's parallelism, and not the
       | same thing as either one.
       | 
       | Ok, now I'll read the article lol
        
         | didibus wrote:
          | I think asynchronous, as meaning out-of-sync, implies that
          | there needs to be synchronicity between the two tasks.
          | 
          | In that case, asynchronous just means the state where two or
          | more tasks that should be synchronized in some capacity, for
          | the whole behavior to be as desired, are not properly
          | in-sync; they're out-of-sync.
         | 
          | Then I feel there can be many causes of asynchronous behavior:
          | you can be out-of-sync due to concurrent execution or due to
         | parallel execution, or due to buggy synchronization, etc.
         | 
         | And because of that, I consider asynchronous programming as the
         | mechanisms that one can leverage to synchronize asynchronous
         | behavior.
         | 
          | But I guess you could also think of asynchronous as "doesn't
          | need to be synchronized".
         | 
         | Also haven't read the article yet lol
        
         | criddell wrote:
         | > I consider those things not occurring at "the same time" as
         | they're independent of one another.
         | 
         | What would it take for you to consider them as running at the
         | same time then?
        
           | kristoff_it wrote:
           | That eventually the server and the client are able to
           | connect.
        
       | throwawaymaths wrote:
        | a core problem is that the term async itself is all sorts of
        | terrible: synchronous usually means "happening at the same
        | time", which is _not_ what is happening when you don't use
        | `async`
        | 
        | it's like the whole flammable/inflammable thing
        
       | butterisgood wrote:
       | > Concurrency refers to the ability of a system to execute
       | multiple tasks through simultaneous execution or time-sharing
       | (context switching)
       | 
       | Wikipedia had the wrong idea about microkernels for about a
       | decade too, so ... here we are I guess.
       | 
       | It's not a _wrong_ description but it's incomplete...
       | 
       | Consider something like non-strict evaluation, in a language like
       | Haskell. One can be evaluating thunks from an _infinite_
       | computation, terminate early, and resume something else just due
       | to the evaluation patterns.
       | 
       | That is something that could be simulated via generators with
       | "yield" in other languages, and semantically would be pretty
       | similar.
       | 
       | Also consider continuations in lisp-family languages... or
       | exceptions for error handling.
       | 
       | You have to assume all things could occur simultaneously relative
       | to each other in what "feels like" interrupted control flow to
       | wrangle with it. Concurrency is no different from the outside
       | looking in, and sequencing things.
       | 
       | Is it evaluated in parallel? Who knows... that's a strategy that
       | can be applied to concurrent computation, but it's not required.
       | Nor is "context switching" unless you mean switched control flow.
       | 
       | The article is very good, but if we're going by the "dictionary
       | definition" (something programming environments tend to get only
       | "partially correct" anyway), then I think we're kind of missing
       | the point.
       | 
       | The stuff we call "asynchronous" is usually a subset of
       | asynchronous things in the real world. The stuff we treat as task
       | switching is a single form of concurrency. But we seem to all
       | agree on parallelism!
        
       | raluk wrote:
       | One thing that most languages are lacking is expressing lazy
       | return values. -> await f1() + await f2() and to express this
        | concurrently requires manual handling of futures.
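        | 
        | In Python, for instance, `await f1() + await f2()` runs the
        | two strictly in sequence; getting concurrency means reaching
        | for the futures by hand (a sketch):
        | 
        |     import asyncio
        | 
        |     async def f1(): return 1
        |     async def f2(): return 2
        | 
        |     async def main():
        |         # sequential: f2 starts only after f1 finishes
        |         s = await f1() + await f2()
        |         # concurrent: manual handling of the futures
        |         c = sum(await asyncio.gather(f1(), f2()))
        | 
        |     asyncio.run(main())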
        
         | sedatk wrote:
         | Which languages do have such a thing?
        
           | Twey wrote:
           | I suppose Haskell does, as `(+) <$> f1 <*> f2`.
        
           | steveklabnik wrote:
           | Rust does this, if you don't call await on them. You can then
           | await on the join of both.
        
         | zmj wrote:
         | That's because f2's result could depend on whether f1 has
         | executed.
        
         | jayd16 wrote:
          | you mean like?
          | 
          |     await Join(f1(), f2())
          | 
          | Although more realistically:
          | 
          |     Promise1 = f1();
          |     Promise2 = f2();
          |     await Join(Promise1, Promise2);
          | 
          | But also, futures are the expression of lazy values so I'm not
          | sure what else you'd be asking for.
        
       | kobzol wrote:
       | The new Zig I/O idea seems like a pretty ingenious idea, if you
       | write mostly applications and don't need stackless coroutines. I
       | suspect that writing libraries using this style will be quite
       | error-prone, because library authors will not know whether the
       | provided I/O is single or multi-threaded, whether it uses evented
       | I/O or not... Writing concurrent/async/parallel/whatever code is
       | difficult enough on its own even if you have perfect knowledge of
       | the I/O stack that you're using. Here the library author will be
       | at the mercy of the IO implementation provided from the outside.
       | And since it looks like the IO interface will be a proper kitchen
       | sink, essentially an implementation of a "small OS", it might be
       | very hard to test all the potential interactions and combinations
       | of behavior. I'm not sure if a few async primitives offered by
       | the interface will be enough in practice to deal with all the
       | funny edge cases that you can encounter in practice. To support a
       | wide range of IO implementations, I think that the code would
       | have to be quite defensive and essentially assume the most
       | parallel/concurrent version of IO to be used.
       | 
       | It will IMO also be quite difficult to combine stackless
       | coroutines with this approach, especially if you'd want to avoid
       | needless spawning of the coroutines, because the offered
       | primitives don't seem to allow expressing explicit polling of the
       | coroutines (and even if they did, most people probably wouldn't
       | bother to write code like that, as it would essentially boil down
       | to the code looking like "normal" async/await code, not like Go
       | with implicit yield points). Combined with the dynamic dispatch,
       | it seems like Zig is going a bit higher-level with its language
       | design. Might be a good fit in the end.
       | 
       | It's quite courageous calling this approach "without any
       | compromise" when it has not been tried in the wild yet - you can
       | claim this maybe after 1-2 years of usage in a wider ecosystem.
       | Time will tell :)
        
       | sedatk wrote:
       | Blocking async code is not async. In order for something to
       | execute "out of order", you must have an escape mechanism from
       | that task, and that mechanism essentially dictates a form of
       | concurrency. Async must be concurrent, otherwise it stops being
       | async. It becomes synchronous.
        
         | vitaminCPP wrote:
         | This is exactly what the article is trying to debunk.
        
         | didibus wrote:
          | If you need to do A and then B in that order, but you're doing
          | B and then A, it doesn't matter that you're doing B and then A
          | in a single thread: the operations are out of sync.
         | 
         | So I guess you could define this scenario as asynchronous.
        
           | jayd16 wrote:
           | So wait, is the word they mean by asynchrony actually the
           | word "dependency"?
        
             | Jtsummers wrote:
             | > So wait, is the word they mean by asynchrony actually the
             | word "dependency"?
             | 
             | No, the definition provided for asynchrony is:
             | 
             | >> Asynchrony: the possibility for tasks to run out of
             | order and still be correct.
             | 
              | Which is not dependence, but rather _in_dependence.
             | Asynchronous, in their definition, is concurrent with no
             | need for synchronization or coordination between the tasks.
             | The contrasted example which is still concurrent but not
             | asynchronous is the client and server one, where the order
             | matters (start the server after the client, or terminate
             | the server before the client starts, and it won't work
             | correctly).
        
               | kristoff_it wrote:
               | > The contrasted example which is still concurrent but
               | not asynchronous is the client and server one
               | 
               | Quote from the post where the opposite is stated:
               | 
               | > With these definitions in hand, here's a better
               | description of the two code snippets from before: both
               | scripts express asynchrony, but the second one requires
               | concurrency.
               | 
               | You can start executing Server.accept and Client.connect
               | in whichever order, but both must be running "at the same
               | time" (concurrently, to be precise) after that.
        
               | Jtsummers wrote:
               | Your examples and definitions don't match then.
               | 
               | If asynchrony, as I quoted direct from your article,
               | insists that order doesn't matter then the client and
               | server are not asynchronous. If the client were to
               | execute before the server and fail to connect (the server
               | is not running to accept the connection) then your system
               | has failed, the server will run later and be waiting
               | forever on a client who's already died.
               | 
               | The client/server example is _not_ asynchronous by your
               | own definition, though it is concurrent.
               | 
               | What's needed is a fourth term, synchrony. Tasks which
               | are concurrent (can run in an interleaved fashion) but
               | where order between the tasks matters.
        
               | kristoff_it wrote:
               | > If the client were to execute before the server and
               | fail to connect (the server is not running to accept the
               | connection) then your system has failed, the server will
               | run later and be waiting forever on a client who's
               | already died.
               | 
               | From the article:
               | 
               | > Like before, the order doesn't matter: the client could
               | begin a connection before the server starts accepting
               | (the OS will buffer the client request in the meantime),
               | or the server could start accepting first and wait for a
               | bit before seeing an incoming connection.
               | 
               | When you create a server socket, you need to call
               | `listen` and after that clients can begin connecting. You
               | don't need to have already called `accept`, as explained
               | in the article.
        
               | jayd16 wrote:
               | > Which is not dependence, but rather independence
               | 
               | Alright, well, good enough for me. Dependency tracking
               | implies independency tracking. If that's what this is
               | about I think the term is far more clear.
               | 
               | > where the order matters
               | 
               | I think you misunderstand the example. The article
               | states:
               | 
               | > Like before, *the order doesn't matter:* the client
               | could begin a connection before the server starts
               | accepting (the OS will buffer the client request in the
               | meantime), or the server could start accepting first and
               | wait for a bit before seeing an incoming connection.
               | 
               | The one thing that must happen is that the server is
               | running while the request is open. The server task must
               | start and remain unfinished while the client task runs if
               | the client task is to finish.
        
               | Jtsummers wrote:
               | I'm about to reply to the author because his article is
               | actually confusing as written. He has contradictory
               | definitions relative to his examples.
        
         | nemothekid wrote:
          | Consider:
          | 
          |     readA.await
          |     readB.await
          | 
          | From the perspective of the application programmer, readA
          | "blocks" readB. They aren't concurrent. Now consider:
          | 
          |     join(readA, readB).await
         | 
         | In this example, the two operations are interleaved and the
         | reads happen concurrently. The author makes this distinction
         | and I think it's a useful one, that I imagine most people are
         | familiar with even if there is no name for it.
        
       | tossandthrow wrote:
       | The author does not seem to have made any non-trivial projects
       | with asynchronicity.
       | 
       | All the pitfalls of concurrency are there - in particular when
       | executing non-idempotent functions multiple times before previous
        | executions finish, you need mutexes!
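        | 
        | A classic single-threaded example in Python asyncio (a
        | sketch): without the lock, two interleaved deposits both
        | read the old balance and one update is lost.
        | 
        |     import asyncio
        | 
        |     balance = 0
        |     lock = asyncio.Lock()
        | 
        |     async def deposit(amount):
        |         global balance
        |         async with lock:    # without this, interleaving
        |             read = balance  # between read and write can
        |             await asyncio.sleep(0)   # ... suspension point
        |             balance = read + amount  # lose an update
        | 
        |     async def main():
        |         await asyncio.gather(deposit(10), deposit(10))
        |         print(balance)  # 20 with the lock; 10 without
        | 
        |     asyncio.run(main())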
        
         | ajross wrote:
         | > All the pitfalls of concurrency are there [in async APIs]
         | 
         | This is one of those "in practice, theory and practice are
         | different" situations.
         | 
         | There is nothing in the async world that looks like a parallel
         | race condition. Code runs to completion until it
         | deterministically yields, 100% of the time, even if the
         | location of those yields may be difficult to puzzle out.
         | 
         | And so anyone who's ever had to debug and reason about a
         | parallel race condition is basically laughing at that
         | statement. It's just not the same.
        
           | kibwen wrote:
           | _> Code runs to completion until it deterministically yields_
           | 
           | No, because async can be (quote often is) used to perform
           | I/O, whose time to completion does not need to be
           | deterministic or predictable. Selecting on multiple tasks and
           | proceeding with the one that completes first is an entirely
           | ordinary feature of async programming. And even if you don't
              | _need_ to suffer the additional nondeterminism of your OS's
           | thread scheduler, there's nothing about async that says you
           | can't use threads as part of its implementation.
        
             | ajross wrote:
             | And I repeat, if you think effects from unpredictable I/O
             | completion order constitute equivalent debugging thought
             | landscapes to hardware-parallel races, I can only laugh.
             | 
             | Yes yes, in theory they're the same. That's the joke.
        
       | jayd16 wrote:
       | I kind of think the author simply pulled the concept of yielding
       | execution out of the definition of concurrency and into this new
       | "asynchrony" term. Then they argued that the term is needed
       | because without it the entire concept of concurrency is broken.
       | 
       | Indeed so, but I would argue that concurrency makes little sense
       | without the ability to yield and is therefore intrinsic to it.
          | It's a very important concept, but breaking it out into a new
          | term adds confusion, instead of reducing it.
        
         | LegionMammal978 wrote:
         | I'd count pure one-to-one parallelism as a form of concurrency
         | that doesn't involve any yielding. But otherwise, I agree that
         | all forms of non-parallel concurrency have to be yielding
         | execution at some cadence, even if it's at the instruction
         | level. (E.g., in CUDA, diverging threads in a warp will
         | interleave execution of their instructions, in case one branch
         | tries blocking on the other.)
        
         | kristoff_it wrote:
         | >I kind of think the author simply pulled the concept of
         | yielding execution out of the definition of concurrency and
         | into this new "asynchrony" term.
         | 
         | Quote from the article where the exact opposite is stated:
         | 
         | > (and task switching is - by the definition I gave above - a
         | concept specific to concurrency)
        
         | omgJustTest wrote:
         | Concurrency does not imply yielding...
         | 
        | Synchronous logic does imply some syncing, and yielding could
        | be a way to sync - which is what I expect you mean.
         | 
         | Asynchronous logic is concurrent without sync or yield.
         | 
         | Concurrency and asynchronous logic do not exist - in real form
         | - in von Neumann machines
        
       | kazinator wrote:
       | Asynchrony, in this context, is an abstraction which separates
       | the preparation and submission of a request from the collection
       | of the result.
       | 
       | The abstraction makes it possible to submit multiple requests and
       | only then begin to inquire about their results.
       | 
       | The abstraction allows for, but does not require, a concurrent
       | implementation.
       | 
       | However, the intent behind the abstraction is that there be
       | concurrency. The motivation is to obtain certain benefits which
       | will not be realized without concurrency.
       | 
       | Some asynchronous abstractions cannot be implemented without some
       | concurrency. Suppose the manner by which the requestor is
       | informed about the completion of a request is not a blocking
       | request on a completion queue, but a callback.
       | 
       | Now, yes, a callback can be issued in the context of the
       | requesting thread, so everything is single-threaded. But if the
       | requesting thread holds a non-recursive mutex, that ruse will
       | reveal itself by causing a deadlock.
       | 
       | In other words, we can have an asynchronous request abstraction
       | that positively will not work single threaded;
       | 
       | 1 caller locks a mutex
       | 
       | 2 caller submits request
       | 
       | 3 caller unlocks mutex
       | 
       | 4 completion callback occurs
       | 
       | If step 2 generates a callback in the same thread, then step 3 is
       | never reached.
       | 
       | The implementation must use some minimal concurrency so that it
       | has a thread waiting for 3 while allowing the requestor to reach
       | that step.
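        | 
        | A sketch of that deadlock in Python (the submit API is
        | hypothetical): if the implementation invokes the callback
        | synchronously on the caller's thread, step 3 is never
        | reached. Note that running this really does hang.
        | 
        |     import threading
        | 
        |     mutex = threading.Lock()  # non-recursive
        | 
        |     def completion():
        |         with mutex:           # step 4 needs the mutex
        |             print("done")
        | 
        |     def submit(request, callback):
        |         callback()  # the single-threaded "ruse": callback
        |                     # runs in the requester's context
        | 
        |     with mutex:                    # 1: caller locks
        |         submit("req", completion)  # 2: submit; deadlocks,
        |                                    # so 3 is never reached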
        
       | ang_cire wrote:
       | > Asynchrony is not concurrency
       | 
       | This is what I tell my boss when I miss standups.
        
       | dvt wrote:
       | "Asynchrony" is a very bad word for this and we already have a
       | very well-defined mathematical one: commutativity. Some
       | operations are commutative (order does not matter: addition,
       | multiplication, etc.), while others are non-commutative (order
        | does matter: subtraction, division, etc.).
        | 
        |     try io.asyncConcurrent(Server.accept, .{server, io});
        |     io.async(Client.connect, .{client, io});
       | 
       | Usually, ordering of operations in code is indicated by the line
       | number (first line happens before the second line, and so on),
       | but I understand that this might fly out the window in async
       | code. So, my gut tells me this would be better achieved with the
       | (shudder) `.then(...)` paradigm. It sucks, but better the devil
       | you know than the devil you don't.
       | 
       | As written, `asyncConcurrent(...)` is confusing as shit, and
       | unless you memorize this blog post, you'll have no idea what this
       | code means. I get that Zig (like Rust, which I really like fwiw)
       | is trying all kinds of new hipster things, but half the time they
       | just end up being unintuitive and confusing. Either implement
       | (async-based) commutativity/operation ordering somehow (like
       | Rust's lifetimes maybe?) or just use what people are already used
       | to.
        
       | skybrian wrote:
       | The way I like to think about it is that libraries vary in which
       | environments they support. Writing portable libraries that work
       | in any environment is nice, but often unnecessary. Sometimes you
       | don't care if your code works on Windows, or whether it works
       | without green threads, or (in Rust) whether it works without the
       | standard library.
       | 
       | So I think it's nice when type systems let you declare the
       | environments a function supports. This would catch mistakes where
       | you call a less-portable function in a portable library; you'd
       | get a compile error, indicating that you need to detect that
       | situation and call the function conditionally, with a fallback.
        
       ___________________________________________________________________
       (page generated 2025-07-18 23:00 UTC)