[HN Gopher] Why asynchronous Rust doesn't work
       ___________________________________________________________________
        
       Why asynchronous Rust doesn't work
        
       Author : tazjin
       Score  : 561 points
       Date   : 2021-03-10 02:16 UTC (20 hours ago)
        
 (HTM) web link (theta.eu.org)
 (TXT) w3m dump (theta.eu.org)
        
       | thombles wrote:
       | I'm not totally sure what the author is asking for, apart from
       | refcounting and heap allocations that happen behind your back. In
       | my experience async Rust is heavily characterised by tasks
       | (Futures) which own their data. They have to - when you spawn it,
       | you're offloading ownership of its state to an executor that will
       | keep it alive for some period of time that is out of the spawning
       | code's control. That means all data/state brought in at spawn
       | time must be enclosed by move closures and/or shared references
       | backed by an owned type (Arc) rather than & or &mut.
       | 
       | If you want to, nothing is stopping you emulating a higher-level
       | language - wrap all your data in Arc<_> or Arc<Mutex<_>> and
       | store all your functions as trait objects like Box<dyn Fn(...)>.
       | You pay for extra heap allocations and indirection, but avoid
       | specifying generics that spread up your type hierarchy, and no
       | longer need to play by the borrow checker's rules.
       | 
       | What Rust gives us is the option to _not_ pay all the costs I
       | mentioned in the last paragraph, which is pretty cool if you're
       | prepared to code for it.
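The higher-level emulation described in the comment above can be sketched like this (a minimal illustration, not code from the thread; the `log`/`handlers` names are invented):

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // Shared, mutable state behind an owned handle instead of & or &mut.
    let log = Arc::new(Mutex::new(Vec::new()));

    // Box<dyn Fn> erases each closure's concrete type, so no generic
    // parameters spread up the type hierarchy.
    let mut handlers: Vec<Box<dyn Fn(i32)>> = Vec::new();

    let log_handle = Arc::clone(&log); // refcount bump, not a deep copy
    handlers.push(Box::new(move |n| log_handle.lock().unwrap().push(n)));

    for h in &handlers {
        h(7);
    }
    assert_eq!(*log.lock().unwrap(), vec![7]);
}
```

The cost is exactly what the comment says: a heap allocation per handler and a dynamic dispatch per call, in exchange for not threading generics and borrows through the surrounding types.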
        
         | valand wrote:
         | ^this
         | 
          | IMO, Rust already provides a decent number of ways to
          | simplify and skip things: reference counting, async,
          | proc_macro, etc.
         | 
          | In my experience, programming in a higher-level language
          | where things are heavily abstracted, like (cough) NodeJS, is
          | easy and simple up to the point where I have to do certain
          | low-level things fast (e.g. file/byte patching) or make a
          | system call that is not provided by the runtime API.
         | 
          | Oftentimes I have to resort to making a native module or
          | helper exe just to mitigate this. That feels like reinventing
          | the wheel, because the actual wheel I need is deep under an
          | impenetrable layer of abstraction.
        
         | ragnese wrote:
         | I'm a giant Rust fanboy and have been since about 2016. So, for
         | context, this was literally before Futures existed in Rust.
         | 
         | But, I only work on Rust code sporadically, so I definitely
         | feel the pros and cons of it when I switch back and forth
         | to/from Rust and other languages.
         | 
         | The problem, IMO, isn't about allocations or ownership.
         | 
         | In fact, I think that a lot of the complaints about async Rust
         | aren't even about async Rust or Futures.
         | 
         | The article brings up the legitimate awkwardness of passing
         | functions/closures around in Rust. But it's perfectly fair to
         | say that idiomatic Rust is not a functional language, and
         | passing functions around is just not the first tool to grab
         | from your toolbelt.
         | 
          | I think the _actual_ complaint is not about "async", but
          | actually about _traits_. Traits are paradoxically one of
          | Rust's best features and also a super leaky and incomplete
          | abstraction.
         | 
          | Let's say you know a bit of Rust and you're working through
          | some problem. You write a couple of async functions with the
          | fancy `async fn foo(x: &Bar) -> Foo` syntax. Now you want to
          | abstract the implementation by wrapping those functions in a
          | trait. So you try just copy+pasting the signature into the
          | trait. The compiler complains that async trait methods
          | aren't allowed. So now you try to desugar the signature into
          | `fn foo(x: &Bar) -> impl Future<Output = Foo>` (did you
          | forget Send or Unpin? How do you know if you need or want
          | those bounds?). That doesn't work either, because now you
          | find out that `impl Trait` syntax isn't supported in traits.
          | So now you might try an associated type, which is what you
          | _usually_ do for a trait with "generic" return values. That
          | works okay, except that now your implementation has to wrap
          | its return value in Box::pin, which is extra overhead that
          | wasn't there when you just had the standalone functions with
          | no abstraction. You _could_ theoretically let the compiler
          | bitch at you until it prints the true return value and
          | copy+paste that into the trait implementation's associated
          | type, but realistically, that's probably a mistake because
          | you'd have to redo it every time you tweak the function for
          | any reason.
         | 
         | IMO, most of the pain isn't really _caused_ by async /await.
         | It's actually _caused_ by traits.
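The boxed-future workaround described in the comment above looks roughly like this (a hedged sketch; `Repo`/`InMemory` are invented names, and the hand-rolled no-op waker and `block_on` exist only to poll the future without pulling in an executor crate):

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The workaround: a boxed, type-erased future as the trait method's
// return type, since neither `async fn` nor `impl Trait` is allowed
// in traits (as of the Rust this thread discusses).
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

trait Repo {
    fn get(&self, id: u32) -> BoxFuture<'_, String>;
}

struct InMemory;

impl Repo for InMemory {
    fn get(&self, id: u32) -> BoxFuture<'_, String> {
        // This Box::pin is the extra allocation the standalone
        // `async fn` version didn't need.
        Box::pin(async move { format!("item-{}", id) })
    }
}

// --- minimal no-op waker + poll loop, just to run the future here ---
fn no_op(_: *const ()) {}
fn clone_raw(_: *const ()) -> RawWaker {
    RawWaker::new(ptr::null(), &VTABLE)
}
static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, no_op, no_op, no_op);

fn block_on<F: Future + Unpin>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = Pin::new(&mut fut).poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let repo = InMemory;
    assert_eq!(block_on(repo.get(1)), "item-1");
}
```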
        
           | vmchale wrote:
           | > The article brings up the legitimate awkwardness of passing
           | functions/closures around in Rust.
           | 
           | That's a hard problem when you have linear/affine types!
           | Closures don't work so neatly as in Haskell; currying has to
           | be different.
        
       | layoutIfNeeded wrote:
       | _[me, a C++ programmer]_ : good, good, let the hate flow through
       | you >:)
        
       | xiphias2 wrote:
       | As I read through the database example, I saw that the compiler
       | just caught a multi-threading bug for the author, and instead of
       | being thankful, he's complaining that Rust is bad.
       | 
       | I think he should use a higher level framework, or wait a few
       | years for them to mature, and use a garbage collected language
       | until then.
        
         | fulafel wrote:
         | There wasn't any multi-threading in that example I think.
        
           | mav3rick wrote:
           | You created a DB on one thread and accessed / modified it on
           | another.
        
           | Cyph0n wrote:
           | Yes, there was: the function was calling the passed in
           | closure from a different thread.
           | 
           | It is just one thread that's actually appending to the Vec,
           | but it's still a multi-threaded example.
        
             | fulafel wrote:
             | Are we talking about the same example?
              | struct Database {
              |     data: Vec<i32>
              | }
              | 
              | impl Database {
              |     fn store(&mut self, data: i32) {
              |         self.data.push(data);
              |     }
              | }
              | 
              | fn main() {
              |     let mut db = Database { data: vec![] };
              |     do_work_and_then(|meaning_of_life| {
              |         println!("oh man, I found it: {}", meaning_of_life);
              |         db.store(meaning_of_life);
              |     });
              |     // I'd read from `db` here if I really were making a web server.
              |     // But that's beside the point, so I'm not going to.
              |     // (also `db` would have to be wrapped in an `Arc<Mutex<T>>`)
              |     thread::sleep_ms(2000);
              | }
             | 
             | No threads are spawned.
        
               | filleduchaos wrote:
               | The `do_work_and_then` function does spawn a thread, it's
               | one of the first things established in the article.
        
               | fulafel wrote:
               | Ah, I thought it just named a block, my mistake.
        
               | [deleted]
        
               | valand wrote:
                | fn do_work_and_then(func: fn(i32)) {
                |     thread::spawn(move || {
                |         // Figuring out the meaning of life...
                |         thread::sleep_ms(1000); // gee, this takes time to do...
                |         // ah, that's it!
                |         let result: i32 = 42;
                |         // let's call the `func` and tell it the good news...
                |         func(result)
                |     });
                | }
               | 
                | You missed a snippet; a thread is spawned.
               | 
               | edit: formatting
        
               | imtringued wrote:
               | If for some magic reason thread::sleep_ms(1000) takes
               | longer than 2000ms the main function would reach its end
               | and deallocate the closure that is about to get called.
               | Basically use after free.
        
               | mav3rick wrote:
               | That is technically a race (sure) but at that point the
               | program is getting killed anyway. Agree on the overall
               | point though. I am a big fan of moving complete ownership
               | / lifetime. It just makes things easier to reason about.
               | In Chromium weak pointers are used to solve this use
               | after free problem.
        
               | richardwhiuk wrote:
               | And that's possible, because the sleep call is a
               | suggestion, not a guarantee. If the CPU is blocked for
               | 2s, then the order of waking the threads is undetermined,
               | and the race will occur.
        
         | gohbgl wrote:
         | What is the bug? I don't see it.
        
           | Cyph0n wrote:
           | The bug is that the closure is mutating the same data
           | structure (a Vec in this case) in a different thread - i.e.,
           | a data race.
           | 
           | In this specific case, only the spawned thread is mutating
           | the Vec, but the Rust compiler is usually conservative, so it
           | marked this as a bug.
           | 
           | The actual bug is one or both of the following:
           | 
           | 1. One of the threads could cause the underlying Vec to
           | reallocate/resize while other threads are accessing it.
           | 
           | 2. One of the threads could drop (free) the Vec while other
           | threads are using it.
           | 
           | In Rust, only one thread can "own" a data structure. This is
           | enforced through the Send trait (edit: this is probably
           | wrong, will defer to a Rust expert here).
           | 
           | In addition, you cannot share a mutable reference (pointer)
           | to the same data across threads without synchronization. This
           | is enforced through the Sync trait.
           | 
           | There are two common solutions here:
           | 
           | 1. Clone the Vec and pass that to the thread. In other words,
           | each thread gets its own copy of the data.
           | 
            | 2. Wrap the Vec in a Mutex and an Arc - your type becomes
            | an Arc<Mutex<Vec<i32>>>. You can then clone() the Arc and
            | pass the clone to the new thread. Under the hood, this is
            | an atomic increment instead of a deep clone of the
            | underlying data as in (1).
           | 
           | The Mutex implements the Sync trait, which allows multiple
           | threads to mutate the data. The Arc (atomic ref count) allows
           | the compiler to guarantee that the Vec is dropped (freed)
           | exactly once.
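Solution (2) above can be sketched as follows (a minimal illustration; the thread count and data are made up):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared ownership (Arc) + synchronized mutation (Mutex).
    let data = Arc::new(Mutex::new(Vec::new()));

    let mut handles = Vec::new();
    for i in 0..4 {
        // Cloning the Arc is an atomic refcount increment,
        // not a deep copy of the Vec.
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            data.lock().unwrap().push(i);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    // When the last Arc goes out of scope, the Vec is dropped exactly once.
    assert_eq!(data.lock().unwrap().len(), 4);
}
```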
        
         | Deukhoofd wrote:
         | He's just pointing out that it isn't very ergonomic, not that
         | Rust is bad (in fact he states the opposite multiple times).
         | 
         | Pointing out weaknesses and things that might be done better is
         | what helps something mature, not just praising it.
        
           | filleduchaos wrote:
           | "Ergonomic" is such a nebulous word as to be nearly useless
           | honestly.
           | 
           | I don't see how it is [unduly] inefficient or uncomfortable
           | for a language at Rust's level to ensure that the programmer
           | actually thinks through the execution of the code they are
           | writing. If one doesn't want to think about such things - and
           | there's absolutely nothing wrong with that! - there are
           | plenty of higher level languages that can do that legwork for
           | them with more than good enough performance.
           | 
           | To me it feels a bit like pointing out that the process of
           | baking a cake from scratch isn't very ergonomic, what with
           | all the careful choosing and measuring of ingredients and
           | combining them in the right order using different techniques
           | (whisking, folding, creaming, etc). That is simply what goes
           | into creating baked goods. If that process doesn't work for
           | you, you can buy cake mix instead and save yourself quite a
           | bit of time and energy - but that doesn't necessarily mean
           | there's anything to be done better about the process of
           | making a cake from scratch.
        
             | jnwatson wrote:
             | The whole point of async is entirely ergonomics. Anything
             | you can do in async you can do in continuation-passing
             | style.
             | 
             | I believe the point the author is making is that they added
             | a feature for ergonomics' sake that ended up not being
             | ergonomic; async-style programming usually coincides with
             | first-class functions and closures, and those are painful
             | in Rust.
        
               | marshray wrote:
               | Nothing about Rust async depends on closures or functions
               | being first-class. It's literally just a state machine
               | transformation.
               | 
               | But now that state data that previously lived on a nice
               | last-in-first-out stack has a different lifetime. Since
               | Rust fastidiously encodes lifetimes in the type system,
               | using async can result in having more complicated types
               | to talk about.
               | 
               | To me it's just the price that must be paid to use a
               | language that doesn't constantly spill garbage
               | everywhere, necessitating collection.
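The state-machine transformation mentioned above can be sketched by hand (a simplified, synchronous stand-in: the real compiler-generated machine implements `Future` and returns `Poll::Pending`/`Poll::Ready`; `Work` and its states are invented here):

```rust
// Hand-rolled sketch of the kind of state machine the compiler
// generates for an async fn with one suspension point. Locals that
// live across the "await" move off the stack and into the enum.
enum Work {
    Start,
    AfterFirstStep(i32), // the local `x` survives the suspension here
    Done,
}

impl Work {
    // Analogous to Future::poll: None plays the role of Poll::Pending,
    // Some(v) the role of Poll::Ready(v).
    fn resume(&mut self) -> Option<i32> {
        match std::mem::replace(self, Work::Done) {
            Work::Start => {
                let x = 40; // pretend this came from an awaited sub-future
                *self = Work::AfterFirstStep(x);
                None
            }
            Work::AfterFirstStep(x) => Some(x + 2),
            Work::Done => panic!("resumed after completion"),
        }
    }
}

fn main() {
    let mut w = Work::Start;
    assert_eq!(w.resume(), None); // suspended once
    assert_eq!(w.resume(), Some(42)); // completed
}
```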
        
               | filleduchaos wrote:
               | > async-style programming usually coincides with first-
               | class functions and closures, and those are painful in
               | Rust.
               | 
               | Are they, though?
               | 
                | The example the author gives in the article tries to
                | write a multi-threaded program as if it were a
                | single-threaded one. Of course that is going to be
               | painful, whether you're trying to use futures/promises or
               | you're using callbacks. If you want to use multiple
               | threads safely (i.e. at all) you need to use the right
               | structures for the job - smart pointers, mutexes, etc.
               | This is independent of language really, even dynamically
               | typed languages like Ruby still have mutexes for when you
               | want to use multiple threads (though they do take control
               | away from the programmer in the form of things like
               | global interpreter locks).
               | 
               | If you don't actually want to deal with the complexities
               | of multithreading, then don't use it. For example there
               | are a good number of languages/runtimes that only expose
               | a single-threaded event loop to the programmer (even
               | though they may use thread pools in the background). But
               | I don't think "ergonomic" should necessarily mean
               | "abstract away every single detail" or "things should
               | Just Work even though one is straight-up not approaching
               | the problem correctly".
        
               | throwaway17_17 wrote:
                | It is my understanding that the entire point of the
                | async/await sugar in any language is to explicitly
                | take away the appearance of multithreading/
                | concurrency/parallelism in code. I often see the
                | argument for async/await syntax made the following
                | way: it is not natural or feasible to reason about
                | concurrent and multithreaded code; programmers need
                | to be able to reason about all code as though it were
                | a linear list of instructions, no matter what is
                | going on in the implementation; therefore, we need
                | async/await syntax to give code a linear, straight-
                | line appearance for the programmer's benefit. While I
                | do not agree that doing so is at all a good thing, it
                | is the most prevalent argument I have seen justifying
                | async/await-style sugar. If I am right and this is
                | the primary motivation for adding it to a language,
                | your comment is in direct opposition to the entire
                | point of such constructs. By this reasoning, I think
                | the author was doing as suggested and writing
                | multithreaded code as though it were just a
                | non-threaded imperative program.
                | 
                | Note: I don't really disagree with your view, but it
                | seems that view is in the minority.
        
             | skohan wrote:
             | I actually find ergonomic to be a very clear and intuitive
             | term as it refers to programming language design decisions.
             | 
             | And to use your cake analogy, the fact that a process is
             | complex or laborious is orthogonal to the degree to which
             | it is ergonomic. You could imagine baking a cake in a well
             | organized kitchen with well designed, comfortable tools, or
             | you could imagine having to stand on your toes and reach to
             | the back of a slightly-too-high cupboard every time you
             | need to fetch an ingredient, and having to use a mixer with
             | way too many settings you can never remember, and none that
             | does exactly what you want. I think this is a better
             | analogy for ergonomics in programming languages.
        
               | filleduchaos wrote:
               | > I think this is a better analogy for ergonomics in
               | programming languages.
               | 
                | While it may be a better analogy, it doesn't really
                | reflect the way the term is used. In my (admittedly
                | anecdotal) experience, most complaints about
                | ergonomics are more about expecting difficult things
                | to be easier than they inherently are than about the
                | actual quality of the tools people are presented
                | with. People think they are complaining about a mixer
                | with too many settings, when in reality what they are
                | complaining about is the fact that there are
                | _several_ variables that go into mixing batter and
                | dough. They buy a mixer that's targeted at a baker or
                | chef who wants complete control over how their batter
                | comes out, or who even needs a mixer versatile enough
                | to also mill grain or roll pasta, and then are
                | predictably lost because it's not as immediately
                | intuitive to use as a simple hand mixer. I don't
                | think that makes the mixer not ergonomic; I think it
                | just makes it not the right tool for that particular
                | person.
        
               | skohan wrote:
               | Or maybe it's a poorly designed mixer, or it's trying to
               | be a mixer and a blender at the same time and not doing a
               | good job at either task.
               | 
               | Sometimes complexity is a necessary consequence of the
               | domain, and sometimes it's simply the result of poor
               | design.
        
         | joubert wrote:
         | > caught a multi-threading bug
         | 
         | The compiler complained: "error[E0308]: mismatched types"
        
           | xiphias2 wrote:
            | That's why it's amazing: type-safe, efficient multi-
            | threading without dynamic memory allocation, with a nice
            | syntax... it's the holy grail of server programming.
        
             | thu2111 wrote:
              | Why would I want to write a server in a language that
              | requires such awkward approaches to basics like
              | coroutines and dynamic dispatch, when I could use
              | Kotlin on the JVM and get pauseless GC, ultra-fast
              | edit/compile/run cycles, and efficient, powerful
              | coroutines - and eventually Loom, which will eliminate
              | the whole problem of coloured functions completely and
              | let me just forget about coroutines? Multi-threading
              | bugs are not so common in real Java/Kotlin programs as
              | to justify this epic set of costs, in my view.
        
               | xiphias2 wrote:
               | Good luck getting Google's ad backend with 20ms latency
               | budget and billions of dollars of revenue approved with
               | the JVM.
               | 
                | There are many tasks where what you suggest is just
                | too slow and unpredictable.
               | 
               | Don't get me wrong, I think JVM is great, it's just not
               | systems level programming.
        
               | thu2111 wrote:
               | Modern JVM GCs have pause times below 1msec. That's a new
               | capability though so most people aren't yet aware of it.
               | 
                | And Google's ads backend is hardly the definition of
                | a server. Their ads _front-end_, for example, always
                | used to be in Java. Not sure what it is these days.
        
               | karlmdavis wrote:
               | Have you ever tried writing the same application on both
               | a JVM language and in Rust, and then measuring the
               | latency & throughput differentials? Blew my mind the
               | first time.
               | 
               | Of course, if speed isn't a concern for you, then please
               | carry on.
        
               | roca wrote:
               | GC always forces you into a tradeoff between pause time,
               | throughput, and memory overhead.
        
               | ha_jo_331 wrote:
               | Or even better: Go
        
             | andrewflnr wrote:
             | It would be nice if it was explicitly a borrowing-related
             | message. That's the kind of thing I can see happening in a
             | future rustc actually.
        
               | xiphias2 wrote:
               | Yeah, the good thing is that the devs are always happy to
               | take feedback on improving the error messages (like
               | explaining differences between Fn/FnMut/fn here)
        
             | literallyWTF wrote:
              | Yeah, I don't get it either. We're supposed to be upset
              | that it did its job? Lol?
        
           | redis_mlc wrote:
            | Everything is "mismatched types" in Rust; it literally
            | doesn't do any automatic type conversion (casting), so
            | it's not the right language for most people.
        
           | agumonkey wrote:
           | I believe haskellers would love (and maybe did) to encode
           | commutativity and thread-safety in the type system :)
        
         | EugeneOZ wrote:
          | You are bound to find fancy runtime-only errors in async
          | functions. Their existence outweighs any of the compiler's
          | other benefits.
        
       | zesterer wrote:
       | The article title is clickbaity as heck, but the content is
       | (reasonably) sound.
       | 
       | A better title would be 'Rust's async model has some flaws that
       | are a product of its desire to ensure safety and make the true
       | cost of certain operations visible'.
       | 
       | Of course, such a title would never hit the front page of hacker
       | news.
       | 
       | The problems the author identifies are real, but they're not
       | unique to Rust. You can get around them fairly easily with
       | dynamic dispatch, which is what basically every competitor to
       | Rust's async model does by default.
       | 
        | I agree with the author that the use of async in the Rust
        | ecosystem is increasingly universal, and it's starting to
        | become a little problematic. As the author of Flume and a
        | (soon to be released) concurrent hashmap implementation, I'm
        | trying to work on adding primitives that successfully marry
        | sync and async with one another. There's no doubt that it's
        | hard though, and function colouring is a problem.
        
       | say_it_as_it_is wrote:
       | You can't make everyone happy and you shouldn't try to. If people
       | enjoy OCaml or Lisp or Haskell, they really ought to continue
       | using them.
        
       | vmchale wrote:
       | >Before, the function we passed to do_work_and_then was pure: it
       | didn't have any associated data, so you could just pass it around
       | as a function pointer (fn(i32)) and all was grand. However, this
       | new function in that last example is a closure
       | 
        | Those are different things! Anything pure (i.e. returning the
        | same output for the same input) could still require a closure
        | if it captures heap-allocated data - which in Rust would
        | entail freeing and transferring ownership and all that.
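The distinction shows up directly in what coerces to a plain function pointer (a small illustration; the names are invented):

```rust
fn main() {
    // Capture-free: coerces to a plain function pointer.
    let add_one: fn(i32) -> i32 = |n| n + 1;
    assert_eq!(add_one(1), 2);

    let greeting = String::from("hello"); // heap-allocated

    // Pure in the sense of "same input, same output", but it captures
    // `greeting`, so it is a closure with associated data: it cannot
    // be an fn(i32) -> String pointer, and moving it transfers
    // ownership of the String.
    let greet = move |n: i32| format!("{} {}", greeting, n);
    assert_eq!(greet(1), "hello 1");
}
```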
        
       | cormacrelf wrote:
       | > And then fast forward a few years and you have an entire
       | language ecosystem built on top of the idea of making these
        | Future objects that actually have a load of closures inside[4], and
       | all of the problems listed above (hard to name, can contain
       | references which make them radioactive, usually require using a
       | generic where clause, etc) apply to them because of how
       | "infectious" closures are.
       | 
       | No, this is straight up not true. Almost the entirety of the
       | futures ecosystem is built explicitly out of nameable structs
       | that library authors implement Future on. They do not typically
       | have a load of closures inside, usually none at all. The
       | unnameable closures are generally in 'user-space', for stringing
       | new futures together with less friction. There are different
       | tradeoffs, but the ecosystem is not built on a bed of
       | indescribable spaghetti quicksand.
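What "nameable structs that implement Future" looks like in practice can be sketched as follows (a hedged illustration; `Delay` is an invented example, not from any real library, and the no-op waker is only there to poll it without an executor crate):

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A perfectly nameable future, in the style of what library authors
// actually write: a struct with `Future` implemented on it, no closures.
struct Delay {
    polls_left: u32,
}

impl Future for Delay {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.polls_left == 0 {
            Poll::Ready(42)
        } else {
            self.polls_left -= 1;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

// --- minimal no-op waker, just to drive the future here ---
fn no_op(_: *const ()) {}
fn clone_raw(_: *const ()) -> RawWaker {
    RawWaker::new(ptr::null(), &VTABLE)
}
static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, no_op, no_op, no_op);

fn noop_waker() -> Waker {
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Delay { polls_left: 2 };
    let mut pinned = Pin::new(&mut fut);
    let mut polls = 0;
    let value = loop {
        polls += 1;
        if let Poll::Ready(v) = pinned.as_mut().poll(&mut cx) {
            break v;
        }
    };
    assert_eq!(value, 42);
    assert_eq!(polls, 3); // pending twice, ready on the third poll
}
```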
        
       | klabb3 wrote:
       | In order to understand why async Rust is difficult to use today,
       | one must understand the background and goals of Rust, in
       | particular Rust's zero-cost abstractions:
       | 
       | The type- and trait system got Rust very, very far, delivering
       | static dispatch and stack allocations by default, while
       | maintaining type safety and a high level of flexibility for a
       | manageable complexity cost. When async Rust was incubating, these
       | goals were upheld dogmatically, and the community set out to
       | preserve all these traits when building async, for better and
       | worse. However, in order to get there, Rust needed to:
       | 
        | - Power-use existing complex features and/or verbose wrappers
        | such as `Box`, `Send`, `Arc`, `impl Trait` types, and an
        | influx of generic type parameters. (Several of these cause
        | function-coloring issues.)
       | 
       | - Introduce new complexity for self-referential structs with
       | `Pin` (and if you dare - check out "pin projection")
       | 
       | - Allow for custom and extensible runtimes (aka executors -
       | responsible for scheduling), including both single- and
       | multithreaded.
       | 
       | The end result is a tremendous technological achievement -
       | essentially all requirements were upheld (to the point where
       | async can be used even in embedded environments). However, for
       | non-experts, it's tremendously complex - which affects almost all
       | users, even those that don't write custom runtimes or advanced
       | combinators.
       | 
        | I'm not the one to judge whether the story of async Rust is a
        | failure, but if it is, there is no single reason. In broad
       | terms, the project was extremely ambitious and Rust, arguably,
       | wasn't entirely ready for it. A couple of missing features that
       | could possibly have helped:
       | 
       | - Generic Associated Types (currently causes verbose workarounds
       | for combinators, streams etc)
       | 
       | - "first-class" / ergonomic support for state machines (async fns
       | compiles to a state-machine - relevant also for async traits)
       | 
       | - Reliable destructors (`drop` is famously not guaranteed to run
       | in safe Rust, preventing things like borrowing across tasks,
       | which exacerbates the `Arc`-hell)
       | 
       | Additionally, async I/O kernel interfaces were maturing in
       | parallel - e.g. io_uring was released after async Rust was
       | stabilized (afaik), which might have resulted in a different,
       | completion-based design.
        
       | adamch wrote:
        | This is pretty overblown. I write async Rust every day for my
        | job, just fine, with no real problems - probably because I'm
        | consuming other libraries, not trying to write my own. I use
        | well-tested libraries like Actix-Web or occasionally Tokio.
        | I've migrated multiple projects from futures to async/await
        | once the syntax came out.
       | 
       | The problems the author is describing might apply more to library
       | authors, but for the many Rust programmers like me who are trying
       | to solve problems, not write a framework, async is easy and
       | practical.
        
         | scoutt wrote:
         | > The problems the author is describing might apply more to
         | library authors
         | 
          | I don't know much about Rust, but I found the examples in
          | the article _kind of simple_. As far as I can see, the
          | author is trying to spawn a new thread and share an object
          | between the main thread and the new one. It seems like an
          | everyday paradigm.
          | 
          | Perhaps the problems arise when passing complex objects
          | instead of native types (i.e. u32).
        
           | adamch wrote:
           | My job isn't to write a program that shares objects between
           | threads. My job is to write a program that responds to web
           | requests or stores user data or responds to queries or checks
           | the state of certain services.
           | 
           | In some languages and frameworks, I might need to share an
           | object between threads to do this efficiently. In Rust there
           | are other options, e.g. using Arc or a reference or moving
           | the object into a task. In my years of programming Rust I've
           | never wanted to share some object on a thread, because there
           | are frameworks and libraries (Actix-Web, Tokio, Rayon, async-
           | std, others) that give me a higher-level abstraction for
           | doing this.
           | 
           | I can understand using this as a point of comparison if
           | you're not very familiar with the language. But I have to
           | confess that in my three years of working with Rust, I've
           | never had to worry about threads. I've been able to use a
           | library to solve my concurrency or parallelism problems so
           | that I can go back to worrying about design or reliability
           | concerns.
           | 
           | This won't be everyone's perspective, after all, someone has
           | to write those libraries. But it is mine, and I think users
           | of such libraries far outnumber authors of such libraries.
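The options the comment above mentions (using Arc, a reference, or moving the object into a task) can be sketched with plain std threads; this is an illustrative example, not code from any of the libraries named, and `parallel_count` is a made-up name:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter, then
// return the final value. Arc gives shared ownership; Mutex gives
// synchronized mutation.
fn parallel_count(n: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            // `move` transfers this clone of the Arc into the thread,
            // so the spawned task owns its state outright rather than
            // borrowing from the spawning scope.
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4), 4);
}
```

The same shape carries over to async executors: spawning hands ownership to the runtime, so the task must own (or share via Arc) everything it touches.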
        
           | littlestymaar wrote:
           | > As far as I can see, the author is trying to spawn a new
           | thread and share an object between the main thread and the
           | new one. It seems like an everyday paradigm.
           | 
           | And it works very well in Rust _as long as you understand_
           | the core concept of the language (ownership). Rust is
           | special, it has this fundamental concept that you need to
           | understand before going on, and that adds some inevitable
           | learning curve. But once you know how it works, it just works
           | exactly as you'd expect, and in the kinds of things the author
           | is trying to do[1], it doesn't get in your way at all.
           | 
           | The post just sounds like the author didn't take the time to
           | understand the core concepts, and is trying to brute-force
           | their way through. This way of learning doesn't work very
           | well for Rust, or at least not if you are impatient.
           | (Especially because the error messages when it comes to
           | closures are still far from Rust's overall standards).
           | 
           | [1] there are things where Rust's ownership _is_ a constraint
           | (cyclic data structures for instance, or shared memory between
           | your program and a C program (io_uring, or wayland)). But
           | those are situational.
        
             | scoutt wrote:
             | I have no doubt that it works well in Rust (with its core
             | concepts and caveats).
             | 
             | I was really referring to the parent comment which said
             | that the author of the article seems to be complaining
             | about something that applies only to a specific area ( _"
             | The problems the author is describing might apply more to
             | library authors"_).
             | 
             | I said that to me, the examples in the article seemed more
             | like an everyday paradigm ( _mundane_ , creating a thread
             | and sharing an object).
        
         | __s wrote:
         | Agreed, I rewrote a 10k-line js websocket-based game
         | server[0] a few months ago into async Rust. It was mostly
         | painless.
         | 
         | I didn't have to deal with closures much because I'm
         | comfortable writing a 2000-line match.
         | 
         | 0: https://github.com/serprex/openEtG/tree/master/src/rs/server
        
           | adamch wrote:
           | Wow, awesome code. I really respect the 2000 line match, it's
           | a really neat abstraction. Keep some state, have an
           | exhaustive list of everything users can do, and define
           | exactly how each possible user action can change that state.
           | Beautiful.
        
       | spamizbad wrote:
       | I'm surprised Rust went with a Scala-inspired async/await
       | considering it wasn't well-liked outside of PLT enthusiasts.
       | JavaScript adopted it as I don't think you could bolt any other
       | type of sugar on top of the event loop in browsers... but Rust
       | shouldn't suffer from such baggage.
        
         | kelnos wrote:
         | I'm confused... Scala doesn't have async/await.
        
         | nynx wrote:
         | What's an alternative to async/await that's explicit about
         | runtime costs?
        
           | thu2111 wrote:
           | How explicit do you want?
           | 
           | The loom model is VM managed thread stacks with user space
           | context switching. The costs are all quantifiable and a
           | suspended "virtual" thread uses memory directly proportional
           | to the amount of stuff on the stack, same as any coro
           | implementation. However there is no async/await or coloured
           | functions. The VM just makes threads super cheap.
        
             | roca wrote:
             | Rust started off with user ("green") threads. The problem
             | is interop with other code via FFI --- it gets really
             | painful and potentially expensive. That's not good for a
             | systems language. So they ripped it out.
        
         | pshc wrote:
         | Do you have a source on people generally disliking async/await?
         | I've become a fan of it in JS.
        
           | spamizbad wrote:
           | There's the function coloring issue:
           | https://journal.stuffwithstuff.com/2015/02/01/what-color-
           | is-...
           | 
           | Where your sync functions have X considerations and your
           | async functions have separate Y considerations - BUT,
           | async/await sugar makes these widely different behaviors look
           | nearly identical - so you get "clean" sync code aesthetics on
           | your asynchronous code and are sort of on your own when
           | tracking the implications of that behavior.
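The coloring issue can be seen in a few lines of Rust: calling an async fn from sync code doesn't run it, it only constructs a future. This is an illustrative sketch; `fetch` and `assert_is_future` are made-up names:

```rust
use std::future::Future;

// An async fn is "red": calling it from sync ("blue") code only
// constructs a future; the body does not run until something drives it.
async fn fetch() -> i32 {
    42
}

// Compile-time check that a value implements Future.
fn assert_is_future<F: Future<Output = i32>>(_: &F) {}

fn main() {
    let fut = fetch(); // nothing has executed yet
    // Sync code can hold the future as a value, but `.await` is only
    // legal inside another async context -- hence the "coloring".
    assert_is_future(&fut);
}
```

The call looks identical to a sync call, which is exactly the complaint: the sugar hides that the two functions behave very differently.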
        
         | xiphias2 wrote:
         | Rust was the first language that made it efficient: abstraction
         | without overhead (which took many years of work). It's just a
         | state machine with an enum holding the state, but of course it
         | means that the memory model is harder to understand than with
         | JavaScript.
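The "state machine with an enum holding the state" can be written out by hand. This is an illustrative reconstruction using only std; the names `AddTwo` and `block_on` are mine, not from any library, and a real compiler-generated machine has one variant per await point:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Roughly what `async fn add_two(n: i32) -> i32 { n + 2 }` desugars
// to: an enum whose variants are the states between await points.
enum AddTwo {
    Start(i32),
    Done,
}

impl Future for AddTwo {
    type Output = i32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
        // This machine has no self-references, so it is Unpin and we
        // can take a plain &mut to advance the state.
        match std::mem::replace(&mut *self, AddTwo::Done) {
            AddTwo::Start(n) => Poll::Ready(n + 2),
            AddTwo::Done => panic!("polled after completion"),
        }
    }
}

// Minimal executor with a no-op waker: just enough to drive a future.
fn noop_raw_waker() -> RawWaker {
    unsafe fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is not moved again after this point.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(AddTwo::Start(40)), 42);
}
```

Because the states are an enum, a suspended future is just a value whose size is known at compile time, which is where the "abstraction without overhead" claim comes from.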
        
         | roca wrote:
         | C# and C++ adopted basically the same model.
        
           | CodeArtisan wrote:
           | Note that C++ offers async as a library and not as a
           | programming language feature, which Bjarne Stroustrup is
           | strongly against. He talks more about why in "The Design and
           | Evolution of C++", but I don't have the material at hand. As
           | far as I remember, his main argument was that there is no
           | concurrency model to fit them all, and thus it isn't worth
           | adding new syntax and semantics for a specific model; doing so
           | would only make the PL more complex and harder to implement.
        
             | roca wrote:
             | The main piece is stackless coroutines which definitely
             | aren't a library.
        
             | xfer wrote:
             | No, without compiler support you can't do async in a
             | library. Rust also does async in a library, so to speak.
        
             | jcelerier wrote:
             | > Note that C++ offers async as a library and not as a
             | programming language feature which Bjarne Stroustrup is
             | strongly against.
             | 
             | https://www.modernescpp.com/index.php/c-20-coroutines-the-
             | fi...
        
               | tick_tock_tick wrote:
               | coroutines themselves have nothing to do with async/await.
        
               | nly wrote:
               | There's no executor in the standard library for
               | asynchronous work; the C++20 coroutines implementation
               | lets you plug in your own.
               | 
               | Dummy event loop example:
               | 
               | https://gcc.godbolt.org/z/fo18xW
        
         | blackoil wrote:
         | Isn't async/await from F#/C# world, or Scala has a different
         | variant?
        
       | ironmagma wrote:
       | > The thing I really want to try and get across here is that
       | *Rust is not a language where first-class functions are
       | ergonomic.*
       | 
       | So... don't use first-class functions so much? It's a systems
       | language, not a functional language for describing algorithms in
       | CS whitepapers. Or use `move` (the article does mention this).
       | 
       | There are easy paths in most programming languages, and harder
       | paths. Rust is no exception. Passing around async closures with
       | captured variables, while verifying that nothing gets dropped
       | prematurely and without resorting to runtime GC, is bleeding-edge
       | technology, so it should not be surprising that it has some
       | caveats and isn't always easy to do. The same could be
       | said of trying to do reflection in Go, or garbage collection in
       | C. These aren't really the main use case for the tool.
        
         | TheCoelacanth wrote:
         | Yeah, a struct holding four different closures is not a pattern
         | that makes much sense in Rust.
         | 
         | That would look a lot better as a Trait with four methods.
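The suggested refactor can be sketched with two callbacks instead of four; `Callbacks`, `Handler`, and `Logger` are illustrative names, not from the article:

```rust
// The struct-of-closures shape: every field needs boxing, and the
// captured state is scattered across separate closure environments.
#[allow(dead_code)]
struct Callbacks {
    on_connect: Box<dyn Fn()>,
    on_message: Box<dyn FnMut(&str)>,
}

// The same capability as a trait: one type, one place for state, and
// no heap allocation or dynamic dispatch required.
trait Handler {
    fn on_connect(&mut self);
    fn on_message(&mut self, msg: &str);
}

struct Logger {
    lines: Vec<String>,
}

impl Handler for Logger {
    fn on_connect(&mut self) {
        self.lines.push("connected".to_string());
    }
    fn on_message(&mut self, msg: &str) {
        self.lines.push(format!("msg: {}", msg));
    }
}

fn main() {
    let mut h = Logger { lines: Vec::new() };
    h.on_connect();
    h.on_message("hello");
    assert_eq!(h.lines, vec!["connected", "msg: hello"]);
}
```

With the trait, all the "captured" state lives in one struct that the borrow checker can reason about as a unit, which sidesteps most of the closure-lifetime gymnastics.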
        
         | gkya wrote:
         | > It's a systems language, not a functional language for
         | describing algorithms in CS whitepapers.
         | 
         | This kind of toxicity is why I left programming behind as a
         | career.
        
           | ironmagma wrote:
           | How is saying Rust is one thing but not another thing toxic?
           | I never said it's the author's fault Rust is broken or
           | anything like that. It just has some goals, and being a
           | functional programming language isn't one of them (as far as
           | I know).
        
             | steveklabnik wrote:
             | One way to read your comment, which maybe you didn't intend,
             | is "this is a language for Real Work, not one for those
             | silly academics." I don't personally think you went that
             | far, but I imagine that is how the parent read it.
             | 
             | I think it's the "for" to the end of the sentence that does
             | it.
        
               | ironmagma wrote:
               | That would make sense. On the contrary, though, I quite
               | admire whitepapers' use of FP and would like to learn it
               | someday. But my understanding is that there are already
               | quite a few languages devoted to that, and Rust's focus
               | is something different. After all, if you have the same
               | goals as another language, you just may end up re-
               | creating the same language with different syntax. That
               | said, it would be nice if someday Rust could be as
               | convenient to write FP in as say Lisp or Haskell.
        
         | lmm wrote:
         | > So... don't use first-class functions so much? It's a systems
         | language, not a functional language for describing algorithms
         | in CS whitepapers.
         | 
         | Then maybe it was a mistake to adopt an async paradigm from
         | functional languages that relies heavily on the idea that
         | first-class functions are cheap and easy?
         | 
         | (FWIW I think Rust was right to pick Scala-style async; it's
         | really the only _nice_ way of working with async that I 've
         | seen, in any language. I think the mistake was not realising
         | the importance of first-class functions and prioritising them
         | higher)
        
           | stormbrew wrote:
           | > maybe it was a mistake to adopt an async paradigm from
           | functional languages
           | 
           | > I think Rust was right to pick Scala-style async
           | 
           | I'm confused by this assertion. I'm more aware of procedural
           | language origins of syntactic async/await than functional?
           | The scala proposal in 2016 for async/await even cites C#'s
           | design (which came in C# 5.0 in 2012) as an inspiration[1].
           | 
           | From there, it appears python and typescript added their
           | equivalents in 2015 [2].
           | 
           | If anything, async-await feels like an extremely non-
           | functional thing to begin with, in the sense that in a
           | functional language it should generally be easier to treat
           | the execution model itself as abstracted away.
           | 
           | [1] https://docs.scala-lang.org/sips/async.html
           | 
           | [2] https://en.wikipedia.org/wiki/Async/await
        
             | lmm wrote:
             | > I'm more aware of procedural language origins of
             | syntactic async/await than functional? The scala proposal
             | in 2016 for async/await even cites C#'s design (which came
             | in C# 5.0 in 2012) as an inspiration[1].
             | 
             | The C# version comes from F# (2007) which was in turn
             | inspired by the "Poor Man's Concurrency Monad"
             | implementation for Haskell (1999) (in turn inspired by
             | Concurrent Haskell and, ultimately, Concurrent ML). It's
             | very much a functional lineage.
        
               | stormbrew wrote:
               | I wasn't aware of the F# heritage, that's interesting.
               | I'm curious why the scala proposal wouldn't cite it.
               | Especially surprising since scala's at least
               | superficially looks more like F#'s than it does like
               | C#'s.
               | 
               | I don't dispute (as I have had to say repeatedly in other
               | branches of this) that the roots of futures as a concept
               | are in functional programming, but the path I'm saying I
               | see here is effectively:
               | 
               |     - haskell/monads
               |       |- do-notation
               |       |- monadic futures
               |       |- (this is new to me) F# appears to have added
               |          'types' of do blocks, including a specifically
               |          async one?
               |       \-> async-await as a sort of re-integration of
               |           quasi-monadic evaluation into procedural
               |           languages like C#, python, and eventually
               |           javascript and rust.
               | 
               | So what's weird to me is drawing a direct line between
               | scala and rust here when the relevant precedent seems to
               | be procedural languages distilling a thing _from_
               | functional languages into a less flexible syntactic-sugar
               | mechanism we now usually call async-await. Scala seems
               | like a footnote here, where if you want to claim its
               | descent from functional languages you would go farther
               | back.
        
               | lmm wrote:
               | I don't see async/await (at least when built on top of
               | futures/promises) as a procedural thing - the parts of C#
               | where it's used are the least procedural parts of C#, and
               | Python has always been multi-paradigm. I'd say it's
               | mainly a way of doing monadic futures in languages that
               | don't have monads (mainly because of lacking HKT) - hence
               | why F# adopted it first, and then it made its way into
               | functional-friendly languages that either didn't have
               | HKT, or in Scala's case found it to be a useful shortcut
               | anyway. "Functional" is a spectrum rather than a binary,
               | but I don't think it's right to see async/await as being
               | imperative any more than having map/reduce/filter in
               | C#/Python/Javascript makes them an imperative thing. (I
               | would agree that Haskell and Scala, with true monads, are
               | more functional than C#/Python/Javascript - but I'd say
               | that having async/await means C#/Python/Javascript are
               | closer to Haskell/Scala than similar languages that don't
               | have async/await; async/await are making them more
               | functional, not less).
               | 
               | As for why I mentioned Scala specifically, I understand
               | that Rust's Futures are directly based on Scala's. I had
               | assumed this would apply to async/await as well, but it
               | sounds like apparently not? In any case there's not a
               | huge difference between the Scala/C# versions of the
               | concept AFAICS.
        
               | stormbrew wrote:
               | Sure. What I'm getting at is that I see the specific
               | syntax of "async-await" as a synthesis of concepts from
               | functional and procedural heritage. I agree about it
               | being a spectrum and all that, so I think we're mostly
               | just talking past each other about the specifics of how
               | it came to be and using different ways of describing that
               | process.
        
             | dwohnitmok wrote:
             | > If anything, async-await feels like an extremely non-
             | functional thing to begin with
             | 
             | Futures/promises (they mean different things in different
             | languages), like many other things, form monads. In fact
             | async-await is a specialization of various monad syntactic
             | sugars that try to eliminate long callback chains that
             | commonly affect many different sorts of monads.
             | 
             | Hence things like Haskell's do-notation are direct
             | precursors to async-await (some libraries such as Scala's
             | monadless https://github.com/monadless/monadless make it
             | even more explicit, there lift and unlift are exactly
             | generalized versions of async and await).
             | 
             | To see how async-await might be generalized, one could turn
             | to various other specializations of the same syntax, e.g.
             | an async that denotes a random variable and an await that
             | draws once from the random variable.
             | 
             | To see the correspondence with a flatMap method (which is
             | the main component of a monad), it's enough to look at the
             | equivalent callback-heavy code and see that it looks
             | something like:
             | 
             |     Future(5)
             |       .flatMap(x ->
             |         doSomethingFutureyWithX(x)
             |           .flatMap(lookItsAnotherCallback)
             |       )
        
               | stormbrew wrote:
               | I'm not clear on if this is supposed to be disagreement
               | or elaboration or education.
               | 
               | The fact that in a language like Haskell, you can perform
               | something like async-await with futures (which are
               | absolutely a kind of monad) in a natural way is precisely
               | what I had in mind with what you quoted.
               | 
               | Regardless, the specific heritage of async-await syntax
               | seems rooted in procedural languages (that do borrow much
               | else as well from functional languages, yet are still not
               | functional in any meaningful sense) like C# and python.
               | They are absolutely an attempt to bring some of the power
               | of something like monadic application (including do
               | notation) into a procedural environment as an alternative
               | to threads (green or otherwise), which hide the execution
               | state machine completely.
        
               | runT1ME wrote:
               | You don't need async/await to do monadic comprehension in
               | Scala, it's built into the language from the very
               | beginning with `for`.
               | 
               | This was inspired by do notation, which came about ~1998.
        
               | [deleted]
        
               | dwohnitmok wrote:
               | > the specific heritage of async-await syntax seems
               | rooted in procedural languages
               | 
               | I don't think so. Async-await in both syntax and
               | semantics is pretty firmly rooted in the FP tradition.
               | 
               | For semantics, the original implementation in C#, and as
               | far as I know most of its successors, is to take
               | imperative-looking code and transform it into CPS-ed
               | code. That's a classic FP transformation (indeed it's one
               | of the paradigmatic examples of first-class
               | functions/higher-order functions) and one of the most
               | popular methods of desugaring imperative-looking code in
               | a functional context.
               | 
               | For syntax, the idea of hiding all that CPS behind an
               | imperative-looking syntax sugar is the whole reason why
               | do-notation and its descendants exist. (Indeed, there's
               | an even deeper connection there specifically around CPS
               | and monads: https://www.schoolofhaskell.com/school/to-
               | infinity-and-beyon...)
               | 
               | But my point was simply that there's a pretty straight
               | line from CPS, monads, and do-notation to async-await and
               | so I think it's pretty fair to say that async-await is
               | rooted in the FP tradition.
        
               | [deleted]
        
           | ironmagma wrote:
           | It's always possible there's a better way. If you know of
           | one, maybe you can write a proposal? There is always Rust v2.
           | Rust lacks a BDFL and so it's almost like the language grows
           | itself. Chances are, the async model that was used was picked
           | because it was arrived upon via consensus.
        
             | lmm wrote:
             | I don't think there is a better approach; I think the right
             | thing would have been to lay proper functional foundations
             | to make async practical. I did speak against NLL at the
             | time, and I keep arguing that HKT should be a higher
             | priority, but the consensus favoured the quick hack.
        
               | ragnese wrote:
               | I agree that Rust should have embraced HKTs instead of
               | resisting them every step of the way. It's been my
               | observation that the Rust lang team is pretty much
               | against HKTs as a general feature, which makes me sad.
               | 
               | I'm also happy to meet the only other person who had
               | reservations about NLL! :p I have mixed feelings about
               | it, still. It really is super ergonomic and convenient,
               | but I _really_ value language simplicity and NLL makes
               | the language more complex.
               | 
               | I also don't think there's much Rust can do differently
               | in regards to "functional foundations". With Rust's
               | borrow/lifetime model, what would you even do differently
               | with respect to `fn`, `Fn`, `FnMut`, and `FnOnce`?
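The three closure traits mentioned encode exactly how a closure uses its captures, which is what the borrow/lifetime model forces Rust to distinguish. A minimal sketch (the `call_*` helper names are mine):

```rust
// Fn reads its captures, FnMut mutates them, FnOnce consumes them.
fn call_fn(f: impl Fn() -> i32) -> i32 {
    f()
}

fn call_fn_mut(mut f: impl FnMut()) {
    f();
    f(); // FnMut may be called repeatedly
}

fn call_fn_once(f: impl FnOnce() -> String) -> String {
    f() // FnOnce may be called only once
}

fn main() {
    let x = 40;
    assert_eq!(call_fn(|| x + 2), 42); // captures x by shared borrow

    let mut count = 0;
    call_fn_mut(|| count += 1); // captures count by unique borrow
    assert_eq!(count, 2);

    let s = String::from("moved");
    // `move` plus returning `s` consumes the capture: FnOnce only.
    assert_eq!(call_fn_once(move || s), "moved");
}
```

A GC'd functional language can treat all three cases uniformly because the runtime keeps captures alive; Rust's split is the price of tracking ownership statically, which is why it's hard to see what could be done differently.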
        
               | steveklabnik wrote:
               | HKTs were considered, and do not actually solve the
               | problem, in Rust. Rust is not Haskell. The differences
               | matter.
               | 
               | Nobody is against HKTs on some sort of conceptual level.
               | They just literally do not work to solve this problem in
               | Rust.
        
               | ragnese wrote:
               | Which problem are we referring to? I was only making a
               | general statement that I think Rust would have benefited
               | from HKTs instead of doing umpteen ad-hoc implementations
               | of specific higher-kinded types. I'm far from an expert,
               | so please correct me if I'm wrong:
               | 
               | Wouldn't HKTs help us abstract over function/types more
               | easily, including closures?
               | 
               | Aren't GATs a special case of HKTs?
               | 
               | If Rust somehow had HKTs and "real" monads, we could have
               | a do-notation instead of the Try trait+operator and
               | Future trait+async+await, right?
               | 
               | I'm not saying that HKTs would fix (most of) the issues
               | mentioned in the article around tricky closure semantics
               | and whatnot.
        
               | steveklabnik wrote:
               | > Wouldn't HKTs help us abstract over function/types more
               | easily, including closures?
               | 
               | In the sense that HKTs are a higher level abstraction,
               | sure. More later.
               | 
               | > Aren't GATs a special case of HKTs?
               | 
               | My understanding is that GATs can get similar things done
               | to HKTs for some stuff that Rust cares about, but that
               | doesn't give them a subtyping relationship. Haskell has
               | both higher kinded types and type families. That being
               | said, my copy of Pierce is gathering dust.
               | 
               | > If Rust somehow had HKTs and "real" monads, we could
               | have a do-notation instead of the Try trait+operator and
               | Future trait+async+await, right?
               | 
               | It depends on what you mean by "somehow." That is, even
               | if Rust had a monad trait, that does not mean that Try
               | and Future could both implement it. This is because, in
               | Haskell, these things have the same signatures. In Rust,
               | they do not have the same signature. For reference:
               | 
               |     pub trait Iterator {
               |         type Item;
               |         fn next(&mut self) -> Option<Self::Item>;
               |     }
               | 
               |     pub trait Future {
               |         type Output;
               |         fn poll(
               |             self: Pin<&mut Self>,
               |             cx: &mut Context<'_>
               |         ) -> Poll<Self::Output>;
               |     }
               | 
               | While they are both traits, both with something returning
               | their associated type:
               | 
               | 1. Iterator returns Option, while Future returns
               | something like it. These do have the same shape in the
               | end though, so maybe this is surmountable. (Though then
               | you have backwards compat issues)
               | 
               | 2. poll takes a Pin'd mutable reference to self, whereas
               | iterator does not
               | 
               | 3. Future takes an extra argument
               | 
               | These are real, practical problems that would need to be
               | sorted, and it's not clear how, or even if it's possible
               | to, sort them. Yes, if you handwave "they're sort of the
               | same thing at a high enough level of abstraction!", sure,
               | in theory, this could be done. But it is very unclear
               | how, or if it is even possible to, get there.
        
           | kelnos wrote:
           | > _Rust was right to pick Scala-style async_
           | 
           | Huh, I actually find Rust and Scala to do async quite
           | differently. The only thing in common to me is the monadic
           | nature of Future itself. Otherwise I find there to be a big
           | fundamental difference in how Scala's async is
           | threadpool/execution-context based, while Rust's async is
           | polling-based.
           | 
           | Then there are the syntactic differences around Rust having
           | async/await and Scala... not.
        
           | marshray wrote:
           | > maybe it was a mistake to adopt an async paradigm from
           | functional languages
           | 
           | I always thought of async as higher language sugar for
           | coroutine patterns that I'd seen in use in assembler from the
           | 1980's.
        
         | viktorcode wrote:
         | I think your implication that a system programming language
         | can't have nice stuff is based on system languages of old.
         | Unless a language absolutely must have manual memory
         | allocation, it can have first-class functions and closures.
        
           | ironmagma wrote:
           | I never implied it can't be nice, just that it currently
           | isn't and that a solution to this blog post is, "don't do
           | that, at least for now." I mean, the title is that async Rust
           | doesn't work, yet I use several successful codebases that use
           | it, so it actually does work, just not the way the author
           | wants it to. Another solution could be to submit a proposal
           | to improve async support in the desired way.
        
         | de6u99er wrote:
         | Wanted to write something similar, just not as elaborate as you
         | did (native German speaker here).
         | 
         | Another good example is Java, and languages running on top of
         | the JVM.
        
       | topspin wrote:
       | > projects that depend on like 3 different versions of tokio and
       | futures
       | 
       | I ended up here recently.
       | 
       | This part is truly awful. I can forgive a lot, and much of what
       | appears in this blog post falls under that umbrella, but the
       | tokio/std futures schism is offensively discouraging. I strongly
       | suspect it is a symptom of some great dysfunction.
        
         | jedisct1 wrote:
         | I abandoned some projects due to that as well. That, and
         | constant breaking changes in major libraries such as `hyper`.
         | 
         | Maintaining Rust code is expensive. More than completing the
         | first iteration of a project. Languages such as Go or C are
         | boring, but don't have that issue.
        
           | carols10cents wrote:
           | You can choose to not upgrade your crates to avoid this cost.
        
       | croes wrote:
       | I don't know Rust well enough, but the article sounds like 'I
       | can't write it like I want, therefore it's bad'. Couldn't he just
       | pass a named function instead of an anonymous one?
        
         | zbentley wrote:
         | No, he could not. The problem emerges when trying to use state
         | (a database connection, in the author's example) in the
         | callback function. That requires a closure (or a managed
         | global, but that's not zero-cost).
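Why a named function won't do can be shown in a few lines. This is an illustrative stand-in for the article's setup: `Db` plays the role of the database connection and `serve` the role of the callback-taking API; neither name comes from the article:

```rust
struct Db {
    url: String,
}

impl Db {
    fn query(&self, q: &str) -> String {
        format!("ran `{}` against {}", q, self.url)
    }
}

// The API only accepts something callable with a &str.
fn serve(handler: impl Fn(&str) -> String) -> String {
    handler("SELECT 1")
}

fn main() {
    let db = Db { url: "postgres://localhost".to_string() };

    // A named fn has no way to reach `db` without a global; a closure
    // captures it from the enclosing scope, which is exactly the state
    // the article's callback needs.
    let response = serve(|q| db.query(q));
    assert!(response.contains("postgres://localhost"));
}
```

The trouble described in the article starts when the callback must outlive the scope that owns `db` (an async task, for instance), at which point the capture has to become `move` plus `Arc` rather than a plain borrow.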
        
           | croes wrote:
           | Thanks for the clarification
        
       | hardwaresofton wrote:
       | This was a great article, very easy to understand without leaving
       | out the fundamental pieces (the saga that is async implementation
       | difficulty). I think I can even boil this situation down to the
       | simple thought that design & implementation of async in Rust has
       | driven it to its first "monad" moment. The question is now
       | largely whether there exists a way that is better than monads to
       | handle this problem.
       | 
       | It's in this (HN) comment thread and a bunch of informed comments
       | on the original article:
       | 
       | https://github.com/eeeeeta/blog-comments/issues/10#issuecomm...
       | 
       | https://github.com/eeeeeta/blog-comments/issues/10#issuecomm...
       | 
       | https://github.com/eeeeeta/blog-comments/issues/10#issuecomm...
       | 
       | The comments are correct in my view -- Rust can't/shouldn't do
       | what Haskell did, which was to use a general-purpose abstraction
       | that is essentially able to carry "the world" (as state) along
       | with easily chained functions on that state (whenever it gets
       | realized). Haskell may have solved the problem, but it has paid
       | a large price in language difficulty (perceived or actual)
       | because of it, not to mention the structural limitations imposed
       | by what Rust can do and its constraints. The trade-off just
       | isn't worth it for the kind of language Rust aims to be.
       | 
       | Realistically, I think this issue is big, but not enough for me
       | to write off Rust personally (as the author seems to have). I'd
       | just do the move + Arc<T> shenanigans: if you've been building
       | Java applications with IoC and/or other patterns that generally
       | require global-ish singletons (the given example was a DB being
       | used by a server), this isn't the worst complexity trade-off
       | you've had to make. The Rust compiler is a lot more ambitious,
       | and Rust has a cleaner, better, more concise type system as far
       | as I'm concerned.
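A tiny sketch of that "move + Arc<T>" pattern (illustrative `Db` type; plain threads stand in for spawned async tasks): clone the Arc and move each clone into the task so the shared handle outlives the spawning scope.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical Db handle; threads stand in for spawned async tasks.
struct Db;

impl Db {
    fn fetch(&self, id: u32) -> u32 {
        id * 2
    }
}

fn main() {
    let db = Arc::new(Db);
    let mut handles = Vec::new();
    for id in 0..4 {
        // A cheap refcount bump, not a deep copy; the clone is then
        // moved into the task so it can outlive this scope.
        let db = Arc::clone(&db);
        handles.push(thread::spawn(move || db.fetch(id)));
    }
    let results: Vec<u32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results, vec![0, 2, 4, 6]);
}
```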
       | 
       | I think another thing I've gained from this article is another
       | nice little case where Haskell (if you've taken the time to grok
       | it sufficiently, which can be a long time) offers a bit of a
       | nicer general solution than Rust, assuming you were in a world
       | where those two were actually even substitutes. In the much more
       | likely world where you might compare Rust and Go2, this might be
       | a win for Go2, but the rest of the language features would put
       | rust on top for me.
        
       | ha_jo_331 wrote:
       | Of course a fiber/green threads based solution would be easier to
       | use, but Rust is supposed to provide zero cost abstractions for
       | concurrency: Maximum performance and efficiency (and safety)
       | coming first, usability second. That is perfectly fine.
       | 
       | Whereas others (Go for example) sacrifice some efficiency and
       | performance to ergonomics.
        
         | jedisct1 wrote:
         | Is it still zero cost if the price to pay is a steep increase
         | of code complexity?
         | 
         | The current model also still requires a lot of boxing and
         | cloning, so it is far from being "zero cost" as advertised.
        
           | steveklabnik wrote:
           | > Is it still zero cost if the price to pay is a steep
           | increase of code complexity?
           | 
           | Yes, because the "cost" in "zero-cost" only refers to runtime
           | cost.
        
       | devpbrilius wrote:
       | A developer with a paid job on a given site is necessarily a
       | consumer of the language, not exactly a producer of it.
        
       | jedisct1 wrote:
       | If Rust had to be rewritten from scratch today, it would probably
       | be done differently, learning from experience since it was
       | invented.
       | 
       | Cargo, macros, const generics and `#[cfg]` wouldn't exist any
       | more, replaced with comptime evaluation.
       | 
       | The standard library would be half its current size (considering
       | its redundant functions, deprecated functions, or constructions
       | that Clippy wants to be replaced with other constructions).
       | 
       | IDE integration would be considered immediately.
       | 
       | Per-function allocators wouldn't be an afterthought (the current
       | proposal is really bad, but it's too late to do something
       | better).
       | 
       | And async would be part of the language and the standard library,
       | "colorless" and completion-based.
       | 
       | The problem is that it's too late. The Rust language itself is
       | mature, and cannot afford major breaking changes any more. So,
       | improvements can only be made on top of what's already there,
       | limiting their integration, and making the whole thing more and
       | more bloated and complicated.
        
         | littlestymaar wrote:
         | I see you're a fan of Zig's design, but I don't really think
         | Rust would be as you describe even if it was made today, for
         | several reasons:
         | 
         | - comptime evaluation isn't something Rust's designers are
         | interested in, as it doesn't fit their functional programming
         | mindset[1] (and it's not a concept new to Zig; D has had
         | something similar for a long time).
         | 
         | - An IDE-first compiler is a good idea (and a must have for a
         | production language today) but I'm not sure how feasible it is
         | to start with something like that. If it's going to slow down
         | early development, then it's definitely not a good move to go
         | there first, but I don't know.
         | 
         | - per-function allocation is a pain in the ass from a usability
         | perspective. In fact it recreates the same "function color"
         | problem you'd like to avoid: think of your non-allocating
         | functions as blue and your allocating ones as red, and you
         | can't call a red function from a blue one.
         | 
         | - Zig's "colorblind" asynchronous functions rely on
         | automagically switching between async and sync functions at
         | compile time, and I really don't see that fitting with Rust's
         | drive for explicitness.
         | 
         | [1]: https://twitter.com/pcwalton/status/1369114008045772804
        
           | roca wrote:
           | rust-analyzer is remarkably good.
           | 
           | I am curious as to how Zig comptime is going to interact with
           | IDE features.
        
         | navaati wrote:
         | > [async] "colorless"
         | 
         | I'm super interested that you think that's something that can
         | exist. How would it work? What would it look like in your
         | mind?
        
           | robinei wrote:
           | Zig does it
           | (https://kristoff.it/blog/zig-colorblind-async-await/).
           | 
           | Basically using "await" just means you are declaring
           | opportunity for concurrency at the call site, but it is still
           | valid to await a synchronous function. And it is valid to
           | call a function synchronously, even if it internally supports
           | concurrency via await.
        
             | kristoff_it wrote:
             | Yes, that's a very clear and concise way of putting it,
             | thanks.
        
           | littlestymaar wrote:
           | I think the GP refers to Zig's approach[1], which is "don't
           | mark functions as sync or async in their signatures; just
           | switch between the two modes at compile time using a global
           | flag", which I don't think fits well with Rust's value of
           | being explicit about what is happening.
           | 
           | [1]: https://kristoff.it/blog/zig-colorblind-async-await/
        
       | lmm wrote:
       | Async isn't really the problem - the same issue pops up with
       | error handling, with resource management, with anything where you
       | want to pass functions around. The real problem is that Rust's
       | ownership semantics and limited abstractions mean it doesn't
       | really have first-class functions: there are three different
       | function types and the language lacks the power to abstract over
       | them, so you _can't_ generally take an expression and turn it
       | into a function. NLL was actually a major step backwards that has
       | made this worse in the medium term: it papers over a bunch of
       | ownership problems _if_ the code is written inline, but as soon
       | as you try to turn that inlined code into a closure all those
       | ownership problems come back.
       | 
       | IMO the only viable way out is through: Rust needs to get the
       | ability to properly abstract over lifetimes and functions that it
       | currently lacks (HKT is a necessary part of doing this cleanly).
       | A good litmus test would be reimplementing all the language
       | control flow keywords as plain old functions; done right this
       | would obviate NLL.
       | 
       | But yeah that's going to take a while, and in the meantime for
       | 99% of things you should just write OCaml. Everyone thinks they
       | need a non-GC language because "muh performance" but those
       | justifications rarely if ever hold up.
        
         | zelly wrote:
         | Rust programs can theoretically be fast, but most of the ones
         | I've used are slow. I tried two high profile implementations of
         | the same type of software, one in Rust and one in Java. The
         | Java one was faster and used less memory.
         | 
         | Rust programmers tend to do all kinds of little hacks here and
         | there to make the borrow checker happy. It can add up. The
         | borrow checker is perfectly happy when you copy everything.
         | 
         | Rust is becoming one giant antipattern.
         | 
         | (I'm sure highly experienced Rust programmers can get it to
         | work, but there are probably fewer than 1000 people on this
         | planet who can write good Rust that outperforms C++, so does
         | it really count?)
        
           | doteka wrote:
           | Interesting, could you elaborate on this software and what it
           | does? At work, we also offer two backends for the same kind
           | of functionality. One is the industry standard Java
           | implementation, one is a homegrown Rust implementation. We
           | find that for this usecase (essentially text processing,
           | string manipulation and statistical algorithms), the rust
           | version is much faster while using less memory. However, it
           | is not as feature-rich.
        
             | moonchild wrote:
              | Not the OP, but my guess is that Rust will do well for the
              | sort of code that would be fast in C anyway: linear
              | operations on large, contiguous arrays. For pointer-chasing
              | code, or code with lots of small objects, the little tricks
              | you use in C aren't available (or at least not as
              | accessible), and the lack of GC and flexibility with
              | respect to references harms you.
        
               | rcxdude wrote:
               | Code with lots of small short-lived objects is often by
               | default faster in something like rust, because stack
               | allocation is very efficient (even compared to the bump
               | allocation in a generational GC, which is much more
               | efficient than heap allocation but worse than stack). If
               | you have a lot of objects which can exist on the stack in
               | your program, java will tend to do poorly in comparison.
               | 
               | (and in general java will trade off memory for speed:
               | modern GCs can be very efficient in execution time but
               | will use 2x-3x as much memory in return. If you want low
               | memory usage your code is going to run slower).
        
               | mratsim wrote:
               | Does the borrow checker interfere with memory and object
               | pools implementation?
               | 
               | Because if you require heap allocation for many short-
               | lived objects, I expect this would be one of Java
               | strengths unless you use an object pool.
        
               | steveklabnik wrote:
               | "interfere" is a funny word, but you could do this in
               | Rust. It's not super common, though the related technique
               | of "arenas" can be, depending on domain.
        
             | [deleted]
        
           | jokethrowaway wrote:
           | > Rust programmers tend to do all kinds of little hacks here
           | and there to make the borrow checker happy. It can add up.
           | The borrow checker is perfectly happy when you copy
           | everything.
           | 
           | Do you mind sharing an example? If you're talking about using
           | to_owned or clone without reason, then it's fully on the
            | developer. Some more pitfalls to avoid:
           | https://llogiq.github.io/2017/06/01/perf-pitfalls.html
           | 
           | What you say definitely doesn't match my experience. I would
           | say Java code and Rust code are roughly in the same order of
           | processing speed but the JVM "wastes" some memory. You also
           | have garbage collection complicating performance in some
           | scenarios.
           | 
           | I'm pretty sure you can get in the same processing speed
           | ballpark with careful programming in both languages.
           | 
           | Outperforming C++ is definitely harder but Java should be
           | doable.
        
             | pjmlp wrote:
             | Until Java finally supports value types, like .NET, D, Nim,
             | Eiffel and plenty of other GC enabled languages.
        
           | imtringued wrote:
            | I can't reduce JVM memory usage below 100MB. It simply is
            | impossible. Meanwhile, with Rust I can easily write
            | applications that are 20 times more memory efficient.
        
             | thu2111 wrote:
             | Try Graal native image. It precompiles all code ahead of
             | time and drops all metadata that you don't explicitly say
              | you need. The results start as fast as a C program would
              | and use 5-10x less memory. The trade-off is lower runtime
              | performance: HotSpot is using that memory to make your app
              | run faster at peak.
        
           | the_mitsuhiko wrote:
           | I will agree that Rust programs can be surprisingly slow.
            | That said, in most cases where I have experienced this, it
            | came down to easy-to-detect situations that are usually
            | resolved by enclosing some large things in smart pointers.
           | 
           | I consider this a plus because that's a pretty simple change.
        
         | joubert wrote:
         | > anything where you want to pass functions around.
         | 
         | You should read the article! The author goes into this in
         | fascinating detail.
        
           | lmm wrote:
           | I did read the article; it's overly focused on async
           | (especially the headline). The body quite correctly analyses
           | the problem with functions, but misses that this is much more
           | general than async; just adopting a different async model
           | isn't a solution.
        
             | joubert wrote:
             | Well, the article's focus is on asynchrony. Naturally then,
             | it talks about closures within the context of asynchrony
             | and to the extent that it is relevant.
             | 
             | The author could have made the topic about closures, but
             | that's not what they wanted to talk about.
        
               | lmm wrote:
               | The author's focus on async leads them to the wrong
               | conclusion. Not adopting async would not have solved the
               | fundamental underlying problem; the perennial issues with
               | error handling in Rust are another manifestation of the
               | same problem, and as soon as people start trying to do
               | things like database transaction management they'll hit
               | the same problem again. Ripping async out of Rust isn't a
               | solution; this problem will come up again and again
               | unless and until Rust implements proper first-class
               | functions.
        
         | janjones wrote:
         | > Async isn't really the problem - the same issue pops up with
         | error handling, with resource management
         | 
          | There are ways to handle that, though. It's called _algebraic
          | effects_, and they are nicely implemented in the Unison
          | language[1] (although there they're called abilities[2]). It's
          | a very interesting language; I recommend reading [3] for a
          | good overview.
         | 
         | [1] https://www.unisonweb.org/
         | 
         | [2] https://www.unisonweb.org/docs/abilities
         | 
         | [3] https://jaredforsyth.com/posts/whats-cool-about-unison/
        
         | [deleted]
        
         | hctaw wrote:
          | Nit: you can already abstract over lifetimes. But I really
          | agree with the HKT comment, in principle. In practice it's not
          | so bad. Abstracting over async, mutability, and borrows would
          | matter more for smoothing over a few of these issues (there
          | are ways around all of them, but they're not always obvious).
         | 
         | Most of these issues arise for library authors. I think a lot
         | of folks in the ecosystem today are writing libraries instead
         | of applications, which is why we get these blog posts lamenting
         | the lack of HKTs or difficult syntax for doing difficult things
         | that other languages hide from you. I don't think users have
         | the same experience when building things with the libraries.
         | 
         | That said, there's a bit of leakage of complexity. Sometimes
         | it's really hard to write a convenient API (like abstracting
         | over closures) and that leads to gnarly errors for users of
         | those APIs. Closures are particularly bad about this (which
         | some, including myself, would say is good and also easy to work
         | around), and there's always room for improvement. I don't think
         | a lack of HKTs is really holding anything essential back right
         | now.
        
           | lmm wrote:
           | > In practice it's not so bad. Abstracting over async,
           | mutability, and borrows would be more important to smooth
           | over a few of these issues (there are ways around both, but
           | it's not always obvious).
           | 
           | My point is that absent some horrible specific hacks, HKT is
           | a necessary part of doing that. A type that abstracts over
           | Fn, FnMut, or FnOnce _is_ a higher-kinded type.
        
         | kibwen wrote:
          | _> mean it doesn't really have first-class functions: there
          | are three different function types_
         | 
         | Rust absolutely does have first-class functions, though. Their
         | type is `fn(T)->U`. The "three different function types" that
         | you refer to are traits for closures. And note that closures
         | with no state (lambdas) coerce into function types.
          | 
          |     higher_order(is_zero);
          |     higher_order(|n| n == 0);
          | 
          |     fn higher_order(f: fn(i32) -> bool) {
          |         f(42);
          |     }
          | 
          |     fn is_zero(n: i32) -> bool {
          |         n == 0
          |     }
         | 
         | It's true that when you have a closure with state then Rust
         | forces you to reason about the ownership of that state, but
         | that's par for the course in Rust.
        
           | lmm wrote:
           | If function literals don't have access to the usual idioms of
           | the language - which, in the case of Rust, means state - then
           | functions are not first-class.
           | 
           | > It's true that when you have a closure with state then Rust
           | forces you to reason about the ownership of that state, but
           | that's par for the course in Rust.
           | 
           | The problem isn't that you have to reason about the state
           | ownership, it's that you can't abstract over it properly.
        
             | adamch wrote:
             | You can't abstract over it in the same way that you can in
             | Haskell, because you have to manage ownership. You can't
             | abstract away ownership as easily, because the language is
             | designed to make you care about ownership.
             | 
             | I write Rust code for my day job, and I frequently use
             | map/reduce/filter. IMO if I can write all my collection-
             | processing code using primitives like that, it's got first-
             | class functions.
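For example, closures flow through the iterator adapters without any ownership ceremony; a trivial sketch:

```rust
// Closures passed to iterator adapters: the everyday "first-class
// function" experience in Rust, with no ownership gymnastics.
fn main() {
    let xs = vec![1, 2, 3, 4, 5];
    let sum_of_even_squares: i32 = xs
        .iter()
        .map(|n| n * n)          // Fn closure, captures nothing
        .filter(|n| n % 2 == 0)  // another stateless closure
        .sum();
    assert_eq!(sum_of_even_squares, 20); // 4 + 16
}
```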
        
               | lmm wrote:
               | It's not about "abstracting away" ownership, it's about
               | being polymorphic over it. I want to keep the distinction
               | between Fn, FnOnce, and FnMut. But I want to be able to
               | write a `compose` function that works on all three,
               | returning the correct type in each case. That's not
               | taking away from my ability to manage ownership, it's
               | letting me abstract over it.
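A concrete sketch of that limitation (function names here are hypothetical): you can write a `compose` for each closure trait, but without HKT there is no single definition that is polymorphic over all three and returns "the same kind" as its inputs.

```rust
// A compose for Fn closures: both inputs are callable repeatedly,
// so the composition is too.
fn compose<A, B, C>(
    f: impl Fn(A) -> B,
    g: impl Fn(B) -> C,
) -> impl Fn(A) -> C {
    move |a| g(f(a))
}

// The FnOnce version needs its own, nearly identical definition;
// nothing in the language lets one signature cover both.
fn compose_once<A, B, C>(
    f: impl FnOnce(A) -> B,
    g: impl FnOnce(B) -> C,
) -> impl FnOnce(A) -> C {
    move |a| g(f(a))
}

fn main() {
    let add1_then_double = compose(|x: i32| x + 1, |x| x * 2);
    assert_eq!(add1_then_double(3), 8);

    let s = String::from("hi");
    let consume = compose_once(move |n: usize| s.len() + n, |n| n * 10);
    assert_eq!(consume(1), 30);
}
```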
        
               | orf wrote:
               | You could do the polymorphic return with an enumeration.
               | But those are distinct types with very different
               | semantics: you can't just say "it returns a function that
               | you can call once or maybe a function you can call more
               | than once. Shrug".
        
               | Dr_Emann wrote:
                | I believe they're asking for "this function returns a
                | function of the same kind (Fn/FnMut/FnOnce) as its first
                | argument, but with different arguments". A generic way to
                | write something like `fn bind<T, F: Fn(T, ...)>(f: F,
                | arg: T) -> impl Fn(...)`.
        
       | bsder wrote:
       | The big issue is that non-GC concurrency is still effectively a
       | CS research topic.
       | 
       | In a GC language, you can "just" let the GC system clean up after
       | you're done--who owns allocation and cleanup isn't an issue.
       | 
       | In a non-GC language, who allocates and cleans up what and who
       | _pays_ for that suddenly becomes a fundamental part of the
       | domain.
       | 
       | For example, I have seen a zillion "lock-free" data structure
       | libraries in Rust, and I don't think I've seen even _one_ that
       | actually holds to even the basic amortization guarantees like
       | java.util.concurrent.
        
         | aidenn0 wrote:
         | GC languages aren't significantly better, because the GC
         | usually only directly manages the memory[1], and there are
         | many, many other resources that need to be managed correctly.
         | 
         | 1: Many GC languages let you run arbitrary code when the
         | associated memory is freed, but if you have to wait for a full
         | GC pass to happen before e.g. a lock is released or a file
         | descriptor is flushed, that will cause many issues as well.
        
         | hashmash wrote:
         | I'd like to understand what you mean by this. I'm no expert on
         | Rust lock-free designs, but what are the "amortization
         | guarantees" that you refer to? Is there something that I can
         | read that explains this in greater detail?
        
           | bsder wrote:
           | Basically any background about java.util.concurrent (the
           | development mailing list is archived, IIRC). Also, anything
           | by Cliff Click and Azul. Okasaki talks about it in the
           | context of functional data structures rather than concurrent:
           | https://en.wikipedia.org/wiki/Purely_functional_data_structu.
           | ..
           | 
           | But, let's talk about something like CopyOnWriteArrayList.
           | 
            | If I add() to a CopyOnWriteArrayList, a copy of the
            | underlying data comes into existence with the new element.
            | That should certainly get charged to the thread doing the
            | add(). Fine.
           | Some Iterators are probably still pointing at the old
           | underlying data. Not a big deal.
           | 
           | So, the final Iterator() with references to old data finally
           | finishes and gets deallocated/dropped.
           | 
           | Now, all the underlying data that got copied is no longer
           | relevant and needs to be deallocated/dropped.
           | 
           | So, who does that?
           | 
           | Does that happen on a thread that next calls get()? If so,
           | get() is now O(n) worst case. That certainly doesn't make
           | people happy.
           | 
           | Does that happen on a thread that next calls add()? Well, we
           | can have a _lot_ of iterators that were all working and now
           | ended. So, that add() might be deallocating a _LOT_ of copied
           | stuff. This might be where it has to be done, but the fact
           | that your O() performance is dependent upon the size of the
           | data _and_ the number of copies that got made makes you
            | O(n^2)-ish (O(n*k), to be more precise). That's not good.
           | 
           | Does the deallocate happen on thread where the last
           | Iterator() got dropped? Having to deallocate the universe
           | because you just happened to be the last one using the
           | underlying data store doesn't seem like a good idea.
           | 
           | Perhaps you do a little bit of deallocation work on every
           | get()? Probably good for throughput, but it sure makes the
           | data structure a lot more complicated.
           | 
           | Perhaps you have an explicit deallocation call. That kind of
           | defeats the whole point of using a language like Rust.
           | 
           | In a GC language, this all gets charged to the GC thread and
           | swept under the carpet. That's what I'm thinking about when I
           | say "amortized".
           | 
           | There's also a slightly different amortized. In C++, for
           | example, when you insert() into an unordered_map(), the
           | "normal" time is O(1), but every now and then you get an O(n)
           | while the structure reallocates. This results in an
           | "average/amortized" cost of O(1) if the reallocation is done
           | intelligently (generally doubling in size every overflow).
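The same cost-accounting question can be sketched in Rust terms with an Arc-based copy-on-write snapshot (this is not a real concurrent container, just an illustration of who pays for the deallocation):

```rust
use std::sync::Arc;

// Readers hold Arc snapshots of a copy-on-write vector; a writer
// replaces the Arc with a fresh copy. Whoever drops the *last* Arc
// to an old snapshot pays for freeing it -- often a reader, not the
// writer that made the copy.
fn main() {
    let mut current: Arc<Vec<u64>> = Arc::new(vec![1, 2, 3]);

    // A reader takes a snapshot: just a refcount bump.
    let snapshot = Arc::clone(&current);

    // A writer "adds" by copying; the old buffer stays alive because
    // the reader still holds it.
    let mut copy = (*current).clone();
    copy.push(4);
    current = Arc::new(copy);

    assert_eq!(snapshot.as_slice(), &[1, 2, 3]);
    assert_eq!(current.as_slice(), &[1, 2, 3, 4]);

    // Dropping the last handle to the old data frees it *here*, on the
    // reader's side: the deallocation cost lands on whoever finishes last.
    drop(snapshot);
}
```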
        
       | mutatio wrote:
       | I can sympathize with the tone. I recently attempted to move to
       | the latest `tokio-postgres`, as the old synchronous version is
       | no longer supported (the new synchronous version uses Tokio
       | under the hood, which plays havoc if you use it with actix-web
       | [two different executors]). The problem is that I'm using it
       | with CockroachDB, which effectively requires transaction retries
       | (for savepoints). I've yet to produce an ergonomic tx::retry
       | wrapper that doesn't have lifetime issues (for primitives
       | supporting Copy it's all good).
       | 
       | With the asyncification of the Rust ecosystem it does make life
       | difficult. Maybe the Rust async implementation is fundamentally
       | sound, but the user experience for non-trivial cases isn't quite
       | there.
        
       | atweiden wrote:
       | I never see popol [1] mentioned in these discussions:
       | 
       |     Popol is designed as a minimal ergonomic wrapper around
       |     poll, built for use cases such as peer-to-peer networking,
       |     where you typically have no more than a few hundred
       |     concurrent connections. It's meant to be familiar enough
       |     for those with experience using mio, but a little easier
       |     to use, and a lot smaller.
       | 
       | It shows a lot of promise.
       | 
       | [1]: https://cloudhead.io/popol/
        
       | gsserge wrote:
       | I like to joke that the best way to encounter the ugliest parts
       | of Rust is to implement an HTTP router. Hours and days of boxing
       | and pinning, Futures transformations, no async fn in traits,
       | closures not being real first class citizens, T: Send + Sync +
       | 'static, etc.
       | 
       | I call this The Dispatch Tax. Because any time you want more
       | flexibility than the preferred static dispatch via generics can
       | give you - oh, so you just want to store all these async
       | fn(HttpRequest) -> Result<HttpResponse>, right? - you immediately
       | start feeling the inconvenience and bulkiness of dynamic
       | dispatch. It's like Rust is punishing you for using it. And
       | async/await takes this to a new level altogether, because you
       | are immediately forced to understand how async funcs are
       | transformed into ones that return Futures, how Futures are
       | transformed into anon state machine structs; how closures are
       | also transformed to anon structs. It's like there's no type
       | system anymore, only structs.
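A minimal sketch of that double boxing in a hypothetical router map (the names `register`, `call`, and the no-op Waker are all illustration; the Waker is only there to poll the immediately-ready future without pulling in an executor crate):

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Two layers of type erasure: the returned future goes behind
// Pin<Box<dyn Future>>, and the handler itself behind Box<dyn Fn>.
type BoxFuture = Pin<Box<dyn Future<Output = Result<String, String>> + Send>>;
type Handler = Box<dyn Fn(String) -> BoxFuture + Send + Sync>;

fn register(routes: &mut HashMap<&'static str, Handler>) {
    routes.insert(
        "/hello",
        // The return-type annotation is needed so the concrete
        // Pin<Box<...>> coerces to the erased BoxFuture.
        Box::new(|req: String| -> BoxFuture {
            Box::pin(async move { Ok(format!("hello, {}", req)) })
        }),
    );
}

// A do-nothing Waker, just enough to poll an already-ready future.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn call(
    routes: &HashMap<&'static str, Handler>,
    path: &str,
    req: &str,
) -> Result<String, String> {
    let mut fut = (routes[path])(req.to_string());
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => Err("would need a real executor".to_string()),
    }
}

fn main() {
    let mut routes = HashMap::new();
    register(&mut routes);
    assert_eq!(call(&routes, "/hello", "world"), Ok("hello, world".to_string()));
}
```

In a real server you would `.await` the handler under tokio or another executor; the point here is only how much machinery the stored type needs.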
       | 
       | That's one of the reasons, I think, why Go has won the Control
       | Plane. Sure, projects like K8s, Docker, the whole HashiCorp suite
       | are old news. But it's interesting and telling that even solid
       | Rust shops like PingCAP are using Go for their control plane. It
       | seems to me that there's some fundamental connection between
       | flexibility of _convenient_ dynamic dispatch and control plane
       | tasks. And of course having the default runtime and building
       | blocks like net and http in the standard library is a huge win.
       | 
       | That said, after almost three months of daily Rust it does get
       | better. To the point when you can actually feel that some
       | intuition and genuine understanding is there, and you can finally
       | work on your problems instead of fighting with the language. I
       | just wish that the initial learning curve wasn't so high.
        
         | ragnese wrote:
         | Definitely dynamic dispatch + async brings out a lot of pain
         | points.
         | 
         | But I only agree that the async part of that is unfortunate.
         | Making dynamic dispatch have a little extra friction is a
         | feature, not a bug, so to speak. Rust's raison d'etre is "zero
         | cost abstraction" and to be a systems language that should be
         | viable in the same spaces as C++. Heap allocating needs to be
         | explicit, just like in C and C++.
         | 
         | But, I agree that async is really unergonomic once you go
         | beyond the most trivial examples (some of which the article
         | doesn't even cover).
         | 
         | Some of it is the choices made around the async/await design
         | (The Futures, themselves, and the "async model" is fine, IMO).
         | 
         | But the async syntax falls REALLY flat when you want an async
         | trait method (because of a combination-and-overlap of no-HKTs,
         | no-GATs, no `impl Trait` syntax for trait methods) or an async
         | destructor (which isn't a huge deal- I think you can just use
         | future::executor::block_on() and/or use something like the
         | defer-drop crate for expensive drops).
         | 
         | Then it's compounded by the fact that Rust has these "implicit"
         | traits that are _usually_ implemented automatically, like Send,
         | Sync, Unpin. It's great until you write a bunch of code that
         | compiles just fine in the module, but you go to plug it in to
         | some other code and realize that you actually needed it to be
         | Send and it's not. Crap- gotta go back and massage it until
         | it's Send or Unpin or whatever.
         | 
         | Some of these things will improve (GATs are coming), but I
         | think that Rust kind of did itself a disservice with
         | stabilizing the async/await stuff, because now they'll never be
         | able to break it and the Pin/Unpin FUD makes me nervous. I also
         | think that Rust should have embraced HKTs/Monads even though
         | it's a big can of worms and invites Rust devs to turn into
         | Scala/Haskell weenies (said with love because I'm one of them).
        
           | gsserge wrote:
           | Oh yeah, I can totally relate to the Send-Sync-Unpin
           | massaging, plus 'static bound for me. It's so weird that
           | individually each of them kinda makes sense, but often you
           | need to combine them and all of a sudden the understanding of
           | combinations just does not.. combine. After a minute or two
           | of trying to figure out what should actually go into that
           | bound I give up, remove all of them and start adding them
           | back one by one until the compiler is happy.
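One way to make that trial-and-error cheaper is to assert the bounds eagerly with generic identity functions that only compile when the bound holds, so the error appears at the definition site rather than deep inside some executor's bounds. This is a sketch, not from the thread; `assert_send` and `assert_static` are made-up helper names:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Identity functions that only compile when the bound holds.
fn assert_send<T: Send>(v: T) -> T { v }
fn assert_static<T: 'static>(v: T) -> T { v }

fn main() {
    // Arc<i32> is Send and 'static, so both checks compile.
    let shared = assert_static(assert_send(Arc::new(5)));
    assert_eq!(*shared, 5);

    // Rc is not Send; uncommenting the next line is a compile error,
    // caught here instead of at a distant spawn site.
    // let local = assert_send(Rc::new(5));
    let local = Rc::new(5);
    assert_eq!(*local, 5);
}
```

The same trick works for any auto trait (Sync, Unpin) you suspect you will need later.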
        
             | ragnese wrote:
             | Yep. Same. I've been doing Rust for years at this point
             | (not full time, and with long gaps- granted), and it's
             | exactly like you said: individually these things are
             | simple, but then you're trying to figure out where you
             | accidentally let a reference cross an await boundary that
             | killed your automatic Unpin that you didn't realize you
             | needed. Suddenly it feels like you _don't_ understand it
             | like you thought you did.
             | 
             | The static lifetime bound is annoying, too! I guess it
             | crops up if you take a future and compose it with another
             | one? Both future implementations have to be static types to
             | guarantee they live long enough once passed into the new
             | future.
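The same intuition can be seen with plain OS threads, where the identical 'static bound appears for the same reason: the spawned work may outlive the caller's stack frame, so it must own its data rather than borrow it. A sketch (`sum_in_thread` is a made-up example, but the bound on `thread::spawn` is real):

```rust
use std::thread;

// The spawned closure can outlive this function's stack frame, so
// thread::spawn requires its closure (and return value) to be 'static.
fn sum_in_thread(data: Vec<i32>) -> i32 {
    // `move` transfers ownership into the closure, satisfying 'static.
    // A version capturing `&data` would be rejected by the compiler.
    let handle = thread::spawn(move || data.iter().sum());
    handle.join().unwrap()
}

fn main() {
    println!("{}", sum_in_thread(vec![1, 2, 3]));
}
```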
        
         | ironmagma wrote:
         | In a systems context, where performance and memory ostensibly
         | matter, why wouldn't you want to be made aware of those
         | inefficiencies?
         | 
         | Sure, Go hides all that, but as a result it's also possible to
         | have memory leaks and spend extra time/memory on dynamic
         | dispatch without being (fully) aware of it.
        
           | gsserge wrote:
           | I think Rust is also able to hide certain things. Without
           | async, things are fine:
           | 
           |     type Handler = fn(Request<Body>) -> Result<Response<Body>, Error>;
           | 
           |     let mut map: HashMap<&str, Handler> = HashMap::new();
           |     map.insert("/", |req| { Ok(Response::new("hello".into())) });
           |     map.insert("/about", |req| { Ok(Response::new("about".into())) });
           | 
           | Sure, using the function pointer type `fn` instead of one of
           | the Fn traits is a bit of cheating, but realistically you
           | wouldn't want a handler to be a capturing closure anyway.
           | 
           | But of course you want to use async and hyper and tokio and
           | your favorite async db connection pool. And the moment you
           | add `async` to the Handler type definition - well, welcome to
           | what the author was describing in the original blog post.
           | You'll end up with something like this:
           | 
           |     type Handler = Box<dyn Fn(Request) -> BoxFuture + Send + Sync>;
           |     type BoxFuture = Pin<Box<dyn Future<Output = Result> + Send>>;
           | 
           | plus type params with trait bounds infecting every method you
           | want to pass your handler to (think get, post, put, patch,
           | etc.):
           | 
           |     pub fn add<H, F>(&mut self, path: &str, handler: H)
           |     where
           |         H: Fn(Request) -> F + Send + Sync + 'static,
           |         F: Future<Output = Result> + Send + 'static,
           | 
           | And for what reason? I mean, look at the definitions:
           | 
           |     fn(Request<Body>) -> Result<Response<Body>, Error>;
           |     async fn(Request<Body>) -> Result<Response<Body>, Error>;
           | 
           | It would be reasonable to suggest that if the first one is
           | flexible enough to be stored in a container without any fuss,
           | then the second one should be as well. As a user of the
           | language, especially in the beginning, I do not want to know
           | of and be penalized by all the crazy transformations that the
           | compiler is doing behind the scenes.
           | 
           | And for the record, you can have memory leaks in Rust too.
           | But that's beside the point.
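The boxed variant described above does compile; here is a self-contained sketch with plain `String` stand-ins for hyper's `Request`/`Response` types (so `build_routes` and the type aliases are illustrative, not hyper's API):

```rust
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;

// Simplified stand-ins for hyper's Request<Body>/Response<Body>.
type Request = String;
type Response = String;
type BoxFuture = Pin<Box<dyn Future<Output = Response> + Send>>;
type Handler = Box<dyn Fn(Request) -> BoxFuture + Send + Sync>;

fn build_routes() -> HashMap<&'static str, Handler> {
    let mut map: HashMap<&'static str, Handler> = HashMap::new();
    // Each closure must box its future, because every async block has
    // a distinct anonymous type; the annotation drives the coercion.
    map.insert("/", Box::new(|_req| -> BoxFuture {
        Box::pin(async { "hello".to_string() })
    }));
    map.insert("/about", Box::new(|_req| -> BoxFuture {
        Box::pin(async { "about".to_string() })
    }));
    map
}

fn main() {
    let routes = build_routes();
    println!("{} routes registered", routes.len());
    // Calling a handler yields a boxed future; an executor would .await it.
    let _fut: BoxFuture = (routes["/"])("GET /".to_string());
}
```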
        
             | nemothekid wrote:
             | > _It would be reasonable to suggest that if the first one
             | is flexible enough to be stored in a container without any
             | fuss, then the second one should as well_
             | 
             | I don't think this is reasonable in Rust (or in C/C++).
             | I'd say 90% of the pain of futures in Rust is that most
             | users don't want to care about memory allocation and want
             | Rust to work like JS/Scala/C#.
             | 
             | When using a container of functions, you only have to
             | think about allocating memory for the function pointer,
             | which is almost always statically allocated. However, for
             | an async function there's not only the function but the
             | future as well. As a user, the language now poses a
             | problem to you: where does the memory for the future live?
             | 
             | 1. You could statically allocate the future (e.g. type
             | Handler = fn(Request<Body>) -> ResponseFuture, where
             | ResponseFuture is a struct that implements Future).
             | 
             | But this isn't very flexible and you'd have to hand roll
             | your own Future type. It's not as ergonomic as async fn,
             | but I've done it before in environments where I needed to
             | avoid allocating memory.
             | 
             | 2. You decide to box everything (what you posted).
             | 
             | If Rust were to hide everything from you, then the language
             | could only offer you (2), but then the C++ users would
             | complain that the futures framework isn't "zero-cost".
             | However most people don't care about "zero-cost", and come
             | from languages where the solution is the runtime just boxes
             | everything for you.
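Option (1) can be sketched with the standard library alone: a named future type returned from a plain `fn` pointer, polled here with a hand-rolled no-op waker instead of a real runtime. All names (`ResponseFuture`, `hello`, `noop_waker`) are illustrative:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-rolled, statically known future type: no Box, no async fn.
struct ResponseFuture {
    value: Option<String>,
}

impl Future for ResponseFuture {
    type Output = String;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<String> {
        // Ready immediately; a real implementation would return Pending
        // until its I/O completes and the stored waker is invoked.
        Poll::Ready(self.value.take().expect("polled after completion"))
    }
}

// A plain fn pointer returning the named future type, so a
// HashMap<&str, Handler> works without any trait objects.
type Handler = fn(&str) -> ResponseFuture;

fn hello(_req: &str) -> ResponseFuture {
    ResponseFuture { value: Some("hello".to_string()) }
}

// A no-op waker: just enough to poll a ready future without a runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let handler: Handler = hello;
    let mut fut = handler("/");
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(s) => println!("{}", s),
        Poll::Pending => unreachable!(),
    }
}
```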
        
               | gsserge wrote:
               | Thanks for the suggestion. I didn't think of (1),
               | although it's a pity that it's not as ergonomic as async
               | fn.
               | 
               | I kinda feel like there's a false dichotomy here: either
               | hide everything and be like Java/Go, or be as explicit
               | as possible about the costs like C/C++. Is there maybe a
               | third option, where I as a developer am aware of the
               | allocation and dispatch costs, but the compiler does all
               | the boilerplate for me? Something like `async dyn
               | fn(Request) -> Result<Response>`? :)
        
           | amelius wrote:
           | > In a systems context, where performance and memory
           | ostensibly matter, why wouldn't you want to be made aware of
           | those inefficiencies?
           | 
           | Perhaps, but a bigger problem is that lots of folks are using
           | Rust in a non-systems context (see HN frontpage on any random
           | day).
        
           | blazzy wrote:
           | In this example rust doesn't just make me aware of the
           | tradeoffs. It almost feels like the language is actively
           | standing in the way of making the trade offs I want to make.
           | At least as the language is today. I think a bunch of
           | upcoming features like unsized rvalues and async fns in
           | traits will help.
        
             | [deleted]
        
       | varajelle wrote:
       | Can we please stop applying this "color" argument to Rust? The
       | original colouring article was about JavaScript, a dynamically
       | typed language. But Rust is a statically typed language, and in a
       | sense, the type is the colour. You can't return an error from a
       | function that does not return a Result<>. And most people agree
       | that is a great thing over dynamic/implicit exceptions. Declaring
       | a function async likewise just changes the return type to a
       | future.
       | 
       | Also, it's been many years since Pin was stabilized, and you can
       | write `impl Fn(Xx)` instead of using a where clause, and return
       | closures using that syntax.
        
         | amelius wrote:
         | The only thing a type-system brings to the table in this matter
         | is an error at compile time if you used the wrong color.
         | 
         | It won't solve the color problem.
        
         | pornel wrote:
         | Also Rust async runtimes have `block_on(async)` and
         | `spawn_blocking(sync).await`, so the main problem of call
         | graphs split by "color" doesn't exist in Rust. Rust can merge
         | the two worlds.
         | 
         | Unlike JS, Rust _can_ wait for async code in blocking contexts,
         | and it _can_ run CPU-bound code in async contexts.
        
         | gpderetta wrote:
         | The color problem matters for static languages as much as for
         | dynamic languages.
         | 
         | The essence of the problem is whether you can turn internal
         | iterators into external ones without rewriting them. If you
         | can't then your language has function colors.
        
       | pjmlp wrote:
       | As I already mentioned multiple times, the future is tracing GCs
       | with an ownership concept for niche use cases.
       | 
       | Rust's ideal use case are the scenarios that use stuff like
       | MISRA-C, kernel and drivers code.
       | 
       | The design team has done wonderful work pushing affine types
       | into the mainstream; however, outside of forbidden-GC scenarios,
       | the productivity loss versus any kind of automatic memory
       | management approach isn't worth it.
        
         | eternalban wrote:
         | The problem may be with the fuzzy conceptual notion of "systems
         | language" that is the starting point of many of these languages
         | that end up with effectively conflicting set of goals.
         | 
         | (I agree with _Animats_' comment elsewhere in this thread
         | blessing all threading models as "OK in isolation". It's just
         | that they don't play well together in a single language.)
        
           | pjmlp wrote:
           | The OSes designed at Xerox PARC, ETHZ, Microsoft Research,
           | Olivetti, and the UK Navy, some of which were used in
           | production for quite some time, were quite clear; there
           | wasn't any fuzziness about them.
        
         | pornel wrote:
         | Single ownership and deterministic destruction are quite useful
         | on their own though, outside of just memory management. I
         | wouldn't call them niche. Rust makes use of them all the time.
         | 
         | Deterministic destruction is obviously very useful for resource
         | management, and not just files and database handles, but also
         | locks and scope guards.
         | 
         | Single ownership allows working with mutable data without
         | dangers of shared mutability. Without it you either need true
         | immutability, or defensive coding by proactively copying inputs
         | to ensure nobody else can unexpectedly mutate data you have a
         | reference to.
         | 
         | Single ownership also happens to nicely enforce interfaces that
         | require functions to be called only once or that objects can't
         | be used in invalid state, e.g. after a call to .deinit() or
         | after intermediate steps in builders/fluent interfaces/iterator
         | chains.
        
           | pjmlp wrote:
           | Most languages with automatic memory management also offer
           | language features for deterministic destruction.
        
       | roca wrote:
       | This article is a bit confusing. The concrete issues described
       | all relate to closures, but the author says the real problem is
       | async and they liked Rust before that (when it had the closures
       | they have concrete issues with?).
        
         | pierrebai wrote:
         | He says the problem is the proliferation of async libraries,
         | which forces their use and pollutes everything they touch, in
         | association with the closure problem. It's the combination of
         | both.
        
           | roca wrote:
           | He doesn't give any concrete examples of that though.
        
       | gwbas1c wrote:
       | For a few months while I was between jobs, I decided to learn
       | Rust. Because async code is supposed to be better than threaded
       | code, I spent all my time trying to write async Rust.
       | 
       | Big mistake! I never accomplished what I set out to accomplish.
       | 
       | Most of my experience is in C#, where the difference between
       | async and threaded code is mostly syntax sugar. (Until you get
       | into the details.) (And async code makes writing UI code much
       | cleaner.)
       | 
       | Trying to write async Rust as a novice is downright hostile. The
       | tradeoffs versus threads mean async code is only worth it if
       | your Rust process will have highly concurrent load; as opposed
       | to C#, where refactoring threaded code to async is housekeeping.
        
         | ordu wrote:
         | You chose the really hard way to learn Rust. Most of the
         | problems for a beginner come from lifetimes and borrowing, and
         | async code is by itself a harder topic when it comes to
         | lifetimes and borrowing. When you use an object in a
         | concurrent manner, you need two or more references to it - so
         | who owns the object? While an object has its owner, it lives;
         | when it loses its owner, it is dropped. Someone must own it.
         | And the borrow checker needs a way to prove that the owner
         | will live longer than every one of your threads using this
         | object; otherwise the borrow checker starts to complain and
         | refuses to compile your code. If you are trying to get mutable
         | access to an object, how are you going to ensure that it is
         | done in a safe manner without data races? How are you going to
         | explain to the borrow checker that it is safe?
         | 
         | I'm trying to say that your problems were mostly lifetimes and
         | borrowing, like other beginners', but the problems were
         | magnified to a level where a beginner has no chance to make it
         | work.
        
         | xiphias2 wrote:
         | The problem you were trying to solve (safe system level multi-
         | threaded programming) is really really hard, and actually Rust
         | makes it easier.
         | 
         | If you enjoy it, learn it and have fun doing high performance
         | work at your new job.
         | 
         | If you're not passionate about it, that's OK; there's lots of
         | work in other domains.
        
         | sudeepj wrote:
         | I am guessing you are at point D/D' in the diagram [1].
         | Although the diagram is for Rust overall, I think something
         | similar is true for async Rust.
         | 
         | All I would say is hang in there. It gets better (from my
         | experience).
         | 
         | Rust is not something you can code along as you think. There
         | is a bit of upfront thinking about what the data structures
         | and data flow look like.
         | 
         | At present, I can tell in advance, without writing code,
         | whether the implementation I have in mind will work or not
         | (work => satisfy the Rust compiler).
         | 
         | [1] https://imgur.com/kNkV7jm
        
           | orthoxerox wrote:
           | > Rust is not something you can code along as you think.
           | 
           | This is my biggest gripe with Rust. I want to write code that
           | produces the right result first and make it safe later.
        
             | carols10cents wrote:
             | If you're interested in continuing with Rust, I'd suggest
             | that you think about making code that produces the right
             | result first and make it _fast_ later: use `clone`
             | everywhere you run into borrowing problems, use `Rc` or
             | `Arc` instead of references everywhere you run into
             | ownership problems, make the single threaded version work
             | before adding concurrency, etc.
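That advice might look like the following sketch (`Config` and `greeting` are made-up example names): clone where a borrow fights you, and share across threads via `Arc` clones instead of references:

```rust
use std::sync::Arc;
use std::thread;

#[derive(Clone)]
struct Config {
    name: String,
}

fn greeting(config: &Config) -> String {
    // Clone instead of wrestling with returning borrowed data.
    let mut s = config.name.clone();
    s.push_str(", hello!");
    s
}

fn main() {
    let config = Arc::new(Config { name: "world".to_string() });
    // Each thread gets its own Arc clone instead of a `&` reference,
    // so no lifetime has to outlive the spawned thread.
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let config = Arc::clone(&config);
            thread::spawn(move || greeting(&config))
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), "world, hello!");
    }
}
```

Once it works, the clones that show up in profiles can be replaced with borrows one at a time.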
        
             | majewsky wrote:
             | We already have too much code that "can be made safe
             | later", but never is.
        
         | ragnese wrote:
         | I sympathize. As I said in another comment, the issue isn't
         | really _caused_ by async, IMO. It 's caused by the complicated
         | and leaky abstraction of traits. I'd be willing to bet that
         | your difficulties were more around dealing with the
         | interactions of async + traits than if you had just written a
         | bunch of async functions and then ran them inside an async main
         | (via tokio's tokio::main attribute).
         | 
         | Just a guess.
        
       | brundolf wrote:
       | I could nitpick a few things here (you can use Arc to get around
       | most of the reference-related issues, for example), but I mostly
       | agree with the overall point: some things are just not as
       | ergonomic in Rust, and may never be. I haven't done async stuff
       | in it yet, but I've encountered my own situations where it feels
       | like the language "doesn't really want me to be doing X". And
       | honestly, I think that's okay.
       | 
       | I'm someone who wishes I could use Rust for everything, but I'm
       | coming around to the idea that it isn't really suited to
       | everything. It was designed for the problems faced by low-level
       | programmers, and it continues to be something of a revelation in
       | that area, and it has also really punched above its weight in
       | totally unrelated areas, and that's great. But I don't know if we
       | should expect it to be _great_ at those other areas that are so
       | far outside its core wheelhouse. I think it 's good that Rust has
       | an option for async, but comparing its async ergonomics to Go may
       | just be an apples-and-oranges thing.
        
       | eminence32 wrote:
       | > Was spinning up a bunch of OS threads not an acceptable
       | solution for the majority of situations?
       | 
       | It's worth remembering that this is still a very acceptable thing
       | to do in some cases. And spawning OS threads is sometimes easier
       | to write and easier to read, and easier to reason about than
       | async code.
       | 
       | Just because async is in the language, don't feel like you need
       | to use it everywhere.
        
         | noncoml wrote:
         | > Just because async is in the language, don't feel like you
         | need to use it everywhere.
         | 
           | Except that you will have to, because no alternative
           | libraries exist.
        
           | carols10cents wrote:
           | What libraries are you trying to use for what tasks that you
           | can only find async libraries?
        
           | zokier wrote:
           | You know you can just write your own code and not just glue
           | libraries together? In general I find Rust less dependent on
           | libraries, because pretty much all the basic building blocks
           | are exposed and available. You can pretty much start doing
           | syscalls directly if needed.
        
           | cjg wrote:
           | It's easy to convert an async function into a blocking one.
           | For example, you could use https://docs.rs/async-
           | std/1.9.0/async_std/task/fn.block_on.h...
           | 
           | So just use the async functions from the library
           | synchronously if you want.
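For illustration, here is roughly what such a `block_on` does, sketched with only the standard library (real code would just call `async_std::task::block_on` or tokio's `Runtime::block_on`; `fetch` is a made-up async fn):

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread that is blocking on the future.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until woken
        }
    }
}

async fn fetch() -> &'static str {
    "response"
}

fn main() {
    // Synchronous code consuming an async function.
    assert_eq!(block_on(fetch()), "response");
}
```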
        
             | heavenlyblue wrote:
             | But you can't really do vice-versa.
        
               | cjg wrote:
               | Well, you can use https://docs.rs/async-
               | std/1.9.0/async_std/task/fn.spawn_bloc...
               | 
               | But this is partly the reason why libraries might be more
               | inclined to support async rather than blocking calls.
        
       | hpen wrote:
       | I find Rust too hard to use for it to ever become huge.
        
         | HKH2 wrote:
         | Just turn all &str to String and use .clone() liberally while
         | you get used to the language, and you'll avoid having to think
         | about the borrow checker for most basic applications.
         | 
         | I think Haskell is stuck like you say because if your code is
         | overly wordy, it will run slowly too, yet Rust is still
         | relatively fast even if you do .clone() a lot.
        
           | hpen wrote:
           | Ah, maybe that's how I should approach it. I was definitely
           | interested in Rust as a personal replacement for C or C++.
        
       | joshsyn wrote:
       | Glad to see my VB job won't be going away any time soon.
        
         | [deleted]
        
       | boulos wrote:
       | The author admits that they needed Arc/clone in a footnote. So I
       | think the more interesting title would be to rehash/interpret
       | this as "Why asynchronous Rust is too hard".
       | 
       | Having played with this a bit recently, I think folks are going
       | to end up:
       | 
       | - assuming Tokio (now that it's 1.xx)
       | 
       | - mark nearly everything as async
       | 
       | - push callers to use rt.block_on to wait for an async function
       | from non-async code
       | 
       | I definitely miss the _easy_ concurrency of golang the author
       | refers to (more so in the colored functions post), particularly
       | since AFAICT, tokio is basically going to become the default
       | assumption. So Rust is technically pluggable here, but won't be
       | in practice. Does that mean that embedded folks end up having
       | their own !tokio that they prefer? Maybe! But they'll get locked
       | out of the "assume tokio" ecosystem.
        
         | joobus wrote:
         | > AFAICT, tokio is basically going to become the default
         | assumption
         | 
         | I've noticed this as well. And then there are the libs that
         | don't support async, like diesel. I lost time trying to find
         | the right combination of libraries for a web project. There are
         | a lot of things I don't like about Go, but this part is easier.
        
         | skohan wrote:
         | I noticed this a few months ago as well. I was looking for a
         | way to do simple synchronous HTTPS requests in a CLI, and it
         | seemed like using `reqwest` with tokio was more or less the
         | default option in Rust. I ended up using ureq, and it looked
         | like there may have even been ways to avoid it using hyper and
         | an alternative tls-connector but the simplest solution seemed
         | to be "just use tokio".
         | 
         | When something foundational, like networking, gains a defacto
         | dependency on tokio, it seems inevitable that a large footprint
         | of the ecosystem will eventually depend on it.
        
           | boulos wrote:
           | Yeah, I started with reqwest because it was easy, then wanted
           | more control (reqwest has a hardcoded 8? KiB buffer!) and
           | went to hyper directly, and then realized I should replace
           | all reqwest usage with just async usage of hyper.
           | 
           | Doing so then colored/infected everything with async. It
           | wasn't actually bad in the end, but if you're just trudging
           | along trying to do your job, you suddenly walk into a "and
           | now update all of this". Staring at the innards of
           | reqwest::blocking though made me realize "if I care about
           | performance at all, I really shouldn't use this". One of my
           | main reasons was so that I'd _know_ there was only my
           | inclusion of tokio, at the version I wanted (reqwest lagged
           | a bit, so then I had hyper with tokio 1.0 and reqwest with
           | an older hyper with an older tokio... and I was worried
           | about understanding that interaction).
        
       | RantyDave wrote:
       | Again, feeling old, I'm not seeing a lot wrong with threads. A
       | thread pool, if you insist, but explicit workflows as opposed to
       | chaining together callbacks and contexts ... just seems really
       | easy.
        
         | pornel wrote:
         | Things that are easier with Rust's async:
         | 
         | * Timeouts and cancellation. Rust's async code can be aborted
         | from the outside (with correct cancellation and cleanup of
         | everything in progress). OTOH if you run imperative blocking
         | code, you need every blocking function to cooperate.
         | 
         | * Full utilization of CPU and I/O in workloads that mix both.
         | Async executor balances threadpools/polling queues for these
         | automatically. OTOH if you just use regular blocking calls with
         | no state machines/continuations, it will be hard to separate
         | these two. A single mixed-use threadpool will oscillate between
         | I/O-bound threads and underutilizing CPU, and CPU-bound threads
         | and idling I/O.
        
         | adwn wrote:
         | I think you're spot on. For 99% of the use cases, traditional
         | threads are the way to go. It's only when you have a truly
         | massive number of logical threads (>100k, or even >1M) not
         | doing anything most of the time, that OS threads reach their
         | limits due to their aggregate stack size. In other words: web
         | services, where each server thread either waits for input from
         | the client or for a database reply, and only if they have to
         | handle hundreds of thousands of concurrent clients.
         | 
         | This is not a concern for most Rust users, but it is a concern
         | for large companies that are potential patrons of an up-and-
         | coming programming language. I'm not saying it's wrong to cater
         | to their needs, rather that it's not something over which
         | regular Rust users should lose sleep.
        
           | akvadrako wrote:
           | You're right but even most big players can probably just use
           | threads. If you are serving 1 million clients per VM you are
           | probably taxed for other resources besides threads unless
           | 99.99% of your clients are guaranteed to idle.
        
         | kimundi wrote:
         | Fully agree with that, and Rust still makes it very easy.
         | It's 5-10 lines to spawn multiple threads that communicate
         | with each other via either shared state or message passing,
         | and I solve most parallelism problems that way to this day in
         | Rust. :D
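A sketch of that thread-plus-message-passing pattern using just the standard library (`run_workers` is a made-up example function):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn n worker threads and collect their results over a channel.
fn run_workers(n: usize) -> Vec<usize> {
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..n)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(id * 10).unwrap())
        })
        .collect();
    drop(tx); // drop the original sender so rx.iter() terminates
    for h in handles {
        h.join().unwrap();
    }
    let mut results: Vec<usize> = rx.iter().collect();
    results.sort();
    results
}

fn main() {
    println!("{:?}", run_workers(4));
}
```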
        
       | hctaw wrote:
       | I don't think any of this is actually a reason why async Rust
       | doesn't work. The function color problem is a bit overblown; it
       | hasn't broken Node.js either. Most code is sync, sync can call
       | async and vice versa, and the ecosystem hasn't come tumbling
       | down yet (since most Rust isn't whichever async application
       | runtime for web apps is currently in vogue - it's a systems
       | language after all).
       | 
       | Either way there are bigger function "color" problems in Rust.
       | 
       | If you want a target for criticism in async Rust, it should be
       | the lack of easy conversion from generators to async functions
       | (and back) and the cognitive difficulty of implementing a custom
       | executor.
       | 
       | I also think the tangent on closures is nonsense, closures are
       | syntactically and semantically complex because closures are
       | _hard_ to keep safe without a garbage collector. Reference
       | radioactivity is a feature, not a bug.
        
         | TheCoelacanth wrote:
         | Why is it a problem that custom executors are difficult to
         | implement? Most other languages that I'm aware of they wouldn't
         | even be possible to implement.
        
           | hctaw wrote:
           | It's not really a problem worth caring about. Just if you
           | wanted to gripe about the complexity of async Rust it's the
           | most obvious to me. "But pretty much no one even lets you do
           | that" is a great counter argument.
           | 
           | But it is kind of an anti-feature. It's incredible that Rust
           | allows custom async executors and that the surface area for
           | them is tiny! That said, it's kind of black magic, even for
           | Rust. I'm willing to bet there are fewer than 20 people in
           | the world who could make a production executor today (not
           | that it's going to stay that way - don't get me wrong).
           | 
           | If there's one reason to write an async application today in
           | Rust it's because you can write a custom executor and give it
           | a ridiculous amount of information about how the application
           | is running by communicating with it, while wrapping both
           | callback and polling backed APIs from system libraries in C.
           | 
           | But that's really hard to do if writing an executor is harder
           | than just writing callbacks or polling code manually.
        
             | thu2111 wrote:
             | Is it really that incredible? Look at how Kotlin
             | implemented coroutines. The language support is tiny -
             | nearly everything is implemented in libraries that you can
             | compete with. In particular, the language rejected the
             | async/await model: you still mark functions as async
             | (suspend), but there's no language-level equivalent of
             | await - just lots of library-level APIs to compose
             | coroutines in different ways. It's really nice.
        
             | ArchOversight wrote:
             | Steve Klabnik did a talk about async/await and building
             | your own executor:
             | https://www.infoq.com/presentations/rust-async-await/
             | 
             | Might be worth a listen if you are intent on building your
             | own!
        
         | xfer wrote:
          | I really don't see a problem with function color in a typed
          | language. Every function has a color, and the compiler
          | enforces that you pass the right color. Async is just another
          | type that may have some convenient syntax. It might be
          | inconvenient in untyped languages like JS, since you can't
          | compose them.
        
           | CyberRabbi wrote:
            | It's not a problem of being error-prone; the problem is
            | that the ecosystem is split. As a crate author, do I
            | implement my functions for color A or color B? Every crate
            | author will eventually encounter this question and the
            | inevitable fallout from making the wrong decision.
            | Presently, the only way to deal with this is to implement
            | your functionality in both colors.
        
             | hctaw wrote:
              | It's only a question if the function needs to call
              | something else that could be either color. The vast
              | majority of code does not.
              | 
              | It would take a couple months of bikeshedding syntax and
              | discussing internals, but I feel like there's a version
              | of this (applied to other function colors) where a
              | library author could annotate/decorate a function with an
              | inclusive or exclusive set of decorations, the compiler
              | would pick which one to call from a given calling
              | context, and the function author could specialize over
              | the combination of decorators that actually get called.
              | Since the set of decorators is small (I can only think of
              | 2-3, including async), this feature would be
              | straightforward to implement, albeit potentially verbose.
              | 
              | There's a fancy compiler pass needed to make async work,
              | though: you need a special control-flow analysis to
              | figure out yield points in async bodies, since this new
              | syntax would have no await.
             | 
             | It's probably too late for that. It might be implemented in
             | terms of HKT.
        
             | xfer wrote:
              | If your function represents an async workflow, then you
              | can just implement the async side, and the caller can
              | block on it if they want those semantics. The problem
              | with the ecosystem in Rust is that there are various
              | runtime libraries and no standard way to factor them
              | out. You shouldn't need to implement the blocking
              | semantics, since it should be trivial.
        
           | ben509 wrote:
           | It's a problem in that you can't arbitrarily compose
           | synchronous and asynchronous functions.
           | 
           | Many languages have a distinction between statements and
           | expressions, and some language designers thought that was
           | ugly and designed languages where everything is an
           | expression.
           | 
           | But because of how people think, an application typically has
           | a rough hierarchy to it by the designer's intent. So the fact
           | that this becomes a bit more explicit because a language
           | marks statements or async functions as having a specific
           | place in that hierarchy isn't generally a bad thing.
        
             | xfer wrote:
             | You can't arbitrarily compose 2 functions with mismatched
             | types. So i don't think this argument holds in a typed
             | language.
        
               | akvadrako wrote:
               | The problems are
               | 
               | (1) either you write two versions of almost every
               | function or you can only support one use case (sync or
               | async). If you could write generic functions which work
               | in both cases it would be better.
               | 
               | (2) it creates a lot of noise writing async/await. It's
               | like if we had to always write _x = call f()_ in the sync
               | case.
        
               | int_19h wrote:
                | If you write your function as async, it's trivial to
                | make a sync wrapper out of it.
        
               | VMG wrote:
               | not in every language (JS)
        
               | [deleted]
        
               | akvadrako wrote:
               | If you feel like wrapping every function call.
        
         | int_19h wrote:
         | > The function color problem is a bit overblown, it hasn't
         | broken nodejs yet either.
         | 
         | I agree; and I would also point at C#, which is the originator
         | of the async/await syntax, and has had it for over 8 years now,
         | with the ecosystem embracing it for most of that period. It
         | works.
        
           | orthoxerox wrote:
           | I've been waiting for an async Oracle ADO.NET driver for over
           | 8 years now.
        
         | ash wrote:
         | > The function color problem is a bit overblown, it hasn't
         | broken nodejs yet either.
         | 
          | It didn't break Node because Node avoids the function color
          | problem altogether: nobody writes blocking code in Node, due
          | to its single-threaded nature.
          | 
          | The function color problem is the difficulty of mixing
          | blocking and async functions. It's especially annoying in
          | Python, where you have to spawn a thread to call blocking
          | functions from async functions - and welcome to
          | multithreading bugs.
          | 
          | In Node the problem doesn't exist, because you never block:
          | you either do `await f()` or `f().then()`.
        
         | valand wrote:
          | I agree on generator<->async conversion.
          | 
          | I'm not an expert, but I can't shake the thought that Rust
          | got async and generators backward. Generators look like a
          | good bridge between sync and async. Before reading about
          | implementing async executors and digging into what tokio and
          | async-std are doing (it's been months and I haven't
          | finished), I thought implementing an async executor was as
          | easy as looping generator.resume() until it yields
          | GeneratorState::Complete - until I realized generators are
          | still on nightly!
          | 
          | It may be because I'm a noob in Rust, but I can't easily
          | write:
          | 
          |     let some_handle_arc = Arc::new(some_handle);
          |     future::block_on([
          |         // this future will stop running midway if
          |         // some_handle is modified in some way
          |         make_future(some_handle_arc),
          |         // make_future_stopper will modify some_handle_arc
          |         // when receiving a certain input from a certain
          |         // interface
          |         make_future_stopper(some_handle_arc)
          |     ]);
          | 
          | My initial intuition is that if async is generator-based,
          | there should be an escape hatch where the generator-runner
          | just exits and drops the generator, without waiting for it
          | to yield GeneratorState::Complete. But again, that might be
          | my noobness as a simple Rust hobbyist.
          | 
          | Sorry for the rant
        
         | pron wrote:
         | The issue is much more general. I also don't see a particular
         | problem with Rust here, but I do see with Rust the same false
         | hope I saw in the late eighties and early nineties with C++
         | (hell, I fell victim to it myself back then): the belief that
         | with enough features, cleverness, and language complexity we
         | could have the cake and eat it too -- have all the control of a
         | low-level language with the same productivity as a high-level
         | language. Low-level languages, whether they try to appear as
         | high-level ones on the page or not, all suffer from low-
         | abstraction, i.e. the inability to hide internal implementation
         | details behind an API and isolate them from consumers. This
         | makes changing implementation details hard and the program much
         | more rigid, and so increases maintenance cost. The problems
         | with C++ weren't apparent at first, because writing the program
         | the first time is as easy as with a high-level language; the
         | problems start when you change the program over the years.
         | There is no doubt Rust cleans up C++ and solves some particular
         | problems with it, but it doesn't solve this fundamental
         | problem.
         | 
         | There are many situations where a low-level language and the
         | control it affords are absolutely necessary, but low-level
         | programmers are aware of the non-negligible price they pay.
        
       | bfrog wrote:
        | I'd say the exact opposite of this author: async Rust works
        | well. It solves 99% of the problems I want it to solve in a
        | practical and useful way, right now. And even better, the
        | resulting program is faster than anything I could have cooked
        | up on my own.
        | 
        | Probably the only true comparable might be the ScyllaDB
        | Seastar framework. And it's just that, with all the burdens
        | that come with C++ coding. You won't find every client and
        | server protocol under the sun implemented with it either,
        | which, arguably, async Rust is building towards.
        
       | KingOfCoders wrote:
        | Rust chose to build on futures and async.
        | 
        | The author wants node-style callbacks:
        | 
        | "I'd like to make a simple function that does some work in the
        | background, and lets us know when it's done by running another
        | function with the results of said background work."
        | 
        | The author is unhappy with Rust.
        | 
        | What I would like out of Rust async runtimes is the Go
        | version: coroutines, with waiting I/O automatically moved to a
        | thread queue.
        
       | paulsutter wrote:
       | We all start out loving callbacks, eventually progress to
       | coroutines, and finally end up writing polling loops. It took me
       | more than 30 years to figure this out.
       | 
       | I understand completely why most people disagree - I always
       | disagreed even louder.
        
       | deathanatos wrote:
       | > _Was spinning up a bunch of OS threads not an acceptable
       | solution for the majority of situations? Could we have explored
       | solutions more like Go, where a language-provided runtime makes
       | blocking more of an acceptable thing to do?_
       | 
       | I think this is the real fundamental disagreement here (well, at
       | least with the async stuff). (And the comments slightly earlier,
       | regarding "the color of your function".)
       | 
        | The way I think about the various async stories out there
        | (like, all of them: Go's, Rust's, Python's, C with OS threads)
        | is that they boil down to roughly three primitives (I haven't
        | formalized this, and someday I should probably write a blog
        | post on it, so it's not going to be perfect):
        | 
        |   1. separate executions of code (threads, green or not,
        |      goroutines, etc.)
        |   2. selection (the ability to block on **multiple** threads
        |      simultaneously, but return after *one* is finished or
        |      ready; it's not join())
        |   3. cancellation (the ability to interrupt and cancel a
        |      thread)
       | 
       | "Was [...] OS threads not an acceptable solution?" No: the
       | selection story is terrible (without involving epoll. You'll
       | pretty much have to build this out around conditions, and it's
       | painful), and the cancellation story is close to non-existent
       | (you have to somehow transmit to the thread that it should
       | cancel, and then that thread needs to obey it). In combination,
       | some things are outright impossible: if you're blocking on a
       | socket recv in a thread, how do you get notified if you were
       | cancelled? (You literally can't, with the primitives that most
       | OSes offer, without resorting to hacks, or epoll.) (Some OSes
       | have calls to kill threads. They are basically one-way tickets to
       | UB.)
       | 
        | Golang's solutions get closer, but I don't think they're
        | appropriate for a systems language that cares about low-level
        | performance & memory layout to the degree that Rust does.[1]
        | But even golang struggles here: the cancellation story is
        | missing.
       | Golang's approach here is retroactive: you have a context object,
       | and you must thread it through function calls to anywhere it
       | might be required. If your API didn't think to do that, you're
       | SOL as a consumer, but worse, even as a coder, adding it is going
       | to be a right PITA.
       | 
       | This isn't to say Rust's story is perfect here, either: it is
       | tempting to think that dropping a future is cancellation, and it
       | is very, very close (it provides a decent default). But sometimes
       | cancellation also needs to propagate across an I/O boundary
       | (usually, across a network socket to a server, to cancel a
       | request) and those are async. (I think there is some work being
       | done in Rust with asynchronous drops.)
       | 
       | These problems show up in JavaScript -- whose Promise lacks
       | cancellation -- and Python (which has cancellation via raising
       | CancelledError; also, Python really, IMO, messed up the
       | terminology. Between futures, coroutines, tasks, and awaitables,
       | there's ~2 _real_ types (futures and tasks), and the rest are ...
       | IDK, weird distinctions that are hard to keep straight.)
       | 
        | [1] Now, this is the heart of the other part of the article,
        | the half about closures and fn types. Rust cares about
        | ownership (as
       | it allows programs to avoid not just memory issues, but a whole
       | host of bugs resulting from ownership confusion that happen even
       | in memory-safe languages like Python) and it cares about the
       | performance implications of things like dynamic dispatch, or
       | separate allocations for the data associated with the callback
       | that you want to pass. And between trait types & generics, it
       | lets you choose, but that does result in some verbosity. You can
       | also choose Arc, but some of us don't want that in all cases.
       | 
       | (I don't agree with the "color of your function"
       | argument/article, either: async functions in Rust are
       | fundamentally still functions, they just return a Future<T>
       | instead of a T. That distinction is _important_ ; we're not just
       | after the T, and the future is something we're going to
       | block/wait on; it is its own type, with its own operations --
       | specifically, and more or less, the ones above! --, that are
       | crucial to how the code functions.)
        
         | cjg wrote:
         | Rust async doesn't quite solve the cancellation problem either.
         | 
         | Sure, you can stop polling the future at any time, but that
         | doesn't mean that you left the system state consistent.
         | 
         | Careful thought is needed on every call to await in an async
         | function. It might be the last thing that async function ever
         | does.
         | 
         | That's probably something that should be true in a well-
         | implemented system anyway. After all, processes could be killed
         | at any point.
        
         | aidenn0 wrote:
          | With standard library support, green threads can be just
          | fine. I've worked with many, many C programs that used non-
          | blocking I/O and green threads. Cancellation is trivial when
          | you are writing the scheduler! In those cases, I used
          | multiprocessing when I needed parallelism. The answer to "if
          | you're blocking on a socket recv in a thread..." is to never
          | do that; the library should provide a green recv that
          | appears blocking to the calling code but is implemented with
          | non-blocking calls underneath. With the privilege of being
          | the standard library, there are many ways to enforce this.
          | 
          | N:M threading makes cancellation much harder to get right,
          | but again, if you write the scheduler, you can define the
          | cancellation points, so you are not beholden to undefined
          | behavior. Instantly cancelling a thread is just not
          | solvable, and for Rust, which cares about low-level
          | performance &c., long-running computations could explicitly
          | check for cancellation when needed. Threads that perform I/O
          | regularly will naturally hit cancellation points.
         | 
         | Cancellation is a _huge_ problem in concurrency, and the usual
         | answer to  "how do I cancel a thread" is "you don't want to
         | cancel a thread" which is rather unsatisfying. As you point
         | out, it's not like using async and/or promises automatically
         | solves the problem either; the only way to get it right is to
         | build it in from the start.
        
       | KingOfCoders wrote:
       | From an aesthetic point of view I'd like something that does not
       | show the "what colour is your function" aesthetic.
       | 
        | From writing a large SaaS company's code in async Scala,
        | function color was none of my problems.
        
       | jstrong wrote:
       | I've been primarily coding in rust since 2018. I never cared for
       | async/await, and I've never used it. (at some point, coding event
       | loops became very natural/comfortable for me, and I have no
       | trouble writing "manual" epoll code with mio/mio_httpc).
       | 
       | one nice thing about rust's async/await is, you don't have to use
       | it, and if you don't, you don't pay for it in any way. sure, I
       | run into crates that expect me to bring in a giant runtime to
       | make a http request or two (e.g. rusoto), but for the most part I
       | have not had any issues productively using sans-async rust during
       | the last few years.
       | 
       | so, if you don't like async/await, just don't use it! it doesn't
       | change anything about the rest of rust.
        
         | ntr-- wrote:
         | I don't know how anybody can say this with a straight face.
         | 
          | Even in a systems context, I think it's pretty reasonable to
          | want to either perform or receive an HTTP request. As soon
          | as you do that in Rust, you are funneled into Hyper or
          | something built on top of it (like reqwest) and instantly
          | become dependent on tokio/mio.
          | 
          | The very first example in the reqwest readme^1 has tokio
          | attributes, async functions AND trait objects. It's
          | impossible that a beginner attempting to use the language
          | for anything related to networking won't be guided into the
          | async nightmare before they have even come to grips with the
          | borrow checker.
         | 
         | 1. https://crates.io/crates/reqwest
        
           | iso8859-1 wrote:
           | jstrong didn't mention anything about beginners or how a
           | typical user is nudged. It's just a description of how they
           | work. They even pointed out which HTTP library they use.
           | There is nothing in their post that requires a curved face.
        
           | thijsc wrote:
           | Try https://github.com/algesten/ureq, it's very nice and not
           | async.
        
           | ogoffart wrote:
           | Reqwest lets you choose between async or not. It has a
           | "blocking" module with a similar API, but no async functions.
           | 
           | https://docs.rs/reqwest/0.11.2/reqwest/blocking/index.html
           | 
           | (Maybe this uses async rust under the hood, but you don't
           | have to care about it)
        
             | iso8859-1 wrote:
             | It is not a question of whether something is blocking or
             | not, blocking is easy. It is a question of whether you can
             | have asynchronicity without await/async. And you can, using
             | mio, as jstrong suggested. You'll manually poll the event
             | loop.
        
           | rapsey wrote:
           | You can avoid async ecosystem with mio_httpc for http client.
           | tiny_http for http server. Both work well.
        
         | omginternets wrote:
         | Mmmyeah, they said this about python too...
        
           | jstrong wrote:
           | rust has avoided many pitfalls that python fell into. one
           | example is the way that rust 2015 edition code works
           | seamlessly with rust 2018 edition code, unlike the python2 ->
           | python3 debacle. another is cargo vs pip/poetry/this month's
           | python package manager. so, using python as an example isn't
           | very convincing to me.
        
             | omginternets wrote:
             | >so, using python as an example isn't very convincing to
             | me.
             | 
             | That's because you're focusing on precisely the points
             | where Rust has done better than Python, and conspicuously
             | glossing over Python's asyncio debacle ... which looks
             | _exactly_ like Rust 's.
             | 
             | In a nutshell, it works like this:
             | 
             | 1. New syntax is introduced
             | 
             | 2. We are told it's optional
             | 
              | 3. The ecosystem adopts it, and it becomes _de facto_
              | required
             | 
             | So right back at ya -- I am not convinced in the least by
             | Rust's argumentation ;)
             | 
             | It's unclear to me how leadership can fix this. The cat is
             | out of the bag, and the language must now contend with a
             | second-order embedded syntax. Avoiding this becomes
             | increasingly difficult as it is adopted in the ecosystem;
             | just ask the C++ guys how they feel about exceptions.
        
         | theopsguy wrote:
          | +1000 on this. You can just use Rust like C/C++ if you don't
          | want to buy into async.
        
         | joseluisq wrote:
         | +2
         | 
          | Yes! Using `sans-async` Rust must NOT diminish our
          | productivity in comparison with its `async` counterpart.
         | 
         | > at some point, coding event loops became very
         | natural/comfortable for me
         | 
         | In fact it is, I'm very happy with it too.
         | 
         | > one nice thing about rust's async/await is, you don't have to
         | use it, and if you don't, you don't pay for it in any way.
         | 
         | You've said it!
        
         | boulos wrote:
         | Maybe it's just the hyper and related crates universe, but my
         | perception (see my sibling comment) is that more and more of
         | the ecosystem will be "async only".
         | 
         | Like you, I think that's sort of fine, because you can always
         | roll your own. But it's also kind of sad: one of the best
         | things about the Rust ecosystem is that you can so effortlessly
         | use a small crate in a way you really can't in C++
         | (historically, maybe modules will finally improve this).
        
           | lalaithion wrote:
           | I mean, you can always just use an async crate and use
           | async_std::task::block_on.
        
             | boulos wrote:
             | _Only_ if no other crate in your executable uses tokio or
             | another executor, right?
             | 
             | Mixing executors is approximately fatal, AFAICT.
        
               | cjg wrote:
               | I think that's too strong.
               | 
               | You could have two executors with no problem.
               | 
               | What might cause issues is if code running on one
               | executor needed close interaction with the other - such
               | as running a task on the other executor.
               | 
               | Most of the time this is just avoided by using only one
               | executor. But that's not the only way.
        
               | boulos wrote:
               | That's a good clarification. What matters is if tasks
               | "running on" one executor end up spawning / blocking on a
               | task "on another executor". You can definitely have "side
               | by side, who cares". The danger though is that you're
               | just using some library, and if it suddenly assumes it
               | can do tokio::task::block_in_place but you were using
               | some other executor you get "Hey! No runtime!".
        
       | jen20 wrote:
       | This is, frankly, rubbish. Asynchronous Rust works very well -
       | and the implementation of async/await is more ergonomic than any
       | of the progenitor languages.
        
       | pjlegato wrote:
       | The Clojure dialect of Lisp comes with a software transactional
       | memory (STM). This basically provides relational database
       | atomicity semantics around in-memory variable access. 99% of the
       | pain of concurrent programming goes away.
       | 
       | At a high level, you provide a pure function that takes the
       | current value. The system re-runs it as many times as needed to
       | sort out collisions with other threads, then stores the new value
       | -- which then becomes the argument for the pure update functions
       | running in all the other threads.
       | 
       | Has anyone considered an STM for Rust?
        
         | steveklabnik wrote:
          | There have been some vague discussions, but STM is (as you
          | mention) tough in an impure language. It is also not clear
          | that it achieves Rust's efficiency goals.
        
       | jlrubin wrote:
        | I've been working on a reasonably complicated project that is
        | using Rust, and I think about de-asyncing my code somewhat
        | often because there are some nasty side effects when you try
        | to connect async and non-async code. E.g., tokio makes it very
        | difficult to correctly & safely (in a way the compiler, not
        | the runtime, catches) launch an async task from a blocking
        | function (with no runtime) that may itself be inside an async
        | routine. It makes using libraries kind of tough, and I think
        | you end up with a model where you have a thread-per-library so
        | each library knows it has a valid runtime, which is totally
        | weird.
       | 
       | All that said, the author's article reads as a bit daft. I think
       | anyone who has tried building something complicated in C++ / Go
       | will look at those examples and marvel at how awesome Rust's
       | ability to understand lifetimes is (i.e. better than your own)
       | and keep you from using resources in an unintended way. E.g., you
       | want to keep some data alive for a closure _and_ locally? Arc.
        | Both need to be writable? Arc<Mutex>. You are a genius who can
        | guarantee this Fn will _never_ leak, it's safe to have it not
        | capture something by value that is used later in the program,
        | and you really need the performance of not using Arc? Cast it
        | to a ptr and read it in an unsafe block in the closure. Rust
        | doesn't stop
       | you from doing whatever you want in this regard, it just makes
       | you explicitly ask for what you want rather than doing something
       | stupid automatically and making you have a hard to find bug later
       | down the line.
        
       | pumpkinpal wrote:
       | Would anyone be able to make some brief comparisons between
       | Rust's asynchronous model and the model followed by Asio in C++
       | (on which C++23 executors/networking is based) and if there are
       | any parallels with the sender/receiver concept
       | (https://www.youtube.com/watch?v=h-ExnuD6jms)?
       | 
        | I've seen a few comments talking about Rust's choice of a poll
        | model as opposed to a completion model. Am I correct in
        | assuming these are the same things as a reactor (kqueue/epoll,
        | etc.) vs. a proactor (IOCP), and that a proactor can be
        | implemented in terms of a reactor?
        
         | steveklabnik wrote:
         | I don't know enough about Asio to make a good comparison, but
         | 
         | > Am I correct in assuming these are the same things as a
         | reactor (kqeue/epoll etc.) vs a proactor (IOCP)
         | 
         | Close enough, yes.
         | 
         | > a proactor can be implemented in terms of a reactor.
         | 
         | Yes.
        
       | dcow wrote:
       | What's wrong with async_std? The author calls it spicy. I've been
        | using Surf and Tide, which are built atop it, and things have
        | been a breeze. It is essentially a standard lib where all
        | functions are red (a solution, as I'd interpret things based
        | on the linked rant). In my experience, async Rust's only wart
        | is the incredibly confusing function signatures you get with
        | Pin<Box<..., which I would think could be cleaned up.
        
         | topspin wrote:
         | > What's wrong with async_std?
         | 
         | The problem is that it exists. Or rather, that multiple async
         | libraries exist and every attempt to employ asynchronous
         | programming in Rust inevitably leaves you with the problem of
         | aligning these different, redundant, incompatible, incomplete,
         | evolving async libraries, which is an unmitigated disaster.
         | 
         | Rust, all by itself, is a non-trivial hill climb for
         | developers. They're willing to bust out the climbing tools and
         | make the attempt because they see the value. But then they get
         | hit in the head by debris like futures::io::AsyncRead <->
         | tokio::io::AsyncRead and that is it; they come down off the
         | hill and go find something else to scale. Worse yet they don't
         | talk about it; they love Rust, the idea of Rust, and they want
         | it to thrive so they avoid being critical.
         | 
         | IO and async is fundamental. It's too fundamental to suffer
         | multiple, partial, incompatible, ever evolving realizations.
         | This is the number one issue that threatens Rust. People have
         | only so much patience to spare; only so many attempts they're
         | willing to make at climbing the same hill.
         | 
         | I have hope. Companies like Amazon and Microsoft are making
         | serious commitments to Rust because they too see the value.
         | They're going to get knocked in the head by the same falling
         | debris. And so there is a chance that while this is all still
         | young and the MBAs and lawyers and rent seekers aren't yet
         | savvy, Rust will receive the sort of tedious, expensive labor
         | it so desperately needs to mature.
        
       | lmc wrote:
       | thread::spawn (or equivalent in other langs) is an inappropriate
       | way to do async in pretty much any situation. Very sceptical
       | about this article.
        
       | cryptica wrote:
       | Funny how the Rust community used to regard JavaScript with
       | contempt.
       | 
       | Yet now when you consider how well JS was able to add support for
       | asynchronous programming, it's pretty clear that JavaScript was
       | actually a great programming language all along.
       | 
       | JavaScript was designed to evolve. That's the difference between
       | intelligence and wisdom. Unlike intelligence, wisdom takes into
       | account the unknown.
        
         | lukebitts wrote:
         | The Rust community regards the JS community with contempt?
         | Where does that opinion come from?
        
           | cryptica wrote:
           | Rust was often touted as a superior alternative to Node.js
           | for writing servers. Perhaps this element of contempt is no
           | longer there, but it certainly was before, due to the
           | competition. Similar competition existed between Node.js
           | and Golang, and there was similar contempt from the Go
           | community.
        
             | pornel wrote:
             | In some performance-critical tasks Rust is doing much
             | better than Node.js, but I don't see this as a contempt of
             | the JS language or its community. Rust has made certain
             | trade-offs for zero-cost abstractions and no-GC memory
             | management, and this pays off in certain situations.
             | 
             | Async in Rust uses a radically different model than JS or
             | golang, but that's not to say the other languages are bad.
             | It's because Rust has different priorities, and needed an
             | async model that works well without a garbage collector.
        
       | Animats wrote:
       | I'm coming around to the position that pure "async" is OK, and
       | pure threading is OK, and green threads (as in Go goroutines) are
       | OK, but having more than one of those in a language is not OK.
       | They do not get along well.
        
         | int_19h wrote:
         | The problem with green threads is that, unless they're a part
         | of the platform ABI, they don't really get along with anything
         | else - including green threads in other languages/frameworks!
         | This makes cross-language/runtime interop unnecessarily
         | difficult, and as I understand, it's partly why Go apps tend to
         | be "pure Go", and even their stdlib insists on using syscalls
         | on platforms where you're supposed to go via libc (like BSDs,
         | and previously also macOS).
         | 
         | But there doesn't seem to be any concerted effort to
         | standardize green threads. Win32 has had fibers for a while,
         | but nobody's actually using them, and runtimes generally don't
         | support them. Even .NET tried to do it in 1.x, but then dropped
         | support in later versions.
        
           | Animats wrote:
           | _This makes cross-language/runtime interop unnecessarily
           | difficult, and as I understand, it's partly why Go apps
           | tend to be "pure Go"._
           | 
           | Yes. Fortunately Go has enough well-exercised standard
           | libraries that this usually isn't a big problem.
        
         | chubot wrote:
         | Well I'd say you have to take a lot of care to make them
         | compose well.
         | 
         | Just like it takes care to compose thread- and process-based
         | concurrency, it takes care to compose threads and async
         | correctly and comprehensibly.
         | 
         | If you care about utilization you MUST use threads (or
         | processes, which are a different story). So if you use async,
         | then you have the problem of composing them, which is indeed
         | hard. There are some C++ codebases that do this well but
         | they're notable for being exceptions ...
         | 
         | There does appear to be overuse of async as a premature
         | optimization. The default should be to use threads and
         | explicitly shared state -- basically the idioms in Go,
         | although I'm sure ownership complicates that a lot.
        
         | socialdemocrat wrote:
         | Isn't the Go solution green threads on top of hardware threads?
         | 
         | Lots of green threads get load balanced on top of a thread
         | pool. Isn't that the whole unique aspect of Go? So in essence
         | they do both.
        
           | justicezyx wrote:
           | I think the idea is that they do not present as competing
           | paradigms.
           | 
           | Go's green threads, i.e. goroutines, are the only
           | constructs visible to users, whereas the hardware threads
           | are not directly usable.
           | 
           | Edit: From the perspective of multi-paradigm design, C++ is
           | a truly unique language. It not only has multiple
           | paradigms, but also largely contains them in relative
           | harmony.
        
             | Animats wrote:
             | Right. The key point being that in Go, you can block or use
             | some compute time without stalling out the async system.
        
       | newpavlov wrote:
       | A bigger problem in my opinion is that Rust has chosen to follow
       | the poll-based model (you can say that it was effectively
       | designed around epoll), while the completion-based one (e.g. io-
       | uring and IOCP) with high probability will be the way of doing
       | async in the future (especially in the light of Spectre and
       | Meltdown).
       | 
       | Instead of carefully weighing advantages and disadvantages of
       | both models, the decision was effectively made on the ground of
       | "we want to ship async support as soon as possible" [1].
       | Unfortunately, because of this rush, Rust got stuck with a poll-
       | based model with a whole bunch of problems without a clear
       | solution in sight (async drop anyone?). And instead of a proper
       | solution for self-referencing structs (yes, a really hard
       | problem), we ended up with the hack-ish Pin solution, which has
       | already caused a number of problems since stabilization and now
       | may block enabling of noalias by default [2].
       | 
       | Many believe that Rust's async story was unnecessarily rushed.
       | While it may have helped to increase Rust adoption in the mid
       | term, I believe it will cause serious issues in the longer term.
       | 
       | [1]: https://github.com/rust-
       | lang/rust/issues/62149#issuecomment-... [2]:
       | https://github.com/rust-lang/rust/issues/63818
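For readers unfamiliar with the distinction being drawn: in the poll-based model of the stabilized `Future` trait, the executor repeatedly calls `poll()` and the future signals readiness through a `Waker`, rather than the kernel invoking a completion callback. A minimal sketch (the `TwoStep` future and the busy-loop `block_on` are invented for illustration):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that is Pending on its first poll and Ready on the second,
// mimicking an operation whose readiness arrives later.
struct TwoStep {
    polled: bool,
}

impl Future for TwoStep {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled {
            Poll::Ready(42)
        } else {
            self.polled = true;
            cx.waker().wake_by_ref(); // request another poll
            Poll::Pending
        }
    }
}

// No-op waker: a real executor would use the wake calls to reschedule.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-loop executor: keep polling until the future is Ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    assert_eq!(block_on(TwoStep { polled: false }), 42);
    println!("ready after re-poll");
}
```

The property at issue in this thread: between polls nothing is running on the future's behalf, so dropping it cancels synchronously; the completion-based tension arises when the kernel may still own a buffer at that point.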
        
         | comex wrote:
         | > Instead of carefully weighing advantages and disadvantages of
         | both models, the decision was effectively made on the ground of
         | "we want to ship async support as soon as possible" [1].
         | 
         | That is not an accurate summary of that comment. withoutboats
         | may have been complaining about someone trying to revisit the
         | decision made in 2015-2016, but as the comment itself points
         | out, there were good reasons for that decision.
         | 
         | Mainly two reasons, as far as I know.
         | 
         | First, Rust prefers unique ownership and acyclic data
         | structures. You can make cyclic structures work if you use
         | RefCell and Rc and Weak, but you're giving up the static
         | guarantees that the borrow checker gives you in favor of a
         | bunch of dynamic checks for 'is this in use' and 'is this still
         | alive', which are easy to get wrong. But a completion-based
         | model essentially requires a cyclic data structure: a parent
         | future creates a child future (and can then cancel it), which
         | then calls back to the parent future when it's complete. You
         | might be able to minimize cyclicity by having the child own the
         | parent and treating cancellation as a special case, but then
         | you lose uniqueness if one parent has multiple children.
         | 
         | (Actually, even the polling model has a bit of cyclicity with
         | Wakers, but it's kept to an absolute minimum.)
         | 
         | Second, a completion-based model makes it hard to avoid giving
         | each future its own dynamic allocation, whereas Rust likes to
         | minimize dynamic allocations. (It also requires indirect calls,
         | which is a micro-inefficiency, although I'm not convinced that
         | matters very much; current Rust futures have some significant
         | micro-inefficiencies of their own.) The 2016 blog post linked
         | in the comment goes into more detail about this.
         | 
         | As you might guess, I find those reasons compelling, and I
         | think a polling-based model would still be the right choice
         | even if Rust's async model was being redesigned from scratch
         | today. Edit: Though to be fair, the YouTube video linked from
         | withoutboats' comment does mention that _mio_ decided on
         | polling simply because that's what worked best on Linux at
         | the time (pre-io_uring), and that had some influence on how
         | Futures ended up. But only some.
         | 
         | ...That said, I do agree Pin was rushed and has serious
         | problems.
        
           | yxhuvud wrote:
           | > First, Rust prefers unique ownership and acyclic data
           | structures. You can make cyclic structures work if you use
           | RefCell and Rc and Weak, but you're giving up the static
           | guarantees that the borrow checker gives you in favor of a
           | bunch of dynamic checks for 'is this in use' and 'is this
           | still alive',
           | 
           | One way to get around that is, instead of doing it in the
           | very structureless async way, to actually impose
           | restrictions on the lifetimes of async entities. For
           | example, if you use the ideas of the structured concurrency
           | movement ([x], for example; it has since been picked up by
           | Kotlin, Swift and other projects), then the parent is
           | guaranteed to live longer than any child, thus solving most
           | of the problem that way.
           | 
           | [x] https://vorpus.org/blog/notes-on-structured-concurrency-
           | or-g...
        
             | mratsim wrote:
             | The structured parallelism movement predates structured
             | concurrency by years and needs a similar push.
             | 
             | It's from 2004, with the X10 parallel language and then
             | Habanero Java, via their "async-finish" construct.
             | 
             | I wonder if your quote (structured XYZ or ABC considered
             | harmful) is inspired by
             | 
             | - https://conf.researchr.org/details/etaps-2019/places-2019
             | -pa...
        
           | newpavlov wrote:
           | >That is not an accurate summary of that comment.
           | 
           | How is that so, if he explicitly writes:
           | 
           | > Suggestions that we should revisit our underlying futures
           | model are suggestions that we should revert back to the state
           | we were in 3 or 4 years ago, and start over from that point.
           | <..> Trying to provide answers to these questions would be
           | off-topic for this thread; the point is that answering them,
           | and proving the answers correct, is work. What amounts to a
           | solid decade of labor-years between the different
           | contributors so far would have to be redone again.
           | 
           | How should I read that, except as "we did the work on the
           | poll-based model, so we don't want the results to go down
           | the drain if the completion-based model turns out to be
           | superior"?
           | 
           | I don't agree with your assertion regarding cyclic structures
           | and the need of dynamic allocations in the completion-based
           | model. Both models result in approximately the same
           | cyclicity of task states, no wonder, since task states are
           | effectively size-bound stacks. In both models you have more
           | or less the same finite state machines. The only difference
           | is in how those FSMs interact with the runtime, and in the
           | fact that in the completion-based model you usually pass
           | ownership of part of the task state to the runtime during
           | task suspension. So you cannot simply drop a task if you no
           | longer need its results; you have to explicitly request its
           | cancellation from the runtime.
        
             | comex wrote:
             | > How is to so, if he explicitly writes:
             | 
             | There's a difference between "we decided this 3 years ago"
             | and "we rushed the decision". At this point, it's no longer
             | possible to weigh the two models on a neutral scale,
             | because changing the model would cause a huge amount of
             | ecosystem churn. But that doesn't mean they weren't
             | properly weighed in the first place.
             | 
             | Regarding cyclicity... well, consider something like a task
             | running two sub-tasks at the same time. That works out
             | quite naturally in a polling-based model, but in a
             | completion-based model you have to worry about things like
             | 'what if both completion handlers are called at the same
             | time', or even 'what if one of the completion handlers ends
             | up calling the other one'.
             | 
             | Regarding dynamic allocations... well, what kind of
             | desugaring are you thinking of? If you have
             | 
             |     async fn foo(input: u32) -> String;
             | 
             | then a simple desugaring could be
             | 
             |     fn foo(input: u32, completion: Arc<FnOnce(String)>);
             | 
             | but then the function has to be responsible for
             | allocating its own memory.
             | 
             | Sure, there are alternatives. We could do...
             | 
             |     struct Foo { /* state */ }
             |     impl Foo {
             |         fn call(self: Arc<Self>, input: u32,
             |                 completion: Arc<FnOnce(String)>);
             |     }
             | 
             | Which by itself is no better; it still implies separate
             | allocations. But then I suppose we could have an
             | `ArcDerived<T>` which acts like `Arc<T>` but can point to a
             | part of a larger allocation, so that `self` and
             | `completion` could be parts of the same object.
             | 
             | However, in that case, how do you deal with borrowed
             | arguments? You could rewrite them to Arc, I suppose. But if
             | you must use Arc, performance-wise, ideally you want to be
             | moving references around rather than actually bumping
             | reference counts. You can usually do that if there's just
             | `self` and `completion`, but not if there are a bunch of
             | other Arcs.
             | 
             | Also, what if the implementation misbehaved and called
             | `completion` without giving up the reference to `self`?
             | That would imply that any further async calls by the caller
             | could not use the same memory. It's possible to work around
             | this, but I think it would start to make the interface
             | relatively ugly, less ergonomic to implement manually.
             | 
             | Also, `ArcDerived` would have to consist of two pointers
             | and there would have to be at least one `ArcDerived` in
             | every nested future, bloating the future object. But really
             | you don't want to mandate one particular implementation of
             | Arc, so you need a vtable, but that means indirect calls
             | and more space waste.
             | 
             | Most of those problems could be solved by making the
             | interface unsafe and using something with more complex
             | correctness requirements than Arc. But the fact that
             | current async fns desugar to a safe interface is a
             | significant upside. (...Even if the safety must be provided
             | with a bunch of macros, thanks to Pin not being built into
             | the language.)
        
               | newpavlov wrote:
               | >There's a difference between "we decided this 3 years
               | ago" and "we rushed the decision".
               | 
               | As far as I understand the situation, the completion-
               | based API simply was not on the table 3 years ago. io-
               | uring was not a thing and there was negligible interest
               | in properly supporting IOCP. So when a viable
               | alternative appeared right before stabilization of the
               | epoll-centric API then in development, the 3-year-old
               | decision was not properly reviewed in light of the
               | changed environment; instead, the team pushed forward
               | with stabilization.
               | 
               | >because changing the model would cause a huge amount of
               | ecosystem churn.
               | 
               | No, the discussion happened before stabilization (it's
               | literally in the stabilization issue). Most of the
               | ecosystem at the time was on futures 0.2.
               | 
               | Regarding your examples, I think you are simply looking
               | at the problem from the wrong angle. In my opinion the
               | compiler should not desugar async fns into ordinary
               | functions; instead it should construct explicit FSMs
               | out of them. So there is no need for Arcs: the String
               | would be stored directly in the "output" FSM state
               | generated for foo. Yes, this approach is harder for the
               | compiler, but it opens up some optimization
               | opportunities, e.g. regarding the trade-off between FSM
               | "stack" size and the number of copies which state
               | transition functions have to do. AFAIK right now Rust
               | uses "dumb" enums, which can be quite sub-optimal: they
               | always minimize the "stack" size at the expense of
               | additional data copies, and they do not reorder fields
               | in the enum variants to minimize copies.
               | 
               | In your example with two sub-tasks a generated FSM could
               | look like this (each item is a transition function):
               | 
               | 1) initialization [0 -> init_state]: create requests A
               | and B
               | 
               | 2) request A is complete [init_state -> state_a]: if
               | request B is complete do nothing, else mark that request
               | A is complete and request cancellation of task B, but do
               | not change layout of a buffer used by request B.
               | 
               | 3) cancellation of B is complete [state_a -> state_c]:
               | process data from A, perform data processing common for
               | branches A and B, create request C. It's safe to
               | overwrite memory behind buffer B in this handler.
               | 
               | 4) request B is complete [init_state -> state_b]: if
               | request A is complete do nothing, else mark that request
               | B is complete and request cancellation of task A, but do
               | not change layout of a buffer used by request A.
               | 
               | 5) cancellation of A is complete [state_b -> state_c]:
               | process data from B, perform data processing common for
               | branches A and B, create request C. It's safe to
               | overwrite memory behind buffer A in this handler.
               | 
               | (This FSM assumes that it's legal to request cancellation
               | of a completed task)
               | 
               | Note that handlers 2 and 4 can not be called at the
               | same time, since they are bound to the same ring and
               | thus executed on the same thread. One completion
               | handler simply cannot call another, since they are part
               | of the same FSM and only one FSM transition function
               | can be executed at a time. At first glance all those
               | states and transitions look like unnecessary
               | complexity, but I think that this is how a proper
               | select should work under the hood.
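The transition table above can be sketched as a plain Rust enum with pure transition functions (all names invented; the actual I/O submission and cancellation plumbing is elided):

```rust
// States of a task racing requests A and B, per the comment above.
#[derive(Debug, PartialEq)]
enum State {
    Init,   // both requests in flight
    AWon,   // A completed first; cancellation of B requested
    BWon,   // B completed first; cancellation of A requested
    Common, // cancellation confirmed; common processing may run
}

// Handlers 2/4: completion of one request. If the other side already
// won, do nothing; otherwise record the winner (a real implementation
// would also submit a cancellation for the loser here).
fn on_a_complete(s: State) -> State {
    match s {
        State::Init => State::AWon,
        other => other,
    }
}

fn on_b_complete(s: State) -> State {
    match s {
        State::Init => State::BWon,
        other => other,
    }
}

// Handlers 3/5: cancellation confirmed; the loser's buffer may now be
// reused and request C created.
fn on_cancel_confirmed(s: State) -> State {
    match s {
        State::AWon | State::BWon => State::Common,
        other => other,
    }
}

fn main() {
    // A wins; B's late completion is a no-op; cancellation confirms.
    let s = on_a_complete(State::Init);
    let s = on_b_complete(s); // still AWon: B lost the race
    let s = on_cancel_confirmed(s);
    assert_eq!(s, State::Common);
    println!("final state: {:?}", s);
}
```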
        
               | [deleted]
        
               | the_mitsuhiko wrote:
               | > As far as I understand the situation, the completion-
               | based API simply was not on the table 3 years ago
               | 
               | Completion APIs were always considered. They are just
               | significantly harder for Rust to support.
        
               | newpavlov wrote:
               | Can you provide any public sources for that? From what
               | I've seen, Rust's async story was always developed
               | primarily around epoll.
        
               | withoutboats2 wrote:
               | Alex Crichton started with a completion based Future
               | struct in 2015. It was even (unstable) in std in 1.0.0:
               | 
               | https://doc.rust-
               | lang.org/1.0.0/std/sync/struct.Future.html
               | 
               | Our async IO model was based on the Linux industry
               | standard (then and now) epoll, but that is not at all
               | what drove the switch to a polling based model, and the
               | polling based model presents no issues whatsoever with
               | io-uring. You do not know what you are talking about.
        
               | newpavlov wrote:
               | >Our async IO model was based on the Linux industry
               | standard (then and now) epoll, but that is not at all
               | what drove the switch to a polling based model
               | 
               | Can you provide a link to a design document or at the
               | very least to a discussion with motivation for this
               | switch outside of the desire to be as compatible as
               | possible with the "Linux industry standard"?
               | 
               | >the polling based model presents no issues whatsoever
               | with io-uring
               | 
               | There are no issues with io-uring compatibility, to
               | such an extent that you wrote a whole blog post about
               | those issues: https://boats.gitlab.io/blog/post/io-uring/
               | 
               | IIUC the best solutions right now are either to copy data
               | around (bye-bye zero-cost) or to use another Pin-like
               | awkward hack with executor-based buffer management,
               | instead of using simple and familiar buffers which are
               | part of a future's state.
        
               | withoutboats2 wrote:
               | https://aturon.github.io/blog/2016/09/07/futures-design/
               | 
               | The completion-based futures that Alex started with
               | were also based on epoll. The performance issues they
               | presented had nothing to do with any sort of impedance
               | mismatch between a completion-based future and epoll,
               | because there is no impedance issue. You are confused.
        
               | newpavlov wrote:
               | Thank you for the link! But immediately we can see a
               | false equivalence: a completion-based API does not
               | imply the callback-based approach. The article
               | critiques the latter, but not the former. Earlier in
               | this thread I've described how I see a completion-based
               | model built on top of FSMs generated by the compiler
               | from async fns. In other words, the arguments presented
               | in that article do not apply to this discussion.
               | 
               | >The performance issues it presented had nothing to do
               | with any sort of impedance mismatch between a
               | completion-based future and epoll
               | 
               | Sorry, but what? Even aturon's article states zero cost
               | as one of the 3 main goals. So performance issues with
               | strong roots in the selected model are a very big
               | problem in my book.
               | 
               | >You do not know what you are talking about.
               | 
               | >You are confused.
               | 
               | Please, tone down your replies.
        
               | steveklabnik wrote:
               | > Please, tone down your replies.
               | 
               | You cannot literally make extremely inflammatory comments
               | about people's work, and accuse them of all sorts of
               | things, and then get upset when they are mad about it.
               | You've made a bunch of very serious accusations on
               | multiple people's hard work, with no evidence, and with
               | arguments that are shaky at best, on one of the largest
               | and most influential forums in the world.
               | 
               | I mean, you _can_ get mad about it, but I don't think
               | it's right.
        
               | mst wrote:
               | I found it highly critical but not inflammatory -
               | though I'm not sure I'd've felt the same way had they
               | been similarly critical of -my- code.
               | 
               | However, either way, responding with condescension (which
               | is how the 'industry standard' thing came across) and
               | outright aggression is never going to be constructive,
               | and if that's the only response one is able to formulate
               | then it's time to either wait a couple hours or ask
               | somebody else to answer on your behalf instead (I have a
               | number of people who are kind enough to do that for me
               | when my reaction is sufficiently exothermic to make
               | posting a really bad idea).
               | 
               | boats-of-a-year-ago handled a similar situation much
               | more graciously here -
               | https://news.ycombinator.com/item?id=22464629 - so it's
               | entirely possible this is a lockdown fatigue issue -
               | but responding to calmly phrased criticism with
               | outright aggression is still pretty much never a net
               | win, and defending that behaviour seems contrary to the
               | tone the Rust team normally tries to set for
               | discussions.
        
               | withoutboats2 wrote:
               | Of course I was more gracious to pornel - that remark was
               | uncharacteristically flippant from a contributor who is
               | normally thoughtful and constructive. pornel is not in
               | the habit of posting that my work is fatally flawed
               | because I did not pursue some totally unviable vaporware
               | proposal.
        
               | newpavlov wrote:
               | I am not mad, it was nothing more than an attempt to urge
               | a more civil tone from boats. If you both think that such
               | tone is warranted, then so be it. But it does affect my
               | (really high) opinion about you.
               | 
               | I do understand the pain of your dear work to be harshly
               | criticized. I have experienced it many times in my
               | career. But my critique intended as a tough love for the
               | language in which I am heavily invested in. If you see my
               | comments as only "extremely inflammatory"... Well, it's a
               | shame I guess, since it's not the first case of the Rust
               | team unnecessarily rushing something (see the 2018
               | edition debacle), so I guess such attitude only increases
               | rate of mistake accumulation by Rust.
        
               | steveklabnik wrote:
               | I do not doubt that you care about Rust. Civility,
               | though, is a two-way street. Just because you phrase
               | something in a way that has a more neutral tone does not
               | mean that the underlying meaning cannot be inflammatory.
               | 
               | "Instead of carefully weighing advantages and
               | disadvantages of both models," may be written in a way
               | that more people would call "civil," but is in practice a
               | direct attack on both the work, and the people doing the
               | work. It is extremely difficult to not take this as a
               | slightly more politely worded "fuck you," if I'm being
               | honest. In some sense, that it is phrased as being
               | neutral and "civil" makes it _more_ inflammatory.
               | 
               | You can have whatever opinion that you want, of course.
               | But you should understand that the stuff you've said here
               | is exactly that. It may be politely worded, but is
               | ultimately an extremely public direct attack.
        
               | nemothekid wrote:
               | > _Earlier in this thread I've described how I see a
               | completion-based model built on top of FSMs generated
               | by the compiler from async fns. In other words, the
               | arguments presented in that article do not apply to
               | this discussion._
               | 
               | I've been lurking your responses, but now I'm confused.
               | If you are not using a callback based approach, then what
               | are you using? Rust's FSM approach is predicated on
               | polling; In other words if you aren't using callbacks,
               | then how do you know that Future A has finished? If the
               | answer is to use Rust's current systems, then that
               | means the FSM is "polled" periodically, and then you
               | still have the "async Drop" problem as described in
               | withoutboats' notorious article, and furthermore you
               | haven't really changed Rust's design.
               | 
               | Edit: As I've seen you mention in other threads, you need
               | a sound design for async Drop for this to work. I'm not
               | sure this is possible in Rust 1.0 (as Drop isn't
                | currently required to run in safe Rust). That said, it's
                | unfair to call async "rushed" when your proposed design
               | wouldn't even work in Rust 1.0. I'd be hesitant to call
               | the design of the entire language rushed just because it
               | didn't include linear types.
        
               | newpavlov wrote:
               | I meant the callback based approach described in the
               | article, for example take this line from it:
               | 
               | >Unfortunately, this approach nevertheless forces
               | allocation at almost every point of future composition,
               | and often imposes dynamic dispatch, despite our best
               | efforts to avoid such overhead.
               | 
               | It clearly does not apply to the model which I've
               | described earlier.
               | 
               | Of course, the described FSM state transition functions
               | can be rightfully called callbacks, which adds a certain
               | amount of confusion.
               | 
               | I can agree with the argument that a proper async Drop
               | can not be implemented in Rust 1.0, so we have to settle
               | with a compromise solution. Same with proper self-
                | referential structs vs Pin. But I would like to see this
                | argument explicitly stated, with sufficient backing for
                | the impossibility claims.
        
               | nemothekid wrote:
               | > _Of course, the described FSM state transition
               | functions can be rightfully called callbacks, which adds
               | a certain amount of confusion._
               | 
               | No, I'm not talking about the state transition functions.
               | I'm talking about the runtime - the thing that will call
               | the state transition function. In the current design,
                | abstractly, the runtime polls/checks every future to see
                | if it's in a runnable state, and if so executes it. In a
                | completion-based design the future itself tells the
                | runtime that the value is ready (either driven by a
                | kernel thread, another thread, or some other callback).
                | (Conceptually the difference is: in a poll-based design,
                | the future calls waker.wake(), and in a completion-based
                | one, the future just calls the callback fn.) Aaron has
                | already described why that is a problem.
               | 
               | The confusion I have is that _both_ would have problems
                | integrating io_uring into Rust (due to the Drop problem,
                | as Rust has no concept of the _kernel_ owning a buffer),
                | but your proposed solution seems strictly worse, as it
                | requires async Drop to be sound, which Rust does not
                | guarantee; that would make it useless for programs that
                | are being written today. As a result, I'm having trouble
                | accepting that your criticism is actually valid - what
                | you seem to be arguing is that async/await should have
                | _never_ been stabilized in Rust 1.0, which I believe is a
                | fair criticism, but it isn't one that indicates that the
                | current design has been rushed.
               | 
               | Upon further thought, I think your design ultimately
               | requires futures to be implemented as a language feature,
                | rather than a library (e.g., for the future itself to
                | expose multiple state transition functions without
                | allocating is not possible with the current trait
                | system), which wouldn't have worked without forking Rust
               | during the prototype stage.
        
               | newpavlov wrote:
                | >In a completion-based design the future itself tells
                | the runtime that the value is ready
                | 
                | I think there is a misunderstanding. In a completion-
                | based model (read io_uring, but I think IOCP behaves
                | similarly, though I am less familiar with it) it's the
                | runtime that "notifies" tasks about completed IO
                | requests. In io_uring you have two queues represented by
                | ring buffers shared with the OS. You add submission
                | queue entries (SQEs) to the first buffer describing what
                | you want the OS to do; the OS reads them, performs the
                | requested job, and places completion queue events (CQEs)
                | for completed requests into the second buffer.
                | 
                | So in this model a task (Future in your terminology)
                | registers an SQE (the registration process may be
                | proxied via a user-space runtime) and suspends itself.
                | Let's assume for simplicity that only one SQE was
                | registered for the task. After the OS posts the CQE for
                | the request, the runtime finds the correct state
                | transition function (via meta-information embedded into
                | the SQE, which gets mirrored to the relevant CQE) and
                | simply executes it; the requested data (if it was a
                | read) will already be filled into a buffer which is part
                | of the FSM state, so no need for additional syscalls or
                | interactions with the runtime to read this data!
               | 
               | If you are familiar with embedded development, then it
               | should sound quite familiar, since it's roughly how
               | hardware interrupts work as well! You register a job
               | (e.g. DMA transfer), dedicated hardware block does it,
               | and notifies a registered callback after the job was
               | done. Of course, it's quite an oversimplification, but
               | fundamental similarity is there.
               | 
               | >I think your design ultimately requires futures to be
               | implemented as a language feature, rather than a library
               | 
               | I am not sure if this design would have had a Future type
               | at all, but you are right, the advocated approach
               | requires a deeper integration with the language compared
               | to the stabilized solution. Though I disagree with the
               | opinion that it would've been impossible to do in Rust 1.
        
               | ben0x539 wrote:
               | What happens if you drop the task between 1 and 2? Does
               | dropping block until the cancellation of both tasks is
               | complete?
        
               | newpavlov wrote:
               | As I've mentioned several times, in this model you can
               | not simply "drop the task" without running its
               | asynchronous Drop. Each state in FSM will be generated
               | with a "drop" transition function, which may include
               | asynchronous cancellation requests (i.e. cleanup can be
               | bigger than one transition function and may represent a
               | mini sub-FSM). This would require introducing more
               | fundamental changes to the language (same as with proper
               | self-referential types) be it either some kind of linear
               | type capabilities or a deeper integration of runtimes
               | with the language (so you will not be able to manipulate
               | FSM states directly as any other data structure), since
               | right now it's safe to forget anything and destructors
                | are not guaranteed to run. IMO such changes would've made
               | Rust a better language in the end.
        
               | anp wrote:
               | "Rust would have been a better language by breaking its
               | stability guarantees" is just saying "Rust would have
               | been a better language by not being Rust." Maybe true,
               | but not relevant to the people whose work you've blanket
               | criticized. Rust language designers have to work within
               | the existing language and your arguments are in bad faith
               | if you say "async could have been perfect with all this
               | hindsight and a few breaking language changes".
        
               | newpavlov wrote:
                | I do not think that the impossibility of a reliable
                | async Drop in Rust 1 is a proven thing (prior to the
               | stabilization of async in the current form). Yes, it may
               | require some unpleasant additions such as making Futures
               | and async fns more special than they are right now and
               | implementing it with high probability would have required
               | a lot of work (at least on the same scale as was invested
               | into the poll-based model), but it does not make it
               | impossible automatically.
        
               | anp wrote:
               | I don't agree with this analysis TBH - async drop has
               | been revisited multiple times recently with no luck.
               | Without a clear path there I don't know why that would
               | seem like an option for async/await two years ago. Do you
               | actually think the language team should have completely
               | exhausted that option in order to try to require an
               | allocator for async/await?
               | 
               | Async drop would still not address the single-allocation-
               | per-state-machine advantage of the current design that
               | you've mostly not engaged with in this thread.
        
               | newpavlov wrote:
               | >I don't agree with this analysis TBH
               | 
               | No worries, I like when someone disagrees with me and
               | argues his or her position well, since it's a chance for
               | me to learn.
               | 
               | >async drop has been revisited multiple times recently
               | with no luck
               | 
               | The key word is "recently", meaning "after the
               | stabilization". It's exactly my point: this problem was
               | not sufficiently explored in my opinion prior
               | stabilization. I would've been fine with a well argued
               | position "async Drop is impossible without breaking
               | language changes, so we will not care about it", but now
               | we try to shoehorn async Drop on top of the stabilized
               | feature.
               | 
               | >Async drop would still not address the single-
               | allocation-per-state-machine advantage of the current
               | design that you've mostly not engaged with in this
               | thread.
               | 
               | I don't think you are correct here, please see this
               | comment: https://news.ycombinator.com/item?id=26408524
        
         | zelly wrote:
         | Why did polling have to be baked into the language? Seems
         | bizarre for a supposedly portable language to assume the
         | functionality of an OS feature which could change in the
         | future.
         | 
         | Meanwhile C and C++ can easily adopt any async system call
         | style because it made no assumptions in the standards about how
         | that would be done.
         | 
         | Rust also didn't solve the colored functions problem. Most
         | people think that's an impossible problem to solve without a
         | VM/runtime (like Java Loom), but people also thought garbage
         | collection was impossible in a systems language until Rust
         | solved it. It could have been a great opportunity for them.
        
           | newpavlov wrote:
           | >Why did polling have to be baked into the language?
           | 
           | See this comment:
           | https://news.ycombinator.com/item?id=26407440
           | 
           | >Meanwhile C and C++ can easily adopt any async system call
           | style because it made no assumptions in the standards about
           | how that would be done.
           | 
           | Do you know about co_await in C++20? AFAIK (I only have a
           | very cursory knowledge about it, so I may be wrong) it also
           | makes some trade-offs, e.g. it requires allocations, while in
            | Rust async tasks can live on the stack or in statically
            | allocated regions of memory.
           | 
           | Also do not forget that Rust has to ensure memory safety at
           | compile time, while C++ can be much more relaxed about it.
        
             | zelly wrote:
             | C++20 coroutines are not async in the standard. They are
             | just coroutines. Actually they have no implementation. The
             | user has to write classes to implement the promise type and
             | the awaitable type. You could just as easily write a
             | coroutine library wrapping epoll as you could io_uring. The
             | only thing it does behind your back (other than compile to
             | stackless coroutines) is allocate memory, which also goes
             | for a lot of other things.
        
               | pjmlp wrote:
               | Which makes them quite powerful, as they allow for other
               | kinds of patterns.
        
               | saurik wrote:
               | Is this not also true of Rust? Are you saying Rust in
               | some sense hardcodes an implementation to await in a way
               | C++ doesn't? (I am not a Rust programmer, but I am very
               | very curious about this and would appreciate any insight;
               | I do program in C++ with co_await daily, with my own
               | promise/task classes.)
        
               | kibwen wrote:
               | Rust's async/await support is not intended as a general
               | replacement of coroutines. In fact, async/await is built
               | on top of coroutines (what Rust calls "generators"), but
                | these are not yet stable.
                | https://github.com/rust-lang/rust/issues/43122
        
               | steveklabnik wrote:
               | You may want to watch/read my talk:
               | https://www.infoq.com/presentations/rust-2019/
               | 
               | I also did a follow up, walking through how you would
               | implement all of the bits:
               | https://www.infoq.com/presentations/rust-async-await/
               | 
               | TL;DR: rust makes you bring some sort of executor along.
               | You can write your own, you can use someone else's. I
               | have not done enough of a deep dive into what made it
               | into the standard to give you a great line-by-line
               | comparison.
        
             | mratsim wrote:
             | It requires allocation if the coroutine outlives the scope
             | that created it.
             | 
              | Otherwise compilers are free to implement heap allocation
             | elision (which is done in Clang).
             | 
             | Now compared to Rust, assuming you have a series of
             | coroutines to process a deferred event, Rust will allocate
             | once for the whole series while C++ would allocate once per
             | coroutine to store them in the reactor/proactor.
        
               | steveklabnik wrote:
                | Rust _never_ implicitly allocates, even with
                | async/await. I have written Rust programs on a
                | microcontroller with no heap, using async/await for it.
        
               | mratsim wrote:
               | I don't think I've implied that allocation in Rust was
               | implicit but that's a fair point.
        
               | steveklabnik wrote:
               | You said
               | 
               | > Rust will allocate once for the whole series
               | 
               | which, it will not.
               | 
               | It is true that some executors will do a single
               | allocation for the whole series, but that is not done by
               | Rust, nor is required. That's all!
        
           | moonchild wrote:
           | > people also thought garbage collection was impossible in a
           | systems language until Rust solved it
           | 
           | No, they didn't. Linear typing for systems languages had
           | already been done in ats, cyclone, and clean, the latter two
           | of which were a major inspiration for rust.
           | 
           | Venturing further into gc territory: long before rust was
           | even a twinkle in graydon hoare's eye, smart pointers were
           | happening in c++, and apple was experimenting with objective
           | c for drivers.
        
             | pjmlp wrote:
             | Apple wasn't experimenting with Objective-C for drivers,
             | NeXTSTEP drivers were written in Objective-C.
             | 
             | macOS IO Kit replacement, Driver Kit, is an homage to
             | NeXTSTEP Driver Kit name, the Objective-C framework.
        
             | roca wrote:
             | Perhaps more accurate to say "safe reclamation of dynamic
             | allocations without GC was not known to be possible in a
             | practical programming language, before Rust".
             | 
             | The problem with languages like ATS and Cyclone is that you
             | need heavy usage in real-world applications to prove that
             | your approach is actually usable by developers at scale.
             | Rust achieved that first.
        
               | benibela wrote:
                | I have always thought Pascal solved that in practice with
               | automated reference counting on arrays
               | 
               | A good optimizer could then have removed the counting on
               | non-escaping local variables
        
               | steveklabnik wrote:
               | If you squint, this is sorta what Swift is.
        
               | moonchild wrote:
               | Cyclone was a c derivative (I believe it was even
               | backwards compatible), and ats a blend of ml and c. Ml
               | and c are both certainly proven.
               | 
               | Cyclone was, and ats is, a research project; not
               | necessarily intended to achieve widespread use. And
               | again, obj-c was being used by apple in drivers, which is
               | certainly a real-world application.
               | 
               | > without GC
               | 
               | I don't know what you mean by this. GC is a memory
               | management policy in which the programmer does not need
               | to manually end the lifetimes of objects. Rust is a
               | garbage collected language. How many manual calls to
               | 'drop' or 'free' does the average rust program have?
        
               | roca wrote:
               | Cyclone wasn't backwards-compatible with C.
               | 
               | ATS is not just "a blend of ML and C", it has a powerful
               | proof system on top.
               | 
               | You can't just say "well, these languages were derived
               | from C in part, THEREFORE they must be easy to adopt at
               | scale", that doesn't follow at all.
               | 
               | Yes, Cyclone and ATS were research projects, that's why
               | they were never able to accumulate the real-world
               | experience needed to demonstrate that their ideas work at
               | scale.
               | 
               | Objective-C isn't memory safe.
               | 
               | By "GC" here I meant memory reclamation schemes that
               | require runtime support and object layout changes ...
               | which is the way most people use it. If you use the term
               | "garbage collection" in a more expansive way, so that you
               | say Rust "is a garbage collected language", then most
               | people are going to misunderstand you.
        
           | pjmlp wrote:
           | > but people also thought garbage collection was impossible
           | in a systems language until Rust solved it?
           | 
           | What?
           | 
           | Several OSes have proven their value written in GC enabled
           | systems programming languages.
           | 
           | They aren't as mainstream as they should due to UNIX cargo
           | cult and anti-GC Luddites.
           | 
           | Rust only proved that affine types can be easier than cyclone
           | and ATS.
        
           | kibwen wrote:
           | _> Meanwhile C and C++ can easily adopt any async system call
           | style because it made no assumptions in the standards about
           | how that would be done._
           | 
           | This is comparing apples to oranges; Rust's general, no-
           | assumptions-baked-in coroutines feature is called
           | "generators", and are not yet stable. It is this feature that
           | is internally used to implement async/await.
           | https://github.com/rust-lang/rust/issues/43122
        
           | ilammy wrote:
           | > _people also thought garbage collection was impossible in a
           | systems language until Rust solved it_
           | 
           | Only if you understand "garbage collection" in a narrow sense
           | of memory safety, no explicit free() calls, a relatively
           | readable syntax for passing objects around, and acceptable
           | amount of unused memory. This comes with a non-negligible
            | amount of fine print for Rust when compared to garbage-
           | collected languages.
        
         | the_duke wrote:
         | I agree that Rust async is currently in a somewhat awkward
         | state.
         | 
         | Don't get me wrong, it's usable and many projects use it to
         | great effect.
         | 
         | But there are a few important features like async trait methods
         | (blocked by HKT), async closures, async drop, and (potentially)
         | existential types, that seem to linger. The unresolved problems
         | around Pin are the most worrying aspect.
         | 
         | The ecosystem is somewhat fractured, partially due to a lack of
         | commonly agreed abstractions, partially due to language
         | limitations.
         | 
         | There also sadly seems to be a lack of leadership and drive to
         | push things forward.
         | 
         | I'm ambivalent about the rushing aspect. Yes, async was pushed
         | out the door. Partially due to heavy pressure from
         | Google/Fuchsia and a large part of the userbase eagerly
         | .awaiting stabilization.
         | 
         | Without stabilizing when they did, we very well might still not
         | have async on stable for years to come. At some point you have
         | to ship, and the benefits for the ecosystem can not be denied.
         | It remains to be seen if the design is boxed into a suboptimal
         | corner; I'm cautiously optimistic.
         | 
         | But what I disagree with is that polling was a mistake. It is
          | what distinguishes Rust's implementation, and provides
         | significant benefits. A completion model would require a
         | heavier, standardized runtime and associated inefficiencies
         | like extra allocations and indirection, and prevent
         | efficiencies that emerge with polling. Being able to just
         | locally poll futures without handing them off to a runtime, or
         | cheaply dropping them, are big benefits.
         | 
         | Completion is the right choice for languages with a heavy
          | runtime. But I don't see how having Rust dictate completion
         | would make io_uring wrapping more efficient than implementing
         | the same patterns in libraries.
         | 
         | UX and convenience is a different topic. Rust async will never
         | be as easy to use as Go, or async in languages like
         | Javascript/C#. To me the whole point of Rust is providing as
         | high-level, _safe_ abstractions as possible, without
          | constraining the ability to achieve maximum efficiency. (How
          | well that goal is achieved, or hindered, by design patterns
          | that are more or less dictated by the language design is
          | debatable, though.)
        
           | ash wrote:
           | Is there a good explanation on the difference between polling
           | model and completion model? (not Rust-specific)
        
             | ash wrote:
             | Further down the thread:
             | https://news.ycombinator.com/item?id=26407770
        
           | newpavlov wrote:
           | >A completion model would require a heavier, standardized
           | runtime and associated inefficiencies like extra allocations
           | and indirection, and prevent efficiencies that emerge with
           | polling.
           | 
           | You are not the first person who uses such arguments, but I
           | don't see why they would be true. In my understanding both
            | models would use approximately the same FSMs, but they would
            | interact differently with the runtime (i.e. instead of
           | registering a waker, you would register an operation on a
           | buffer which is part of the task state). Maybe I am missing
           | something, so please correct me if I am wrong in a reply to
           | this comment: https://news.ycombinator.com/item?id=26407824
        
         | zamalek wrote:
         | > A bigger problem in my opinion is that Rust has chosen to
         | follow the poll-based model
         | 
         | This is an inaccurate simplification that, admittedly, their
         | own literature has perpetuated. Rust uses informed polling: the
         | resource can wake the scheduler at any time and tell it to
         | poll. When this occurs it is virtually identical to completion-
         | based async (sans some small implementation details).
         | 
         | What informed polling brings to the picture is opportunistic
         | sync: a scheduler may choose to poll before suspending a task.
         | This helps when e.g. there is data already in IO buffers (there
         | often is).
         | 
         | There's also some fancy stuff you can do with informed polling,
         | that you can't with completion (such as stateless informed
         | polling).
         | 
         | Everything else I agree with, especially Pin, but informed
         | polling is really elegant.
        
           | pas wrote:
           | Could you explain what is informed and stateless informed
           | polling? I haven't really found anything on the web. Thanks!
        
             | zamalek wrote:
              | So stateful async would be writing IO. You've passed in a
              | buffer and the length to copy from the buffer. In the
              | continuation, you'd need to know which original call you
              | were working with so that you can correlate it with those
              | parameters you passed through.
              | 
              |     var state = socket.read(buffer);
              |     while (!state.poll()) {}
              |     state.bytesRead...
              | 
              | Stateless async is accepting a connection. In 95% of
              | servers, you just care that a connection was accepted; you
              | don't have any state that persists across the
              | continuation:
              | 
              |     while (!listeningSocket.poll()) {}
              |     var socket = listeningSocket.accept();
             | 
             | Stateless async skirts around many of the issues that Rust
             | async can have (because Pin etc. has to happen because of
             | state).
        
             | Diggsey wrote:
             | I believe they mean that when you poll a future, you pass
             | in a context. The future derives a "waker" object from this
             | context which it can store, and use to later trigger itself
             | to be re-polled.
             | 
             | By using a context with a custom "waker" implementation,
             | you can learn _which_ future specifically needs to be re-
             | polled.
             | 
             | Normally only the executor would provide the waker
             | implementation, so you only learn which top-level future
             | (task) needs to be re-polled, but not what specific future
             | within that task is ready to proceed. However, some future
             | combinators also use a custom waker so they can be more
             | precise about which specific future within the task should
             | be re-polled.
        
           | amelius wrote:
           | > the resource can wake the scheduler at any time and tell it
           | to poll
           | 
           | Isn't that called interrupting?
           | 
           | The terminology seems a little off here, but perhaps that is
           | only my perception.
        
             | marcosdumay wrote:
             | No, it's not. Interrupting is when you call the scheduler
             | at any time, even when it's doing some other work. When
             | it's idle, it can not be interrupted.
             | 
              | Interrupts are something one really tries to restrict to
              | the hardware and kernel layers, because when people write
              | interrupt handlers, they almost always write them wrong.
        
         | [deleted]
        
         | eximius wrote:
         | Could Rust switch? More importantly, would a completion based
         | model alleviate the problems mentioned?
        
           | newpavlov wrote:
           | Without introducing Rust 2? Highly unlikely.
           | 
           | I should have worded my message more carefully. Completion-
           | based model is not a silver bullet which would magically
           | solve all problems (though I think it would help a bit with
           | the async Drop problem). The problem is that Rust async was
           | rushed without careful deliberation, which causes a number of
           | problems without a clear solution in sight.
        
             | kibwen wrote:
             | _> The problem is that Rust async was rushed without
             | careful deliberation_
             | 
             | As someone who observed the process, this couldn't be
             | further from the truth. Just because one disagrees with the
             | conclusion does not mean that the conclusion was made in
             | haste or in ignorance.
             | 
             |  _> Without introducing Rust 2? Highly unlikely._
             | 
             | This is incorrect. async/await is a leaf node on the
             | feature tree; it is supported by other language features,
             | but does not support any others. Deprecating or removing it
             | in favor of a replacement would not be traumatic for the
             | language itself (less so for the third-party async/await
             | ecosystem, of course). But this scenario is overly
             | dramatic: the benefits of a completion-based model are not
             | so clear-cut as to warrant such actions.
        
               | artursapek wrote:
               | I was going to say... even as a casual observer I
               | remember the finalization of async took a loooong time.
        
               | newpavlov wrote:
               | >Just because one disagrees with the conclusion does not
               | mean that the conclusion was made in haste or in
               | ignorance.
               | 
               | Believe me, I do understand the motivation behind the
               | decision to push async stabilization in the developed
               | form (at least I think I do). And I do not intend to
               | argue in a bad faith. My point is that in my opinion the
               | Rust team has chosen to get the mid-term boost of Rust
               | popularity at the expense of the long-term Rust health.
               | 
               | Yes, you are correct that in theory it's possible to
               | deprecate the current version of async. But as you note
               | yourself, it's highly unlikely to happen since the
               | current solution is "good enough".
        
               | ben0x539 wrote:
               | > My point is that in my opinion the Rust team has chosen
               | to get the mid-term boost of Rust popularity at the
               | expense of the long-term Rust health.
               | 
               | I don't think a conscious decision of that sort was made?
               | My impression is that at the time the road taken was
               | understood to be the correct solution and not a
               | compromise. Is that wrong?
        
               | newpavlov wrote:
               | The decision was made 3 years ago and at the time it was
               | indeed a good one, but the situation has changed and the
               | old decision was not (in my opinion) properly reviewed.
               | See this comment:
               | https://news.ycombinator.com/item?id=26408524
        
               | fwip wrote:
               | Does anything about the implementation of async prevent a
               | future completion-based async feature? Say it's called
               | bsync/bwait.
        
               | newpavlov wrote:
                | Yes, it's possible, but it would be a second way of doing
                | async, which would split the ecosystem even further. So
               | without a REALLY good motivation it simply will not
               | happen. Unfortunately, the poll-based solution is "good
               | enough"... I guess, some may say that "perfect is the
               | enemy of good" applies here, but I disagree.
        
               | jzoch wrote:
               | I and many others would disagree that they made the
               | decision "at the expense of the long-term Rust health".
               | You aren't arguing in good faith if you put words in
               | their mouth. There is no data to suggest the long-term
               | health of rust is at stake because of the years long path
               | they took in stabilizing async today. There are merits to
                | both models, but nothing is as clear-cut as you make it out to
               | be - completion-based futures are not definitively better
               | than poll-based and would have a lot of trade-offs. To
               | phrase this as "Completion based is totally better and
               | the only reason it wasn't done was because it would take
               | too long and Rust needed popularity soon" is ridiculous
        
               | newpavlov wrote:
                | I do not put words in their mouth; have you missed the
                | "in my opinion" part?
               | 
                | The issues with Pin, the problems around noalias, the
                | inability to design a proper async Drop solution, the
                | poor compatibility with io-uring and IOCP: in my eyes
                | these are indicators that Rust's health in the async
                | field has suffered.
               | 
               | >Completion based is totally better and the only reason
               | it wasn't done was because it would take too long and
               | Rust needed popularity soon
               | 
                | And who is putting words into others' mouths now? Please see
               | this comment:
               | https://news.ycombinator.com/item?id=26407565
        
               | artursapek wrote:
               | Isn't it very likely that going the other route would
               | also result in a different but equally long list of
               | issues?
        
               | newpavlov wrote:
               | Yes, it's a real possibility. But the problem is that the
               | other route was not properly explored, so we can not
               | compare advantages and disadvantages. Instead Rust went
               | all-in on a bet which was made 3 years ago.
        
             | FridgeSeal wrote:
             | > The problem is that Rust async was rushed without careful
             | deliberation, which causes a number of problems without a
             | clear solution in sight.
             | 
             | Are we talking about the same Rust? I remember the debate
             | and consideration over async was enormous and involved. It
             | was practically the polar opposite of "without careful
             | deliberation".
        
         | vlovich123 wrote:
         | Since you clearly have expertise, I'm curious if you might
         | provide some insight into what would roughly be different in an
          | async completion-based model & why that might be fundamentally
          | at odds with the event-based one? Like is it an
         | incompatibility with the runtime or does it change the actual
         | semantics of async/await in a fundamental way to the point
         | where you can't just swap out the runtime & reuse existing
         | async code?
        
           | newpavlov wrote:
           | It's certainly possible to pave over the difference between
           | models to a certain extent, but the resulting solution will
           | not be zero-cost.
           | 
           | Yes, there is a fundamental difference between those models
           | (otherwise we would not have two separate models).
           | 
           | In a poll-based model interactions between task and runtime
           | look roughly like this:
           | 
           | - task to runtime: I want to read data on this file
           | descriptor.
           | 
           | - runtime: FD is ready, I'll wake-up the task.
           | 
            | - task: great, FD is ready! I will read the data from the
            | FD and then process it.
           | 
           | While in a completion based model it looks roughly like this:
           | 
           | - task to runtime: I want data to be read into this buffer
           | which is part of my state.
           | 
           | - runtime: the requested buffer is filled, I'll wake-up the
           | task.
           | 
            | - task: great, the requested data is in the buffer! I can
            | process it.
           | 
            | As you can see, the primary difference is that in the latter
            | model the buffer becomes "owned" by the runtime/OS while the
            | task is suspended. This means you cannot simply drop a task
            | if you no longer need its results, as Rust currently assumes
            | you can. You either have to wait for the read request to
            | complete or (possibly asynchronously) request its
            | cancellation. With the current Rust async, if you want to
            | integrate with io-uring you have to use awkward buffers
            | managed by the runtime, instead of simple buffers which are
            | part of the task state.
           | 
           | Even outside of integration with io-uring/IOCP we have use-
           | cases which require async Drop and we currently don't have a
           | good solution for it. So I don't think that the decision to
           | allow dropping tasks without an explicit cancellation was a
           | good one, even despite the convenience which it brings.
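The two dialogues above can be sketched with ordinary synchronous functions; this is only an illustration of who owns the buffer (none of these names are a real API), with a byte slice standing in for the kernel:

```rust
// Readiness/poll style: the caller keeps owning the buffer and lends it
// only for the duration of the call, so cancelling is just "stop calling".
fn poll_style_read(src: &[u8], buf: &mut [u8]) -> usize {
    let n = src.len().min(buf.len());
    buf[..n].copy_from_slice(&src[..n]);
    n
}

// Completion style: the buffer is handed over by value for the whole
// operation (as io_uring/IOCP require) and only comes back on completion.
// Dropping the "task" mid-flight would mean the kernel keeps writing into
// memory we no longer track -- the async-cancellation problem in miniature.
fn completion_style_read(src: &[u8], mut buf: Vec<u8>) -> (Vec<u8>, usize) {
    let n = src.len().min(buf.len());
    buf[..n].copy_from_slice(&src[..n]);
    (buf, n)
}

fn main() {
    let src = b"hello";

    // Poll style: the buffer lives in the caller's state the whole time.
    let mut stack_buf = [0u8; 8];
    let n = poll_style_read(src, &mut stack_buf);
    assert_eq!(&stack_buf[..n], b"hello");

    // Completion style: ownership round-trips through the "runtime".
    let (owned_buf, n) = completion_style_read(src, vec![0u8; 8]);
    assert_eq!(&owned_buf[..n], b"hello");
}
```

In the real async setting the poll-style signature corresponds to `AsyncRead::poll_read(&mut self, buf: &mut [u8])`, where the borrow ends at every return, which is exactly why the task can be dropped at any suspension point.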
        
             | saurik wrote:
             | FWIW, I'd bet almost anything that this problem isn't
             | solvable in any general way without linear types, at which
             | point I bet it would be a somewhat easy modification to
             | what Rust has already implemented. (Most of my development
             | for a long time now has been in C++ using co_await with I/O
             | completion and essentially all of the issues I run into--
             | including the things analogous to "async Drop", which I
             | would argue is actually the same problem as being able to
             | drop a task itself--are solvable using linear types, and
             | any other solutions feel like they would be one-off hacks.)
             | Now, the problem is that the Rust people seem to be against
             | linear types (and no one else is even considering them), so
             | I'm pretty much resigned that I'm going to have to develop
             | my own language at some point (and see no reason to go too
             | deep into Rust in the mean time) :/.
        
               | steveklabnik wrote:
               | > at which point I bet it would be a somewhat easy
               | modification to what Rust has already implemented.
               | 
               | You'd lose that bet:
               | https://gankra.github.io/blah/linear-rust/
        
               | reasonabl_human wrote:
               | I did a double take seeing your username above this
               | comment!
               | 
               | Thank you for your contributions to the jailbreak
               | community, it's what got me started down the programming
               | / tinkering path back in middle school and has
               | significantly shaped the opportunities I have today.
               | Can't believe I'm at the point where I encountered you
               | poking around the same threads on a forum... made my day!
               | :)
        
               | SkiFire13 wrote:
               | I'm not familiar with linear types but as far as I can
                | see they're pretty much the same as Rust's ownership
               | rules. Is there something I'm missing?
        
               | littlestymaar wrote:
               | Rust implements _affine_ types, which means every object
               | must be used _at most_ once. You cannot use them twice,
               | but you can discard them and not do anything with them.
               | _Linear_ types means _exactly once_.
               | 
                | But I don't think you can easily move from affine types
                | to linear types in the case of Rust; see the
                | leakpocalypse[1]
               | 
               | [1]: https://cglab.ca/~abeinges/blah/everyone-
               | poops/#leakpocalyps...
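A small illustration of the affine rule in today's Rust; `Token`, `consume`, and `affine_demo` are invented names for this sketch:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Count drops so we can observe that discarded values are still cleaned up.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Token; // deliberately not Copy, so each value is moved at most once

impl Drop for Token {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn consume(_t: Token) {} // takes ownership: the single permitted "use"

// Returns how many Tokens were dropped during the demo.
fn affine_demo() -> usize {
    DROPS.store(0, Ordering::SeqCst);

    let a = Token;
    consume(a);
    // consume(a); // error[E0382]: use of moved value -- "at most once"

    {
        let _b = Token; // never used at all: affine typing allows the
                        // discard; a linear type system would reject it
    }

    DROPS.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(affine_demo(), 2); // both tokens were dropped regardless
}
```

Linear types would turn the silent discard of `_b` into a compile error, which is precisely what completion-based IO wants: a future that cannot simply be forgotten mid-flight.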
        
           | stormbrew wrote:
           | I'm also curious about this. Boats wrote some about rust
           | async and io-uring a while ago that's interesting[1], but
           | also points out a very clear path forward that's not actually
           | outside the framework of rust's Future or async
           | implementation: using interfaces that treat the kernel as the
           | owner of the buffers being read into/out of, and that seems
           | in line with my expectations of what should work for this.
           | 
           | But I haven't touched IOCP in nearly 20 years and haven't
           | gotten into io-uring yet, so maybe I'm missing something.
           | 
           | Really the biggest problem might be that _switching out
           | backends_ is currently very difficult in rust, even the 0.x
           | to 1.x jump of tokio is painful. Switching from
           | Async[Reader|Writer] to AsyncBuf[Reader|Writer] might be even
           | harder.
           | 
           | [1] https://boats.gitlab.io/blog/post/io-uring/
        
             | comex wrote:
             | There's a workaround, but it's unidiomatic, requires more
             | traits, and requires inefficient copying of data if you
             | want to adapt from one to the other.
             | 
             | However, I wouldn't call this a problem with a polling-
             | based model.
             | 
             | At least part of the goal here must be to avoid allocations
             | and reference counting. If you don't care about that, then
             | the design could have been to 'just' pass around
             | atomically-reference-counted buffers everywhere, including
             | as the buffer arguments to AsyncRead/AsyncWrite. That would
             | avoid the need for AsyncBufRead to be separate from
             | AsyncRead. It wouldn't prevent some unidiomaticness from
             | existing - you still couldn't, say, have an async function
             | do a read into a Vec, because a Vec is not reference
             | counted - but if the entire async ecosystem used reference
             | counted buffers, the ergonomics would be pretty decent.
             | 
             | But we do care about avoiding allocations and reference
             | counting, resulting in this problem. However, that means a
             | completion-based model wouldn't really help, because a
             | completion-based model essentially requires allocations and
             | reference counting for the futures themselves.
             | 
             | To me, the question is whether Rust could have avoided this
             | with a _different_ polling-based model. It definitely could
             | have avoided it with a model where the allocations for
             | async functions are always managed by the system, just like
             | the stacks used for regular functions are. But that would
              | lose the elegance of async fns being 'just' a wrapper over
             | a state machine. Perhaps, though, Rust could also have
             | avoided it with just some tweaks to how Pin works [1]...
             | but I am not sure whether this is actually viable. _If_ it
             | is, then that might be one motivation for eventually
             | replacing Pin with a different construct, albeit a weak
             | motivation by itself.
             | 
             | [1] https://www.reddit.com/r/rust/comments/dtfgsw/iou_rust_
             | bindi...
        
               | stormbrew wrote:
               | Wouldn't a more 'correct' implementation be moving the
               | buffer into the thing that initiates the future (and
               | thus, abstractly, into the future), rather than
               | refcounting? At least with IOCP you aren't really
               | supposed to even touch the memory region given to the
               | completion port until it's signaled completion iirc.
               | 
               | Ie. to me, an implementation of read() that would work
               | for a completion model could be basically:
                | async fn read<T: IntoSystemBufferSomehow>(&self, buf: T)
                | -> Result<T, Error>
               | 
               | I recognize this doesn't resolve the early drop issues
               | outlined, and it obviously does require copying to adapt
               | it to the existing AsyncRead trait, or if you want to
               | like.. update a buffer in an already allocated object.
               | It's just what I would expect an api working against iocp
               | to look like, and I feel like it avoids many of the
               | issues you're talking about.
        
               | alexisread wrote:
               | I'm not a rust expert, so I'm not sure how close this
               | proposal is to Composita
               | 
               | http://concurrency.ch/Content/publications/Blaeser_Compon
               | ent...
               | 
               | Essentially each component has a buffered interface (an
               | interface message queue), which static analysis sizes at
               | compile time. This buffer can act as a daemon, ref
               | counter, offline dropbox, cache, cancellation check, and
               | can probably help with cycle checking.
               | 
               | Is this the sort of model which would be useful here?
        
               | withoutboats2 wrote:
               | > I am not sure whether this is actually viable.
               | 
               | Having investigated this myself, I would be very
               | surprised to discover that it is.
               | 
               | The only viable solution to make AsyncRead zero cost for
                | io-uring would have been to require futures to be polled
               | to completion before they are dropped. So you can give up
               | on select and most necessary concurrency primitives. You
               | really want to be able to stop running futures you don't
               | need, after all.
               | 
               | If you want the kernel to own the buffer, you should just
               | let the kernel _own the buffer_. Therefore, AsyncBufRead.
               | This will require the ecosystem to shift where the buffer
                | is owned, of course, and that's a cost of moving to
                | io-uring. Tough, but those are the cards we were dealt.
        
               | comex wrote:
               | Well, you can still have select; it "just" has to react
               | to one of the futures becoming ready by cancelling all
               | the other ones and waiting (asynchronously) for the
               | cancellation to be complete. Future doesn't currently
               | have a "cancel" method, but I guess it would just be
               | represented as async drop. So this requires some way of
               | enforcing that async drop is called, which is hard, but I
               | believe it's equally hard as enforcing that futures are
               | polled to completion: either way you're requiring that
               | some method on the future be called, and polled on,
               | before the memory the future refers to can be reused. For
               | the sake of this post I'll assume it's somehow possible.
               | 
               | Having to wait for cancellation does sound expensive,
               | especially if the end goal is to pervasively use APIs
               | like io_uring where cancellation can be slow.
               | 
               | But then, in a typical use of select, you don't actually
               | want to cancel the I/O operations represented by the
               | other futures. Rather, you're running select in a loop in
               | order to handle each completed operation as it comes.
               | 
               | So I think the endgame of this hypothetical world is to
               | encourage having the actual I/O be initiated by a Future
               | or Stream created outside the loop. Then within the loop
               | you would poll on `&mut future` or `stream.next()`. This
               | already exists and is already cheaper in some cases, but
               | it would be significantly cheaper when the backend is
               | io_uring.
        
         | KingOfCoders wrote:
          | The poll model has the advantage that you have control over
          | when async work starts, and is therefore the more predictable
          | model.
         | 
         | I guess that way it fits more the Rust philosophy.
        
         | junon wrote:
          | FWIW I've done a fair bit of research with io_uring. For file
          | operations it's fast, but over epoll the speedups are
          | negligible. The creator is a good guy, but they've had issues
          | with the performance numbers being skewed by various
          | deficiencies in the benchmark code, such as skipping error
          | checks in the past.
         | 
         | Also, io_uring can certainly be used via polling. Once the
         | shared rings are set up, no syscalls are necessary afterward.
        
           | ben0x539 wrote:
           | We've briefly been playing with io_uring (in async rust) for
           | a network service that is CPU-bound and seems to be
           | bottlenecked in context switches. In a very synthetic
           | comparison, the io_uring version seemed very promising (as in
           | "it may be worth rewriting a production service targeting an
            | experimental io setup"). We ran out of the allocated time
            | before we got to something closer to a real-world benchmark,
            | but I'm fairly optimistic that even for non-file operations
           | there are real performance gains in io_uring for us.
           | 
           | I'm not sure io_uring polling counts as polling since you're
           | really just polling for completions, you still have all the
           | completion-based-IO things like the in-flight operations
           | essentially owning their buffers.
        
             | junon wrote:
             | Yes, I should have specified - _in theory_ io_uring is
             | _much_ faster and less resource intensive. With the right
              | polish, it can certainly be the next iteration of I/O
              | syscalls.
             | 
             | That being said, you have to restructure a lot of your
              | application to be io_uring-ready in order to reap
             | the most gains. In theory, you'll also have to be a bit
             | pickier with CPU affinities, namely when using SQPOLL
             | (submit queue poll), which creates a kernel thread. Too
             | much contention means such facilities will actually slow
             | you down.
             | 
             | The research is changing weekly and most of the exciting
             | stuff is still on development branches, so tl;dr (for the
             | rest of the readers) if you're on the fence, best stick to
             | epoll for now.
        
         | [deleted]
        
         | johnnycerberus wrote:
         | In all this time, maestro Andrei Alexandrescu was right when he
         | said Rust feels like it "skipped leg day" when it comes to
         | concurrency and metaprogramming capabilities. Tim Sweeney was
          | complaining about similar things, saying of Rust that it is one
         | step forward, one step backward. These problems will be evident
         | at a later time, when it will be already too late. I will
         | continue experimenting with Rust, but Zig seems to have some
         | great things going on, especially the colourless functions and
          | the comptime thingy. Its safety story does not disappoint
          | either, even if it is not at Rust's level of guarantees.
        
           | aw1621107 wrote:
           | In case anyone else was interested in the original sources
           | for the quotes:
           | 
           | > Andrei Alexandrescu was right when he said Rust feels like
           | it "skipped leg day" when it comes to concurrency and
           | metaprogramming capabilities.
           | 
           | https://archive.is/hbBte (the original answer appears to have
           | been deleted [0])
           | 
           | > Tim Sweeney was complaining about similar things, saying
           | about Rust that is one step forward, two steps backward.
           | 
           | https://twitter.com/timsweeneyepic/status/121381423618448588.
           | ..
           | 
           | (He said "Kind of one step backward and one step forward",
           | but close enough)
           | 
           | [0]: https://www.quora.com/Which-language-has-the-brightest-
           | futur...
        
             | johnnycerberus wrote:
              | Thanks for the references; indeed, Tim said one step
              | forward, one backward, my bad. He posted it a long time
              | ago.
        
           | jedisct1 wrote:
           | And Zap (scheduler for Zig) is already faster than Tokio.
           | 
            | Zig and other recent languages were invented after Rust
           | and Go, so they could learn from them, while Rust had to
           | experiment a lot in order to combine async with borrow
           | checking.
           | 
           | So, yes, the async situation in Rust is very awkward, and
           | doing something beyond a Ping server is more complicated than
           | it could be. But that's what it takes to be a pioneer.
        
             | the_duke wrote:
             | > And Zap (scheduler for Zig) is already faster than Tokio.
             | 
             | I'm not necessarily doubtful, tokio isn't the fastest
             | implementation of a runtime.
             | 
             | But can you point to a non-trivial benchmark that shows
             | this?
             | 
             | Performance claims should always come with a verifiable
             | benchmark.
        
               | jedisct1 wrote:
               | Check out @kingprotty's Twitter posts and Zig Show
               | presentations.
        
             | winter_squirrel wrote:
             | Are there any benchmarks / links for zap? I tried searching
             | and didn't find much
        
         | devwastaken wrote:
          | Correct me if I'm wrong, but isn't any sort of async support
          | non-integral to Rust? For example, in something like
          | JavaScript you can't implement your own async. But in C, C++
          | or Rust you can do pretty much anything you want.
         | 
         | So if in the future io-uring and friends become the standard
         | can't that just be a library you could then use?
         | 
         | Similar to how in C you don't need the standard library to do
         | threads or async.
        
           | newpavlov wrote:
           | Yes, you can encode state machines manually, but it will be
            | FAR less ergonomic than the async syntax. Rust started with
            | a library-based approach, but it was... not great. Async
            | code was littered with and_then methods and came really
            | close to the infamous JS callback hell. The ergonomic
            | improvement which async/await brings is essentially the
            | raison d'etre for incorporating this functionality into the
            | language.
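As an analogy (using plain `Result` rather than real futures), the difference between combinator chaining and sugared straight-line code looks like this, with `?` playing roughly the role `.await` later took; all function names here are invented:

```rust
fn parse(s: &str) -> Result<i32, String> {
    s.trim().parse::<i32>().map_err(|e| e.to_string())
}

// Combinator style: every step nests inside an and_then closure,
// much like futures-0.1 chains did.
fn add_combinators(a: &str, b: &str) -> Result<i32, String> {
    parse(a).and_then(|x| parse(b).and_then(move |y| Ok(x + y)))
}

// Sugared style: straight-line code with an early return on error.
fn add_sugared(a: &str, b: &str) -> Result<i32, String> {
    let x = parse(a)?;
    let y = parse(b)?;
    Ok(x + y)
}

fn main() {
    assert_eq!(add_combinators("1", " 2"), Ok(3));
    assert_eq!(add_sugared("1", " 2"), Ok(3));
    assert!(add_sugared("one", "2").is_err());
}
```

With futures the nesting was far worse than this two-step example, because every intermediate value had to be threaded through the closures by hand.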
        
             | eru wrote:
             | Interesting!
             | 
             | For comparison, Haskell went with the library approach but
             | has the syntactic sugar of the equivalent of `and_then`
             | built into the language. (I am talking about Monads and do-
             | notation.)
             | 
             | It's a bit like iterating in Python: for-loops are a
             | convenient syntactic sugar to something that can be
             | provided by a library.
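Rust itself has the same kind of sugar for iteration: a `for` loop is roughly sugar over the `Iterator` trait, which can be hand-desugared like so (a sketch, not the exact expansion the compiler emits):

```rust
fn sum_with_for(v: &[i32]) -> i32 {
    let mut total = 0;
    for x in v {
        total += x;
    }
    total
}

// Roughly what the for loop above expands to: obtain an iterator,
// then loop over next() until it yields None.
fn sum_desugared(v: &[i32]) -> i32 {
    let mut total = 0;
    let mut iter = v.iter(); // what `for` gets via IntoIterator::into_iter
    loop {
        match iter.next() {
            Some(x) => total += x,
            None => break,
        }
    }
    total
}

fn main() {
    assert_eq!(sum_with_for(&[1, 2, 3]), 6);
    assert_eq!(sum_desugared(&[1, 2, 3]), 6);
}
```

Do-notation generalizes this pattern to any monad, which is the generalization Rust lacks without higher-kinded types.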
        
               | steveklabnik wrote:
               | It is a little old, but for the general gist of it,
               | http://aturon.github.io/tech/2018/04/24/async-borrowing/
               | is an amazing description of the problem here.
               | 
                | It is a PhD-level research problem to know whether
                | monads and do notation would be able to work in Rust.
                | The people who
               | are most qualified to look into it (incidentally: a lot
                | of the same crew who was working on async) believe
               | that it may literally be impossible.
        
               | anp wrote:
                | My impression is that some of them even burnt out in the
                | slog of that supposedly rushed process to land
                | async/await!
        
               | newpavlov wrote:
                | Yes, a number of people have suggested introducing a
                | general do notation (or an analog of it) instead of the
                | use-case-specific async/await syntax, but since Rust
                | does not have
               | proper higher kinded types (and some Rust developers say
               | it never will), such proposals have been deemed
               | impractical.
        
               | pdpi wrote:
               | It's not just HKTs. Figuring out how to handle Fn,
               | FnOnce, FnMut is a whole other can of worms.
        
               | smt1 wrote:
               | I think there were some (related, I think!) proposals
               | here: https://github.com/rust-lang/lang-team/issues/23
               | https://github.com/rust-lang/rfcs/issues/3000
        
             | shmerl wrote:
              | Can't Rust come up with a new syntax (one that matches the
              | io_uring idea better) and deprecate the old one? Or simply
              | replace the implementation while keeping the old syntax,
              | if it's semantically the same?
        
               | kibwen wrote:
               | It could, although I highly doubt that any deficiencies
               | with the current implementation of async/await are so
               | severe as to warrant anything so dramatic.
        
           | wahern wrote:
           | I agree. Completion-based APIs are more high level, and not a
           | good abstraction at the systems language level. IOCP and
           | io_uring use poll-based interfaces internally. In io_uring's
           | case, the interfaces are basically the same ones available in
           | user space. In Windows case IOCP uses interfaces that are
           | private, but some projects have figured out the details well
           | enough to implement decent epoll and kqueue compatibility
           | libraries.
           | 
           | Application developers of course want much higher level
           | interfaces. They don't want to do a series of reads; they
           | want "fetch_url". But if "fetch_url" is the lowest-level API
           | available, good luck implementing an efficient streaming
           | media server. (Sometimes we end up with things like HTTP Live
           | Streaming, a horrendously inefficient protocol designed for
           | ease of use in programming environments, client- and server-
           | side, that effectively only offer the equivalent of
           | "fetch_url".)
           | 
           | Plus, IOCP models tend to heavily rely on callbacks and
           | closures. And as demonstrated in the article, low-level
           | languages suck at providing ergonomic first-class functions,
           | especially if they lack GC. (It's a stretch to say that Rust
           | even supports first-class functions.) If I were writing an
           | asynchronous library in Rust, I'd do it the same way I'd do
           | it in C--a low-level core that is non-blocking and stateful.
           | For example, you repeatedly invoke something like
           | "url_fetch_event", which returns a series of events (method,
           | header, body chunk) or EAGAIN/EWOULDBLOCK. (It may not even
           | pull from a socket directly, but rely on application to write
           | source data into an internal buffer.) Then you can _wrap_
           | that low-level core in progressively higher-level APIs,
           | including alternative APIs suited to different async event
           | models, as well as fully blocking interfaces. And if a high-
           | level API isn't to some application developer's liking, they
           | can create their own API around the low-level core API. This
           | also permits easier cross-language integration. You can
           | easily use such a low-level core API for bindings to Python,
           | Lua, or even Go, including plugging into whatever event
           | systems they offer, without losing functional utility.
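           | 
           | A rough sketch of that "low-level core" shape in Rust (all
           | names here - FetchCore, Event, feed, next_event - are
           | invented for illustration, not a real crate API): the
           | application pushes bytes in from whatever I/O source it
           | likes and repeatedly polls for parsed events or a
           | would-block indication.

```rust
// Sketch of a sans-I/O "low-level core": the caller feeds bytes in and
// polls for events; the core never touches a socket itself. All names
// are illustrative, and the "protocol" (status line, headers, body) is
// a toy stand-in for a real one.

#[derive(Debug, PartialEq)]
enum Event {
    Status(String),         // e.g. "200 OK"
    Header(String, String), // (name, value)
    BodyChunk(Vec<u8>),
    WouldBlock,             // the EAGAIN/EWOULDBLOCK analogue
}

struct FetchCore {
    buf: Vec<u8>,
    saw_status: bool,
    in_body: bool,
}

impl FetchCore {
    fn new() -> Self {
        FetchCore { buf: Vec::new(), saw_status: false, in_body: false }
    }

    /// The application writes source data into the internal buffer.
    fn feed(&mut self, data: &[u8]) {
        self.buf.extend_from_slice(data);
    }

    /// Non-blocking: return the next parsed event, or WouldBlock if
    /// the buffer does not yet hold a complete one.
    fn next_event(&mut self) -> Event {
        if self.in_body {
            if self.buf.is_empty() {
                return Event::WouldBlock;
            }
            return Event::BodyChunk(std::mem::take(&mut self.buf));
        }
        // Header phase: we need a complete line before we can emit.
        let pos = match self.buf.iter().position(|&b| b == b'\n') {
            Some(p) => p,
            None => return Event::WouldBlock,
        };
        let line: Vec<u8> = self.buf.drain(..=pos).collect();
        let line = String::from_utf8_lossy(&line).trim_end().to_string();
        if !self.saw_status {
            self.saw_status = true;
            Event::Status(line)
        } else if line.is_empty() {
            self.in_body = true;
            self.next_event()
        } else {
            let (k, v) = line.split_once(": ").unwrap_or((line.as_str(), ""));
            Event::Header(k.to_string(), v.to_string())
        }
    }
}

fn main() {
    let mut core = FetchCore::new();
    core.feed(b"200 OK\nconte");
    assert_eq!(core.next_event(), Event::Status("200 OK".into()));
    assert_eq!(core.next_event(), Event::WouldBlock); // partial header line
    core.feed(b"nt-type: text/html\n\n<html>");
    assert_eq!(
        core.next_event(),
        Event::Header("content-type".into(), "text/html".into())
    );
    assert_eq!(core.next_event(), Event::BodyChunk(b"<html>".to_vec()));
    println!("ok");
}
```

           | Because the core never performs I/O, the same state machine
           | can sit under epoll, io_uring, blocking reads, or bindings
           | for another language's event loop.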
           | 
           | It's the same principle with OS and systems language
           | interfaces--you provide mechanisms that can be built upon.
           | But so many Rust developers come from high-level application
           | environments, including scripting language environments,
           | where this composition discipline is less common and less
           | relevant.
        
             | MarkSweep wrote:
             | > IOCP models tend to heavily rely on callbacks and
             | closures
             | 
             | While perhaps higher level libraries are written that way,
             | I can't think of a reason why the primitive components of
             | IOCP require callbacks and closures. The "poll for io-
             | readiness and then issue non-blocking IO" and "issue async
             | IO and then poll for completion" models can be implemented
             | in a reactor pattern in a similar manner. It is just a
             | question of whether the system call happens before or after
             | the reactor loop.
             | 
             | EDIT: Reading some of the other comments and thinking a
             | bit, one annoying thing about IOCP is the cancelation
             | model. With polling IO readiness, it is really easy to
             | cancel IO and close a socket: just unregister from epoll
             | and close it. With IOCP, you will have to cancel the in-
             | flight operation and wait for the completion notification
             | to come in before you can close a socket (if I understand
             | correctly).
             | 
             | Anyways, I've been playing around with implementing some
             | async socket APIs on top of IOCP for Windows in Rust [1].
             | Getting the basic stuff working is relatively easy.
             | Figuring our a cancellation model is going to be a bit
             | difficult. And ultimately I think it would be cool if the
             | threads polling the completion ports could directly execute
             | the wakers in such a way that the future could be polled
             | inline, but getting all the lifetimes right is making my
             | head hurt.
             | 
             | [1] https://github.com/AustinWise/rust-windows-io
        
         | withoutboats2 wrote:
         | This post is completely and totally wrong. At least you got to
         | ruin my day, I hope that's a consolation prize for you.
         | 
         | There is NO meaningful connection between the completion vs
         | polling futures model and the epoll vs io-uring IO models.
         | comex's comments regarding this fact are mostly accurate. The
         | polling model that Rust chose is the only approach that has
         | been able to achieve single allocation state machines in Rust.
         | It was 100% the right choice.
         | 
         | After designing async/await, I went on to investigate io-uring
         | and how it would be integrated into Rust's system. I have a
         | whole blog series about it on my website:
         | https://without.boats/tags/io-uring/. I assure you, the
          | problems it presents are not related to Rust's polling model AT
         | ALL. They arise from the limits of Rust's borrow system to
         | describe dynamic loans across the syscall boundary (i.e. that
         | it cannot describe this). A completion model would not have
         | made it possible to pass a lifetime-bound reference into the
         | kernel and guarantee no aliasing. But all of them have fine
         | solutions building on work that already exists.
         | 
         | Pin is not a hack any more than Box is. It is the only way to
         | fit the desired ownership expression into the language that
          | already exists, squaring these requirements with other
          | desirable primitives we had already committed to: shared
          | ownership pointers, mem::swap, etc. It is simply FUD - frankly,
         | _a lie_ - to say that it will block  "noalias," following that
         | link shows Niko and Ralf having a fruitful discussion about how
         | to incorporate self-referential types into our aliasing model.
         | We were aware of this wrinkle before we stabilized Pin, I had
          | conversations with Ralf about it; it's just that now that we
         | want to support self-referential types in some cases, we need
         | to do more work to incorporate it into our memory model. None
         | of this is unusual.
         | 
         | And none of this was rushed. Ignoring the long prehistory, a
         | period of 3 and a half years stands between the development of
         | futures 0.1 and the async/await release. The feature went
         | through a grueling public design process that burned out
         | everyone involved, including me. It's not finished yet, but we
         | have an MVP that, contrary to this blog post, _does_ work just
         | fine, in production, at a great many companies you care about.
          | Moreover, getting a usable async/await MVP was absolutely
         | essential to getting Rust the escape velocity to survive the
         | ejection from Mozilla - every other funder of the Rust
         | Foundation finds async/await core to their adoption of Rust, as
         | does every company that is now employing teams to work on Rust.
         | 
         | Async/await was, both technically and strategically, as well
         | executed as possible under the circumstances of Rust when I
         | took on the project in December 2017. I have no regrets about
         | how it turned out.
         | 
         | Everyone who reads Hacker News should understand that the
          | content you're consuming is usually from one of these kinds of
         | people: a) dilettantes, who don't have a deep understanding of
         | the technology; b) cranks, who have some axe to grind regarding
         | the technology; c) evangelists, who are here to promote some
         | other technology. The people who actually drive the
         | technologies that shape our industry don't usually have the
         | time and energy to post on these kinds of things, unless they
         | get so angry about how their work is being discussed, as I am
         | here.
        
           | serverholic wrote:
           | You are awesome. Thank you for clarifying these things.
        
           | api wrote:
           | I've jumped on the Rust bandwagon as part of ZeroTier 2.0
           | (not rewriting its core, but rewriting some service stuff in
           | Rust and considering the core eventually). I've used a bit of
           | async and while it's not as easy as Go (nothing is!) it's
           | pretty damn ingenious for language-native async in a systems
           | programming language.
           | 
           | I personally would have just chickened out on language native
           | async in Rust and told people to roll their own async with
           | promise patterns or something.
           | 
           | Ownership semantics are hairy in Rust and require some
           | forethought, but that's also true in C and C++ and in those
           | languages if you get it wrong there you just blow your foot
           | off. Rust instead tells you that the footgun is dangerously
           | close to going off and more or less prohibits you from doing
           | really dangerous things.
           | 
            | My opinion on Rust async is that its warts are as much the
           | fault of libraries as they are of the language itself. Async
           | libraries are overly clever, falling into the trap of
           | favoring code brevity over code clarity. I would rather have
           | them force me to write just a little more boilerplate but
           | have a clearer idea of what's going on than to rely on magic
           | voodoo closure tricks like:
           | 
           | https://github.com/hyperium/hyper/issues/2446
           | 
           | Compare that (which was the result of hours of hacking) to
           | their example:
           | 
           | https://hyper.rs/guides/server/hello-world/
           | 
           | WUT? I'm still not totally 100% sure why mine works _and_
            | theirs works, and I don't blame Rust. I'd rather have seen
           | this interface (in hyper) implemented with traits and
           | interfaces. Yes it would force me to write something like a
           | "factory," but I would have spent 30 minutes doing that
           | instead of three hours figuring out how the fuck
           | make_service_fn() and service_fn() are supposed to be used
           | and how to get a f'ing Arc<> in there. It would also result
           | in code that someone else could load up and easily understand
           | what the hell it was doing without a half page of comments.
           | 
           | The rest of the Rust code in ZT 2.0 is much clearer than
           | this. It only gets ugly when I have to interface with hyper.
           | Tokio itself is even a lot better.
           | 
           | Oh, and Arc<> gets around a lot of issues in Rust. It's not
           | as zero-cost as Rc<> and Box<> and friends but the cost is
           | really low. While async workers are not threads, it can make
           | things easier to treat them that way and use Arc<> with them
           | (as long as you avoid cyclic structures). So if async
           | ownership is really giving you headaches try chickening out
           | and using Arc<>. It costs very very little CPU/RAM and if it
           | saves you hours of coding it's worth it.
           | 
           | Oh, and to remind people: this is a systems language designed
           | to replace C/C++, not a higher level language, and I don't
           | expect it to ever be as simple and productive as Go or as
           | YOLO as JavaScript. I love Go too but it's not a systems
           | language and it imposes costs and constraints that are really
           | problematic when trying to write (in my case) a network
           | virtualization service that's shooting (in v2.0) for tens of
           | gigabits performance on big machines.
        
             | leshow wrote:
             | I skimmed some of this, but are you asking why you need to
             | clone in the closure? Because "async closures" don't exist
             | at the moment, the closest you can get is a closure that
             | returns a future, this usually has the form:
             | <F, Fut> where F: Fn() -> Fut, Fut: Future
             | 
             | i.e. you call some closure f that returns a future that you
              | can then await on. When writing that out, it will look
              | like:
              | 
              |     || {
              |         // closure
              |         async move {
              |             // returned future
              |         }
              |     }
             | 
             | `make_service_fn` likely takes something like this and puts
             | it in a struct, then for every request it will call the
             | closure to create the future to process the request. (edit:
              | and indeed it does, its definition literally takes your
             | closure and uses it to implement the Service trait, which
             | you are free to do also if you didn't want to write it this
             | way https://docs.rs/hyper/0.14.4/src/hyper/service/make.rs.
             | html#...)
             | 
              | The reason you need to clone in the closure is that it is
              | what 'closes over' the scope and is able to capture the
              | Arc reference you need to pass to your future. Whenever
             | make_service_fn uses the closure you pass to it, it will
             | call the closure, which can create your Arc references,
             | then create a future with those references "moved" in.
             | 
              | It's a little deceptive, as this means the exact same
              | thing as above, just with the first set of curly braces
              | not needed:
              | 
              |     || async move {}
             | 
             | This is still a closure which returns a Future. Does all of
             | that make sense? Perhaps they could use a more explicit
             | example, but it also helps to carefully read the type
             | signature.
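              | 
              | To make the pattern concrete, here is a compilable sketch.
              | `run_service` is an invented stand-in for what
              | make_service_fn does internally, and `block_on` is a toy
              | executor (enough here because these futures never actually
              | suspend) - neither is hyper's or tokio's real API. The
              | point is where the clone and the move happen.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy executor: poll the future with a no-op waker until it is Ready.
// Fine here because the example futures complete on the first poll.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // Safe: `fut` stays on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Invented stand-in for make_service_fn's job: call the closure once
// per "request" and await the future it returns.
async fn run_service<F, Fut>(make: F) -> usize
where
    F: Fn() -> Fut,
    Fut: Future<Output = usize>,
{
    make().await + make().await
}

fn main() {
    let shared = Arc::new(20usize);
    let make = || {
        // The clone happens in the closure body, once per call...
        let shared = Arc::clone(&shared);
        // ...and the returned future owns that clone via `move`.
        async move { *shared + 1 }
    };
    let total = block_on(run_service(make));
    assert_eq!(total, 42); // (20 + 1) per request, two requests
    println!("total = {}", total);
}
```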
        
               | api wrote:
               | Wait so you're saying "|| async move {}" is equivalent to
               | "|| move { async move {} }"? If so then mystery solved,
               | but that is not obvious at all and should be documented
               | somewhere more clearly.
               | 
               | In that case all I'm doing vs. their example is
               | explicitly writing the function that returns the promise
               | instead of letting it be "inferred?"
        
           | matdehaast wrote:
           | To hopefully make your day better.
           | 
           | I for one, amongst many people I am sure, am deeply grateful
           | for the work of you and your peers in getting this out!
        
             | vaylian wrote:
             | I second this. Withoutboats has done some incredible work
             | for Rust.
        
           | kristoff_it wrote:
           | For what it's worth, I agree 100% with the premise of
           | withoutboats' post, based on the experience of having worked
           | a little on Zig's event loop.
           | 
           | My recommendation to people that don't see how ridiculous the
           | original post was, is to write more code and look more into
           | things.
        
           | francoisp wrote:
           | Thank you for this post. I have been interested in rust
           | because of matrix, and although I found it a bit more
           | intimidating than go to toy with, I was inclined to try it on
           | a real project over go because it felt like the closest to
           | the hardware while not having the memory risks of C. The co-
           | routines/async was/is the most daunting aspect of Rust, and a
           | post with a sensational title like the grand-parent could
           | have swayed me the other way.
           | 
           | As an aside, It would be great to have some sort of federated
           | cred(meritocratic in some way) in hackernews, instead of a
           | flat democratic populist point system; it would lower the
           | potential eternal September effect.
           | 
           | I would love to see a personal meta-pointing system, it could
           | be on wrapping site: if I downvote a "waste of hackers
           | daytime" article (say a long form article about what is life)
            | in my "daytime" profile, I get a feed weighted by other
            | users who also downvoted this item--basically using
           | peers that vote like you as a pre-filter. I could have
           | multiple filters, one for quick daytime hacker scan, and one
           | for leisure factoid. One could even meta-meta-vote and give
           | some other hackers' handle a heavier weight...
        
           | newpavlov wrote:
           | Please, calm down. I do appreciate your work on Rust, but
            | people do make mistakes and I strongly believe that in the
           | long term the async stabilization was one of them. It's
           | debatable whether async was essential or not for Rust, I
           | agree it gave Rust a noticeable boost in popularity, but
           | personally I don't think it was worth the long term cost. I
           | do not intend to change your opinion, but I will keep mine
           | and reserve the right to speak about this opinion publicly.
           | 
            | In this thread [1] we have a more technical discussion about
            | those models; I suggest continuing in that thread.
           | 
           | >I assure you, the problems it present are not related to
           | Rust's polling model AT ALL.
           | 
           | I do not agree about all problems, but my OP was indeed
           | worded somewhat poorly, as I've admitted here [2].
           | 
           | >Pin is not a hack any more than Box is. It is the only way
           | to fit the desired ownership expression into the language
           | that already exists
           | 
           | I can agree that it was the easiest solution, but I strongly
           | disagree about the only one. And frankly it's quite
           | disheartening to hear from a tech leader such absolutistic
           | statements.
           | 
           | >It is simply FUD - frankly, a lie - to say that it will
           | block "noalias,
           | 
            | Where did I say "will"? I think you will agree that it will
            | at the very least cause a delay. Also, the issue shows that
            | Pin was not properly thought out, especially in light of
            | other safety issues it has caused. And as you can see from
            | other comments, I am not the only one who thinks so.
           | 
           | >the content your consuming is usually from one of these
           | kinds of people:
           | 
           | Please, satisfy my curiosity. To which category do I belong
           | in your opinion?
           | 
           | [1]: https://news.ycombinator.com/item?id=26410359
           | 
           | [2]: https://news.ycombinator.com/item?id=26407565
        
             | stjohnswarts wrote:
             | So why did you not present your own solutions to the issues
             | that you criticized or better yet fix it with an RFC rather
             | than declaring a working system as basically a failure (per
             | your title). I think you wouldn't have 10% of the saltiness
             | if you didn't have such an aggressive title to your
             | article.
        
               | newpavlov wrote:
               | I have tried to raise those issues in the stabilization
               | issue (I know, quite late in the game), but it simply got
               | shut down by the linked comment with a clear message that
                | further discussion, be it in an issue or in a new RFC,
                | would be pointless.
               | 
               | Also please note that the article is not mine.
        
               | Supermancho wrote:
               | You know the F-35 is a disaster of a government project
               | from looking at it, why not submit a better design? That
               | isn't helpful. You might be interested in the discussion
               | from here: https://news.ycombinator.com/item?id=26407770
        
             | HoolaBoola wrote:
             | > Please, calm down.
             | 
             | By the way, that will almost certainly be taken in a bad
             | way. It's never a good idea to start a comment with
             | something like "chill" or "calm down", as it feels
             | incredibly dismissive.
             | 
             | > I do appreciate your work on Rust, but
             | 
             | There's a saying that anything before a "but" is
             | meaningless.
             | 
             | This is not meant to critique the rest of the comment, just
             | point out a couple parts that don't help in defusing the
             | tense situation.
        
               | newpavlov wrote:
               | Thank you for the advice. Yes, I should've been more
               | careful in my previous comment.
               | 
               | I have noticed this comment only after engaging with him
               | in https://news.ycombinator.com/item?id=26410565 in which
               | he wrote about me:
               | 
               | > You do not know what you are talking about.
               | 
               | > You are confused.
               | 
               | So my reaction was a bit too harsh partially due to that.
        
           | hu3 wrote:
            | Is it just me, or are you supporting your parent's point of:
           | 
           | > ...the decision was effectively made on the ground of "we
           | want to ship async support as soon as possible" [1].
           | 
           | When you write:
           | 
           | > Moreover, getting a usable async/await MVP was absolutely
           | essential to getting Rust the escape velocity to survive the
           | ejection from Mozilla...
           | 
           | This whole situation saddens me. I wish Mozilla could have
           | given you guys more breathing room to work on such critical
           | parts. Regardless, thank you for your dedication.
        
             | withoutboats2 wrote:
             | That is not a correct reading of the situation. async/await
             | was not rushed, and does not have flaws that could have
             | been solved with more time. async/await will continue to
             | improve in a backward compatible way, as it already has
             | since it was released in 2019.
        
       | scoutt wrote:
       | I started recently learning Rust, and a thing I don't understand
       | is why they decided that threads have to be done this way and not
       | simply wrapping what is available in the OS (I know that at some
       | point, at the core, a Rust thread is the same thing I create with
        | _pthread_create_).
       | 
       | Like, in both Windows and Linux, threads exist as a callback
       | function (pointer to function) with its own stack, in which a
       | void* parameter is passed. Can this be done in Rust? Can
       | multithreading be done without closures?
        
         | rcxdude wrote:
         | What you've described with pthread_create is basically a
         | closure, just implemented within the limitations of C and like
         | many things in C, wildly unsafe. You can of course interact
         | with the underlying implementation if you want (Rust is quite
         | capable of calling into libc directly), but then you're
         | responsible for ensuring safety. Rust provides a safe wrapper
         | over this with a closure which is generally easier to use and
          | harder to misuse than the equivalent C.
        
           | scoutt wrote:
           | Thanks.
           | 
           | > a closure which is generally easier to use
           | 
            | I haven't experimented much with closures, so I can't speak
            | to their simplicity yet (but the article explains that
            | _"Closures can be pretty radioactive"_).
           | 
           | > within the limitations of C
           | 
           | Threads are exposed like that by the operating system. The C
           | implementation is just a way to have access to what the
           | operating system exposes, so I think it should be more like a
           | limitation of the operating system?
        
             | rcxdude wrote:
             | It's more the limitations of C: C doesn't have the concept
             | of a closure, so if you are passing around a function with
             | its state, you must pass them around separately (and
             | generally the state must be behind a void pointer, because
             | you can't have a generic interface).
             | 
             | In rust and C++ you can make a function with associated
             | state easily and pass it around, both as a raw value with a
             | unique type (though it's difficult to name), and as a
             | reference or heap-allocated value which can be handled
             | generically.
             | 
             | In all these cases you need to make sure that any pointers
             | remain valid until the function has finished executing. In
             | C and C++ the compiler cannot help you, in rust the
             | compiler can catch cases where function could be called
             | after those pointers are no longer valid. This is what
             | causes the 'radioactive' property of closures containing
             | references to other data (they are also radioactive in C++,
             | just at runtime and unpredictably, as opposed to at compile
             | time in rust).
             | 
             | If the example code in the article was written in C++ (and
             | the complaint about the closure type which is also present
             | in C++ was addressed), there would be a subtle concurrency
             | bug, because the database handle could be closed and freed
             | before the closure function actually executes.
             | 
             | In all these cases the solution is either making sure the
             | closure runs before the data it needs becomes invalid
             | (which is not generally easy to do if you're planning to
             | use it to spawn a thread), or you move the data onto the
             | heap, either arranging so the closure frees the data when
             | it executes (and then guaranteeing that it executes), or
             | using some shared reference count system to ensure it only
             | gets freed when all code which might want to use it is
             | done. In Rust and C++ there are mechanisms in the standard
             | library and their closure implementations to achieve this,
             | in C you must do it manually.
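              | 
              | The reference-counting option from the last paragraph, as
              | a compilable Rust sketch (`DbHandle` is a hypothetical
              | stand-in for the article's database handle):

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for the article's database handle.
struct DbHandle {
    name: String,
}

fn main() {
    let db = Arc::new(DbHandle { name: "users".to_string() });

    // `thread::spawn` requires a 'static closure, so capturing `&db`
    // would not compile: the thread could outlive the borrow. Moving
    // an Arc clone in satisfies the compiler and keeps the handle
    // alive exactly as long as anyone still needs it.
    let db_for_thread = Arc::clone(&db);
    let handle = thread::spawn(move || {
        format!("thread sees db '{}'", db_for_thread.name)
    });

    // The original Arc is still usable here.
    println!("main sees db '{}'", db.name);
    let msg = handle.join().unwrap();
    assert_eq!(msg, "thread sees db 'users'");
    println!("{}", msg);
}
```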
        
         | dthul wrote:
         | Note that you don't need a closure in Rust to start a thread.
         | You can just specify a function to be called (because functions
         | implement FnOnce).
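            | 
            | For example, a minimal sketch using only std:

```rust
use std::thread;

// A plain function: function items implement FnOnce (and Fn/FnMut),
// so thread::spawn accepts them directly.
fn worker() -> u32 {
    6 * 7
}

fn main() {
    // No closure needed: pass the function item itself.
    let h = thread::spawn(worker);
    assert_eq!(h.join().unwrap(), 42);

    // The closure form only becomes necessary when the thread must
    // capture state from its environment.
    let base = 40;
    let h2 = thread::spawn(move || base + 2);
    assert_eq!(h2.join().unwrap(), 42);
    println!("both threads returned 42");
}
```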
        
           | scoutt wrote:
           | Thanks. My question emerged from the fact that all the Rust
            | thread examples I see are in the closure form _|| {}_.
        
             | zesterer wrote:
             | Because the closure form is much more useful. You very
             | rarely want to create a thread without first passing some
             | state to it, and environment-passing with closures is a
             | brilliantly ergonomic way to do this.
        
       | noncoml wrote:
       | > And, as I said at the start, that makes me kinda sad, because I
       | do actually like Rust.
       | 
       | I think that's the most important part of the article. People
       | like Rust but it's becoming more complex than C++. But unlike C++
       | it's more difficult to pick and choose what you use.
       | 
       | Rust's death will be one by thousand cuts. "I really like the
       | language but can't justify all that complexity in my new small
       | and simpke project" is what will cause the popularity to drop.
        
         | roca wrote:
          | It is in no danger of becoming as complex as C++. Nowhere near.
        
           | 0xdeadfeed wrote:
            | At the pace it is going, it's not that far from it
        
             | rapsey wrote:
             | Majority of Rust releases are quality of life improvements.
             | They make usage simpler and improve the std.
        
             | infogulch wrote:
             | I am curious where you get that impression. From what I
             | could see, recent releases have just been "smoothing over
             | pits" / filling in obvious type holes left over from older
             | changes. Changes in rust edition 2021 are tiny and it seems
             | to only exist in order to establish a regular cadence.
        
               | sai_c wrote:
               | Disclaimer: I've only dabbled in Rust from time to time,
               | but have not written anything serious with it.
               | 
                | Those kinds of comments are not based on looking at Rust's
               | features on a theoretical level.
               | 
               | It's more from the viewpoint of "How hard is it to get
               | something (correctly) done". Those impressions are mostly
               | formed either from using the language and failing (or
               | having severe difficulties) to solve a given problem, or
               | being exposed to blog posts describing those kinds of
               | problems.
               | 
               | Rust sure does not have as many features as C++, but it
               | seems that in practice, the combination of available
               | features (and/or the problems those create) is perceived
                | as just as complex as C++. Though this is not a fair
                | assessment, the Rust community will have to fight this
                | perception if Rust is not to be labeled way more
                | complex than C++ in the coming years.
        
               | roca wrote:
               | It is hard to stop people perceiving Rust as over-complex
               | if they are motivated to perceive it as over-complex.
        
               | jevgeni wrote:
               | I think the problem here is that Rust exposes a lot of
               | additional problems in a domain that are swept under the
               | rug in other languages. I mean, those problems have
               | always existed, it's just we are used to hoping or
               | assuming that the generic solution within a language
               | runtime will save us from them.
        
               | sai_c wrote:
               | Yes, indeed. Yet, this is the problem to solve.
               | 
               | I don't know if you are using Rust, but what would you
               | say, why is that? Where does that motivation come from?
        
               | roca wrote:
               | I've been developing mostly in Rust for the last five
               | years. C++ for decades before that.
               | 
               | It is always comforting to dismiss something that you
               | haven't made much effort to understand as unnecessarily
               | overcomplex. I think we're all guilty of that at times.
        
               | sai_c wrote:
               | Ah, i see. But the systems crowd (I'm going to lump them
               | all together: microcontroller, kernel, high performance
               | networking, etc.) is supposed to be the main target
               | audience. For me it is hard to believe that so many of
               | them just brush it off as too complex on first sight.
               | 
               | On the other hand, maybe those folks decided, after
               | giving it a serious try, it's not worth it. Mind you,
               | that does not mean they are right. But in the end, if the
               | effort for those people is perceived as too high, then
               | maybe, just maybe, in practice it really is.
               | 
                | After more than 25 years as a developer, I learned the
                | hard way that programming languages are, in the end, a
                | matter of taste. You won't get someone to use a language
                | just by citing the feature list. Programming languages
               | are a product made for developers. If they don't like it
               | (for whatever reasons, and yes, these are at times highly
               | subjective) they won't use it.
        
               | roca wrote:
                | Sure, but we aren't making decisions in a vacuum with no
                | consequences.
               | 
               | If I really like C and I ship a bunch of C code that gets
               | exploited to build a botnet, that decision had
               | consequences. I should have valued safety higher and the
               | fact I didn't "like" safe alternatives was a distraction.
        
           | jedisct1 wrote:
            | Traits, generics, lifetimes, macros, and endless,
            | incomprehensible backtraces when using Futures already make
            | it more complex.
           | 
           | Rust is difficult to read and maintain, due to the common use
           | of abstractions over abstractions over abstractions. Just
           | looking at a structure, it's already hard to understand what
           | functions are available (hidden in traits, themselves
            | behaving differently according to cargo features...). And of
           | course, it also has things such as operator overloading, and
           | hidden heap memory allocations, so there's a lot of hidden
           | control flow happening.
           | 
            | It's also even slower than C++ to compile, especially when
            | using macros. And it lacks the static analysis tooling that
            | C++ has. Both of these also contribute to "complexity" as in
            | "how much effort it takes to write an application".
        
             | roca wrote:
             | The fact that Rust has traits and generics doesn't make it
             | more complex than C++, because they substitute for C++
             | features that are even more complex (classes, virtual
             | methods, templates, concepts).
             | 
             | I believe C++ is far more complex because, having
             | programmed in C++ for 30 years and hanging out with other
             | developers with similar levels of experience, I still trip
             | over weird C++ features that I never knew existed. My
             | latest example: Argument Dependent Lookup, which means f(a)
             | and (f)(a) can wind up calling completely different
             | functions. How many C++ developers know that?
             | 
             | In my experience, Rust has fewer complexities like that.
        
         | coopierez wrote:
         | > But unlike C++ it's more difficult to pick and choose what
         | you use.
         | 
         | Can you expand on this point?
        
         | lutoma wrote:
         | As a C developer who has never really used C++ or Rust I sort
         | of agree. Looking at Rust introductory guides I increasingly
         | get the same feeling as looking at C++ things, where there's
         | just _too much_ of everything. I know it's a subjective feeling
         | that is probably unfair to the language, but it's still the
         | impression I get.
         | 
         | I'm confident if I actually put in the work to learn Rust I
         | would end up liking it and I plan on doing that at some point,
         | but it certainly doesn't help my motivation.
        
           | ruph123 wrote:
           | > As a C developer who has never really used C++ or Rust I
           | sort of agree.
           | 
           | So you are saying:
           | 
           | "As someone who has never really used A or B I agree with the
           | comparison of A and B."
           | 
           | Pretty much sums up some of the comments here. People who
            | don't spend much (or any) time with Rust make (or support)
           | wild statements like this.
           | 
            | Rust is what you can learn in a few weeks; C++ takes a
            | lifetime, and then you are dead.
        
       | eximius wrote:
       | None of this was about asynchronous rust and all of it was
       | complaining that Closures are hard.
       | 
        | Closures _are_ hard, but a lot of it is for good reason. As with
       | all of the hard things Rust does, it's because it's trying to be
       | correct.
        
       | bullen wrote:
       | I'll repeat this until my karma is zero: Erlang, Rust and Go have
       | simple memory models and cannot do Joint (on the same memory)
       | Parallelism efficiently.
       | 
       | They are only fragmenting development.
       | 
       | In my opinion there are only two programming languages worth
       | mastering: C+ (C syntax compiled with cl/g++) on client and Java
       | SE (8u181) on server. That said C++ (namespaces/string/stream)
       | and JavaScript (HTML5) can be useful for client/web.
       | 
       | All other languages are completely garbage! Not only do they have
       | huge opportunity cost, but they make it harder for this
       | civilization to be useful long term!
       | 
       | We have linux/windows and C/Java, instead of making another
       | language in C, make something useful that is open-source instead!
       | 
       | Edit: Most downvoters here will be technology naive; they think
       | everything gets better for eternity, but in reality everything
        | hits its peak at some point and electron processors and their
       | OS/languages have now explored all viable avenues. Accept that
       | and stop wasting time!
        
         | gher-shyu3i wrote:
         | Why Java 8 as opposed to the latest JDK?
         | 
          | And what about C#? It's in a similar boat to Java.
         | 
         | Also, how is the Java memory model different from Go for this
         | use case? They both allow mutation by sharing.
        
           | bullen wrote:
           | Java 8 is the last free Oracle JDK, nothing added after 8 is
           | really interesting enough to take the complexity hit that
           | Java 9/10/11 etc. mean.
           | 
            | I'm waiting for user-space networking; that is the last
            | feature that will make the switch worthwhile: my system uses
            | almost as much kernel copy CPU as my user space process!!!
           | 
           | C# is an ok alternative but they went for value types instead
            | of sticking to the VM. Java has at least 10 years' head
            | start on C#, also Microsoft.
           | 
            | Go has no VM but uses GC; they miss half of the
            | requirements for Joint Parallelism!
           | 
           | WebAssembly has a VM but no GC, that is more interesting,
           | unfortunately there is no glue code in the reference
           | implementations!
        
             | gher-shyu3i wrote:
             | > Java 8 is the last free Oracle JDK, nothing added after 8
             | is really interesting enough to take the complexity hit
             | that Java 9/10/11 etc. mean.
             | 
             | The low latency GCs and the general performance
             | improvements seem quite good though.
             | 
             | > C# is an ok alternative but they went for value types
             | 
             | Java is also getting value types (project Valhalla).
             | 
              | > Go has no VM but uses GC; they miss half of the
              | requirements for Joint Parallelism!
             | 
             | What do you mean by Joint Parallelism? I didn't find
             | relevant resources. And is it bound to having a VM?
        
               | bullen wrote:
               | Well, there was a real-time thing bought by Sun from
               | Sweden but I'm still unclear as to how much of that has
               | made it into the open-source part of the JDK. Time will
               | tell!
               | 
                | Joint Parallelism is a term that I invented; loosely
                | defined, it means: two threads can read and write to the
                | same memory at the same time without causing too much
                | loss... basically you need atomic concurrency, which is
                | only really efficient on a VM with a complex memory
                | model and GC if you want high-level language support:
               | 
               | "While I'm on the topic of concurrency I should mention
               | my far too brief chat with Doug Lea. He commented that
               | multi-threaded Java these days far outperforms C, due to
               | the memory management and a garbage collector. If I
               | recall correctly he said "only 12 times faster than C
               | means you haven't started optimizing"." - Martin Fowler
               | https://martinfowler.com/bliki/OOPSLA2005.html
               | 
               | "Many lock-free structures offer atomic-free read paths,
               | notably concurrent containers in garbage collected
               | languages, such as ConcurrentHashMap in Java. Languages
               | without garbage collection have fewer straightforward
               | options, mostly because safe memory reclamation is a hard
                | problem..." - Travis Downs
                | https://travisdowns.github.io/blog/2020/07/06/concurrency-co...
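                | 
                | The atomic-free read path that the quote above describes
                | can be sketched with ConcurrentHashMap (a minimal
                | illustration; the iteration counts are arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a reader calls get() concurrently with a writer.
// In a GC'd runtime the read path takes no lock and performs no atomic
// read-modify-write; any node the reader holds stays valid even while
// the writer replaces entries, because the collector handles reclamation.
public class ChmReadPath {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Long> map = new ConcurrentHashMap<>();
        map.put("hits", 0L);

        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                map.merge("hits", 1L, Long::sum); // atomic per-key update
            }
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                map.get("hits"); // lock-free read, never blocks the writer
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();

        // merge() is atomic, so no increments are lost despite the reader.
        if (map.get("hits") != 100_000L) throw new AssertionError();
        System.out.println("final hits = " + map.get("hits"));
    }
}
```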
               | 
               | I'm not an expert, I can't explain why these things are
               | true, because I'm trying to make something useful on top
               | of C/Java instead of going too deep.
               | 
                | But what I do know is that my app server kicks
                | everything's ass to the stone age in terms of
                | performance and ease of use:
                | https://github.com/tinspin/rupy
               | 
                | It's fascinating to see humans struggle with the wrong
                | tools, and defend those same tools without even trying
                | the alternatives (I have also worked with Perl, PHP and
                | C#, so I know why those are bad).
               | 
                | But I'm sure time will flush the bad apples out
                | eventually, especially when electricity prices go up.
        
               | gher-shyu3i wrote:
               | The low latency GCs I'm referring to are ZGC and
               | Shenandoah, both of which are in the JDK:
               | 
               | * https://openjdk.java.net/projects/zgc/
               | 
               | * https://openjdk.java.net/projects/shenandoah/
               | 
               | Thank you for the rest of the post, it's very insightful
               | :)
        
         | EugeneOZ wrote:
         | Humans waste billions of hours and dollars in casinos - apply
         | your efforts here and you'll help the human race much more.
        
         | oever wrote:
         | Can you give an example that C+ can do efficiently that Rust
         | cannot do efficiently even with 'unsafe'?
        
         | mratsim wrote:
         | There is a reason why people are moving away from shared memory
         | parallelism.
         | 
          | It's not fun to deal with NUMA domains efficiently. For
          | hardware engineers, it's not fun implementing cache coherency
          | protocols efficiently.
         | 
         | There is a reason why even Intel was experimenting with
         | channels back in 2010 and you see more and more "network on
         | chip", "cluster on chip" designs with non cache coherent
         | memory.
         | 
         | As we need more cores, shared-memory synchronization becomes
         | too costly.
         | 
         | Source:
         | 
         | - https://en.wikipedia.org/wiki/Single-chip_Cloud_Computer
         | 
         | PS: I'm not even talking about the difficulty of writing
         | correct concurrent data structures.
         | 
         | Even glibc authors get that wrong
         | https://news.ycombinator.com/item?id=26385761
        
           | bullen wrote:
           | Yes, it's hard, but it's the only way forward.
        
         | dw-im-here wrote:
         | While you were chasing girls I was in my basement mastering
         | java (SE 8u181)
        
       | nynx wrote:
       | That's an evocative title.
        
         | yannikyeo wrote:
         | I would say it's clickbaity.
        
       ___________________________________________________________________
       (page generated 2021-03-10 23:03 UTC)