[HN Gopher] Ruby methods are colorless
       ___________________________________________________________________
        
       Ruby methods are colorless
        
       Author : pmontra
       Score  : 75 points
       Date   : 2024-07-19 04:07 UTC (4 days ago)
        
 (HTM) web link (jpcamara.com)
 (TXT) w3m dump (jpcamara.com)
        
       | thechao wrote:
       | I've implemented coroutines in C and C++; my preferred
       | multitasking environment is message-passing between processes.
        | I'm not quite sure what the async/await stuff is _buying_ us
        | (I'm thinking C++, here). Like, I get multi-shot _stackless_
        | coroutines, i.e., function objects, but I don't get why you'd
       | want to orchestrate some sort of temporal Turing pit of async
       | functions bleeding across your code base.
       | 
       | I dunno. Maybe I'm old now?
       | 
       | Anyways; good for Ruby! Async/await just seems very faddish to
       | me: it didn't solve any of the hard
       | multithreading/multiprocessing problems, and introduced a bunch
       | of other issues. My guess is that it was interesting _type
       | theory_ that bled over into Real Life.
        
         | anonymoushn wrote:
         | If you have hundreds or thousands of connection-handlers or
         | scripts-attached-to-game-objects, then it can be useful to
         | write code in those without dividing the code into basic blocks
         | each time I/O might be performed or time might pass in the
         | game.
         | 
         | I generally agree that manually migrating everything to "be
         | async" in order to achieve this exposes at the very least a
         | lack of belief that computers can be used to automate drudge
         | work.
        
         | mikepurvis wrote:
         | I really like the patterns that it enables for situations with
         | a lot of parallel IO. Easy example is any kind of scraper,
         | where you're fetching large numbers of pages and then
         | immediately processing those to generate more requests for the
         | links and assets referenced within.
        
         | skywhopper wrote:
         | My theory is that JavaScript programmers who were forced into
         | thinking this way for decades with their single-threaded
         | runtime have infected other languages with the idea that this
         | style of coding is not only good but furthermore that it needs
         | to be explicit.
         | 
         | Thank goodness we have wiser language developers out there who
         | have resisted this impulse.
        
           | Erem wrote:
           | Didn't async/await originate in c#?
        
             | explaininjs wrote:
             | F# first actually. Then C#. Then Haskell. Then Python. Then
             | TypeScript. Parent just has an axe to grind.
        
         | explaininjs wrote:
         | Coming from a heavy TS background into a go-forward company,
         | I'd say the main thing you get with async is it makes it
         | _incredibly obvious_ when computation can be performed non-
          | sequentially (async...). For example, it's very common to see
          | the below in Go code:
          | 
          |     a := &blah{}
          |     rA, err := engine.doSomethingWithA()
          |     b := &bloop{}
          |     rB, err := engine.doSomethingWithB()
         | 
         | This might have started out with both the doSomethings being
         | very quick painless procedures. But over time they've grown
          | into behemoth network requests and everything is slow and
         | crappy. No, it's not exactly hard to spin up a go routine to
         | handle the work concurrently, but it's not trivial either - and
         | importantly, it's not _immediately obvious_ that this would be
         | a good idea.
         | 
          | Contrast to TS:
          | 
          |     let a = {blah}
          |     let [rA, err] = engine.doSomethingWithA()
          |     let b = {bloop}
          |     let [rB, err] = engine.doSomethingWithB()
         | 
         | Now, time passes, you perform that behemoth slowing down of the
         | doSomethings. You are _forced_ by the type system to change
          | this:
          | 
          |     let a = {blah}
          |     let [rA, err] = await engine.doSomethingWithA()
          |     let b = {bloop}
          |     let [rB, err] = await engine.doSomethingWithB()
         | 
         | It's now _immediately obvious_ that you _might_ want to run
         | these two procedures concurrently. Obviously you will need to
         | check the engine code, but any programmer worth their salt
         | should at least seek to investigate concurrency when making
         | that change.
         | 
         | I wouldn't be bringing this up if I hadn't made 10x+
         | performance improvements to critical services within a month of
         | starting at a new company in a new language on a service where
         | the handful of experienced go programmers on the team had no
         | idea why their code was taking so long.
        
           | hombre_fatal wrote:
           | Of course, the other nice thing about the JS example compared
           | to Go is that it's trivial at the callsite to do this:
            | 
            |     const [[rA, errA], [rB, errB]] = await Promise.all([
            |       engine.doSomethingWithA(),
            |       engine.doSomethingWithB()
            |     ])
           | 
           | At least these days you can ask an LLM to write the WaitGroup
           | boilerplate for you in Go.
        
             | explaininjs wrote:
             | Indeed. And breakpoints and stepping across concurrent
             | context actually works in JS, which is nice.
        
           | jayd16 wrote:
           | Honestly, despite that blog, async coloring is a feature. The
           | pattern enforces implicit critical sections between yields
           | and the coloring is how the dev can know what will yield.
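            | 
            | A minimal sketch of that implicit-critical-section point
            | (names are illustrative): on a single-threaded event loop,
            | the code between awaits runs atomically with respect to
            | other tasks, so the awaits mark exactly where interleaving
            | can happen.

```javascript
// Two concurrent withdrawals against shared state. Because nothing can
// interleave between the read and the write (there is no await between
// them), the read-modify-write section is effectively atomic, and the
// only yield point is the explicit await.
let balance = 100;

async function withdraw(amount) {
  const seen = balance;    // no other task can run between this line...
  balance = seen - amount; // ...and this one: an implicit critical section
  await Promise.resolve(); // interleaving is only possible here
}

const settle = Promise.all([withdraw(30), withdraw(20)]);
```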
        
             | explaininjs wrote:
             | Yes. That blog has probably done more to negatively impact
             | the industry than any other written work I know.
        
             | throwitaway1123 wrote:
             | This is a really interesting point. You almost never hear
             | async function coloring being conceptualized as a feature
             | rather than a hindrance. Async function coloring is kind of
             | analogous to the borrow checker in Rust. It makes you think
             | about concurrency the same way that the borrow checker
             | makes you think about memory ownership and lifetimes.
        
           | asp_hornet wrote:
           | Only if those doSomething methods were written as
           | asynchronous to begin with.in your original example,
           | doSomethingA was simple, why would it be an async method. If
           | your answer is write every method async for a rainy day, then
           | whats the point.
        
         | DarkNova6 wrote:
         | > Anyways; good for Ruby! Async/await just seems very faddish
         | to me: it didn't solve any of the hard
         | multithreading/multiprocessing problems, and introduced a bunch
         | of other issues. My guess is that it was interesting type
         | theory that bled over into Real Life.
         | 
          | I think what happened was that JavaScript absolutely
         | necessitated the addition of async/await to avoid callback hell
         | (due to its single-threaded mandate)... just to create a new
         | kind of hell all of its own.
         | 
         | But before the long-term consequences within large code bases
         | could be observed, the cool kids like C# & friends jumped on
         | the bandwagon believing that "more syntax equals more better
         | code".
        
         | sqeaky wrote:
         | I think the async/await patterns solve one problem really well:
         | UI latency.
         | 
          | My UI background is web testing and game UIs in C++. There are
          | roughly three patterns in multithreaded games: thread per
          | responsibility (old/bad), barriers blocking all threads to
          | give each game system all the threads in turn, or something
          | smart. Few games do something smart, and thread per
          | responsibility is not ideal, so we will ignore it for now.
         | 
         | In games often pausing everything and letting the physics
         | system have all the threads for a few milliseconds is "fast
         | enough". Then the graphics systems will try to use all the
         | threads, and so on so that eventually everything will get all
          | the threads even though everything else rarely needs it.
         | Sometimes two things are both close to single threaded and have
         | no data contention so they might both be given threads, but
          | this is almost always a decision made manually by experts.
         | 
         | This means that once UI, the buttons, the text, cursors, status
         | bars, etc, gets its turn there won't be any race conditions
         | (good), but if it needs to request something from disk that
         | pause will happen on a thread in the UI system (bad and
         | analogous to web sites making web API calls) so UI latency can
          | be a real problem. If any IO is small or some resource system has
         | preloaded it then there isn't a detectable slowdown, but there
         | are still plenty of silly periods of waiting. There is also a
         | lot of time when some single threaded part of the game isn't
         | using N-1 hardware threads and all that IO could have been
         | asynchronous. But often game UIs are a frame behind the rest of
         | the game simulation and there is often detectable latency in
         | the UI like the mouse feeling like it drags or similar.
         | 
         | Allowing IO to run in the back while active events are
         | processed can reduce latency and this is the default in web
         | browsers. IO latency in web pages is worse than in games and
          | other computation seems smaller than games, so the event
         | loop is close to ideal. A function is waiting? throw it on the
         | stack and grab something else to do! This means that all the
          | work that can be done while waiting on IO is done, and when
          | done well this makes a UI snappy.
         | 
         | If that were available sensibly in games it could allow a game
         | designed appropriately to span IO across multiple frames and be
         | snappy without stutters. With games using the strategy I
         | described above latency in the game simulation or IO can cause
         | the UI to feel sluggish and vice versa. In games caching UI
         | details and trying to pump the frame rate is "good enough". If
         | the UI is a frame behind but we have 200 frames per second,
         | that isn't really a problem. But when it chugs and the mouse
         | stops responding because the player built the whole game world
         | out of dynamite and set it off the game will not process the
         | mouse until that 30 minutes of physics work is done.
         | 
         | There are better scheduling schemes for games. I am a big fan
         | of doing "something smart" but that usually means scheduling
         | heterogenous work with nuanced dependencies and I have written
         | libraries just for that because it isn't actually that hard.
          | But if you don't have the raw compute demands of a game,
          | scheduling IO alongside your UI computation is often "fast
          | enough" and is an easy enough mental model for JS devs to
          | grok, allowing them the freedom to speed things up with their
          | own solutions like caching schemes and reworking their UIs.
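          | 
          | A minimal sketch of that event-loop behavior (simulated IO,
          | illustrative names): while the "disk read" is awaited, the
          | loop keeps servicing other callbacks, here a tick counter
          | standing in for input handling.

```javascript
// While the awaited IO is pending, the event loop keeps running other
// callbacks instead of stalling, which is why awaited IO does not freeze
// a UI the way a blocking read would.
const ticks = [];
const timer = setInterval(() => ticks.push(Date.now()), 10);

// Hypothetical slow IO, simulated with a timer.
const readAssetFromDisk = () =>
  new Promise((resolve) => setTimeout(() => resolve("asset bytes"), 100));

async function main() {
  const asset = await readAssetFromDisk(); // nothing is blocked while we wait
  clearInterval(timer);
  return { asset, ticksHandled: ticks.length };
}

const done = main();
```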
        
           | jayd16 wrote:
           | Isn't async/await "scheduling heterogenous work with nuanced
           | dependencies"? Or is that what you were implying?
           | 
           | Although my real guess is ECS but that's more like the
           | "everyone gets every thread for a time."
        
             | sqeaky wrote:
             | TLDR; I hadn't meant it that way, but in web pages it
             | really is enough. Web pages generally don't have
             | computation time to worry about, mostly just IO. This
             | simplifies scheduling because whatever is coordinating the
             | event loop in the browser (or other UI) can just background
             | any amount of independent IO tasks. If there is computation
              | screwing with shared mutable state, something with internal
              | knowledge needs to be involved, and that isn't how current
              | event loops work, but in the long run it could be.
             | 
             | Sorry for the novel.
             | 
             | I meant those nuanced dependencies as a way of managing
             | shared mutable state and complex computations that really
             | do take serious CPU time. Let's make a simple example from
             | a complex game design. This example is ridiculous but
             | conveys the real nature of the problems with CPU and IO.
             | Consider these pieces of work that might exist in a
             | hypothetical game where NPCs and a player are moving around
             | a simulated environment with some physics calculations and
             | that the physics simulation is the source of truth for
             | locations of items in the game. Here are parts of a game:
             | 
             | Physics broad phase: Trivially parallelizable, depends on
             | previous frame. Produces "islands" of physics objects.
             | Imagine two piles of stuff miles apart, they can't interact
             | except with items in the same pile, each island is just a
             | pile of math to do. Perhaps in this game this might take 20
             | ms of CPU time. Across the minimum target machine with 4
             | cores that is 5ms apiece.
             | 
             | Physics narrow phase: Each physics island is single
             | threaded but threadsafe from each other, depends on broad
             | phase to produce islands. Each island takes unknown and
             | unequal time, likely between 0 and 2 ms of just math.
             | 
             | Graphics scene/render: Might have a scene graph culling
             | that is parallelizable, and converts game state into a
             | series of commands independent of any specific GPU API.
             | Depends on all physics completing because that is what it
             | is drawing. Likely 1 or 2 ms per island.
             | 
             | Graphics draw calls: Single threaded, sends render results
             | to GPU using directx/opengl/vulkan/metal. This converts the
             | independent render commands to API specific commands.
             | Likely less than 1 ms of actual CPU work, but larger wait
             | on GPU because it is IO.
             | 
             | NPC AI: NPCs are independent but light weight so threading
             | makes no sense if there are fewer than hundreds. Depends on
             | physics to know what NPCs are responding to. Wants to add
              | forces to the physics sim next frame. For this game let's
              | say there are many, maybe this is Dynasty Warriors or
              | something, so let's say 1-3 ms.
             | 
              | User input: Single threaded, will add forces to the
              | physics sim next frame based on user commands. Can't run at
             | the same time as NPCs because both want to mutate the
             | physics state. Less than 1 ms.
             | 
             | We are ignoring: Sound, Network, Environment, Disk IO, OS
             | events (window resize, etc), UI (not buttons or text
             | positioning), and a few other things.
             | 
             | A first attempt at a real game would likely be coded to
             | give all the threads to each piece of work one at a time in
             | some hand picked order, or at least until this was
              | demonstrated to be slow:
             | 
             | Physics Broad -> Physics Narrow -> Graphics render ->
             | Graphic GPU -> NPC AI -> User input -> Wait/Next frame
             | 
             | But that is likely slow, and I picked our hypothetical math
             | to be slow and marginal. Sending stuff to the GPU is a high
              | latency activity; it might take 5 ms to respond, and if this
              | is a 60 FPS game then that is like 1/3 of our time. If we
              | simply add our hypothetical times, that is frequently more
              | than 16 ms, making the game slower than 60 fps. Even an ideal
              | frame with just a little physics is right at 15 to 16 ms. So
             | a practical game studio might do other work while waiting
             | on the GPU to respond:
             | 
             | Physics Broad -> Physics Narrow -> Graphics render ->
             | 
             | At the same time: { Graphics GPU calls (Uses one thread)
             | NPC AI (Uses all but one thread) -> User input } ->
             | 
             | Wait/Next frame
             | 
             | Most of the time something like this is "fast enough". In
             | this example that 5 ms of CPU time wait on the GPU is now
             | running alongside all that NPC AI so we only need to add
             | the larger of the two. If this takes only a few days of
             | engineer time and keeps the game under the 16ms on most
             | machines then maybe the team makes a business decision to
             | raise the minimum specs just a bit (from 4 to 6 cores would
              | reduce physics time by another ms) and now they can ship
             | this game. There are still needless waits and from a purely
             | GigaFLOPS perspective perhaps much weaker computers could
             | do the work but there is so much waiting that it isn't
             | practical. But this compromise gets all target machines to
             | just about 60 FPS.
             | 
             | Alternatively, if the game is smart enough to make new
              | threads of work for each physics island (actually super
              | complex and not a great idea in a real game, but this is
              | all hypothetical and there are similar wins to be had in
              | real games) and manage dependencies carefully based on the
              | simulation state, then something more detailed might be
             | possible:
             | 
             | 1. Physics broadphase, create known amount of physics
             | islands.
             | 
             | 2. Start a paused GPU thread waiting on known amount of
             | physics islands to be done rendering. This will start step
             | 5 as soon as the last step 4c completes.
             | 
             | 3. Add the player's input work to the appropriate group of
             | NPCs
             | 
             | 4. Each Physics island gets a thread that does the
             | following: a. Physics narrow phase for this island, b.
             | Partial render for just this island, c. Sets a threadsafe
             | flag this island is done, d. NPC AI is processed for NPCs
             | near this physics island, e. If this is the island with the
             | player process their input.
             | 
              | 5. The GPU thread waits for all physics island threads to
              | get to step 4c, then starts sending commands to the GPU,
              | while step 4d gets to keep running.
             | 
             | 6. When all threads from step 4 and 5 complete pause all
             | game threads to hit the desired frame rate (save battery
              | life for mobile gamers!) or advance to the next frame if past
             | frame budget or framerate is uncapped.
             | 
             | This moves all the waits to the end of each thread's frame
             | runtime. This means a bunch of nice things. That last
             | thread can likely do some turbo boosting, a feature of most
             | CPUs where they clock up one CPU if it is the only one
             | working. If the NPCs ever took longer than the GPU they
             | still might complete earlier because they get started
             | earlier. If there are more islands than hardware threads
             | this likely results in better utilization because there are
             | no early pauses.
             | 
              | This would likely take a ton of engineering time. This
             | might move the frame time down a few more ms and maybe let
             | them lower the minimum requirements perhaps even letting
             | the game run on an older console if market conditions
             | support that. Conceptually, it might be a thing that could
             | be done with async/await, but I don't think that is how
             | most game engines are designed. I also think this makes
             | dependencies implicit and scattered through the code, but
             | likely that could be avoided with careful design.
             | 
             | I am a big fan of libraries that let you provide work
             | units, or functors, and say which depend on each other.
             | They all get to read/write to the global state, but with
             | the dependencies there won't be race conditions. Such
             | libraries locate the threading logic in one place. Then if
             | there is some particularly contentious state that many
             | things need to touch it can be wrapped in a mutex.
             | 
             | I suppose this might just be the iterative vs recursive
             | discussion applied to threading strategies. It just happens
             | that most event loops are single threaded, no reason they
             | need to be in the long run. In the long run I could see
             | making that fastest scenario happen in either paradigm even
             | though the code would look completely different.
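              | 
              | A hypothetical, single-threaded sketch of the kind of
              | library described (work units plus declared dependencies;
              | all names are illustrative, not from any real engine): the
              | runner starts each unit as soon as every one of its
              | dependencies has finished.

```javascript
// Dependency-aware task runner: given work units that name the units they
// depend on, each unit is started only after all of its dependencies have
// completed, so declared dependencies replace ad-hoc locking.
function runGraph(units) {
  const byName = new Map(units.map((u) => [u.name, u]));
  const started = new Map(); // unit name -> promise of its completion
  const runUnit = (name) => {
    if (!started.has(name)) {
      const unit = byName.get(name);
      // A unit begins only after every one of its dependencies completes.
      started.set(
        name,
        Promise.all(unit.deps.map(runUnit)).then(() => unit.run())
      );
    }
    return started.get(name);
  };
  return Promise.all(units.map((u) => runUnit(u.name)));
}

// A toy frame roughly shaped like the example above, recording run order.
const order = [];
const unit = (name, deps) => ({ name, deps, run: async () => order.push(name) });

const frameDone = runGraph([
  unit("broadphase", []),
  unit("narrow-island-1", ["broadphase"]),
  unit("narrow-island-2", ["broadphase"]),
  unit("render", ["narrow-island-1", "narrow-island-2"]),
  unit("npc-ai", ["narrow-island-1", "narrow-island-2"]),
  unit("gpu-submit", ["render"]),
]);
```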
        
         | kubectl_h wrote:
         | I work on a large multi-threaded ruby code base and it's always
         | a pain to deal with engineers introducing the async usage in
         | it. Most of the time these engineers don't have a grasp on what
         | fibers are good for and we have to painstakingly review their
         | code and provide feedback that no, it's not magical
          | concurrency, we have a limited number of fixed-size connection
         | pools for postgres, redis, memcache, etc available in the
         | process and fibers have to respect those limits as much as
         | threaded code. In fact it's better if you don't introduce any
         | additional concurrency below the thread-per-request model if
         | you can. Only in places like concurrent http calls do we allow
         | some async/fiber usage but even then it ends up nominally
         | looking like threaded code and the VM context switching time
         | saved by using fibers instead of threads is trivial. Trying to
         | use fibers instead of threads in typical CRUD workloads feels a
         | little cargo culty to me.
         | 
         | Fibers are cool and performant but usually they should be
         | limited to lightweight work with large concurrency
         | requirements.
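          | 
          | A sketch of that review feedback (illustrative names, not from
          | any particular library): however many concurrent fibers or
          | tasks you spawn, a small semaphore still caps how many
          | "connections" are in flight at once.

```javascript
// A counting semaphore standing in for a fixed-size connection pool:
// acquire() waits when all slots are taken, release() hands a slot to the
// next waiter, so concurrency never exceeds the pool size.
class Semaphore {
  constructor(size) {
    this.free = size;
    this.waiters = [];
  }
  async acquire() {
    if (this.free > 0) { this.free--; return; }
    await new Promise((resolve) => this.waiters.push(resolve));
  }
  release() {
    const next = this.waiters.shift();
    if (next) next();  // hand the slot straight to a waiter
    else this.free++;
  }
}

const pool = new Semaphore(5); // e.g. five postgres connections
let inFlight = 0;
let peak = 0;

async function query(i) {
  await pool.acquire();
  try {
    peak = Math.max(peak, ++inFlight);
    await new Promise((r) => setTimeout(r, 5)); // pretend to hit the database
  } finally {
    inFlight--;
    pool.release();
  }
}

// Fifty concurrent "requests" never hold more than five connections.
const allDone = Promise.all(Array.from({ length: 50 }, (_, i) => query(i)));
```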
        
         | jayd16 wrote:
         | Ever done any UI programming or other paradigms where you have
         | a specific OS thread used for cross runtime synchronization but
         | you need to efficiently call background threads knowing exactly
         | what's running where?
         | 
         | Async/await handles that well and so far the other paradigms
         | just have not targeted these use cases well.
         | 
         | Message passing and promise libraries end up looking a lot like
         | async/await with less sugar. (Compared to C#'s async/await
         | implementation. The level of sugar depends on the language, of
         | course.)
        
         | binary132 wrote:
         | In our C++ stack at work we accept HTTP requests, and then
         | spawn internal IO which takes some time. During that time, we
         | yield the handler thread to handle other requests. The arrival
         | of the internal response sets up the next step in the request
         | handler to resume, which needs the response message.
         | 
         | This _can_ be done manually by polling status of descriptors,
          | and stepping into and out of callables / passed continuations
         | by hand, or it can be done with a task scheduling API and
         | typesafe chained async functions.
         | 
         | Pick your poison, I guess, but it probably scales better than
         | using synchronous IO and thread-per-request.
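          | 
          | That flow, sketched with promises instead of the C++ task API
          | (all names illustrative): while a handler awaits its internal
          | IO, the thread is free to start other requests, so two
          | requests overlap instead of queueing behind each other.

```javascript
// Each handler suspends at the await and resumes when its internal IO
// completes; two 50 ms requests therefore finish in roughly 50 ms total
// rather than 100 ms of thread-per-request serial time.
const internalIo = (payload) =>
  new Promise((resolve) => setTimeout(() => resolve(`result:${payload}`), 50));

async function handleRequest(payload) {
  // The handler yields here; its continuation runs when the IO arrives.
  const response = await internalIo(payload);
  return response;
}

const t0 = Date.now();
const both = Promise.all([handleRequest("a"), handleRequest("b")]).then(
  (results) => ({ results, elapsed: Date.now() - t0 })
);
```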
        
         | kmeisthax wrote:
         | JavaScript has async/await because it solves a very
         | specifically JavaScript problem: the language has no threads
         | and all your code lives in the main event loop of the browser
         | tab. If you do the normal thing of pausing your JavaScript
         | until an I/O operation returns, the browser hangs. So you need
         | to write everything with callbacks (continuation passing
         | style), which is a pain in the ass and breaks exception
         | handling. Promises were introduced to fix exceptions and
         | async/await introduced to sugar over CPS.
         | 
         | There's also a very specifically Not JavaScript problem (that
         | happens to also show up in Node.js): exotic concurrency
         | scenarios involving lots of high-latency I/O (read: network
         | sockets). OS kernel/CRT[0] threads are a lot more performant
         | than they used to be but they still require bookkeeping that an
         | application might not need and usually allocate a default stack
         | size that's way too high. There's no language I know of where
         | you can ask for the maximum stack size of a function[1] and
         | then use that to allocate a stack. But if you structure your
         | program as callbacks then you can share one stack allocation
         | across multiple threads and heap-allocate everything that needs
         | to live between yield points.
         | 
         | You _can_ do exotic concurrency without exposing async to the
         | language. For example, Go uses (or at least one point it did)
         | its own thread management and variable-size stacks. This is
         | more invasive to a program than adopting precise garbage
         | collection and requires deep language support[3]. Yes, even
          | more so than async/await, which just requires the compiler to
          | transform imperative code into a state machine on an opt-in
         | basis. You'd never, _ever_ see, say, Rust implement transparent
         | green threading[2].
         | 
         | To be clear, all UI code has to live in an event loop. Any code
         | that lives in that event loop has to be async or the event loop
         | stops servicing user input. But async code doesn't infect UI
         | code in any other language because you can use threads and
         | blocking executors[4] as boundaries between the sync and async
         | worlds. The threading story in JavaScript doesn't exist, it has
         | workers instead. And while workers use threads internally, the
         | structured clone that is done to pass data between workers
         | makes workers a lot more like processes than threads in terms
         | of developer experience.
         | 
         | [0] Critical Runtime Theory: every system call is political
         | 
         | [1] ActionScript 3 had facilities for declaring a stack size
         | limit but I don't think you could get that from the runtime.
         | 
         | async/await syntax in Rust also _just so happens_ to generate
         | an anonymous type whose size is the maximum memory usage of the
         | function across yield points, since Rust promises also store
          | the state of the async function when it's yielded. Yes, this
         | also means recursive yields are forbidden, at least not without
         | strategic use of `Box`. And it also does not cover sync stack
         | usage while a function is being polled.
         | 
         | [2] I say this knowing full well that alpha Rust had
         | transparent green threading.
         | 
         | [3] No, setjmp/longjmp does not count.
         | 
         | [4] i.e. when calling async promise code from sync blocking
         | functions, just sleep the current thread or poll/spinwait until
         | the promise resolves
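          | 
          | The progression described above, in miniature (`fetchUser` is
          | a made-up stand-in for any high-latency IO): the same lookup
          | written continuation-passing style, then wrapped in a Promise,
          | then consumed with async/await sugaring over the promise.

```javascript
// Continuation-passing style: errors ride along in the callback, outside
// normal exception handling.
function fetchUserCps(id, callback) {
  setTimeout(() => {
    if (id < 0) callback(new Error("bad id"));
    else callback(null, { id, name: `user${id}` });
  }, 10);
}

// Promises re-channel those errors into rejections...
function fetchUserPromise(id) {
  return new Promise((resolve, reject) => {
    fetchUserCps(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

// ...and async/await sugars the promise back into code that reads like
// blocking code, with try/catch working again.
async function showUser(id) {
  try {
    const user = await fetchUserPromise(id);
    return user.name;
  } catch (err) {
    return "unknown";
  }
}

const results = Promise.all([showUser(1), showUser(-1)]);
```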
        
         | didip wrote:
          | Async/await is just sugar that many people happened to like.
         | 
         | As for me, I like Go's CSP better.
        
         | Fire-Dragon-DoL wrote:
         | I'm with you here, actor model is the way to go.
         | 
         | I thought Go style could be better, but the Go type system is
         | completely inadequate to support that style: it is impossible
          | to guarantee that a goroutine is panic free, it is not
          | possible to put a recover in the goroutine unless that code is
          | under the author's control (could be a lib), and a goroutine panic
         | takes down the whole app.
         | 
         | Suddenly I want a wrapper around each goroutine that bubbles up
         | the panic to the main thread without crashing the app, that
          | sounds a lot like an Erlang supervision tree.
        
           | explaininjs wrote:
           | That sounds a lot like async/await. Any errors thrown in an
           | async context bubble to the callsite awaiting it.
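              | 
              | What that bubbling looks like in practice (illustrative
              | names): a failure inside the concurrent task surfaces as
              | an ordinary exception at the await, instead of taking the
              | whole app down.

```javascript
// The error thrown inside the async task propagates through the awaited
// promise and lands in the caller's try/catch, so the program survives.
async function riskyTask() {
  throw new Error("worker blew up");
}

async function supervisor() {
  try {
    await riskyTask();
    return "ok";
  } catch (err) {
    // The app keeps running; the caller decides whether to retry or give up.
    return `recovered: ${err.message}`;
  }
}

const outcome = supervisor();
```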
        
             | Fire-Dragon-DoL wrote:
              | Sorry, you are right; that was specific to our case where we
             | were already waiting for a result from the goroutines. In
             | an actor model, you would let the "goroutine" die and the
             | supervisor would restart it if it has a restart policy,
             | otherwise it will die and stay dead (without bringing the
             | system down). In erlang you can also "link" the current
             | actor to another actor so that if the linked actor dies,
             | the linker dies too (or receives a message)
        
       | BiteCode_dev wrote:
        | I don't like colored functions for obvious reasons, but fully
        | colorless async means you don't know when things are async or
        | not.
       | 
       | There are a lot of things I dislike in JS, but I think the I/O
       | async model is just right from an ergonomics point of view. The
       | event loop is implicit, any async function returns a promise, you
       | can deal with promises from inside sync code without much
       | trouble.
       | 
       | It's just the right balance.
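        | 
        | The ergonomics being praised, in a tiny sketch (illustrative
        | names): an async function is just a function that returns a
        | promise, so plain sync code can call it and attach handlers
        | without itself becoming async.

```javascript
// An async function always returns a promise.
async function loadConfig() {
  return { retries: 3 };
}

// No `async` here: sync code holds the promise as an ordinary value and
// attaches a handler instead of awaiting.
function syncCaller(results) {
  const pending = loadConfig();
  pending.then((config) => results.push(config.retries));
  return pending;
}

const results = [];
const done = syncCaller(results);
```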
        
         | throw10920 wrote:
         | > fully colorless for async means you don't know when things
         | are async or not
         | 
         | The IDE can tell you.
        
           | BiteCode_dev wrote:
           | Given Ruby culture of monkey patching, not always.
           | 
           | Besides, many people dev Ruby with a lightweight text editor,
            | like TextMate, that can't introspect code.
        
             | mp1mp2mp3 wrote:
             | > ...culture of monkey patching...
             | 
             | I haven't seen more than a handful of PRs with monkey-
             | patching in the last decade, and even then they are
             | accompanied by long-winded explanations/apologies and plans
             | for removal ASAP (e.g. monkey-patching an upstreamed fix
             | that hasn't been released yet).
             | 
             | Also, ruby classes/methods can tell you all about
             | themselves, so if you haven't got ruby-lsp up and running
             | (and even if you do) you can always open a repl (on its
             | own, in the server process, or in a test process) and ask
             | the interpreter about your method. It's pretty great!
             | 
             | It's definitely the case that the editor's ability to
             | understand code via static analysis is limited compared to
             | other languages, but it's not like we ruby devs are
             | navigating in the dark.
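For instance, the kind of repl session described above can look like this (an illustrative sketch using `Net::HTTP#request` as an arbitrary stdlib method):

```ruby
require "net/http"

# Grab a handle to the method without calling it.
m = Net::HTTP.instance_method(:request)

puts m.owner.inspect            # the class/module that defines it
puts m.source_location.inspect  # file and line (nil for C-defined methods)
puts m.parameters.inspect       # parameter names and kinds
```

`Method#source_location` is also how you catch a monkey patch red-handed: it points at the file that actually defines the version you would be calling.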
        
       | thwarted wrote:
       | > _Async code bubbles all the way to the top. If you want to use
       | await, then you have to mark your function as async. Then if
       | someone else calling your function wants to use await, they also
       | have to mark themselves as async, on and on until the root of the
       | call chain. If at any point you don't then you have to use the
       | async result (in JavaScript's case a Promise<T>)._
       | 
       | I find many descriptions of async code to be confusing, and _this
       | kind of description is exactly why_.
       | 
       | This description is backwards. You don't choose to use await and
       | then decorate functions with async. Or maybe you do and that's
       | why so many async codebases are a mess.
       | 
       | You don't want to block while a long-running operation completes,
       | so you decorate the function that performs that operation with
       | async and return a Promise.
       | 
       | But Promises have exactly the same value as promises in the real
       | world: none until they are fulfilled. You can't do further
       | operations on a promise itself; you can only wait for it to be
       | fulfilled to get the result that you actually want to operate on.
       | 
       | The folly of relying on a promise is embodied in the character
       | Wimpy from Popeye: "I'll gladly pay you Tuesday for a hamburger
       | today".
       | 
       | Once you have a promise, you have to await on it, turning the
       | async operation into a synchronous operation.
       | 
       | This example seems crazy to me:
       | 
       |     async function readFile(): Promise<string> {
       |       return await read();
       |     }
       | 
       | This wraps what should be an async operation that returns a
       | promise (read) in an expression that blocks (await read()) inside
       | a function that returns a promise _so you didn't need to block on
       | it!_ This is a useless wrapper. This kind of construct is
       | probably a significant contributor to the mess: just peppering
       | code with async and await and wrapper functions.
       | 
       | await is the point where an async operation is blocked on to get
       | back into a synchronous flow. Creating promises means you
       | ultimately need to block in a synchronous function to give the
       | single-threaded runtime a chance to make progress on all the
       | promises. Done properly, this happens via the event loop. But
       | setting that up requires all of your code to actually operate
       | asynchronously, and thus callback hell and the verbose syntactic
       | salt needed to even express that in code.
       | 
       | That all being said, this piece is spot on. Threads (in general,
       | but in Ruby as the topic of this piece) and Go's goroutines
       | encapsulate all this by abstracting over the state management of
       | different threads of execution via stacks. Remove the stacks and
       | async programming requires you to manage that state yourself.
       | Async programming removes a very useful abstraction.
       | 
       | Independent threads of execution, whether they are operating
       | system managed threads, operating system managed processes (a
       | special case of OS managed threads), green threads, or
       | goroutines, are a scheduler abstraction. Async programming forces
       | you to manage that scheduling yourself. That may be required if
       | you don't also have an abstraction available for preemption, but
       | async leaks the single-threaded implementation into your code,
       | along with the syntactic salt necessary to express it.
        
         | graypegg wrote:
         | To be fair to the author, they do mention in the paragraph
         | above that sample:                   Async code bubbles all the
         | way to the top. If you want to use await, then you have to mark
         | your function as async. [...] If at any point you don't then
         | you have to use the async result (in JavaScript's case a
         | Promise<T>).
         | 
         | I think it's just an artificially lengthy example to show how
         | the responsibility of working with promises grows up the
         | callstack. Interpreting it that way since the final function
         | they define in that sample is `iGiveUp` which is not using the
         | async keyword, but returns a promise. Definitely could be made
         | a bit more clear that's it's illustrative and not that the
         | async keyword is somehow unlocking some super special runtime
         | mode separate from it's Promise implementation.
        
         | monkpit wrote:
         | > This example seems crazy to me [...]
         | 
         | To be fair, this is sort of pointless in JS/TS because an async
         | function returns a promise type by default, so the return value
         | has to be 'await'ed to access the value anyways. There are
         | linter rules you can use to ban 'return await'.
         | 
         | The only benefit to 'return await...' is when wrapped with a
         | try/catch - you can catch if whatever you 'await'ed throws an
         | error.
        
           | thwarted wrote:
           | Well, that was a simplistic contrived example, it may make
           | sense to do this if you have other functionality you want to
           | put in the wrapper. But I've seen more of these blind
           | wrappers, the result of stuffing async and await keywords
           | around in some misguided attempt to make things go faster, in
           | code than is really warranted so it's not being made clear
           | that examples like this are for explanatory purposes and
           | shouldn't be cargo-culted (if _anything_ should be cargo-
           | culted).
        
       | throw10920 wrote:
       | As they should be.
       | 
       | I object to doing what a computer can do for me (in programming),
       | and manually creating separate versions of functions that are
       | identical up to async absolutely falls into that category.
        
         | Rapzid wrote:
         | They have different signatures because they return different
         | things.
        
       | svieira wrote:
       | Anytime this comes up I plug the excellent "Unyielding"
       | (https://glyph.twistedmatrix.com/2014/02/unyielding.html) and
       | "Notes on structured concurrency" (https://vorpus.org/blog/notes-
       | on-structured-concurrency-or-g...) as the counterpoint to "What
       | color is your function". Being able to see structurally what
       | effects your invocation of a function will result in is very
       | helpful in reasoning about the effects of concurrency in a
       | complicated domain.
        
       | eduction wrote:
       | >Because threads share the same memory space they have to be
       | carefully coordinated to safely manage state. Ruby threads cannot
       | run CPU-bound Ruby code in parallel, but they can parallelize for
       | blocking operations
       | 
       | Ugh. I know Ruby (which I used to code in a lot more) has made
       | some real progress toward enabling practical use of parallelism
       | but this sounds still pretty awful.
       | 
       | Is there any effort to make sharing data across threads something
       | that doesn't have to be so "carefully coordinated" (ala Clojure's
       | atom/swap!, ref/dosync)?
       | 
       | Is the inability to parallelize CPU-bound code to do with some
       | sort of GIL?
        
         | vidarh wrote:
         | That's what Ractor is for, if you want full parallelization
         | without processes.
         | 
         | And, yes, it's to do with a GIL/GVL. The lock is released
         | during blocking IO, and some C extensions etc., so in practice
         | for a lot of uses it's fine.
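A quick way to see the GVL being released during blocking waits (a toy sketch using `sleep` as a stand-in for blocking IO): four 0.2-second waits across four threads finish in roughly 0.2 seconds rather than 0.8.

```ruby
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

# Each thread blocks, releasing the GVL so the others can run.
threads = 4.times.map { Thread.new { sleep 0.2 } }
threads.each(&:join)

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
puts format("elapsed: %.2fs", elapsed)
```

CPU-bound blocks in the threads, by contrast, would serialize behind the lock.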
        
         | sqeaky wrote:
         | They ditched the GIL a while a ago. But there are smaller locks
         | fighting for resources.
         | 
         | EDIT - I remember when patch notes years ago said the GIL was
         | gone and this says there is a GVL, I guess there is some subtle
         | difference.
         | 
         | then I think for practical purposes, "yes" is your answer, but
         | not in precisely that name.
        
         | Lio wrote:
         | It all depends on your Ruby runtime.
         | 
         | If you want parallel threads then you can use JRuby and your
         | threads will run in parallel on the JVM. I've used the
         | Concurrent-Ruby gem to coordinate that[1].
         | 
         | It has copies of some of the Clojure data structures.
         | 
         | Otherwise, Ractors are the upcoming solution for MRI Ruby.
         | 
         | 1. https://github.com/ruby-concurrency/concurrent-ruby
        
       | curtisblaine wrote:
       | So maybe it's me, but isn't that line mapping on `&:value` in
       | Ruby the exact equivalent of doing `Promise.all` on a bunch of
       | async functions in Javascript, with the downside that you don't
       | explicitly say that the array you're calling `value` on is a bunch
       | of asynchronous things that need to be (a)waited on to resolve?
       | In other words, since you have color anyway, isn't it better to
       | highlight that upfront rather than hiding it until you need to
       | actually use the asynchronous return values?
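For reference, the pattern in question looks something like this (a sketch with a fake workload standing in for real HTTP calls): `Thread#value` joins the thread and returns its block's result, so the final `map(&:value)` is the single point where everything gets awaited, much like `Promise.all`.

```ruby
urls = %w[one two three]  # stand-ins for real URLs

# Kick off all the "requests" concurrently...
threads = urls.map do |u|
  Thread.new { "fetched #{u}" }  # imagine a blocking HTTP GET here
end

# ...then block here, once, for all of them.
results = threads.map(&:value)
p results
```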
        
       | clayg wrote:
       | I feel like I have some intuitive understanding of how Go
       | achieves colorless concurrency using "goroutines" that _can_
       | park sync/blocking IO on a thread "as needed", built into the
       | runtime from the very beginning.
       | 
       | I don't understand how Ruby _added_ this after the fact, globally,
       | to ALL potential CPU/IO-blocking libraries/functions without
       | somehow expressing `value = await coro`
       | 
       | Python is currently going through a "coloring" as the stdlib and
       | 3rd-party libraries adapt to the explicit async/await syntax and
       | it's honestly kind of a PITA. Curious if there's any more info on
       | how Ruby achieved this.
        
         | mattgreenrocks wrote:
         | Unless I'm misunderstanding: isn't the JVM's virtual threads
         | another instance of this colorlessness?
        
         | hyualsdfowjasd wrote:
         | The C code in Ruby has to yield to the scheduler when it
         | detects a fiber.
         | 
         | For historical context, python had several "colorless" attempts
         | but none achieved widespread adoption.
        
         | DarkNova6 wrote:
         | Java has shown that a sufficiently strong runtime can take the
         | burden onto itself long after the facts have been established.
         | 
         | Python in contrast has a runtime ultimately held back by its
         | deep integration with the C-ecosystem which creates more black-
         | boxes inside the runtime than you can find inside a swiss
         | cheese.
         | 
         | Similar with C# and Rust. No strong runtime, no colorless
         | functions for you.
        
           | hyualsdfowjasd wrote:
           | > Similar with C# and Rust. No strong runtime, no colorless
           | functions for you.
           | 
           | Zig is the exception here but not sure how well it worked out
           | in practice.
        
             | anonymoushn wrote:
             | Well, the feature is temporarily killed, but users can
             | enjoy zigcoro which exposes a similar API until the old
             | async is back.
        
           | anonymoushn wrote:
           | Greenlet has been available for Python since 2006. Troubles
           | with asyncio are mostly self-inflicted.
        
           | Rapzid wrote:
           | C# has a strong runtime.
        
         | masklinn wrote:
         | > I don't understand how Ruby added this after the fact,
         | globally to ALL potential cpu/io blocking libraries/functions
         | without somehow expressing `value = await coro`
         | 
         | In Python, gevent can monkeypatch the standard library to do
         | that transparently (mostly). I assume it works the same way
         | except better: have the runtime keep track of the current
         | execution (mostly the stack and instruction pointer); when
         | reaching a blocking point, register a completion event against
         | the OS, then tell the fiber scheduler to switch off; the fiber
         | scheduler puts away all the running state (stack and
         | instruction pointer), then restores another one (hopefully one
         | that's ready for execution) and resumes that.
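The save/restore part of that description can be sketched with plain fibers (no scheduler gem; `Fiber.yield` stands in for the point where a real scheduler would park a fiber on a blocking call and resume it when the OS reports the IO is ready):

```ruby
# Each fiber's stack and instruction pointer are saved at Fiber.yield
# and restored by resume; a real scheduler picks fibers by IO readiness,
# here we just round-robin.
fibers = 3.times.map do |i|
  Fiber.new do
    2.times { |step| Fiber.yield "fiber #{i} parked at step #{step}" }
    "fiber #{i} done"
  end
end

log = []
while fibers.any?(&:alive?)
  fibers.each { |f| log << f.resume if f.alive? }
end
p log.last(3)
```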
        
       | fny wrote:
       | I'm confused, and please correct me if I'm wrong.
       | 
       | Aren't all these calls blocking? Doesn't `File.read` still block?
       | Sure it's multithreaded, but it still blocks. Threading vs an
       | event loop are two different concurrency models.
        
         | masklinn wrote:
         | > Aren't all these calls blocking?
         | 
         | Only locally, which is pretty much the same as when you `await`
         | a call.
         | 
         | > Threading vs an event loop are two different concurrency
         | models.
         | 
         | The point is that you can build a "threading" (blocking) model
         | on top of an event loop, such that you get a more natural
         | coding style with _most_ of the gain in concurrency.
         | 
         | It loses some, because you only have concurrency between tasks
         | (~threads), leaving aside builtins which might get special
         | cased, but my observation has been that the _vast_ majority of
         | developers don't grok or want to use generalised sub-task
         | concurrency anyway, and are quite confused when that's being
         | done.
         | 
         | For instance in my Javascript experience the average developer
         | just `awaits` things as they get them, the use of concurrent
         | composition (e.g. Promise.all) is an exception which you might
         | get into as an optimisation, e.g. `await`-ing in the middle of
         | a loop is embarrassingly common even when the iterations have
         | no dependency.
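The Ruby analogue of that anti-pattern can be sketched with `sleep` standing in for IO: blocking inside the loop serializes the waits, while spawning first and joining once overlaps them.

```ruby
def timed
  t = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - t
end

# "await in the middle of a loop": each wait finishes before the next starts.
sequential = timed { 3.times { sleep 0.1 } }

# Promise.all-style composition: start everything, then wait once.
concurrent = timed { 3.times.map { Thread.new { sleep 0.1 } }.each(&:join) }

puts format("sequential: %.2fs, concurrent: %.2fs", sequential, concurrent)
```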
        
           | Rapzid wrote:
           | The whole point of async/await is to allow for _not_ blocking
           | the caller until it's ready for an explicit synchronization
           | point.
           | 
           | If you are blocking the caller you have not "solved" the
           | colored function "problem".
        
       | janci wrote:
       | > 3. You can only call a red function from within a red function
       | 
       | The base of most arguments against async. And it's false. You can
       | call red from blue. And you should, sometimes.
        
       | stephen wrote:
       | Maybe it's Stockholm syndrome after ~4-5 years of TypeScript, but
       | I _like_ knowing "this method call is going to do I/O somewhere"
       | (that it's red).
       | 
       | To the point where I consider "colorless functions" to be a leaky
       | abstraction; i.e. I do a lot of ORM stuff, and "I'll just call
       | author.getBooks().get(0) b/c that is a cheap, in-memory,
       | synchronous collection access ... oh wait, it's actually a
       | colorless SQL call that blocks (sometimes)" imo led to the
       | ~majority of ORM backlash/N+1s/etc.
       | 
       | Maybe my preference for "expressing IO in the type system" means
       | in another ~4-5 years, I'll be a Haskell convert, or using
       | Effect.ts to "fix Promise not being a true monad" but so far I
       | feel like the JS Promise/async/await really is just fine.
        
       | bilalq wrote:
       | Other languages may handle it differently, but having to manage
       | threads is not a small compromise for going colorless. You're now
       | forced to deal with thread creation, thread pooling, complexities
       | of nested threads, crashes within child or descendant threads,
       | risks of shared state, more difficult unit testing, etc.
        
       | dang wrote:
       | Related:
       | 
       |  _What color is your function? (2015)_ -
       | https://news.ycombinator.com/item?id=28657358 - Sept 2021 (58
       | comments)
       | 
       |  _What Color Is Your Function? (2015)_ -
       | https://news.ycombinator.com/item?id=23218782 - May 2020 (85
       | comments)
       | 
       |  _What Color is Your Function? (2015)_ -
       | https://news.ycombinator.com/item?id=16732948 - April 2018 (45
       | comments)
       | 
       |  _What Color Is Your Function?_ -
       | https://news.ycombinator.com/item?id=8984648 - Feb 2015 (146
       | comments)
        
       | moralestapia wrote:
       | The argument for "color"ed functions in Javascript is flawed and
       | comes from somebody with a (very) shallow understanding of the
       | language.
       | 
       | Javascript is as "colorless" as any other programming language.
       | You can write "async"-style code without using async/await at
       | all, and it's functionally equivalent.
       | 
       | Async/await is just syntactic sugar that saves you from writing
       | "thenable" functions and callbacks again and again and again ...
       | 
       | Instead of:
       | 
       |     function(...).then(function() {
       |       // continue building your pyramid of doom [1]
       |     })
       | 
       | ... you can just do:
       | 
       |     await function()
       | That's literally it.
       | 
       | 1: https://en.wikipedia.org/wiki/Pyramid_of_doom_(programming)
        
       | Rapzid wrote:
       | 99.9% of these "colored" function articles have an incomplete or
       | even flawed understanding of async/await semantics.
       | 
       | Fibers are not fungible with async/await. This is why structured
       | concurrency is a thing.
        
       | pjungwir wrote:
       | Great article! I'm looking forward to reading the rest of the
       | series.
       | 
       | I noticed a couple details that seem wrong:
       | 
       | - You are passing `context` to `log_then_get` and `get`, but you
       | never use it. Perhaps that is left over from a previous version
       | of the post?
       | 
       | - In the fiber example you do this inside each fiber:
       | 
       |     responses << log_then_get(URI(url), Fiber.current)
       | 
       | and this outside each fiber:
       | 
       |     responses << get_http_fiber(...)
       | 
       | Something is not right there. It raised a few questions for me:
       | 
       | - Doesn't this leave `responses` with 8 elements instead of 4?
       | 
       | - What does `Fiber.schedule` return anyway? At best it can only
       | be something like a promise, right? It can't be the result of the
       | block. I don't see the answer in the docs:
       | https://ruby-doc.org/3.3.4/Fiber.html#method-c-schedule
       | 
       | - When each fiber internally appends to `responses`, it is
       | asynchronous, so are there concurrency problems? Array is not
       | thread-safe I believe. So with fibers is this safe? If so,
       | how/why?
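On that last question, one relevant detail: with the fiber scheduler, all the fibers run on a single thread and only switch at blocking points, so a plain `Array#<<` is never interrupted mid-append. A toy illustration without a scheduler gem (`Fiber.yield` stands in for the blocking point where a scheduler would switch away):

```ruby
responses = []

fibers = 4.times.map do |i|
  Fiber.new do
    Fiber.yield                   # a scheduler would park here during IO
    responses << "response #{i}"  # same thread, so no data race on the Array
  end
end

fibers.each(&:resume)  # run each fiber up to its "blocking" point
fibers.each(&:resume)  # "IO completed": each fiber finishes and appends
p responses
```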
        
       ___________________________________________________________________
       (page generated 2024-07-23 23:08 UTC)