[HN Gopher] Coroutines for Go
       ___________________________________________________________________
        
       Coroutines for Go
        
       Author : trulyrandom
       Score  : 148 points
       Date   : 2023-07-17 19:16 UTC (3 hours ago)
        
 (HTM) web link (research.swtch.com)
 (TXT) w3m dump (research.swtch.com)
        
       | xwowsersx wrote:
       | Somewhat on topic given that OP brought up coroutines in Python:
       | what resources have folks used to understand Python's asyncio
       | story in depth? I'm just now finally understanding how to use
       | stuff, but it was through a combination of the official
       | documentation, the books "Using Asyncio in Python" and "Expert
       | Python Programming", none of which were particularly good.
       | Normally I'd rely just on the official docs, but the docs have
        | created much confusion, it seems, because a lot of what's in
        | them is aimed more at library/framework developers than at end
        | users. So, I'm just wondering if anyone has great resources for
       | really gaining a strong understanding of Python's asyncio or how
       | else you might have gone about gaining proficiency to the point
       | where you felt comfortable using asyncio in real projects.
        
         | manifoldgeo wrote:
         | I read the same books you did, and I was equally unsatisfied
         | afterwards. The "Using Asyncio in Python 3" book was good
         | enough to help me write some code that had to hit an API 400k
         | times without blocking, but I never returned to asyncio after
         | that.
         | 
         | Afterwards, I realized there was a package called aiohttp that
         | I could've used, but too late.
         | 
         | I'll be interested to see what other HN people have done.
        
       | djha-skin wrote:
        | I thought that _the entire point_ of green threads was so that I
        | didn't _have_ to use something like Python's `yield` keyword to
        | get nice, cooperative-style scheduling.
       | 
        | I thought Go's `insert resumes at call points and other specific
        | places` design decision was a very nice compromise.
       | 
       | This is allowing access to more and more of the metal. At what
       | point are we just recreating Zig here? What's next? An _optional_
       | garbage collector?
        
         | vaastav wrote:
          | The garbage collector in Go is optional. You can switch off
          | garbage collection by setting the environment variable
          | GOGC=off.
         | 
         | More info about GOGC: https://dave.cheney.net/tag/gogc
        
           | t8sr wrote:
           | Only very theoretically - in Go, you don't control whether
           | memory goes on your stack or the heap, and heap escape
           | analysis is notoriously unpredictable. There is no explicit
           | free. You would have to write Go in a completely crazy way to
           | be able to turn off the GC and have the program not grow
           | unbounded.
           | 
           | You might think "I'll just use static buffers everywhere",
           | but allocations can occur in unexpected places. The compiler
           | does some very basic lifetime analysis to eliminate some
           | obvious cases (loops...), but it's really hard to avoid in
           | general.
        
           | gkfasdfasdf wrote:
           | Ok so garbage collection is optional, how about garbage
           | generation? Is there any way to manually clean up resources
              | if GOGC=off, or will memory usage continue to grow unbounded
           | as new objects are created?
        
             | vaastav wrote:
             | Grows unbounded. I wasn't recommending that one should set
             | GOGC=off. Just making a remark that one could should they
             | choose to do so.
             | 
              | EDIT: Sorry, I misunderstood part of your question. The
              | memory grows unbounded unless you call runtime.GC(), which
              | triggers a garbage collection. But that is a blocking call
              | and essentially blocks your whole program while it runs.
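              | 
              | For what it's worth, the same can be done from inside the
              | program. A minimal sketch (GOGC, debug.SetGCPercent and
              | runtime.GC are real knobs; the workload is made up):
              | 
              |     package main
              | 
              |     import (
              |         "runtime"
              |         "runtime/debug"
              |     )
              | 
              |     func main() {
              |         // Equivalent to GOGC=off: disable automatic GC.
              |         debug.SetGCPercent(-1)
              | 
              |         work() // allocate freely; nothing is collected
              | 
              |         // Force one collection; this call blocks the
              |         // caller until the GC cycle finishes.
              |         runtime.GC()
              |     }
              | 
              |     func work() {
              |         bufs := make([][]byte, 0, 1024)
              |         for i := 0; i < 1024; i++ {
              |             bufs = append(bufs, make([]byte, 1<<10))
              |         }
              |         _ = bufs
              |     }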
        
             | Cthulhu_ wrote:
             | I think in theory you can write code (or do some tricks) to
             | avoid all heap allocation.
        
               | dgb23 wrote:
               | You mean not growing the heap or literally no allocation?
        
               | t8sr wrote:
               | "In theory" being the operative words there. Turn on heap
               | escape analysis sometime, you'll be surprised how hard it
               | is to avoid.
        
         | wahern wrote:
         | yield and resume here aren't keywords, they're regular
         | variables (referencing regular closures) given those names
         | purely for pedagogical purposes.
         | 
         | The only odd thing here is the creation and use of a cancel
         | callback. I've not done much Go programming, so I don't know
         | whether this is just a performance improvement to more quickly
         | collect dropped iterator state, or it's a requirement because
         | Go does not garbage collect goroutines waiting on channels for
         | which the waiting goroutine has the only reference. In Lua you
         | wouldn't need such a thing because coroutines/threads are GC'd
         | like any other object--once all references are gone they're
         | GC'd even if the last operation was a yield and not a return
         | from the entry function.
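          | 
          | For what it's worth, writing the channel version out makes the
          | role of cancel visible: a goroutine blocked on a channel send
          | is never collected, so something has to unblock it if the
          | consumer stops early. A rough sketch with made-up names (not
          | the article's API), using runtime.Goexit to unwind:
          | 
          |     package main
          | 
          |     import (
          |         "fmt"
          |         "runtime"
          |     )
          | 
          |     func pull(
          |         seq func(yield func(int)),
          |     ) (resume func() (int, bool), cancel func()) {
          |         out := make(chan int)
          |         stop := make(chan struct{})
          |         go func() {
          |             defer close(out)
          |             seq(func(v int) {
          |                 select {
          |                 case out <- v:
          |                 case <-stop:
          |                     // Run defers, exit only this goroutine.
          |                     runtime.Goexit()
          |                 }
          |             })
          |         }()
          |         resume = func() (int, bool) {
          |             v, ok := <-out
          |             return v, ok
          |         }
          |         cancel = func() {
          |             close(stop) // call at most once
          |         }
          |         return resume, cancel
          |     }
          | 
          |     func main() {
          |         resume, cancel := pull(func(yield func(int)) {
          |             for i := 0; ; i++ {
          |                 yield(i)
          |             }
          |         })
          |         defer cancel() // lets the producer goroutine exit
          |         for i := 0; i < 3; i++ {
          |             v, _ := resume()
          |             fmt.Println(v)
          |         }
          |     }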
        
       | chrsig wrote:
        | Coroutines are one thing that i'd probably prefer language
        | support for rather than a library.
        | 
        |     x := co func() {
        |         var z int
        |         for {
        |             z++
        |             yield z
        |         }
        |     }
        | 
        |     y := x()
        | 
        |     for y := range x {
        |         ...
        |     }
       | 
       | or something to that effect. It's cool that it can be done at all
       | in pure go, and I can see the appeal of having a standard library
       | package for it with an optimized runtime instead of complecting
       | the language specification. After all, if it's possible to do in
       | pure go, then other implementations can be quickly bootstrapped.
       | 
       | My $0.02, as someone that uses go at $work daily: I'd be happy to
       | have either, but I'd prefer it baked into the language. Go's
       | concurrency primitives have always been a strength, just lean
       | into it.
        
       | VWWHFSfQ wrote:
       | Aside:
       | 
       | Lua is an absolute work of art. Everything about the tiny
       | language, how it works, and even all the little peculiarities,
       | just makes sense.
        
         | bovbknn wrote:
         | Nope. Lua sux. It's a garbage language used by people that hate
         | themselves.
         | 
         | It's possibly the worst language I've ever used in 25 years,
         | and the people that love it are the worst coworkers I've ever
         | had.
        
           | wtetzner wrote:
           | As it stands, this comment contains no information. Can you
           | explain what about Lua is problematic?
        
       | HumblyTossed wrote:
        | What? I'm a Go newb, but isn't this what goroutines and channels
       | get you?
        
         | adrusi wrote:
         | This is an interface that can be implemented in terms of
         | goroutines and channels, but can also be implemented with a
         | lower-overhead scheduler tweak. The article shows how it could
         | be implemented using goroutines and channels, and then reports
         | the result of that implementation versus an optimized version
         | that avoids synchronization overhead and scheduler latency
         | which is unnecessary with this pattern.
         | 
         | Currently, you could use goroutines and channels to implement a
         | nice way to provide general iteration support for user-defined
         | data structures, but because of the overhead, people most often
         | opt for clunkier solutions. This change would give us the best
         | of both worlds.
        
       | alphazard wrote:
       | It looks like a lot of people are missing the point here. Yes a
       | coroutine library would be a worse/more cumbersome way to do
       | concurrency than the go keyword.
       | 
       | The use case motivating all the complexity is function iterators,
       | where `range` can be used on functions of type `func() (T,
       | bool)`. That has been discussed in the Go community for a long
       | time, and the semantics would be intuitive/obvious to most Go
       | programmers.
       | 
       | This post addresses the next thing: Assuming function iterators
       | are added to the language, how do I write one of these iterators
       | that I can use in a for loop?
       | 
       | It starts by noticing that it is often very easy to write push
       | iterators, and builds up to a push-to-pull adapter. It also
       | includes a general purpose mechanism for coroutines, which the
       | adapter is built on.
       | 
       | If all of this goes in, I think it will be bad practice to use
       | coroutines for things other than iteration, just like it's bad
       | practice to use channels/goroutines in places where a mutex would
       | do.
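        | 
        | For concreteness, here is roughly what the `func() (T, bool)`
        | shape looks like for a hand-rolled container, and how the loop
        | reads today versus under the proposal (List, node and sum are
        | made up for illustration):
        | 
        |     type List[T any] struct{ head *node[T] }
        | 
        |     type node[T any] struct {
        |         val  T
        |         next *node[T]
        |     }
        | 
        |     // Iterator returns a pull-style iterator over the list.
        |     func (l *List[T]) Iterator() func() (T, bool) {
        |         cur := l.head
        |         return func() (T, bool) {
        |             if cur == nil {
        |                 var zero T
        |                 return zero, false
        |             }
        |             v := cur.val
        |             cur = cur.next
        |             return v, true
        |         }
        |     }
        | 
        |     func sum(l *List[int]) int {
        |         total := 0
        |         next := l.Iterator()
        |         for v, ok := next(); ok; v, ok = next() {
        |             total += v
        |         }
        |         // Under the proposal, the loop above could read:
        |         //     for v := range l.Iterator() { total += v }
        |         return total
        |     }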
        
         | eklitzke wrote:
         | I think it's also worth mentioning that for certain specific
         | use cases coroutines are much more efficient than a full
         | goroutine, because switching to/from a coroutine doesn't
         | require context switching or rescheduling anything. If you have
         | two cooperating tasks that are logically synchronous anyway
         | (e.g. an iterator) it's much more efficient to just run
         | everything on the same CPU because the kernel doesn't have to
         | reschedule anything or power down/wake up CPU cores, and the
         | data is in the right CPU caches, so you'll get better cache
         | latency and hit rates. With goroutines this _may_ happen
          | anyway, but it's not guaranteed and at the minimum you have
         | the overhead of going through the Go runtime goroutine
         | scheduler which is fast, but not as fast as just executing a
         | different code context in the same goroutine. Coroutines offer
         | more predictable scheduling behavior because you know that task
         | A is switching directly to task B, whereas otherwise the
         | goroutine scheduler could switch to another available task that
         | wants to run. The last section of the blog post goes into this,
         | where Russ shows that an optimized runtime implementation of
         | coroutines is 10x faster than emulating them with goroutines.
         | 
          | Google has internal kernel patches that implement cooperative
          | multithreading this way (internally they're called fibers), and
         | the patches exist for precisely this reason: better latency and
         | more predictable scheduling behavior. Paul Turner gave a
         | presentation about this at LPC ~10 years ago that explains more
         | about the motivation for the patches and why they improve
         | efficiency: https://www.youtube.com/watch?v=KXuZi9aeGTw
        
         | djha-skin wrote:
         | Still a kitchen sink move, though, isn't it?
         | 
         | Like, no careful thinking and good 80/20 solution this time.
         | Just "huh, we'd need coroutines to do this `right` so let's
         | just do that"
         | 
          | When they added generics, they _really_, _really_ thought long
         | and hard, and came up with a _compromise_ that was brilliant
         | and innovative in the balance it struck.
         | 
          | I would have hoped to see something like that here. Something
          | like "we're adding this one feature to goroutines to give you
          | control in specific situations" feels like it would be better
          | than "we're going full Rust on this problem and just adding
          | it."
        
           | morelisp wrote:
           | This has been under discussion for a long time.
           | 
           | https://github.com/golang/go/issues/43557
           | 
           | https://github.com/golang/go/discussions/56413
           | 
           | I'm not sure Russ's personal blog is any kind of official
           | statement "this is what we're doing" yet?
        
         | rcme wrote:
          | What is wrong with:
          | 
          |     for {
          |         next := getNext()
          |         ...
          |     }
          | 
          | What is the advantage of writing this as:
          | 
          |     for next := range getNext {
          |         ...
          |     }
        
           | aaronbee wrote:
           | Your code is an example of a "pull iterator", and it's not as
           | much of a concern.
           | 
            | The harder case to deal with is a "push iterator", which is
            | often much simpler to implement but less flexible to use.
           | See https://github.com/golang/go/discussions/56413
           | 
           | The OP is about converting a push iterator into a pull
           | iterator efficiently by using coroutines. This provides the
            | best of both worlds: a simple iterator implementation and
            | flexible use by its caller.
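            | 
            | Roughly, the two shapes look like this (Tree, push and the
            | pull adapter are invented names for illustration):
            | 
            |     type Tree[T any] struct {
            |         Left, Right *Tree[T]
            |         Value       T
            |     }
            | 
            |     // Push: the structure drives the loop. Trivial to
            |     // write for a recursive type; yield returning false
            |     // means "stop early".
            |     func (t *Tree[T]) push(yield func(T) bool) bool {
            |         if t == nil {
            |             return true
            |         }
            |         return t.Left.push(yield) &&
            |             yield(t.Value) &&
            |             t.Right.push(yield)
            |     }
            | 
            |     // Pull: the caller drives the loop, e.g. to walk two
            |     // trees in lockstep. Hand-writing this for a tree
            |     // means keeping an explicit stack; the coroutine
            |     // adapter in the OP turns the push version above into
            |     // this shape instead:
            |     //
            |     //     next, stop := pull(t.push)  // hypothetical
            |     //     v, ok := next()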
        
           | alphazard wrote:
            | In practice the difference would be closer to:
            | 
            |     getNext := iterableThing.Iterator()
            |     for {
            |         next, ok := getNext()
            |         if !ok {
            |             break
            |         }
            |         ...
            |     }
            | 
            | vs.
            | 
            |     for next := range iterableThing.Iterator() {
            |         ...
            |     }
           | 
           | One advantage is that it's slightly shorter, which matters
           | for very common patterns--people complain about `err != nil`
            | after all. Another advantage is that there isn't another
            | variable for everyone to name differently, and that people
            | can't each do the control flow slightly differently either.
        
           | bouk wrote:
           | The first doesn't do control flow
        
             | rcme wrote:
             | Depends how getNext is defined.
        
               | alphazard wrote:
               | The only way to do control flow in getNext, consistent
               | with how you partially defined it, is to panic. That
               | means there is some important code wrapping the whole
               | loop body that you left out.
        
               | zeroxfe wrote:
               | That doesn't make sense. `getNext()` has no ability to
               | break out of the loop.
        
           | rakoo wrote:
            | None of the proposals suggest writing things the way you
            | describe. The article proposes an implementation that is
            | fully doable and usable with the current spec and no breaking
            | changes.
           | 
            | The point of coroutines is that they are little contexts that
            | can be:
            | 
            | - called many times
            | 
            | - called with different parameters each time
            | 
            | - stateful between calls
            | 
            | - interrupted by their callers
            | 
            | - able to interrupt their callers themselves
           | 
           | All of this can be done by other means. Just like any sorting
           | can be done by copy pasting the same code, but generics make
           | it less tedious. That's the same idea here. Some problems can
           | be implemented as interleaving coroutines, and their
           | definition is simple enough that you want to write it all in
           | the body of some CreateCoroutine() function instead of taking
           | out another struct with 5 almost empty functions. It will not
           | solve all problems, but can more clearly separate business
           | logic and orchestration.
        
           | starttoaster wrote:
            | Besides what everyone else said, the obvious advantage is that
            | the latter builds in the logic for exiting the loop once
            | getNext has run out of elements in the slice/map. Your former
            | example will need an extra step like:
            | 
            |     ...
            |     if next == nil {
            |         break
            |     }
            |     ...
            | 
            | This isn't a huge boon, and is mostly a matter of style. But I
            | prefer the latter because that logic gets handled by the range
            | builtin, so my code can be more about application logic rather
            | than the mechanics of breaking out of the loop.
        
       | ketchupdebugger wrote:
        | I'm not sure why the author is advocating for single-threaded
        | patterns in a multithreaded environment, or why he's trying to
        | limit himself like this. The magic of goroutines is that you can
        | use all of your cores easily, not just one. Python and Lua have
        | no choice.
        
         | yakubin wrote:
         | Coroutines are a control-flow mechanism. They're a single-
         | threaded pattern in as much as for loops are a single-threaded
         | pattern. Ability to write multithreaded programs does not
         | exclude the need for good single-threaded tools.
        
           | ketchupdebugger wrote:
            | Looking at Python's asyncio coroutine library, they are just
            | mocking multithreading with asyncio.gather. Since coroutines
            | can be executed in any order, they are not really control-flow
            | mechanisms. The selling point of coroutines over traditional
            | threads is that they're lightweight, but that's moot since
            | goroutines have a similar memory cost to coroutines. The only
            | real benefit is that coroutines are non-blocking while
            | goroutines may be blocked. There is no real benefit to having
            | Python-style coroutines in Go since goroutines do the same
            | thing, but better.
        
             | yakubin wrote:
             | I don't know about Python, but this description of
             | coroutines, the general concept, doesn't seem accurate to
             | me. They most definitely cannot be executed in any order.
             | They also have nothing to do with multithreading
             | whatsoever. Lua has the most honest-to-god implementation
             | of coroutines I know of, so I'd suggest looking at that.
             | I've seen the word "coroutines" used in some weird way
             | suggesting multithreading, which probably means Python used
             | them to fake multithreading, but the idea originally has
             | nothing to do with it. The idea actually started out as a
             | way to better structure an assembly program in 1958:
             | <http://melconway.com/Home/pdf/compiler.pdf>.
        
             | maxk42 wrote:
             | Goroutines are non-blocking. Channels may not always be,
             | however.
        
       | vaastav wrote:
        | Not sure if this really is required. Most cases in Go are served
        | well by goroutines, and for yield/resume semantics two blocking
        | channels are enough. This seems to add complexity for the sake of
        | it, and I'm not sure it actually adds any new power to Go that
        | didn't already exist.
        
         | masklinn wrote:
         | Goroutines + channels add an enormous amount of overhead. Using
         | them as iterators is basically insane.
        
           | rcme wrote:
            | Why? Channels are already iterable using the range keyword.
            | 
            |     ch := make(chan int)
            |     go func() {
            |         for i := 0; i < 100; i += 1 {
            |             ch <- i
            |         }
            |         close(ch)
            |     }()
            | 
            |     for i := range ch {
            |         fmt.Println(i)
            |     }
            | 
            | That is very simple.
        
             | t8sr wrote:
             | Aside from the massive performance penalty, cache thrashing
             | and context switching, this code will also leak a goroutine
             | (and so, memory) if you don't finish receiving from `ch`.
             | It's more brittle, longer to write, less local and in every
             | other way worse than a for loop. Why would you ever do it?
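              | 
              | The usual workaround is to give the producer a way to bail
              | out when the consumer stops early; a sketch using the
              | context package (ints is a made-up example):
              | 
              |     func ints(ctx context.Context, n int) <-chan int {
              |         ch := make(chan int)
              |         go func() {
              |             defer close(ch)
              |             for i := 0; i < n; i++ {
              |                 select {
              |                 case ch <- i:
              |                 case <-ctx.Done():
              |                     return // consumer gone: don't leak
              |                 }
              |             }
              |         }()
              |         return ch
              |     }
              | 
              |     // Caller side (sketch):
              |     //
              |     //     ctx, cancel := context.WithCancel(parent)
              |     //     defer cancel() // ensures the goroutine exits
              |     //     for v := range ints(ctx, 100) { ... }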
        
             | freecodyx wrote:
              | Race conditions. With coroutines you're not supposed to
              | have to deal with races, if I'm understanding the motives
              | correctly.
        
               | rcme wrote:
               | You can write to a channel concurrently.
        
               | ackfoobar wrote:
               | And to make the concurrency safe you have to pay the
               | price of synchronization.
               | 
               | https://go.dev/tour/concurrency/8
               | 
               | In A Tour of Go, concurrency section, "Equivalent Binary
               | Trees" is an example of paying the price when you don't
               | need it.
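                | 
                | In miniature (types here are illustrative, not the
                | Tour's): the walk is purely sequential, yet every
                | element pays for a synchronized handoff between two
                | goroutines.
                | 
                |     type Tree struct {
                |         Left, Right *Tree
                |         Value       int
                |     }
                | 
                |     func Walk(t *Tree, ch chan int) {
                |         if t == nil {
                |             return
                |         }
                |         Walk(t.Left, ch)
                |         ch <- t.Value // one send per element
                |         Walk(t.Right, ch)
                |     }
                | 
                |     // ch := make(chan int)
                |     // go func() { Walk(root, ch); close(ch) }()
                |     // for v := range ch { ... }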
        
               | ikiris wrote:
                | It's not that you can't, it's that it's very expensive.
        
             | jerf wrote:
             | "Enormous amount of overhead" is the operative phrase. In
             | general, you want your concurrency operations to be
             | significantly smaller than the payload of the operation. In
             | the case of using channels as iterators with goroutines
             | behind them, it works fine for something like a web page
             | scraper, where the act of fetching a web page is enormously
             | larger than a goroutine switch, but as a generalized
             | iteration mechanism it's unusably expensive because a lot
             | of iteration payloads are very small compared to a channel
             | send.
             | 
             | I've encountered a lot of people who read that on /r/golang
             | and then ask "well why are send operations so expensive",
             | and it's not that. It's that a lot of iteration operations
             | are on the order of single-digit cycles and often very easy
             | to pipeline or predict. No concurrency primitive can keep
             | up with that. A given send operation is generally fairly
             | cheap but there are enough other things that are still an
              | order or two of magnitude cheaper than even the cheapest
              | send operation that if you block all those super cheap
              | operations on a send you're looking at multiple orders of
              | magnitude of slowdown. Such as you would experience with
              | your code.
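              | 
              | A rough way to see the gap for yourself: drop something
              | like this in a _test.go file (a sketch; absolute numbers
              | will vary by machine):
              | 
              |     import "testing"
              | 
              |     // The per-element work is a single add.
              |     func BenchmarkDirect(b *testing.B) {
              |         s := make([]int, 1024)
              |         for i := 0; i < b.N; i++ {
              |             sum := 0
              |             for _, v := range s {
              |                 sum += v
              |             }
              |             _ = sum
              |         }
              |     }
              | 
              |     // Same loop, but every element crosses a channel
              |     // fed by a separate goroutine.
              |     func BenchmarkViaChannel(b *testing.B) {
              |         s := make([]int, 1024)
              |         for i := 0; i < b.N; i++ {
              |             ch := make(chan int)
              |             go func() {
              |                 for _, v := range s {
              |                     ch <- v
              |                 }
              |                 close(ch)
              |             }()
              |             sum := 0
              |             for v := range ch {
              |                 sum += v
              |             }
              |             _ = sum
              |         }
              |     }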
        
           | vaastav wrote:
           | Sure but that's what the implementation of the coro in this
           | post uses under the hood. Not sure how this is any better wrt
           | overheads.
        
             | geodel wrote:
             | Where did you get this '..it uses same under the hood'? The
             | article clearly says:
             | 
             |  _..Next I added a direct coroutine switch to the runtime,
             | avoiding channels entirely. That cuts the coroutine switch
             | to three atomic compare-and-swaps (one in the coroutine
             | data structure, one for the scheduler status of the
             | blocking coroutine, and one for the scheduler status of the
             | resuming coroutine), which I believe is optimal given the
             | safety invariants that must be maintained. That
             | implementation takes 20ns per switch, or 40ns per pulled
             | value. This is about 10X faster than the original channel
             | implementation._
        
               | vaastav wrote:
               | The runtime switch was buried in the last paragraph of
               | the article. All of the code was using goroutines and
               | channels....
        
               | geodel wrote:
               | It does mention in an earlier section:
               | 
               |  _... That means the definition of coroutines should be
               | possible to implement and understand in terms of ordinary
               | Go code. Later, I will argue for an optimized
               | implementation provided directly by the runtime,.._
        
             | wahern wrote:
             | > Not sure how this is any better wrt overheads.
             | 
             | At the end he implements an experimental runtime mechanism
             | that permits a goroutine to explicitly switch execution to
             | another goroutine rather than using the generic channel
             | scheduling plumbing.
        
               | vaastav wrote:
                | It's in the last paragraph of the article... Very easy to
                | miss given the code uses goroutines and channels.
        
           | klabb3 wrote:
           | It's not insane at all. How did you come to that conclusion?
           | 
           | * Mutex lock+unlock: 10ns
           | 
           | * Chan send buffered: 21ns
           | 
           | * Try send (select with default): 3.5ns
           | 
           | Missing from here is context switches.
           | 
           | In either case, the overhead is proportional to how fast each
           | iteration is. I have channels of byte slices of 64k and the
           | channel ops don't even make a dent compared to other ops,
           | like IO.
           | 
           | You should absolutely use channels if it's the right tool for
           | the job.
           | 
           | Fwiw, I wouldn't use channels for "generators" like in the
           | article. I believe they are trying to proof-of-concept a
           | language feature they want. I have no particular opinion
           | about that.
        
             | yakubin wrote:
             | _> Missing from here is context switches._
             | 
             | Exactly. From _rsc_ 's previous post[1]:
             | 
             |  _> On my laptop, a C thread switch takes a few
             | microseconds. A channel operation and goroutine switch is
             | an order of magnitude cheaper: a couple hundred
             | nanoseconds. An optimized coroutine system can reduce the
             | cost to tens of nanoseconds or less._
             | 
             | [1]: <https://research.swtch.com/pcdata>
        
           | ikiris wrote:
           | see also https://github.com/golang/go/discussions/54245
        
       | Zach_the_Lizard wrote:
       | I have written Go professionally for many years now and don't
       | want to see it become something like the Python Twisted / Tornado
       | / whatever frameworks.
       | 
       | The go keyword nicely prevents the annoying function coloring
       | problem, which causes quite a bit of pain.
       | 
       | Sometimes in high performance contexts I'd like to be able to do
       | something like e.g. per CPU core data sharding, but this proposal
       | doesn't scratch those kinds of itches.
        
         | valenterry wrote:
          | Don't worry, it will get even worse, because its foundations
          | are worse than Python's - and it also has less flexibility
          | because it's not interpreted. Like it or not, it's going to
          | happen. I also predicted that generics would be added to Go and
          | folks here laughed about it. :)
        
         | rakoo wrote:
         | > The go keyword nicely prevents the annoying function coloring
         | problem, which causes quite a bit of pain.
         | 
         | The good thing is, nothing in the post even proposes anything
         | that might fall in the function coloring problem.
        
         | talideon wrote:
         | Coroutines and goroutines fill different niches. The latter
         | already fill the niche the likes of Twisted fill. There's
         | nothing here trying to pull anything akin to async/await into
         | Go.
         | 
          | Coroutines will fill a different niche more akin to Python's
         | generators. There are a whole bunch of places where this could
         | dramatically cut down on memory usage and code complexity where
         | you have a composable pipeline of components you need to glue
         | together. The design appears to me to be entirely synchronous.
        
         | cookiengineer wrote:
         | Can you maybe share hints to better channel management
         | patterns/frameworks?
         | 
          | Usually my goroutine-related code feels like a mess because of
          | them, and I don't know how to make it "cleaner" (whatever that
          | means). Any hints on proven, maintainable patterns would be
          | highly appreciated.
        
         | GaryNumanVevo wrote:
         | > The go keyword nicely prevents the annoying function coloring
         | problem
         | 
         | FYI when you write a goroutine and have to use Context, that's
         | literally just function coloring
        
           | morelisp wrote:
           | Ad absurdum, then anything other than full curried 1-ary-
           | exclusive functions is coloring.
           | 
           | You can easily bridge context-using and non-context-using
           | code, you can have nested contexts, you can have multiple
            | unrelated contexts, you can ignore the context, and probably
            | most critically _you can't pass values back up the context_.
            | I don't see any way these look like function coloring.
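            | 
            | e.g. (names are made up; the point is just that bridging is
            | mechanical in both directions):
            | 
            |     // Non-context code calling context-aware code.
            |     func legacy() {
            |         process(context.Background(), "job-1")
            |     }
            | 
            |     // Context-aware code calling non-context code, with a
            |     // derived (nested) context along the way.
            |     func process(ctx context.Context, id string) {
            |         sub, cancel := context.WithCancel(ctx)
            |         defer cancel()
            |         _ = sub
            |         doWork(id) // ctx simply isn't passed further
            |     }
            | 
            |     func doWork(id string) {}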
        
             | mcronce wrote:
             | Not sure about in Python, but in Rust, you can easily
             | bridge sync and async code. The function coloring problem
             | is, at least in that case, grossly overstated.
        
       | jerf wrote:
       | I'm not 100% sure this is the case, but I believe the context of
       | this goes something like this. As Go has added generics, there
       | are proposals to add generic data structures like a Set. Generics
       | solve almost every problem with that, but there is one
       | conspicuous issue that remains for a data structure: You can
       | iterate over a slice or a map with the "range" keyword, and that
       | yields special iteration behavior, but there is no practical way
       | to do that with a general data structure, if you consider
       | constructing an intermediate map or slice to be an insufficient
       | solution. Go is generally performance-sensitive enough that it
       | is.
       | 
       | The natural solution to this is some sort of iterator, as in
       | Python or other languages. (Contra frequent accusations to the
        | contrary, the Go community is aware of other languages' efforts.)
       | 
       | So this has opened the can of worms of trying to create an
       | iteration standard for Go.
       | 
       | Go has something that has almost all the semantics we want right
       | now. You can also "range" over a channel. This consumes one value
       | at a time from the channel and provides it to the iteration,
       | exactly as you'd expect, and the iteration terminates when the
       | channel is closed. It just has one problem, which is that it
       | involves a full goroutine and a synchronized channel send
       | operation for each loop of the iteration. As I said in another
       | comment, if what is being iterated on is something huge like a
       | full web page fetch, this is actually fine, but no concurrency
       | primitive can keep up with the efficiency of incrementing an
       | integer, a single instruction which may literally take an
       | amortized fraction of a cycle on a modern processor. With
        | generics you can even fairly easily implement filter, map, etc.
        | on this iterator... but adding a goroutine and a synchronized
        | channel send for each such element of a pipeline is just crazy.
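        | 
        | To make that concrete: a generic channel-backed Filter stage is
        | only a few lines, but each stage adds a goroutine plus one
        | synchronized send per element (a sketch, not a recommendation):
        | 
        |     func Filter[T any](
        |         in <-chan T, keep func(T) bool,
        |     ) <-chan T {
        |         out := make(chan T)
        |         go func() {
        |             defer close(out)
        |             for v := range in {
        |                 if keep(v) {
        |                     out <- v
        |                 }
        |             }
        |         }()
        |         return out
        |     }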
       | 
       | I believe the underlying question in this post is, can we use
       | standard Go mechanisms to implement the coroutines without
       | creating a new language construct, then use the compiler under
       | the hood to convert it to an efficient execution? Basically, can
       | this problem be solved with compiler optimizations rather than a
       | new language construct? From this point of view, the payload of
       | this article is really only that very last paragraph; the entire
       | rest of the article is just orientation. If so, then Go can have
       | coroutine efficiency with the standard language constructs that
        | already exist. Perhaps some code that is already using this
        | goroutine pattern might even speed up "for free".
       | 
        | As for the concerns people have about this complexifying Go: the
        | entire point of this operation is to suck the whole problem into
        | the compiler with zero changes to the spec. Not complexifying Go
        | with a formal iteration standard is precisely the goal.
       | If one wishes to complain, the correct complaint is the exact
       | opposite one, that Go is not "simply" "just" implementing
       | iterators as a first class construct just like all the other
       | languages.
       | 
       | Also, in the interests of not posting a full new post, note that
       | in general I shy away from the term "coroutine" because a
       | coroutine is what this article describes, exactly, and nothing
       | less. To those posting "isn't a goroutine already a coroutine?",
       | the answer is, no, and in fact almost nothing called a coroutine
       | by programmers nowadays actually is. The term got simplified down
       | to where it just means thread or generator as Python uses the
       | term, depending on the programming community you're looking at,
       | but in that context we don't need to use the term "coroutine"
        | that way, because we already _have_ the word "thread" or
        | "generator". This is what "real" coroutines are, and while I
        | won't grammatically proscribe to you what you can and can not
       | say, I will reiterate that I _personally_ tend to avoid the term
       | because the conflation between the sloppy programmer use and the
        | more precise academic/compiler use is just confusing in almost
       | all cases.
        
         | adrusi wrote:
         | A goroutine really is implemented as a coroutine though. It's
         | just that without this runtime optimization, there hasn't been
         | a way to interact with them without letting a multithreaded
         | scheduler manage their execution.
         | 
         | I don't remember where I came across this, but many years ago I
         | saw python-style generators termed "semi-coroutines" which I'm
         | a fan of. Python (frustratingly) doesn't implement them this
         | way, but the beauty of generators is that by limiting the scope
         | where execution can be suspended to the highest-level stack
         | frame, you can statically determine how much memory is needed
         | for the (semi-)coroutine, so it doesn't require a single
         | allocation.
         | 
         | Zig takes that a step farther by considering recursive
         | functions second-class, which lets the compiler keep track of
         | the maximum stack depth of any function, and thereby allow you
         | to allocate a stack frame for any non-recursive function inside
         | of another stack frame, enabling zero-allocation full
         | coroutines, as long as the function isn't recursive.
         | 
         | That would... probably be overkill for Go, since marginal
         | allocations are very cheap, and you're already paying the
         | runtime cost for dynamic stack size, so the initial allocation
         | can be tiny.
         | 
         | I would _love_ to see full proper coroutine support make it to
         | Go, freeing users of the overhead of the multithreaded
         | scheduler on the occasions where coroutine patterns work best.
         | I remember back in 2012 or so, looking at examples of Go code
          | that showed off goroutines' utility as control-flow primitives
          | even when parallelism wasn't desired, and being disappointed
         | that those patterns would likely be rare on account of runtime
         | overhead, and sure enough I hardly ever see them.
        
       | bovbknn wrote:
       | Using Lua as an example for anything but a bad example is quite
       | the take. Lua is a dumpster fire compared to Go, and its
       | coroutines and handling of them suck as well.
        
       | silisili wrote:
       | Not sure I'm a fan. Looking through the examples, I feel like
       | this makes the language much harder to read and follow, but maybe
       | that's just my own brain and biases.
       | 
       | Further, it doesn't seem to me to allow you to do anything you
       | can't currently do with blocking channels and/or state.
        
       | RcouF1uZ4gsC wrote:
       | I don't think Coroutines would fit in with Go. There is a huge
       | emphasis on simplicity. Coroutines add a massive amount of
       | complexity. In addition, goroutines provide the best parts of
       | Coroutines - cheap, easy to use, non-blocking operations -
        | without a lot of the pain points such as "coloring" of functions
       | and issues with using things like mutexes.
       | 
       | Just the question of whether one should use a goroutine or a
       | coroutine adds complexity.
        
         | badrequest wrote:
          | There are plenty of complicated things in Go. IMHO where it
          | shines best is in judiciously providing incredibly nice
          | interfaces atop the complicated things.
        
         | talideon wrote:
         | There's no colouring here. These are synchronous coroutines.
        
       ___________________________________________________________________
       (page generated 2023-07-17 23:00 UTC)