[HN Gopher] Red and blue functions are a good thing
___________________________________________________________________
Red and blue functions are a good thing
Author : lukastyrychtr
Score : 221 points
Date : 2021-04-26 08:43 UTC (14 hours ago)
(HTM) web link (blainehansen.me)
(TXT) w3m dump (blainehansen.me)
| musingsole wrote:
| So...are we going to just ignore the massive elephant in the room
| with people of a certain skin pigmentation pointing at anything
| and declaring it "colored"?
| SAI_Peregrinus wrote:
| The use of "colored" to describe the hue of an object predates
| its use to describe certain skin pigmentations by several
| hundred years.[1] After the second use was invented the former
| still stayed common, eg in "colored pencils", "colored
| clothing", "colored contact lenses", etc.
|
| If you allow any term which has ever been used in a racist
| manner to be considered offensive you give enormous power to
| racists. They get to DoS attack your language by using whatever
| they want as slurs. I don't like empowering racists.
|
| [1] https://www.etymonline.com/word/colored
| saurik wrote:
| Yeah... I find this entire conversation pretty unpleasant--to
| the point where I haven't really wanted to ever interact with
| it directly, despite having tons of opinions on async/await due
| to my usage of either that specific feature or more generic
| related constructs (coroutines, monads) in many languages now
| (just not Rust)--as it isn't even like "colored" is being used
| in some happy way: things are always apparently somehow bad
| because they are "colored" :/, and I pretty much never hear the
| word "colored" used to talk about anything other than people
| (and, occasionally, functions). I realize a bunch of people are
| going to jump on me and say "I don't mean that", and I am not
| claiming you are: it just causes this continual looming mental
| state, as the word itself is so horribly weighted now.
| q3k wrote:
| Perhaps this discussion could be an opportunity to use the
| wording in a neutral fashion, letting yourself be okay with
| consciously and honestly using it, fighting against that
| looming feeling of unease?
|
| That is, if your issue with this wording is only that it's
| subjectively, personally unpleasant to you for inner reasons,
| and not that you see it as actively harmful.
| saurik wrote:
| The issue I noted is that most of the discussion surrounds
| an assumption--one that I happen to hold neither for people
| _nor functions_ --that "colored" things are inherently bad,
| and that we have to defend the lack of badness of colored
| things... if you can't see why having to sit inside of that
| terminology cluster would make someone who separately has
| to spend a lot of time in arguments about race relations
| uneasy, I fear for your empathy :(.
| [deleted]
| faitswulff wrote:
| > Feel free to create and use languages with invisible program
| effects or asynchrony, and I'll feel free to not pay any
| attention when you do.
|
| As a new Rust user myself, this kind of snobbery is still off-
| putting.
| justin_oaks wrote:
| Although the author talks about Rust, I interpreted that point
| as generic. Rust happens to have characteristics that the
| author likes, such as static typing and declared "exceptions"
| in the form of Results.
|
| I agree that languages with invisible program effects are
| harmful. The author said it another way as "Inconvenient
| knowledge is better than convenient ignorance".
|
| I've been bitten many times when using languages which have
| weak types and perform implicit type conversion. I much prefer
| to know exactly what type of variable I'm using and perform
| explicit type conversion instead of having the language "take
| care of it" for me.
| lelanthran wrote:
| When the author of a blog post introduces the post with:
|
| > A mature and serious language such as Rust
|
| About a language as new as Rust, you can be sure that they're
| getting their grindstone and axe ready for some serious
| use.
| kardianos wrote:
| On one hand I agree that being explicit about what is going on is
| important.
|
| That is why in Go all code is written the same, no matter
| whether it is called async or sync. Then the caller gets to
| decide how it should be invoked.
|
| So yes, let's be explicit and separate out the concerns. Thus
| red/blue functions are not as good as a single-colored language.
|
| Good premise. Your conclusion doesn't follow.
| azth wrote:
| golang is a colored language since "context.Context" needs to
| be passed to all asynchronous calls. This is explained very
| well in the Zig talk: https://kristoff.it/blog/zig-colorblind-async-await/
| jerf wrote:
| I half agree and half disagree. Having "colored functions" can be
| a useful design feature in a language, and indeed, perhaps ought
| to be spread further than they are.
|
| However, those colors should arise from _fundamental_
| differences, and be chosen deliberately, not from accidental
| considerations chosen by expediency. This post glosses over the
| distinction between coloring functions for _effect_ and coloring
| them for _asynchronousness_. Those are not the same thing!
|
| Having coloration of your functions based on their
| synchronousness is an accident, either because of optimization
| choices (like Rust) or because your language was previously
| incapable of the new color and it just wasn't practical to simply
| put it into the language (Javascript). It is not necessary for
| the synchronousness of a function to be in the color, as proved
| by the languages where it isn't, and having used both I'm not
| convinced it's a good reason for a function to have a color. You
| get better code in languages that lack that color distinction,
| even if perhaps at a slight runtime cost.
|
| Having it based on "effects" is probably a good idea. But even in
| Haskell, the current reigning champion of a usable language that
| cares about effects (as opposed to Idris or whatever Coq renames
| itself to, which are interesting but not really "usable" in a
| software engineering sense)... asynchronousness is _not an
| effect_! You can use asynchronousness in whatever pure
| computation you like. There are things that are treated like
| effects that are related to concurrency like STM, but those are
| built on top of the asynchronousness that is not related to the
| effects machinery. (What is actually the "effect" in STM is the
| updating of the Transactional Memory, not whether or not you ran
| things concurrently or in parallel to develop your pure commit.)
|
| Concurrency, _on its own_ , is not an effect. You may
| concurrently try to run multiple effects at once, and create new
| levels of problems with effects, but that's not the creation of
| the concurrency itself. Whether or not a given bit of code has
| "effects" is unrelated to whether or not you run it
| "concurrently". (Given the usual simplifying assumptions we make
| about not running out of memory, not having CPU starvation, etc.,
| which are generally valid because we also simplify those away for
| non-concurrent code too.)
|
| So, coloration is probably a good thing in programming, but
| "spending" your coloration on the accidental detail of explaining
| to the compiler/runtime exactly how to schedule your code is not
| a good place to spend it. Compilers/runtimes ought to be able to
| work that out on their own without you having to specify it by
| hand all over the place.
| Sebb767 wrote:
| One point I massively miss from this discussion is retrofitting.
| Imagine the following situation: I have a webserver which calls a
| parser module which calls an external library which calls a
| closure in my code during parsing. Now this call needs to write
| to a log, which is async due to a file write or so.
|
| To implement this I need to touch the webserver core, implement a
| breaking change in the parser library and get a new external
| library, just to get my log line in this function. This is a
| _massive_ refactoring effort just to get a line in my log. It's
| purer and better style, yes. But now I need a few days for what
| should take five minutes.
|
| If the codebase is designed from the ground-up with async in mind
| (which probably means to make basically every non-leaf async, to
| avoid the situation above), this is fine. If I'm working with an
| existing codebase, this does not work. I see why people don't
| want a fallback - it leads to bad code hiding under cover. But
| allowing this and simply marking it via a linter would make
| step-wise upgrading so much easier and alleviate 90% of the
| problems.
|
| EDIT: My concerns are specifically about JavaScript/TypeScript. I
| think Rust got this right.
| zozbot234 wrote:
| > To implement this I need to touch the webserver core,
| implement a breaking change in the parser library and get a new
| external library, just to get my log line in this function.
| This is a massive refactoring effort just to get a line in my
| log.
|
| Not quite. It's very easy in Rust to purposefully block on an
| async call in sync-only code, or spawn blocking code on a side
| thread in async code and let it "join" whenever the async code
| block gets executed. Of course you'll get less-than-optimal
| performance, but no program-wide refactoring is required.
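|
| A minimal sketch of both directions (assuming the `futures` and
| `tokio` crates; the logging bodies here are hypothetical stand-
| ins):
|
|       // Sync code calling into async: just block on the future.
|       async fn write_log_async(line: &str) {
|           println!("{line}"); // stand-in for the real async write
|       }
|       fn write_log_sync(line: &str) {
|           futures::executor::block_on(write_log_async(line));
|       }
|
|       // Async code calling into blocking code: run it on a side
|       // thread and "join" it from the async side.
|       async fn log_from_async(line: String) {
|           tokio::task::spawn_blocking(move || {
|               println!("{line}"); // stand-in for a blocking write
|           })
|           .await
|           .expect("logging task panicked");
|       }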
| Sebb767 wrote:
| Sorry, I was talking specifically about
| TypeScript/JavaScript. I edited the original comment to
| reflect this :)
| ItsMonkk wrote:
| > webserver which calls a parser module which calls an external
| library
|
| There's your problem right there. Nested calls are killing us.
|
| That parser module should be a pure transformation function. If
| it has requirements, your main webserver should be fetching it.
| If you need to first query the parser to know if it has any
| requirements for this call, you can do so as a separate pure
| function. That query will then respond with a value that your
| webserver can then use to call the external library with. When
| the impure external library responds, with no external changes,
| you can now pass that into the parser module to get your parsed
| object out.
|
| Now when you want to add logging to the external library,
| ideally you can do so at the webserver level, but otherwise you
| only need to add logging to the external library, and your
| parser doesn't change at all.
| valenterry wrote:
| > Now this call needs to write to a log, which is async due to
| a file write or so.
|
| Well, think about it without having a specific language in mind.
| There are multiple ways you might want everything to be executed:
|
| 1. The library should call your function and wait until both
| parsing and writing to the file is finished before executing
| any other action
|
| 2. The library should call your function and wait until both
| parsing and writing to the file is finished, but potentially
| execute other actions (e.g. other parsing for another request)
| in the meantime / in between.
|
| 3. The library should call your function and wait until parsing
| is finished but not wait until writing to the file is finished
| (which also means to disregard the filewrite result)
|
| The only way to really control how things happen is to have the
| library be aware of potential concurrency and that is true for
| Rust as well. There is no other solution, except for "hacks"
| which, when used, will make it impossible to understand what a
| piece of code does, without reading _all_ the code. Similar to
| global mutable variables (because that is what it would be).
|
| If your application is small or not so critical, then you might
| indeed get away with a hack, but I think the energy should be
| focussed on improving languages so that libraries can easily be
| rewritten to embrace concurrency, rather than allowing for more
| hacks.
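|
| In Rust terms, a minimal sketch of the difference between options
| 1/2 and option 3 (assuming a Tokio runtime; `write_log` is a
| hypothetical stand-in for the async file write):
|
|       // Hypothetical async logger with a stand-in body.
|       async fn write_log(line: &str) { println!("{line}"); }
|
|       async fn parser_callback(line: String) {
|           // Options 1/2: wait for the write before returning; the
|           // executor decides what else runs in the meantime.
|           write_log(&line).await;
|
|           // Option 3: fire and forget -- don't wait for the write,
|           // and disregard its result.
|           let _ = tokio::spawn(async move { write_log(&line).await });
|       }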
| wrlkjh wrote:
| This post's use of the term 'colored' is insensitive to the
| thousands of "colored" folks that were beaten, raped and tortured
| in the United States.
|
| Please change your terms and if you don't we will out you as a
| racist and you will lose your job.
| Jakobeha wrote:
| My take on this:
|
| Red and blue functions, and async / await, are a good thing when
| implemented well. It's much better than implicit every function
| can be async (e.g. python's gevent). Because it 1) makes it clear
| what is and isn't async, and 2) lets the programmer convert async
| to a callback when necessary.
|
| The real issue is that some languages have bad implementations of
| async / await. Specifically Rust, where AFAIK traits can't have
| async functions, which makes using async really convoluted. And most
| languages don't have async-polymorphism, and maybe other features
| I can't think of, which ultimately create the problem "I have to
| call an async function in a sync function without blocking". With
| a good implementation of async / await and half-decent
| architecture you shouldn't have that problem.
| pornel wrote:
| The "traits can't have async" thing you're referring is at best
| a minor inconvenience or tiny optimization left on the table.
| This feature was intentionally postponed.
|
| Traits _can_ use async code, just not without boxing/dynamic
| dispatch. Given that other languages always box all their
| Promises/Tasks, Rust needing to do it in a couple of cases
| doesn't even put it at a disadvantage.
|
| To use async code in Rust you don't need to define any traits.
| For users of async traits the issue is invisible. For library
| authors that want their code to be both async and higher-order
| at the same time, the problem boils down to needing to add a
| macro to "polyfill" the feature for now, until "native" support
| comes later.
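|
| As a rough illustration of that polyfill (assuming the widely used
| `async-trait` crate; the trait itself is made up):
|
|       use async_trait::async_trait;
|
|       #[async_trait]
|       trait Store {
|           // The macro rewrites this to return a boxed, dynamically
|           // dispatched future -- the boxing mentioned above.
|           async fn get(&self, key: &str) -> Option<String>;
|       }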
| gsmecher wrote:
| This is a good time to mention a Python experiment [1] that
| "covers up" Python 3.x's coloured asyncio with a synchronous
| wrapper that makes it more user-friendly. It's been posted here
| before [2]. I've pushed ahead with it professionally, and I think
| it's actually held up much better than I expected.
|
| This idiom worked well in Python 2.x with Tornado. Python 3's
| batteries-included async unfortunately broke it partway (event
| loops are not reentrant [3], deliberately). However, in this Rust
| post's "turn-that-frown-upside-down" spirit, the Python 3.x port
| of tworoutines enforces a more deliberate coding style since it
| doesn't allow you to remain ignorant of asynchronous calls within
| an asynchronous context. This (arguably) mandates better code.
|
| I don't claim Python is superior to Rust, or that hiding red/blue
| functions is a great idea everywhere. I did want to point out
| that dynamic languages allow some interesting alternatives to
| compiled languages. This is worth musing over here because of the
| author's stated bias for compiled languages (I think I agree),
| and his claim that red/blue unification would only be enabled by
| sophisticated compilers (not necessarily).
|
| [1]: http://threespeedlogic.com/python-tworoutines.html
|
| [2]: https://news.ycombinator.com/item?id=18516121
|
| [3]: https://bugs.python.org/issue22239
| douglaswlance wrote:
| Using purple for link decorations is a non-optimal choice.
| declnz wrote:
| ...but it's coloured both red and blue! :)
| coldtea wrote:
| Huh?
|
| It has been the standard in the web for time immemorial (blue
| for links, a shade akin to purple for visited links).
|
| What would be the "optimal" choice?
| hu3 wrote:
| The point is that the blog uses purple for both new and
| visited links:
|
|       a, a:visited { color: #692fac; }
| willtim wrote:
| And why restrict oneself to just two colours? Haskell monads also
| allow one to abstract over the "colour", such that one can write
| polymorphic code that works for any colour, which I think was the
| main objection from the original red/blue post.
|
| Microsoft's Koka is an example of a language that further
| embraces effect types and makes them easier to use:
| https://koka-lang.github.io/koka/doc/index.html
| simiones wrote:
| Don't monads in themselves have the same problem of introducing
| "coloring" (where each monad is a different color), so that
| functions which work for one monad won't easily compose with
| functions which work with another monad?
|
| For example, isn't it true that you can't easily compose a
| function f :: a -> IO b with a function g :: b -> [c] (assuming
| you don't use unsafePerformIO, of course)?
|
| Of course, there will be ways to do it (just as you can wrap a
| sync function in an async function, or .Wait() on an async
| function result to get a sync function), and Haskell's very
| high level of abstraction will make that wrapping easier than
| in a language like C#.
| lmm wrote:
| > Don't monads in themselves have the same problem of
| introducing "coloring" (where each monad is a different color),
| so that functions which work for one monad won't easily compose
| with functions which work with another monad?
|
| I'd say that monads surface a problem that was already there,
| and give you a vocabulary to talk about it and potentially
| work with it. Composing effectful functions is still
| surprising even if you don't keep track of those effects in
| your type system (for example, combining async functions with
| mutable variables, or either with an Amb-style
| backtracking/nondeterminism function, or any of those with
| error propagation).
|
| But yeah, monads are an incomplete solution to the problem of
| effects, and when writing composable libraries you often want
| to abstract a bit further. In the case of Haskell-style
| languages we have MTL-style typeclasses, or Free coproducts,
| or various more experimental approaches.
| klodolph wrote:
| You can:
|
|       h :: a -> IO [c]
|       h = liftM g . f
|
| Or fmap, or use various arrow operators, or <$>. I might
| describe it as "pathologically easy" to compose functions in
| Haskell, since it's so much easier in Haskell than other
| languages (and mixing monads & pure code is still very easy).
| linkdd wrote:
| > I might describe it as "pathologically easy" to compose
| functions in Haskell
|
| Maybe because "composing functions" is the whole point of
| category theory, monads and Haskell?
| klodolph wrote:
| The point of category theory is composing morphisms!
| Functions are morphisms in the Set category. Haskell has
| something which looks very similar to category theory,
| but the functions and types in Haskell do not form a
| category. I would call it "category theory inspired".
| linkdd wrote:
| Thank you for the clarification.
|
| But still, since it's all about composition, the
| simplicity is a direct consequence IMHO.
| simiones wrote:
| Sure, but that's not a feature of Monads themselves, but a
| feature of the excellent generic programming support in
| Haskell, along with polymorphic return types that makes the
| boilerplate really easy.
|
| Still, there is a slight bit more to the story if you
| actually had to pass a [c] to another function (you would
| have to liftM that function as well, I believe, and so on
| for every function in a chain).
| klodolph wrote:
| So I don't think that this criticism is quite fair, and
| I'll explain why. When you choose to make something
| implicit/easy in a language, it often comes with the cost
| of making other things explicit/hard. The best example of
| this, in my mind, is the choice to use Hindley-Milner
| style type inference (ML, Haskell) versus forward-only
| type inference (C, Java). HM style type inference comes
| with its advantages, but introduces limitations that
| affect polymorphism.
|
| In Haskell, you can't automatically lift something into a
| Monad because this would require a type of ad-hoc
| polymorphism that is impossible with a coherent Hindley-
| Milner type of type inference system.
|
| However, that exact type inference system is what lets
| you just write "liftM", and it automatically does the
| Right Thing based on the type of monad expected at the
| call site.
|
| I think ultimately--Haskell is in a sense an experiment
| to see if these alternative tradeoffs are viable, and
| it's a massive success in that regard.
|
| (Also: you wouldn't need to liftM every function in a
| chain. Just liftM the chain at the end.)
| 6gvONxR4sf7o wrote:
| The idea is that your individual monad has to say how to
| compose those functions (and package up values). The type
| signatures of bind and return are:
|
|       (>>=)  :: m a -> (a -> m b) -> m b
|       return :: a -> m a
|
| In your case f gives you an IO b, which is the first argument
| to bind (your m a). Then g's output needs packing up, so you
| use return to take [c] to m [c], giving `return . g :: b -> m
| [c]`, which will be the second argument to bind (your a -> m
| b).
|
| That gives you `composed_function x = (f x) >>= (return . g)`,
| which a "do" block makes potentially clearer:
|
|       do { y <- f x; return (g y) }
| pas wrote:
| Yes, they do. (That's why there's the monad transformer.)
|
| But this is fundamentally a very simple thing. If you use an
| async framework, you have to think inside that. It's just as
| fundamental as CPU architecture, or operating system. If you
| have an Intel CPU you need a corresponding motherboard. If
| you have Windows you need programs that use the Win32 API (or
| use the necessary translation layer, like WSL).
|
| Similarly, if you use - let's say - libuv in C, you can't
| just expect to call into the kernel and then merrily compose
| that with whatever libuv is doing.
|
| This "coloring" is a suddenly overhyped aspect of programming
| (software engineering).
|
| And of course fundamentally both are wrappable from the
| other, but it's good practice not to mix sync/async libs
| and frameworks and components. (Or do it the right way, with
| separate threadpools, queues, pipes, or whatnot.)
| an_opabinia wrote:
| > you can't just expect to call into the kernel and then
| merrily compose that with whatever libuv is doing.
|
| With Quasar Fibers, which eliminated function coloring for
| Java async versus blocking IO, I wound up doing the equivalent
| of that in the Java ecosystem, and it works extremely well in
| production.
|
| Defeating the coloring problem makes gluing all sorts of
| really valuable stuff together much easier. In a world
| where you can't just rewrite from scratch the things you're
| gluing.
| Arnavion wrote:
| This kind of "transparent async" (like what golang also
| does) always has the danger of creating race-condition-
| style bugs, because some synchronous-looking piece of
| code doesn't run synchronously so some state at the
| beginning of the function has a different value at the
| end of the function. Have you had those kinds of problems
| with your Java codebase?
| marcosdumay wrote:
| Of course the functionality that abstracts over coloring has
| the same problems coloring has on any other language. If you
| don't want coloring to happen on your code, you shouldn't
| declare it by using a monad.
| sullyj3 wrote:
| It's not entirely clear what you want to happen here with `f`
| and `g`. It's trivially easy to apply `g` to the `b` in the
| result of f. You just use `fmap`:
|
|       -- suppose we have some x :: a
|       fmap g (f x) :: IO [c]
|
| Of course, you wouldn't want the [c] to be able to escape the
| IO container, because that would break referential
| transparency - you could no longer guarantee that a function
| returns the same result given the same arguments. Of course,
| that's no problem for manipulating the [c] value by passing
| it to pure code.
|
| It's true that "monad's don't compose" in the sense that you
| need to manually write monad transformers to correctly
| combine monadic effects, whereas functors compose
| automatically. However, that's not the same thing as saying
| it's difficult to use functions which happen to return
| different monadic results together, if the types match.
| the_duke wrote:
| The Unison language [1] also has a very interesting effect
| system.
|
| [1] https://github.com/unisonweb/unison
| inglor wrote:
| Basically, in JS you could have multiple colors (with
| generators that imitate `do` notation pretty well) and you can
| obviously implement any monadic structure with bind you'd want.
|
| The thing is: having less generic abstractions (and fewer
| colors) makes certain things (like debugging tooling and
| assumptions when reading code) much much nicer.
| fbn79 wrote:
| Javascript would have to solve the fundamental problem that
| generators are (by implementation) not clonable, so generators
| cannot be used generally as do notation.
| benrbray wrote:
| Fun Fact: The author of Koka (Daan Leijen) wrote a LaTeX-
| flavored Markdown editor called Madoko [1] using Koka! It is
| used to render the "Programming Z3" book [2], for instance.
|
| [1] http://madoko.org/reference.html
|
| [2] https://theory.stanford.edu/~nikolaj/programmingz3.html
| pron wrote:
| The problem is that it is unclear that effect systems are
| something worth having in the first place. A type system could
| introduce many kinds of distinctions -- the unit allocates
| memory or not, the unit performs in a given worst-case time or
| not, etc. etc. -- but not everything it _could_ do it also
| _should_. Every such syntactic distinction carries a cost with
| it (it tends to be viral), and for each specific distinction,
| we should first establish that the benefits gained outweigh
| those costs. Otherwise, it's just another property among many
| that a programming language may have, whose value is
| questionable.
| mumblemumble wrote:
| I'm not sure it's correct to understand monads as a kind of
| function coloring mechanism. Two reasons. First, monads aren't
| functions.
|
| Second, the whole function coloring thing wasn't a complaint
| about limits on function composition. If it were, we might have
| to raise a complaint that arity is a kind of color. It was a
| complaint that functions of one color can't be called from
| inside functions of another color. And you don't really have an
| equivalent problem with using one monad inside the
| implementation of another monad.
| valenterry wrote:
| You have the same problem with monads. You cannot have a
| (pure) function, that calls another function which returns an
| IO monad, without returning an IO monad itself _or_ ignoring
| the result of the function call (which is exactly the same as
| not calling it from the beginning).
|
| Hence the semantics are the same - just without any compiler
| feature like async/await.
| sullyj3 wrote:
| It's true that you can't extract a value obtained from the
| outside world from the IO type by design, but, I don't
| think I'd characterize that as a problem. For any piece of
| code someone thinks they want to write that involves
| turning an `IO something` to a `something`, I can show how
| to solve the problem without doing that with a minimum of
| fuss. It's actually quite difficult for me to remember why
| someone would want to do that, having been immersed in this
| paradigm for so long.
|
| For example, if you have, say,
|
| `getIntFromSomewhere :: IO Int`
|
| and you want to apply a pure function of type `Int -> Int`
| to it. Suppose the pure function adds one. You might
| initially think that that's all there is to it, you don't
| want IO involved in what happens next. But of course, the
| only reason you would ever do some pure computation is to
| use the result to have some effect on the world, eg
| printing it to a console. So actually in a sense, you do
| want to stay in IO.
|
|       myIOAction :: IO ()
|       myIOAction = do
|         myInt <- getIntFromSomewhere
|         print (addOne myInt)
|
| As you say, you can't pull the int out of
| getIntFromSomewhere without operating in an IO action. But
| why would we want to, since we obviously want to make use
| of that value to affect the outside world? And of course,
| that doesn't mean we have any issues getting that int into
| a pure function `addOne :: Int -> Int` first, before
| eventually (immediately, in this case) making use of the
| result to perform some IO.
| mumblemumble wrote:
| That's a special case for IO, isn't it? I guess I've never
| actually tried this, but I don't think it's generally the
| case for all monads.
|
| IOW, Haskell does have coloring, and its coloring does
| manifest through the IO monad, but that does not mean that
| all monads have color. Sort of like how you can't do,
| "Socrates is a man, Socrates is dead, therefore all men are
| dead."
| valenterry wrote:
| I think I see what you mean. However, it is the other way
| around: some monads are special cases in the way that
| they have extra methods that allow you to "escape" their
| context. Like lists: you can fold over them and get out a
| different result type.
|
| But in general you can't do that with monads, in
| particular not if you just have "any" monad at hand but
| you don't know which one it is. And IO is one example,
| parsers would be another one.
| aranchelk wrote:
| I agree with what you're saying, but I think calling them
| "special cases" takes away from the elegance of the
| system. Haskell users frequently say things like "the IO
| monad" or "the list monad", but probably the more
| accurate way would be to say the "the monad instance of
| IO".
|
| Monad is a typeclass (analogous to an interface in OOP or
| traits in Rust). IO is a type. To make some type a member
| of monad (or any other typeclass), you just need to write
| the required functions in an instance declaration.
|
| You can make a type a member of as many typeclasses as
| you want. List is a member of Foldable, Monad, etc.
| Calling that a special case isn't quite right as it is very
| common for a type to be a member of many typeclasses.
|
| You're entirely correct in saying the Monad typeclass
| offers no way to extract values. But there actually is an
| extract function called unsafePerformIO with type "IO a
| -> a". The name says it all: use it and you'll lose all
| of the safety normally afforded by keeping data derived
| from effects inside the IO type.
| valenterry wrote:
| You are totally right, I was a bit sloppy with my
| language there. Thank you for the detailed explanation.
| dllthomas wrote:
| > First, monads aren't functions.
|
| Monads aren't functions[0]; in the analogy, monads are
| colors.
|
| > [F]unctions of one color can't be called from inside
| functions of another color. And you don't really have an
| equivalent problem with using one monad inside the
| implementation of another monad.
|
| If you have a bunch of `a -> M b` and one `a -> N b`, it's
| much more straightforward to interleave calls to any group of
| the former than to incorporate one of the latter. You
| encounter this when writing a new `a -> M b`, much like you
| might encounter the corresponding issue when writing a new
| blue function.
|
| I should point out that there is usually good reason for this
| in the implementation, and often good reason for it in the
| domain. When there is good reason in the domain, that's
| _great_ - our type system is surfacing the places we need to
| make decisions, rather than letting us mash things together
| in ways that will cause problems. When there is only good
| reason in the implementation, or no good reason at all, it's
| unfortunate, but certainly something that can be worked
| around.
|
| Unlike the original setting, Haskell provides the ability to
| write functions polymorphic in "color" ("polychromatic"?),
| but for the (unfortunately necessary?) distinction between `a
| -> b` and `a -> Identity b`, and a lot of colors can be
| easily removed or transformed.
|
| [0]: Some types which are monads are functions but that's not
| the sense in which we're coloring functions.
| zozbot234 wrote:
| Monads are not the same thing as effect types - the latter are
| most commonly modeled as Lawvere theories. They are at a
| different level of composability.
|
| Regardless, this blogpost is about Rust which has yet to gain
| either feature, for well-known reasons (for one thing, monadic
| bind notation interacts badly w/ the multiple closure types in
| Rust). Talking about Haskell monads just does not seem all that
| relevant in this context.
| tome wrote:
| Could you elaborate? Firstly, I'm not aware of any
| programming language that has Lawvere theories as a first-
| class concept. Secondly, when I last looked into Lawvere
| theories I couldn't work out any way of making them into a
| first-class concept in any way that would make them distinct
| from monads.
|
| (On the other hand Lawvere theories may be useful for
| modelling effects in type theories. That may be so (I don't
| know) but is an orthogonal issue.)
| sullyj3 wrote:
| I don't really know much about this area, but there's a
| language literally called Lawvere.
|
| https://github.com/jameshaydon/lawvere
| [deleted]
| flqn wrote:
| Monads are not the same as effect types, but Haskell affords
| implementing effects using monads, so I'd argue the
| comparison is relevant.
| willtim wrote:
| > Monads are not the same thing as effect types
|
| Monads can be used to implement "effect types" and define the
| semantics of them. I believe this is how Koka works.
|
| > Talking about Haskell monads just does not seem all that
| relevant in this context.
|
| I respectfully disagree. Async/Await first appeared in F# as
| a Monad (Computational Workflow). Monads also do not
| necessarily have to involve closures at runtime, they can be
| useful just to define semantics and build type-checkers.
| lmm wrote:
| Haskell-like monads are absolutely relevant. They're a common
| solution to this problem in Rust-like languages, and while
| there may be some Rust-specific concerns to be addressed it's
| not at all clear that any of them are real blockers.
| ztcfegzgf wrote:
| You can make sync IO calls in Rust if I'm not mistaken, be it
| reading a file or doing a remote HTTP request.
|
| And with this being true, the article's argument does not work:
| there is no types-mark-IO happening there.
|
| The argument applies better to Haskell, where you cannot do any
| IO without the types saying so.
|
| Interestingly, it also applies somewhat to Javascript, where
| because of backward-compat and history, it's not possible to do
| sync IO (mostly :) welcome to javascript), so if you do not see
| an "async" annotation, you can be mostly sure that this is pure
| CPU work.
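|
| For example, a plain-std sketch (the file name is made up):
| nothing in this signature marks the blocking disk read:
|
|       use std::{fs, io};
|
|       // Blocking file IO, yet the type is only io::Result<String>
|       // -- no async marker, no effect marker.
|       fn read_config() -> io::Result<String> {
|           fs::read_to_string("config.toml")
|       }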
| Skinney wrote:
| While I agree with his points in principle, Rust isn't actually a
| great example to use.
|
| Rust allows you to simply spawn threads, right? So a function not
| being marked 'async' doesn't necessarily give any guarantees
| about potential side effects.
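|
| For instance, a minimal sketch (the body is a hypothetical stand-
| in): a perfectly ordinary, non-async signature can still kick off
| concurrent work.
|
|       use std::thread;
|
|       // Nothing in this signature hints at the background thread.
|       fn log_in_background(line: String) {
|           thread::spawn(move || {
|               println!("{line}"); // stand-in for a file/network write
|           });
|       }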
|
| Spending most of my time writing Elm, having side-effects present
| in the type system is great. But colored functions in Rust (as
| well as most other async/await langs) only create a divide
| between functions that provide a specific kind of asynchrony, and
| those which do not.
|
| So async/await improves the syntax of writing a specific kind of
| async code, at the cost of composability (the red/blue divide).
| It does not, however, make any guarantees about the asynchronous
| nature of your code.
| linkdd wrote:
| In Erlang/Elixir, every function from another module is async by
| design, and the await is implicit.
|
| This is by design to allow for hot code reloading. This is a very
| sound design choice because true concurrency is achieved through
| the creation of other processes (agents, genservers, tasks, ...).
|
| You have nice things like Dynamic Supervisors to monitor those
| processes. And it works very well.
|
| Try to do the same with uncancellable promises in
| Javascript/Typescript, or with channels in Go: while it's still
| possible, you'll find your code growing in complexity much
| quicker.
|
| The actor model embedded in the language, as in Erlang/Elixir,
| brings so much to the table that it's unfortunate the author
| dismisses it without a second thought.
| bcrosby95 wrote:
| > In Erlang/Elixir, every function from another module is async
| by design, and the await is implicit.
|
| This statement confuses me. What if I never spawn a process?
| What if the process I spawn makes function calls into other
| purely functional modules?
| linkdd wrote:
| Function calls to another module ALWAYS introduce a
| breakpoint so the BEAM (Erlang VM) can check if the target
| module is up-to-date, and perform hot code reloading if it
| isn't.
|
| Therefore, every function is interruptible (implicit async),
| and calls to another module function introduce the breakpoint
| (implicit await).
|
| You always have at least one process, the one running your
| code. Even if you don't spawn any more processes.
|
| And the BEAM still needs to handle signals (like messages,
| traps, crash, ...) that could be received from another node
| at any point in time, therefore it needs to be able to
| interrupt your code, which will be, by design, interruptible.
|
| The very nature of Erlang/Elixir is asynchronous by design,
| even if you don't really see it in the syntax (as it should
| be, because your concern is the algorithm/data exchanges, not
| the underlying execution model).
| [deleted]
| whizzter wrote:
| Actually the entire Erlang runtime is "async"(in the terms of
| the other mentioned languages) almost all the time since it's
| based on green threads/processes(stuff like a NIF might block
| of course), the module call is more of a lookup-current-latest-
| definition call to enable a well defined point for the hot-
| loading whereas local functions would more likely to be in the
| middle of a computation (and thus be more likely to be in a
| complicated-to-upgrade state).
|
| async/await as implemented and popularized everywhere today
| might just be our generation's null-mistake, since it's so error
| prone. It's great for showing off parallel IO performance
| increases for classically synchronous languages, but the
| programmer burden of explicit awaits (luckily IDEs/compilers can
| catch them) with "weird" side effects was a big mistake.
|
| The author of the article seems to be firmly in a specific
| category of developers so the dismissal seems quite in-
| character.
| kannanvijayan wrote:
| If you squint closely enough, you can recognize features of
| the "async/await" discussion in other longstanding issues
| such as the "effectless/effectful" distinction and whether
| that should be brought up in syntax, or the
| "continuations/finite-exits" control transfer distinction
| (finite-exits being traditional languages which provide a
| fixed set of control transfers such as return/throw/break).
|
| The "all things are (posslby) async" is a language design
| choice similar to "all things are (possibly) effectful" for a
| procedural language. If you keep your language semantics in
| the less generalized domain (e.g. functional vs. effectful),
| you have to invent syntax to talk about the other semantics
| easily.
|
| So you come up with monad systems in functional languages to
| talk about effects, and make some nice syntax for that. And
| you have your effectful functions and effectless functions
| segregated by type signatures that flow all the way up to the
| top.
|
| And you come up with async/await systems in traditional sync-
| semantics languages, and provide syntax to help with that -
| whether Javascript or Rust. In both Rust and JS, async
| functions are distinguished purely in the type-system, with
| some helpful syntax around it. In both, they desugar to
| `func(...): Promise<T>` or `func(...) -> impl Future<Output = T>`.
| Those types too, flow "all the way to the top", imposing a
| colouring on code.
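|
| For the Rust half, a rough sketch of that desugaring
| (simplified):
|
|       use std::future::Future;
|
|       async fn add_one(x: u32) -> u32 { x + 1 }
|
|       // ...is roughly sugar for a function returning an anonymous
|       // Future type:
|       fn add_one_desugared(x: u32) -> impl Future<Output = u32> {
|           async move { x + 1 }
|       }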
|
| Erlang solved this problem by adopting the general semantics
| for erlang to achieve erlang's goals, and it seems that the
| "everything is async" semantics works there (I agree that
| it's a model that deserves deeper exploration).
|
| Languages like Rust can't solve this problem the way erlang
| solves it, because it takes away from other fundamental
| aspects of the language. It would necessitate the inclusion
| of a very specific and very heavy (relative to Rust's
| expectations) runtime, along with lots of changes to the
| language that would diminish it along other dimensions.
| whizzter wrote:
| Interesting analogy, seems like many of these binary
| choices cause divides.
|
| My point about the design being a mistake is the way you need
| to be explicit when invoking with await, and the often harmful
| side effect of forgetting the await expression (which in
| practice becomes a code-path fork that can be tricky to debug
| if the awaited function is a void function, with no unexpected
| values coming out of the missed await expression).
|
| For JavaScript the dynamic type-system really leaves no
| choice since the caller has no information about the
| behaviour of the callee, so await for these dynamic
| languages is kinda unavoidable (unless the entire language
| is green-threaded, that is).
|
| For Rust and C# (and probably others) the compiler knows
| about this in advance so the "await" expression just adds
| an extra layer of work for the developer (that is also
| error prone).
|
| There is little reason that for example the C# compiler
| couldn't have auto-inserted the await for all Task<>
| returning functions inside every (async) marked function
| and instead had a (direct) operator for the FEW cases where
| a developer really wants to handle the Task<> object
| manually instead of just awaiting it (C# is quite pragmatic
| in many other cases so the little loss of direct developer
| control would have been far outweighted by better symmetry
| and less ceremony).
|
| Practical? Definitely: the Zig language seems to handle this
| entirely under the hood via its compile-time functionality. So
| for a developer the red-blue distinction is mostly something
| handled by the compiler, despite not affecting the runtime more
| than some extra code, I guess?
| kannanvijayan wrote:
| After some thought, I think I basically agree with that
| main point. I've actually been dipping my feet heavily
| into async programming in rust (personal stuff), and
| async programming in typescript (work stuff) after about
| a decade mostly doing C++ stuff (previous work).
|
| The challenges and confusions you mention are pretty
| fresh in my mind.
|
| Maybe switching the default semantic behaviour over to
| recognize the async type signatures and implementing this
| sort of default syntax transform is the better way to go.
|
| I wonder if it might cause problems with generic code -
| where a return type for some otherwise non-async function
| happens to "become a future" due to type-specialization,
| turning the function async? But maybe that's not an issue
| because the call-specialization behaviour just
| transitively applies to all the callers up the chain,
| same as any other specialization.
|
| Beyond that, I think this whole conversation has a hard
| time moving forward before we get a good industry
| understanding of high-level concurrency structures in
| practice. So far the major one people fixate on is
| "actors", which usually reduces to "stream processor"..
| which is a nice construct but by itself not enough to
| build complete programs out of. You need an entire
| structural framework to talk about what produces the
| things the stream processes. Sometimes that'll be other
| streams, sometimes that'll be static, sometimes that'll
| be some collection of random event sources, some of which
| are direct and some of which are indirect. Then you need
| ways of talking about how those producers relate to each
| other, and _some_ way of identifying and preserving
| global / regional / local invariants.
|
| Error handling is a nightmare. I haven't yet found any
| good vocabulary to talk about shutdown order for tasks
| and error handling, how to express it clearly in the
| program intent, and how to make sure that intent doesn't
| rot over time. It's all ad-hoc promises and hooks and
| callbacks wiring up _this_ event to _that_ thing..
| accompanied by small or large comments along with the
| hope that your fellow developer is cleverer than you and
| can actually figure out the intent from just the text of
| the code.
|
| After we get a few good examples of those sorts of
| things, it'll be easier to go back and discuss syntax and
| semantics, and we'll have a reference set of
| designs/patterns to check against and make sure we can
| express cleanly.
| singularity2001 wrote:
| Thanks for the deep helpful analysis
| linkdd wrote:
| > Actually the entire Erlang runtime is "async"
|
| I wasn't sure so I did not want to claim it, so thanks :)
|
| > async/await being implemented and popularized as it is
| today everywhere might just be our generations null-mistake
|
| Agreed, especially since each language implements it
| differently.
|
| For example, Javascript's async/await is just syntactic sugar
| over promises, so its functions aren't truly "colored".
| singularity2001 wrote:
| The verbosity, hassle and error-proneness make async/await
| definitely worse than yesteryear's 'goto' failure
| tome wrote:
| Interesting. This author took a different interpretation of the
| original red/blue functions article than I did. I understood the
| original to mean "the distinction between red and blue functions
| is a bad thing _when not made visible in the type system_ ".
| Perhaps that's my Haskell bias showing and I missed the original
| point.
| inglor wrote:
| Haskell has one of the _clearest_ distinctions between red/blue
| functions that I know. Functions that perform I/O (sans
| unsafePerformIO and other strange things) are very explicit
| about it (by returning an `IO` monad).
|
| Most other languages (static or not) are equally explicit,
| returning a `Promise<A>` rather than an `IO a`, but it's very
| similar. Most differences are because of call-by-need vs. call-
| by-value language semantics (Haskell being non-strict and the
| I/O not coming out of "thin air" but coming from somewhere - but
| that's kind of a different point than function color type).
| cdaringe wrote:
| I disagree, slightly.
|
| The association with "effect" and "async" is erroneous when
| discussing the topic at this level. Is a heap-read an effect? Is
| an L2 cache write an effect? Of course they are, but the machine
| abstracts these from us. We consider effects like `fetch(...)` an
| effect because of its associativity of remoteness and propensity
| to fail or change, but that's the incorrect lens. Instead, the
| lens should be purity, including purity of passed routines. If
| the network was rock solid (like mem reads) and we read a fixed
| document, suddenly that effect wouldn't feel like an effect. The
| color of the fn would go back to whichever is the pure form, and
| all would be well. We deem some effects effects because of
| programming constructs and performance, and other effects we
| don't even consider because we trust the underlying
| implementation of the machine. Consequently, I fundamentally
| don't fully align with the article's claim.
|
| While I do generally agree that the color information is
| communicative, I think function names and input/output types are
| responsible for that type of communication.
|
| When OCaml effects land in 5.x, I look forward to using the
| effect keyword to achieve this goal. For better or worse, I plan
| to use it for much more as well--providing general
| implementations for _most things_, simply s.t. providing test
| doubles in integration tests becomes a piece of cake, with zero
| need for fancy test provisions (module mocking, DI, ...<insert
| usual suspects>)
| hn_throwaway_99 wrote:
| In this "async/await vs. threads" debate (which is primarily what
| this debate is about), I'm very interested to see what happens
| with Project Loom in Java. The primary downside with threads is
| really the thread-per-connection model, and how that means long-
| lived connections can prevent a server from scaling.
|
| If you can solve that problem, the appeal of single threaded
| languages with an event loop like Node really takes a hit.
| otikik wrote:
| > Inconvenient knowledge is better than convenient ignorance
|
| For perfect ideal beings, yes. Humans are a bit less
| straightforward, even for the particular case of red-blue
| functions.
| afarrell wrote:
| Here is one benefit I imagine to naming the functions according
| to colors: expanding the Working Memory you have available to
| reason about code.
|
| The brain's working memory hardware has a few pieces and among
| them are the visuospatial sketchpad and phonological loop [1].
| Code is language and when you hold parts of a codebase in your
| head, it occupies space in the phonological loop. By making some
| data about code chromatic/visual, we can hold that in our heads
| more easily.
|
| [1] https://en.m.wikipedia.org/wiki/Working_memory
| dustinmoris wrote:
| > Inconvenient knowledge is better than convenient ignorance
|
| Can't disagree with this statement more. I've worked with a
| coloured language for a very long time (C#) and also recently
| worked a lot with Go and both have huge advantages and
| disadvantages. Go is not conveniently ignorant. The green thread
| model which Go uses is actually a very elegant solution where one
| can write simple code without leaking abstractions down the stack
| as happens with red and blue functions. It's not
| ignorance. It's focus! Only the function which deals with async
| code has to know and think about it. Calling code doesn't need to
| adapt itself to the inner workings of another function which it
| has no control of. I quite like it and don't see any particular
| problem or traps with it. Quite the opposite, as a very
| experienced C# developer I see a lot of less experienced
| developers make fatal mistakes with async/await as it's so
| easy to write code which can end up in a deadlock or exhaust
| resources and run into weird issues which only a person with a
| lot of experience would be able to spot and fix. I think Go's
| model is much more tolerant and forgiving to beginner mistakes
| which means that writing fatal code tends to be a lot harder and
| almost impossible if not done intentionally.
| willtim wrote:
| > I quite like it and don't see any particular problem or traps
| with it.
|
| While Go's model is certainly a valid point in the design space
| and keeps things nice and simple, there are of course going to
| be many trade-offs. For example, low-level control is very much
| lost. There are no opportunities for custom schedulers,
| optimising how non-blocking calls are made, or easily dealing
| with blocking syscalls (or foreign libraries that call them).
| It would not be the right approach for Rust.
| iovrthoughtthis wrote:
| Text-based languages have two options for providing metadata to
| declared constructs.
|
| 1. syntax (explicit)
|
| 2. context (implicit)
|
| I can't wait until we have some other dimensions to play with.
| Bring me the weird text interface to an RDF graph / syntax tree
| database plz.
| pron wrote:
| There are two problems with this:
|
| 1. Languages with "coloured functions" usually _don 't_ inform
| you of effects (except for Haskell and languages like it). They
| allow expressing equivalent semantics with the other colour,
| which doesn't distinguish effects.
|
| 2. Expressing anything in the type system always has a cost. That
| it _can_ express something doesn't mean that it should. The only
| way to know whether expressing this difference is worthwhile --
| assuming you could express it (which, again, most languages with
| coloured functions _don't_) -- is to empirically show the
| benefits outweigh the cost,
| not just hypothesise that they do, which just amounts to "I like
| it this way." "I like it" is perfectly fine, but it's not the
| same as "it's a good thing."
| quelltext wrote:
| > Most in the tech world _are now familiar with the post
| describing red/blue functions_ and their downsides, especially
| in the context of asynchrony.
|
| We are?
| agumonkey wrote:
| I assumed you also knew the COBOL Object standard and the
| difference between the first and second Ada proposals.
| rollulus wrote:
| That sentence triggered my impostor syndrome symptoms, which I
| felt I was getting under control lately.
| marton78 wrote:
| There! An impostor! Seize him!
| brailsafe wrote:
| Had the same thought. I've never heard the term coloured
| functions, and I was certainly browsing HN in 2015.
| ghengeveld wrote:
| Despite being in the industry for a decade and authoring a
| library for async operations, I had never heard of this before.
| toopok4k3 wrote:
| What, you don't breathe this magical glitter dust in and out
| every day?
|
| I feel old reading these kinds of blog posts that assume that
| we all know what the latest unicorn has been spewing out of
| its butt. It's like I'm missing out on the greatest debate.
| On dung.
|
| I'm sure it's valuable debate for those in the know. But this
| is just how it translates to some of us.
| coldtea wrote:
| This is not about some unicorn company, or some far-fetched
| concept.
|
| Async/await has been in the discussion for most popular
| languages (C# and JS first for the masses, Python, Rust,
| Java, and others) for a decade now.
|
| And the colored functions analogy was a very popular post
| (and was subsequently referenced and discussed in lots of
| others).
|
| Besides, HN is not some website about just knowing the core
| concepts used in the trade. There's Hacker Noon and others
| for that. HN has always been all about these kind of intra-
| group discussions.
| saurik wrote:
| Right: there is a decade of discussion about this one
| narrow instantiation of a feature that is itself a mere
| part of the past fifty years of discussion of the general
| class of thing it is from--coroutines--and an infinite
| amount of academic material on both that concept and
| monads--which is the correct mental abstraction to have
| here--and so the idea that I _must_ have read this one
| random recent blog post (one which I honestly had heard
| about in passing, but without any belief that it was
| important to anyone or that there was even one specific
| one that mattered) that was written by someone who
| probably, ironically, _hasn't_ read all of those
| actually-seminal papers is more than a bit frustrating.
|
| (edit) Really, everyone should probably start by reading
| the 1978 papers "on the duality of operating system
| structures" and "communicating sequential processes",
| which, when combined with the appropriate background in
| the continuation passing style compiler transform (which
| I am pretty sure was also developed around the same time)
| could have hopefully prevented Ryan Dahl from
| accidentally causing the callback-hell "dark age" of
| software engineering from ever happening in the first
| place ;P.
|
| https://cs.oberlin.edu/~ctaylor/classes/341F2012/lauer78.pdf
|
| https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf
| coldtea wrote:
| > _that was written by someone who probably, ironically,
| hasn't read all of those actually-seminal papers is more
| than a bit frustrating._
|
| For completeness and curiosity maybe. Otherwise one
| doesn't have to read the "actually-seminal papers" if
| they already know the concepts from the 40 to 20+ years
| that followed.
|
| Do physicists need to read the original Einstein or
| Maxwell if they had read tomes of subsequent course and
| academic books on the subject, plus modern papers for the
| later developments?
|
| In any case, I'm pretty sure the author of that post [1]
| had read at least the CSP papers -- he works on the Dart
| language team, and has written Game Programming Patterns
| and Crafting Interpreters, both quite popular books,
| which have been discussed (as in first page) more than
| 3-4 times in HN in the past years.
|
| [1] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-....
| saurik wrote:
| I would (and in fact kind of do) agree with you... but
| for the assumption that we have all read this one
| particular article; like, the premise here isn't that
| "this is the specific literature you must read", but "why
| assume I read that when there is so much less to have
| read? here is what _I_ would have said you should have
| read, and I can't imagine you did!".
|
| I guess maybe the issue in your comment is that my
| complex sentence with "is more than a bit frustrating"
| was mis-parsed, as you left off the direct referent of it
| and then seem to be arguing back at me with the same
| point I was trying to make with some emphatic irony (I
| was really hoping to provide three papers _all_ from
| 1978, but there wasn't a perfect one on
| continuations)... to re-architect the center a bit:
|
| > ...the idea that I must have read this one random
| recent blog post [--when I can't imagine this person has
| read the specific literature _I_ think everyone should
| have read, as there was an infinite amount of it and a
| lot of it is equivalent to the rest: the only thing that
| makes this blog post exciting is almost certainly some
| kind of recency bias--] is more than a bit frustrating.
| OskarS wrote:
| If you're not, here's the link [0]. It's a pretty influential
| post; it introduced a whole new terminology of "colored
| functions" into tech jargon that's starting to become
| reasonably commonly used. You should familiarize yourself with
| it if you haven't already.
|
| The point the original post is making is that languages that
| support asynchrony using callbacks, promises or async/await
| have split the language in two: the "synchronous" and
| "asynchronous" parts, and reconciling them is a pain. The only
| languages which avoid this split comfortably are Go and a few
| other more niche ones (notably BEAM languages like Erlang and
| Elixir).
|
| This post makes the counter-argument that it's actually a good
| thing that functions are "colored" differently, as synchronous
| and asynchronous functions have different execution models, and
| that should be reflected in the language.
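|
| (For concreteness, a minimal Rust sketch of the split, with
| made-up fetch/greeting functions and assuming the futures
| crate for block_on:)
|
|     // "red": returns a Future instead of a String
|     async fn fetch_user() -> String {
|         "alice".to_string()
|     }
|
|     // "blue": plain synchronous code
|     fn greeting(name: &str) -> String {
|         format!("hello, {name}")
|     }
|
|     fn sync_caller() {
|         // greeting(&fetch_user());  // won't compile: a Future isn't a String
|         let user = futures::executor::block_on(fetch_user());
|         println!("{}", greeting(&user));
|     }
|
|     async fn async_caller() {
|         let user = fetch_user().await; // the async-side bridge
|         println!("{}", greeting(&user));
|     }
|
|     fn main() {
|         sync_caller();
|         futures::executor::block_on(async_caller());
|     }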
|
| [0]: https://journal.stuffwithstuff.com/2015/02/01/what-color-
| is-...
| kaba0 wrote:
| Just a note, Java will also avoid function coloring very soon
| with Project Loom.
| closeparen wrote:
| I've been hearing this for what feels like 10 years. Are
| they close to upstreaming now?
| kaba0 wrote:
| It started in late 2017.
|
| It already works well; you can try it on the project's
| branch. But there is no official timeline: it will be
| ready when it's ready.
| esfandia wrote:
| The original red/blue article said that Java /didn't/ have
| function coloring, and that's because it didn't rely on
| callbacks but rather on threads. Project Loom may in fact
| be introducing function coloring if I understood the
| article correctly.
| vips7L wrote:
| No, Loom will not be introducing it. It's exactly what the
| team DOES NOT want. pron even stated it in his "State of
| Loom" article.
|
| "Project Loom intends to eliminate the frustrating
| tradeoff between efficiently running concurrent programs
| and efficiently writing, maintaining and observing them."
| [0]
|
| http://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.h...
| _old_dude_ wrote:
| For sync vs async, yes!
|
| But Java still has checked exceptions, which are also a form
| of coloring.
| [deleted]
| kaba0 wrote:
| Yeah, you are right! Though I think not every kind of
| function coloring is necessarily bad, and while Java's
| implementation of checked exceptions has some edge cases,
| I think the idea itself is sound. It's better to have it
| supported at the language level than to recreate it with
| some meta-language in terms of Maybe/Result types, which
| lose extra-linguistic information like stack traces (yeah,
| I'm sure they can be included, but that should be the
| default behavior).
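|
| (For comparison, the Result-style version of this in Rust
| looks roughly like the sketch below; the error type "colors"
| the signature much like `throws` does, but a plain Result
| carries no stack trace unless you attach one yourself:)
|
|     use std::{fs, io};
|
|     fn read_config() -> Result<String, io::Error> {
|         fs::read_to_string("config.toml") // error is part of the signature
|     }
|
|     fn startup() -> Result<(), io::Error> {
|         let cfg = read_config()?; // `?` propagates, like a declared `throws`
|         println!("{} bytes of config", cfg.len());
|         Ok(())
|     }
|
|     fn main() {
|         if let Err(e) = startup() {
|             eprintln!("failed to start: {e}"); // no backtrace by default
|         }
|     }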
| _old_dude_ wrote:
| You can have exceptions like in Erlang, OCaml, C# or
| Kotlin, which all keep the stacktrace without having
| checked exceptions.
|
| If you take a look at the history of Java, checked
| exceptions were there since the beginning but were
| weakened several times: first by the introduction of
| generics+inference (a type parameter can not represent a
| union of exceptions), then by the introduction of lambdas
| - the types of lambdas (the functional interfaces in Java
| lingo) used by the standard lib do not declare checked
| exceptions.
|
| It's so bad that IOException, the king of checked
| exceptions, has two unchecked duals defined by default,
| UncheckedIOException and IOError, which are raised
| seemingly at random by newer APIs like java.nio.file.Files
| that do not want to deal with the complexity of checked
| exceptions.
|
| Java has a checked exception problem, but it does not want
| to recognize it.
|
| BTW, there is another coloring in Java which is primitive
| vs object, but at least this one is recognized as an
| issue (see OpenJDK project Valhalla).
| rvanmil wrote:
| Thank you for the clear and concise explanation. That blog
| post was incredibly annoying to read and imo utterly fails to
| address an otherwise interesting point.
| quelltext wrote:
| Sorry, I feel bad that you had to write this. It's of course
| on me to learn about the linked concepts if I want to follow
| the post.
|
| I was just put off from doing so because of the author's
| assumption that everyone in tech is on the same page, when
| this isn't necessarily the case. Many or even most folks "in
| tech" give little thought to PL design, at least not in the
| way that would have them read that article, however
| influential it might have been to _some_ folks in tech who
| were into these discussions 6 years ago.
| rocqua wrote:
| You should not at all feel bad!
|
| This way, others in a similar situation to you get to
| benefit from GP's explanation. Moreover, GP got to interact
| with others on this concept.
|
| I'd also wager it took less effort for GP to write this,
| and then for you to read it, than it would have taken you
| to do all of it yourself.
|
| Point being, your question led to a very useful answer, and
| you should be proud, not sorry.
| TeMPOraL wrote:
| I for one have read that article twice already (once a few
| years ago, a second time today), and I still gained by
| reading GGP's comment - reading someone else's summary of
| a topic is a way of quickly validating your own
| understanding.
| cesarb wrote:
| As xkcd explains, you are one of the 10000:
| https://xkcd.com/1053/ so you shouldn't feel bad about it.
| booleandilemma wrote:
| Don't feel bad, only in tech do we start using a whole new
| terminology from some random dude's blog post.
| OskarS wrote:
| Yeah, don't worry about it. People learn new things all the
| time, I didn't mind writing the comment :)
| wccrawford wrote:
| I sincerely hope people don't start using just one of the
| color names at a time. If someone says to me, "That's a red
| function" I'm going to stop them every single time and ask
| them which one was red again. It's much clearer and _just as
| easy_ to say "that's an async function".
|
| I think the post was great for making people think about this
| in an abstract way, but I don't think it was meant to reach
| beyond that mental exercise.
| ehnto wrote:
| So abstract that it took a few paragraphs for me to realize
| they were talking about a problem I'm intimately familiar
| with, and already have a pretty decent lexicon for.
| munificent wrote:
| Yes, that was a deliberate rhetorical trick. It works
| for some audiences, but obviously not all of them.
|
| At the time I wrote the original article there were a lot
| of people really excited about the _performance_ of async
| IO in Node (and JS in general) who were implicitly
| accepting the async programming style that went along
| with it. They liked the former so much that it muddled
| their perspective on the latter. Sort of like when you go
| to a shitty diner after an amazing concert and recall the
| meal as amazing, when really it's just because it got
| wrapped up in the emotional experience of the entire
| evening.
|
| I had many, many discussions with people where I tried to
| talk about how the async programming model is really
| cumbersome, and their response was "but it makes the app
| go faster!" We were talking at totally different levels.
|
| It can be really hard to get people to have a clear
| perspective on something they have an emotional
| attachment to. By making up a fake language and a made-up
| concept of function color, it let me show them "hey
| here's this thing that kind of sucks" without triggering
| their "but I love async IO in Node!" response.
|
| I had no idea the article would catch on like it did.
| Something I find interesting about it in retrospect is
| that _because_ it is kind of strange and abstract,
| readers take very different things away from it _while
| assuming their interpretation is exactly the one I had in
| mind._ Some folks (like the author here) think it's
| about effects and static types (even though there is
| nothing about types at all in the article). Some think
| it's a love letter to Go (a language I'm not particularly
| enthused about in general).
|
| Really, I just wanted to talk about the unpleasant user
| experience of a certain programming style without getting
| hung up on what underlying behavior the style is attached
| to.
| ghoward wrote:
| Can you put a link to your comment in your original blog
| post? I am really happy to hear from you what you meant,
| and adding that to the original might help the message.
|
| Also, I was one of the readers that managed to interpret
| your original basically the way you meant it. (One thing
| that helped is that I had previously read your criticisms
| of Go.)
|
| I am curious though: what do you think about structured
| concurrency? It was your post that made me search for a
| better solution than async/await, and I am of the opinion
| that structured concurrency might be better. Am I wrong?
| munificent wrote:
| _> Can you put a link to your comment in your original
| blog post?_
|
| I could, but I'm never sure how much work I want to
| invest in maintaining and updating old content. I try to
| focus most of my (limited) time on making new stuff and,
| even though I'm generally a perfectionist, I think it's
| generally best to just keep moving forward.
|
| _> I am curious though: what do you think about
| structured concurrency?_
|
| I'm not familiar with it, sorry. My overall impression is
| that there is no silver bullet when it comes to
| concurrency. Humans aren't very good at thinking
| concurrently, so there's always some level of cognitive
| impedance.
| simiones wrote:
| Is there an alternative word/concept to "function color"
| that covers not just sync/async, but also (C++-style)
| const/non-const functions for example?
| OskarS wrote:
| I don't see people using the specific colors very often,
| it's more like "that function is colored differently" or
| "this language doesn't have colored functions". When used
| like that, it's a really useful metaphor, it really
| clarifies what you mean.
| watwut wrote:
| I have to say, I don't find it clarifying at all. More
| like hiding actual information behind a term that does not
| even hint at what it could mean.
| jhayward wrote:
| I still have that reaction every time I hear someone refer
| to 'green threads'.
| rob74 wrote:
| @sync: #00f; @async: #f00;
| GrumpyNl wrote:
| So we're gonna replace sync and async with colors, what's next?
| mseepgood wrote:
| Each function gets a spin, charge and mass.
| zwaps wrote:
| Texture, smell, taste? Why not?
|
| This function here is a rough function, that function
| there is smooth. This function yonder is damp, that one is
| dry. Careful, you are dealing with a sour function!
|
| Wait, let me open up my blog
| simias wrote:
| I think the idea is to make the concept more general (it's
| not about blocking/async, it's about adding an additional
| concept into the language) but then the article is
| specifically about sync/async so I agree that it's just
| confusing.
| [deleted]
| [deleted]
| zaphar wrote:
| The colors are important if you are going to do async. They tell
| you significant details about the functions you are using. The
| issues most people have with colored functions aren't that they
| have colors. It's that the colors don't compose. The
| languages that deal the best with this pick a single color and
| commit to it. Go and any BEAM language make every function async
| effectively and handle the coordination for you. They succeed
| because their runtimes will do a better job than you will 99% of
| the time. For the rest they give you some tools to reason about
| it.
|
| Rust, however, doesn't come with its own runtime; it's meant for
| the kind of development where you are building your own runtime
| or bringing one. For that reason the colors (i.e. types) are
| surfacing _all_ of that complexity for you and helping you handle
| it.
|
| The bigger problem as I see it is that the rush for async has
| left a lot of crates choosing either sync or async only, and so
| finding the crate that fits your particular use case can be hit
| or miss. Half the db libraries may be sync only and half of the http
| libraries may be async only and then you have to figure out how
| to marry the two. If you can live in one or the other world only
| you are fine. It's when you have to live in both simultaneously
| that things get hairy.
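|
| (A minimal sketch of that marriage, with hypothetical
| query/post functions and assuming the tokio runtime: sync
| work goes to the blocking thread pool, async work is awaited
| as usual:)
|
|     // stand-in for a sync-only db crate
|     fn query_db_blocking(id: u32) -> String {
|         format!("row {id}")
|     }
|
|     // stand-in for an async-only http crate
|     async fn post_http(body: String) -> usize {
|         body.len()
|     }
|
|     #[tokio::main]
|     async fn main() {
|         // calling the blocking function directly would stall the
|         // executor, so hand it to the blocking pool instead
|         let row = tokio::task::spawn_blocking(|| query_db_blocking(42))
|             .await
|             .unwrap();
|         let status = post_http(row).await;
|         println!("sent {status} bytes");
|     }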
| hnedeotes wrote:
| For someone complaining about sloppiness...
|
| Don't know about Go, but in Elixir, "We have to use Task.await"
| is misleading: you can use it if you want exactly those
| semantics, but just because it has "await" and "task" in its
| name doesn't mean you have to. Although simple in usage, it's
| quite a high-level abstraction inside the BEAM - the base unit
| is the "process", and you can build any sort of asynchronous or
| synchronous flow on top of that.
|
| But even with Tasks: they can be called inside other processes,
| and then instead of having to wait for all tasks to finish,
| they'll be delivered one by one to that process's mailbox and you
| can decide at every point of execution what to do. Of course, if
| you're running your code as a sequential block... you'll have
| to wait until they sequentially finish.
| clktmr wrote:
| The example screams "Premature Optimization".
|
| What if the fetch_*() functions weren't async (e.g. because they
| do CPU-heavy work)? The Rust code would look just like the Go
| code, except you'd now need wrapper code to make them async.
| mlatu wrote:
| Previously on HN:
|
| https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
|
| This author claims there shouldn't be red and blue functions at
| all; instead, concurrent paths should be handled by a Nursery
| object, so that no background task can ever be dropped by the
| runtime, but always returns to the nursery.
| cube2222 wrote:
| Being used to Go, I must say colorless, always implicitly async
| functions everywhere are probably the main thing I miss whenever
| I write in a different language.
|
| It really is liberating when you can write plain,
| straightforward, imperative, blocking code, and spawn almost-free
| async tasks wherever you might want to.
|
| Having experienced a plethora of more-or-less functional monadic
| approaches, Python with its GIL, and Rust's async approach (for
| which I understand the trade-offs were necessary), I find Go's
| model makes life _so much_ easier.
|
| People bash Go for staying in the Stone Age with respect to some
| features, but this is why I always really feel like I'm going
| back to the Stone Age when writing in other programming
| languages.
| kaba0 wrote:
| Rust pretty much had to add function coloring because of its
| low-level nature. Other languages can get away without it due to
| having a runtime, notably Go, and with the coming Project Loom,
| Java. For higher-level languages, I do believe that coloring is
| not a useful concept, similarly to how you use a GC instead of
| manual memory management. The runtime can "hijack" calls to IO or
| other blocking functions, park a virtual thread, and resume
| execution afterwards automatically. There is no useful
| information in coloring in this case.
| pas wrote:
| Python has a runtime too, but it still did async in a half-
| assed way :/
| kaba0 wrote:
| Well, Python is very dependent on C FFI, which may make it
| harder.
| _old_dude_ wrote:
| It's only an issue when you have a C callback that calls
| Python code, because you have C code in the middle of your
| stack and thus you can not move the stack frames back and
| forth (because the C code can reference stack addresses).
|
| I may be wrong, but most of the C calls in Python do not
| have callbacks, and you can pin the coroutine to the kernel
| thread if there is a callback.
| coder543 wrote:
| Zig is just as low level as Rust, AFAICT, and it appears to
| have avoided this: https://kristoff.it/blog/zig-colorblind-
| async-await/
| ItsMonkk wrote:
| The link to hexagonal architecture at the bottom is an important
| one.
|
| Colored functions are a good thing. In fact, just splitting based
| on red/blue for sync/async doesn't go far enough. You also need
| to split based on pure/impure. Many times async functions are
| impure, but the reverse (sync being pure) is not at all
| guaranteed, and the distinction is there.
|
| The colored pain point that the original post pointed out IS a
| good thing: if you don't have a direct dependency between the
| code that ran before the await and the code after the await, they
| should not be nested into the same layer. The thing that I think
| I've come to understand is that the typical call stack has way
| too many layers. Hexagonal Architecture, together with tracking
| async/sync and pure/impure, leads to a pipeline of many functions
| at the top of the module, and this leads to better code.
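|
| (A minimal sketch of that split, with made-up Order/Summary
| types and assuming tokio: the pure core is a plain function,
| and the impure/async edge sits at the top of the module:)
|
|     #[derive(Debug)]
|     struct Order { total_cents: u64 }
|
|     #[derive(Debug)]
|     struct Summary { count: usize, total_cents: u64 }
|
|     // pure and sync: trivially testable, no color at all
|     fn summarize(orders: &[Order]) -> Summary {
|         Summary {
|             count: orders.len(),
|             total_cents: orders.iter().map(|o| o.total_cents).sum(),
|         }
|     }
|
|     // impure and async: the only place that touches IO
|     async fn fetch_orders() -> Vec<Order> {
|         // stand-in for a real db/network call
|         vec![Order { total_cents: 1200 }, Order { total_cents: 800 }]
|     }
|
|     #[tokio::main]
|     async fn main() {
|         let orders = fetch_orders().await;
|         println!("{:?}", summarize(&orders));
|     }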
| bluetomcat wrote:
| You never have such issues in well-designed programs with a main
| event loop where all possible events are waited on. Your program
| is either idle, or woken up by a set of events (any kind of IO, a
| timeout, or a signal from another thread). You handle all events
| by dispatching them in a timely manner to the responsible code,
| and wait again. Of course you need non-blocking APIs or dedicated
| threads for the IO and compute-intensive stuff.
|
| The whole async/await thing is just a convenience feature that
| looks great but distracts you from thinking about the true
| concurrency of your program.
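|
| (A minimal sketch of such a loop in Rust, with a made-up
| Event type and std channels standing in for the real event
| sources:)
|
|     use std::sync::mpsc;
|     use std::thread;
|     use std::time::Duration;
|
|     enum Event {
|         Io(String),
|         Timeout,
|         Quit,
|     }
|
|     fn main() {
|         let (tx, rx) = mpsc::channel();
|
|         // dedicated threads (or non-blocking APIs) feed events in
|         let timer_tx = tx.clone();
|         thread::spawn(move || {
|             thread::sleep(Duration::from_millis(100));
|             let _ = timer_tx.send(Event::Timeout);
|         });
|         thread::spawn(move || {
|             let _ = tx.send(Event::Io("bytes arrived".into()));
|             let _ = tx.send(Event::Quit);
|         });
|
|         // the main loop: idle in recv() until something happens,
|         // then dispatch to the responsible code and wait again
|         while let Ok(event) = rx.recv() {
|             match event {
|                 Event::Io(data) => println!("handle IO: {data}"),
|                 Event::Timeout => println!("handle timeout"),
|                 Event::Quit => break,
|             }
|         }
|     }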
| chriswarbo wrote:
| > well-designed programs with a main event loop
|
| This is a _massive_ over-generalisation. In particular, you're
| making a huge assumption that looping is _possible_, leaping
| from there to having loops be _desirable_, then pole-vaulting
| to claim that anything _not_ architected entirely around
| looping is "not well-designed".
|
| Based on this generalisation, I'm going to guess your
| experience is limited to languages which (a) have some sort of
| looping construct built-in (e.g. a `for` or `while` keyword),
| (b) do not eliminate tail-calls and (c) either don't support,
| or discourage (perhaps until recently) higher-order
| programming. This would include languages like C/C++/Go,
| Python/PHP/Perl, Java/C#, etc.
|
| In contrast, I consider looping to be a problematic, error-
| prone construct in general (for reasons outlined at
| http://lambda-the-ultimate.org/node/4683#comment-74302 ).
| Recursion is much safer, in languages which eliminate tail-
| calls.
|
| I also consider "main loops" to be an anti-pattern (for reasons
| outlined at http://disq.us/p/kk1da6 ). They break all sorts of
| modularity/encapsulation; force bizarre control flow patterns
| (e.g. lots of highly non-local 'returns', AKA unlabelled
| GOTOs); force us to make arbitrary (yet _highly_ impactful)
| decisions between 'carrying on' or 'trampolining off main'
| (these decisions are often hard to change after-the-fact,
| regardless of their suitability); require lots of boilerplate
| to marshal and manage intermediate values which, by their
| nature, must conflate all sorts of temporary values from the
| entire codebase into amalgamated blobs; and so on.
|
| Rather than shoehorning important logic and decisions into a
| global, manually-kludged tail-call trampoline (AKA a 'main
| loop'); I find it _far_ easier to write, reason-about and debug
| simple code which performs a 'step' then tail-calls to the
| next.
|
| As far as concurrency goes, I would claim that in "well-designed
| programs" with control following data flow, it's usually pretty
| trivial to spin off parts as Futures/etc. as needed. A type
| system will tell us where to place the necessary calls to
| "map", etc.
| silon42 wrote:
| +1 on everything about main loops... but I'd prefer to have my
| stack traces instead of tail recursion.
| logicchains wrote:
| >In contrast, I consider looping to be a problematic, error-
| prone construct in general (for reasons outlined at
| http://lambda-the-ultimate.org/node/4683#comment-74302 ).
| Recursion is much safer, in languages which eliminate tail-
| calls.
|
| A "main loop" doesn't necessarily imply an actual loop. A
| game written in Haskell or Scheme probably has a main loop
| implemented via recursion, but it's still called a main loop.
| "Main tail-calling function" sounds somewhat ugly.
|
| >Far better to write functions which perform a 'step' then
| tail-call the next function.
|
| This sounds like a recipe for spaghetti. The "main loop"
| approach is much more conducive to the clean approach of
| "functional core, imperative shell" (https://www.destroyallso
| ftware.com/screencasts/catalog/funct...), where IO,
| asynchrony and the like are cleanly separated out from
| business logic.
| chriswarbo wrote:
| > A game written in Haskell or Scheme probably has a main
| loop implemented via recursion, but it's still called a
| main loop
|
| If such a 'main loop' is a single function which (a)
| dispatches to one of a bunch of 'handlers' then (b) calls
| itself recursively, I would still call it a 'main-loop
| anti-pattern'.
|
| More formally, it's likely to be in "defunctionalised" form
| https://blog.sigplan.org/2019/12/30/defunctionalization-
| ever...
|
| The problem with such 'main loops', whether actual loops or
| self-recursive functions, is that control flow has to keep
| doubling-back to this 'central dispatcher'. In the case of
| _actual_ 'main loops' this is often an unavoidable
| consequence of lacking tail-call elimination. In languages
| _with_ tail-call elimination, we're just over-complicating
| the problem.
|
| > This sounds like a recipe for spaghetti.
|
| The control flow for a 'main loop' program goes like this:
|
|     main function    fooHandler    barHandler
|          |
|          +------call------->|
|          |<-----return------+
|          |
|          +------call----------------------->|
|          |<-----return----------------------+
|          |
|         ...
|
| If we have tail-call elimination, the handlers can just
| call the 'main loop' instead of returning (this is just a
| CPS transformation), e.g.:
|
|     main function   fooHandler   main function   barHandler   main function
|          |
|          +----call----->|
|                         +-----call----->|
|                                         +------call----->|
|                                                           +----call----->|
|                                                                          ...
|
| Even this simple transformation looks less like spaghetti
| to me. Calls to this 'main function' now have much more
| context, compared to "top level" calls to itself. Hence we
| can avoid special-cases and handler-specific checks
| migrating their way into the 'main function'; even if we
| want such things, we don't need to pollute the 'main
| function' itself (or defunctionalised intermediate data)
| with such cruft; we could just pass such things as extra
| arguments, from those call-sites which need them. Tugging
| at this thread, we'll often find that _most_ of the 'main
| function' could be moved to its callers, and any remaining
| functionality may be better in a generic 'plumbing'
| operation like a monadic bind (the latter being a
| "programmable semicolon" after all ;) ).
|
| > The "main loop" approach is much more conducive to the
| clean approach of "functional core, imperative shell"
|
| I don't see why. The "functional core, imperative shell"
| approach is mostly about writing as much as possible using
| pure functions, then 'lifting' them to the required
| effectful context (via map, join/>>=, etc.); in contrast to
| the common approach of writing everything within the
| required context to begin with (e.g. making _everything_
| "IO" just because we need one 'printLn' somewhere).
|
| That doesn't require manually writing a global "main loop".
| For example I like to use polysemy, where all the business
| logic can remain pure, and the `main` function just
| declares how to 'lift' the application to `IO`. Under the
| hood it uses a free monad, which dispatches to 'handlers';
| but (a) that's all automatically managed, and (b) it
| recurses through calls spread across the application,
| rather than returning to a self-calling "main function".
| lmm wrote:
| > Even this simple transformation looks less like
| spaghetti to me. Calls to this 'main function' now have
| much more context, compared to "top level" calls to
| itself.
|
| That's backwards IMO. The first example is structured,
| hierarchical, organised. The second example is spaghetti
| because there's no difference between returning and
| calling, so all you see is a bunch of functions connected
| to a bunch of other functions.
|
| > Hence we can avoid special-cases and handler-specific
| checks migrating their way into the 'main function'; even
| if we want such things, we don't need to pollute the
| 'main function' itself (or defunctionalised intermediate
| data) with such cruft;
|
| It's the opposite of cruft; the whole point of
| defunctionalising is that you replace invisible control
| flow with a reified datastructure that you can actually
| inspect and understand. You can properly separate the
| concerns of "figure out what state we're in" and
| "actually execute the things that need to happen now"
| rather than sprinkling in special-case tests at random
| points in your execution.
| zozbot234 wrote:
| Async/await notation is for writing the "dispatch"-based program
| you describe as straight-line code. It is a convenience feature,
| but essentially just syntactic sugar.
| kbd wrote:
| I really can't agree with this premise after seeing how Zig
| implements color-less async/await:
|
| https://kristoff.it/blog/zig-colorblind-async-await/
|
| I highly recommend watching Andrew Kelley's video (linked in that
| article) on this topic:
|
| https://youtu.be/zeLToGnjIUM
| valenterry wrote:
| Zig also uses colored functions here, but introduces a mode to
| globally(?) block on every async call. While you can certainly
| do that, I'm not sure if that is a great language feature to
| have, to be honest - at least if I don't misunderstand their
| idea.
| kbd wrote:
| I think you're confusing the io_mode with async/await.
| cygx wrote:
| In Rust, you color functions at declaration time through
| special syntax, and, depending on the color of the caller,
| you have to use either block_on() or .await to get a value
| out of a call to an async function.
|
| That's not the case for Zig. There's no special decorator, no
| block_on/await distinction, and regular calls will return the
| expected value by implicitly awaiting (in case evented mode
| is enabled). A callsite may decide not to implicitly await
| the result by using the async keyword, which then requires an
| explicit await on the returned promise.
|
| _edit:_ fix caller /callee typo
| valenterry wrote:
| Well yeah - you can do exactly the same in other languages,
| such as Scala. It just isn't a good idea. They explain it
| themselves:
|
| > there's plenty of applications that need evented I/O to
| behave correctly. Switching to blocking mode, without
| introducing any change, might cause them to deadlock
| dnautics wrote:
| zig doesn't switch. It's statically typed, and I think
| (not 100% sure) it detects if you have called a function
| with the async keyword and when that happens it triggers
| compiling a second version of the function with async
| support, so there are actually two versions of the
| function floating around in your binary. Since zig is
| itself a parsimonious compiler, if that's never in a
| codepath (say it's provided by a library), it doesn't get
| compiled. The reverse is true, too, if you ONLY call a
| function with the async keyword, it doesn't compile the
| non-async version. Again, with the caveat that I think
| that's what's going on.
| dnautics wrote:
| Correction: my understanding is incorrect, zig does not
| compile two versions of the functions. Nonetheless, it
| does not switch.
| valenterry wrote:
| Well, I'm quoting their own explanation...
| whizzter wrote:
| Zig seems to be doing a lot of things right (just like Rust
| did).
|
| Sadly, I think that their disregard for safety (the manual
| states that dangling pointers are the developer's problem) kinda
| makes it a non-starter for many people. I personally consider a
| possible information breach far worse than a crash or runtime
| stall.
| dnautics wrote:
| > Sadly I think that their disregard for safety
|
| I wouldn't call it a disregard. Zig's approach is to give you
| easy and automatic tools to detect and handle most of those
| memory safety problems in its language-first-class test
| system, which you can think of as a carrot to get you to
| write tests.
| kbd wrote:
| > disregard for safety
|
| Your opinion seems to be that any future systems language
| that doesn't implement a heavy Rust-style borrow checker and
| explicit safe/unsafe modes "disregards safety"?
|
| Zig does have a lot of checks, such as bounds checking[1].
| There are also test modes that help catch errors. I don't
| know what you're referring to about "information breach".
|
| > The manual states that dangling pointers is the developers
| problem...
|
| In a systems language where you can cast an int to a pointer:
| const ptr = @intToPtr(*i32, 0xdeadbee0);
|
| or where you have to manually manage memory, what else do you
| expect?
|
| Zig made the design choice to not split the world into safe
| vs unsafe. It seems a bit unwarranted to say that because
| they didn't make the same design choices Rust did that they
| have a "disregard for safety".
|
| [1] https://ziglang.org/documentation/master/#Index-out-of-
| Bound...
| avodonosov wrote:
| If the author likes to see an explicit `async` label on
| functions, he misses the pleasure of having `async` on every
| function.
|
|     a = x + z; b = y + x;
|
| That's concurrency lying on the table - refactor your code to
| run those two lines in parallel.
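|
| (A sketch with scoped threads - only worthwhile when the two
| computations are big enough to dwarf the spawning overhead:)
|
|     fn main() {
|         let (x, y, z) = (1_u64, 2_u64, 3_u64);
|         let (a, b) = std::thread::scope(|s| {
|             let ha = s.spawn(|| x + z); // imagine expensive work here
|             let hb = s.spawn(|| y + x);
|             (ha.join().unwrap(), hb.join().unwrap())
|         });
|         println!("a = {a}, b = {b}");
|     }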
| TheCoelacanth wrote:
| That's an entirely different kind of concurrency. Those
| statements are just going to run on the CPU. It's only faster
| to do them in parallel if you have an extra CPU core lying
| around and if the overhead of splitting it across CPUs doesn't
| outweigh the time saving of doing both at once (it almost never
| does for such a small piece of work).
|
| With async functions, the CPU isn't actually doing anything.
| It's just waiting for something to happen, like a network
| response from another machine. It's almost always faster to do
| them in parallel if possible.
| avodonosov wrote:
| > for such a small piece of work
|
| Don't take the example too literally, some function calls can
| be here.
|
| Running computations in parallel is often valuable. Or run
| computations in parallel with waiting for an external resource -
| why doesn't the code in the article compute something while
| waiting for a, b and c?
|
| Anyways, if async functions are so good, why not have all
| functions async?
|
| The article says this is a kind of "documentation" that tells
| you which functions can wait for some external data and which
| functions are "pure computation". If that were so, it would be
| OK. Such documentation could be computed automatically
| based on the called functions' implementations, and the
| developer could be hinted: "these two functions you call are
| both async, consider waiting for both in parallel". In reality,
| the async / await implementations prevent non-async functions
| from becoming async without changing the calling code. This
| restriction is just a limitation of how async / await is
| implemented, not something useful.
|
| As other commenter says, the article "embraces a defect
| introduced for BC reasons as if it's sound engineering. It
| really isn't."
|
| When my code is called by a 3rd party library, I can not
| change my code to async. That's the most unpleasant property
| of today's async / await. What yesterday was quick
| computation tomorrow can become a network call. For example,
| I may want the bodies of rarely used functions to only load
| when called first time (https://github.com/avodonosov/pocl).
|
| The article suggests we have to decide upfront, at the top
| level of the application / call stack, which parts can be
| implemented as waiting blocks and which should never
| wait for anything external. This is not practical.
|
| > It's almost always faster to do them in parallel if
| possible.
|
| Yes, if possible. And it is determined, in particular, by
| dependencies between the operations. If the first call is
| writing to DB and the second call is a query that reads the
| same DB cell, they can't be parallelized. The same is with in
| memory computations - if next steps do not depend on previous
| steps they can be parallelized. Hopefully, compilers of the
| future will be able to automatically parallelize
| computations.
| TheCoelacanth wrote:
| > Anyways, if async functions are so good, why not have all
| functions async?
|
| They aren't. They're only good when the primary thing that
| the function is doing is waiting.
|
| > When my code is called by a 3rd party library, I can not
| change my code to async. That's the most unpleasant
| property of today's async / await. What yesterday was quick
| computation tomorrow can become a network call.
|
| You can still do that. Network calls don't have to be
| async. Blocking network calls still exist, just as they did
| before async was created.
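|
| (e.g. in Rust a blocking call needs no async color at all;
| a rough std-only sketch:)
|
|     use std::io::{Read, Write};
|     use std::net::TcpStream;
|
|     fn fetch_head(host: &str) -> std::io::Result<String> {
|         let mut stream = TcpStream::connect((host, 80))?;           // blocks
|         write!(stream, "HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n")?; // blocks
|         let mut response = String::new();
|         stream.read_to_string(&mut response)?;                      // blocks
|         Ok(response)
|     }
|
|     fn main() {
|         match fetch_head("example.com") {
|             Ok(head) => println!("{}", head.lines().next().unwrap_or("")),
|             Err(e) => eprintln!("request failed: {e}"),
|         }
|     }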
|
| > Hopefully, compilers of the future will be able to
| automatically parallelize computations.
|
| Ah, yes, the famous sufficiently smart computer that can
| determine if two operations happening on the other side of
| the network depend on each other or not. Personally, I'm
| not holding my breath.
| avodonosov wrote:
| > Blocking network calls still exist,
|
| Browsers have already deprecated sync XMLHttpRequest.
| TheCoelacanth wrote:
| This article was focused primarily on Rust, which does
| not have that problem.
|
| Needing to add async to a function is problematic in JS,
| but it's far less problematic than the alternative which
| is completely rewriting your function to be event-driven.
| Blocking network calls aren't a possibility in JS due to
| the choice to make the entire runtime single-threaded.
| donkarma wrote:
| Would your CPU not do this automatically?
| skywhopper wrote:
| Reasonable point of view about the topic, but it's unfortunate
| that the author needs to go out of their way to make broad
| statements about how every programming language should be
| statically typed and explicit about async state. It reveals such
| a narrow view of what programming can and should be used for.
| Yes, these are great features for Rust to have. But other
| languages make other choices because not everyone has the same
| problem space. If you're going to make assertions about how
| programming languages should work, you need to bound that
| assertion with the specific use cases and work parameters you are
| talking about. Otherwise your statement is too general to be
| meaningful much less true.
| dukoid wrote:
| I wish we had this for purity, too. But I guess it gets hard to
| define when handing in structured arguments...?
| one-more-minute wrote:
| Being explicit about side effects in the type system isn't a bad
| idea, but don't conflate that with blocking/non-blocking
| operations. Some side effects aren't async (eg console.log in JS,
| mutating shared state) and some pure functions are (eg GPU linear
| algebra operations).
|
| Effect systems (as in Koka) are mentioned elsewhere; arguably
| their major innovation is being explicit about effects without
| sacrificing composability. Functions are only "coloured" by the
| type system, rather than syntactically, and can be coloured
| differently depending on context. This is quite different from
| the async/await situation.
| clarge1120 wrote:
| What I learned from this article was:
|
| Go is all asynchronous. Problem solved.
| slver wrote:
| This article was a whole lot of "this pain you feel, cherish it,
| it's good".
|
| It embraces a defect introduced for BC reasons as if it's sound
| engineering. It really isn't.
|
| If you start over, there's no reason any blocking API should
| really BLOCK your program. Your functions can look completely
| synchronous, but yield to the event loop when they invoke a
| blocking API.
|
| Basically like Erlang works.
|
| Eventually it will be the case, but these kinds of "love the
| flaw" attitudes are slowing us down.
| marcosdumay wrote:
| > Your functions can look completely synchronous, but yield to
| the event loop when they invoke a blocking API.
|
| Haskell does that, and it is great. But there is no chance at
| all that I would want anything like that in Rust. It would be
| completely out of place, and break the entire concurrency
| model.
|
| That would make the language unfit for any task I use it for
| today.
| zozbot234 wrote:
| > Your functions can look completely synchronous, but yield to
| the event loop when they invoke a blocking API.
|
| This requires either runtime-based or operating system support.
| If need be, you can always rely on the OS scheduler to be your
| "async blocking API".
| jhgb wrote:
| > If need be, you can always rely on the OS scheduler to be
| your "async blocking API".
|
| Well, the original red-blue function article _did_ say "just
| use threads", didn't it?
| zozbot234 wrote:
| IIRC the original article was talking about JavaScript and
| there's no threading in JS.
| jhgb wrote:
| It talked about several languages, for sake of
| comparison.
| DDR0 wrote:
| What about web workers, service workers, web assembly
| threads..?
| slver wrote:
| You need operating system support in the form of async I/O,
| which every OS offers. No special runtimes, no threads
| required.
| pas wrote:
| This is becoming more and more true with io_uring, but it
| was absolutely not the case a few years ago. (Linux AIO was
| a strange beast. Direct IO was great with preallocated
| files, but there was no way to do fsync() - especially on
| file/directory metadata - in an async way. Plus there was a
| threadpool-based glibc thing that somehow faked async I/O,
| right?)
| simias wrote:
| >If you start over, there's no reason any blocking API should
| really BLOCK your program. Your functions can look completely
| synchronous, but yield to the event loop when they invoke a
| blocking API.
|
| That implies a level of sophistication and abstraction that
| simply doesn't exist today, and probably won't for a while.
|
| For this to work you either need an ultra-sophisticated
| compiler that will transparently auto-async your code at
| compile time (as realistic as a unicorn at this point) or a
| heavy runtime which will have performance implications.
|
| So in the meanwhile I agree with TFA that coloured functions is
| probably the pragmatic choice if you want pay-for-what-you-use
| async right now.
|
| Personally my main issue with async is that it seems to me that
| a generation of devs who grew up on JavaScript see it as the
| Alpha and Omega, the one and only way of handling any sort of
| parallel or event-driven system, not considering other
| approaches such as threads and/or polling. This sometimes
| results in spaghetti code that I personally find over-
| engineered.
|
| To give you an example I hacked on rbw a week ago, a Rust
| command line client for Bitwarden. It's a good program, but
| it's completely built with async code and I genuinely don't get
| it given that it's almost entirely synchronous in practice (you
| query an entry, you enter your passphrase, you get a result).
| As a result I found it a lot harder to follow the flow of the
| program and understand how it all fit together. But hey, I
| eventually got it and managed to fix the issue I was debugging,
| so it's not the end of the world.
| audunw wrote:
| > For this to work you either need an ultra-sophisticated
| compiler what will transparently auto-async your code at
| compile time (as realistic as a unicorn at this point) or a
| heavy runtime which will have performance implications.
|
| Depends on how much of it you want to solve automatically.
| The Zig compiler does essentially auto-async your functions.
| So I think you can write a whole lot of synchronous code, and
| then just have a single "async" at the top to have it yield
| to the event loop whenever it hits blocking IO.
|
| So I think you can solve 90% of that problem without being
| ultra-sophisticated.
| cube2222 wrote:
| This is quite literally how Go works and has been working for
| years.
|
| Automatic yield points and a proxy to syscalls are used for
| that.
|
| It's also still plenty fast (as opposed to Erlang, which was
| mentioned in other comments).
| tomp wrote:
| > a level of sophistication and abstraction that simply
| doesn't exist today
|
| > you need an ultra-sophisticated compiler (as realistic as a
| unicorn at this point)
|
| Isn't what Zig does, described in the post linked by a few
| other comments [1], basically this?
|
| [1] https://kristoff.it/blog/zig-colorblind-async-await/
| dnautics wrote:
| Yes, and it works quite well. I think it works well because
| zigs async requires you to think what is actually going on
| with the data structures at the low level (you have to
| understand what a frame is, that's a "new" low level
| construct on the level of "pointer" or "function"), once
| you get that and understand the control flow it is quite
| easy to see what's happening.
| meowface wrote:
| >That implies a level of sophistication and abstraction that
| simply doesn't exist today, and probably won't for a while.
|
| >For this to work you either need an ultra-sophisticated
| compiler what will transparently auto-async your code at
| compile time (as realistic as a unicorn at this point) or a
| heavy runtime which will have performance implications.
|
| I may be misunderstanding this point, but if I'm not, it
| seems like gevent managed to do this for Python a decade ago
| and it still works well today. I still use gevent for new
| code because it gracefully sidesteps this function-coloring
| problem. (I know people don't normally associate "monkey-
| patch" with "graceful", but they somehow made it work. I
| guess this falls under the "heavy runtime" approach, but it
| doesn't feel heavy at all.)
| lmm wrote:
| gevent is difficult to use when interoperating with code
| outside its runtime (e.g. if you have a callback that gets
| called from a C library) and you lose control over where
| you yield, so you'll never get as high throughput as
| blocking code in the scenarios where that's appropriate.
| It's effectively just making all your functions async, like
| JavaScript or indeed Erlang.
| meowface wrote:
| >gevent is difficult to use when interoperating with code
| outside its runtime (e.g. if you have a callback that
| gets called from a C library)
|
| True, but is this not also the case when using Python's
| native async/await functionality?
|
| The difference is that async/await doesn't even work with
| previous pure-Python code which performs any I/O, either.
| There are thousands of ordinary third-party libraries
| which basically have to be tossed out the window and
| rewritten from scratch, despite none of them using any C
| extensions.
|
| gevent works well with almost all of those old libraries.
| ghostwriter wrote:
| > and you lose control over where you yield
|
| you don't [1]
|
| [1] https://www.gevent.org/api/gevent.html#gevent.sleep
| simias wrote:
| Right, maybe "heavy" was a bit too strong, I was more
| thinking from the point of view or Rust which aims to have
| a small, mostly optional runtime.
| geocar wrote:
| > >If you start over, there's no reason any blocking API
| should really BLOCK your program. Your functions can look
| completely synchronous, but yield to the event loop when they
| invoke a blocking API.
|
| > That implies a level of sophistication and abstraction that
| simply doesn't exist today, and probably won't for a while.
|
| This sounds like a description of UNIX and was true from the
| very first versions: a blocking call would trap the system,
| select a process task that can continue and jump to that
| instead. At the language level, I'm pretty sure we had this
| in the 1980s in Forth, and I don't think it was considered
| new even then (although I don't have a reference handy);
| libdill does this for C programs.
|
| Or am I misunderstanding what you're saying?
| simias wrote:
| So there are two things, there's OS-provided event handling
| (poll, blocking syscall that schedule another task, OS
| threads) and there's coroutines (libdill and friends).
|
| OS-level async is what I accuse some devs of underusing,
| preferring language-level async constructs. I've read a lot
| of frankly FUD and cargo culting about how inefficient
| things like pthreads are. Let's not even talk about pre-
| fork parallelism which is probably the simplest way to use
| multiple cores when dealing with independent tasks but
| hardly anybody seems to use these days. I wonder if most
| devs even realize that it's an option.
|
| In my experience coroutines were never really popular in
| the C world simply because without GC or at least a borrow
| checker it's just very hard to write correct code. Avoiding
| memory violations in C is tricky, avoiding memory violation
| in async C is asking a lot.
|
| Rust does a lot better here thanks to borrow checking but
| it still leads to, IMO, very complicated code.
| akiselev wrote:
| _> Let's not even talk about pre-fork parallelism which
| is probably the simplest way to use multiple cores when
| dealing with independent tasks but hardly anybody seems
| to use these days. I wonder if most devs even realize
| that it's an option._
|
| Speaking from experience: they don't. I recently picked
| up a Linux kernel development book [1] and after four or
| five chapters I was able to eliminate several bottlenecks
| and eliminate latency edge cases in personal projects
| with a greater understanding of the scheduler algorithm
| and process priority/affinity/yields.
|
| I think the problem is that the vast majority of
| developers work on top of cross-platform runtimes that
| hide any detail that can't be easily implemented for all
| operating systems. It's been more than a decade since
| I've used Unix sockets directly in any language and I
| don't know how to set SO_REUSEPORT in Rust. I'd probably have
| to implement it using `TcpListener::as_raw_fd()` and
| `unsafe libc::setsockopt`? It's just easier to stay sane
| within the bumpers.
|
| [1] https://www.amazon.com/Linux-Kernel-Development-
| Robert-Love/...
| lmm wrote:
| Multiprocessing (or multithreading, which in the unix model
| is much the same thing) is even heavier than a green-
| threading runtime. Instead of swapping out little task
| handlers / continuations you're swapping out whole address
| spaces.
| acdha wrote:
| > If you start over, there's no reason any blocking API should
| really BLOCK your program. Your functions can look completely
| synchronous, but yield to the event loop when they invoke a
| blocking API.
|
| This approach tends to become a lot less simple if you have any
| sort of significant resource usage to manage: you'll be
| fighting all of that magic when you have non-trivial memory
| usage, need to avoid limits on file handles or connections,
| etc. not to mention the runtime support for things like
| debugging.
|
| Any time you see a long-running widespread problem and think
| there's an easy fix, ask whether you're really that much
| smarter than an entire field or whether it might be harder than
| you think.
| SebastianKra wrote:
| This would mean that there's no guarantee about the order in
| which your code will run, because your scheduler might decide
| to execute some unrelated code right in between any two lines.
|
| Therefore, you would need to deal with far more race
| conditions.
|
| Even in single-threaded languages, your entire state may change
| during a single asynchronous function call. By differentiating
| between synchronous code (i.e. code that needs to be run in a
| single slot) and asynchronous functions, you can immediately
| see where you need to account for unexpected state changes.
|
| (Yes, it's more complicated in multithreaded languages, but the
| main idea still holds because they usually all sync back to a
| single thread)
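|
| (A minimal Rust sketch of that hazard, assuming tokio: the
| check and the update are separated by an await point, so
| another task may change the shared state in between:)
|
|     use std::sync::Arc;
|     use tokio::sync::Mutex;
|
|     async fn audit_log(msg: &str) {
|         tokio::task::yield_now().await; // stand-in for any await point
|         println!("{msg}");
|     }
|
|     async fn withdraw(balance: Arc<Mutex<u64>>, amount: u64) -> bool {
|         let current = *balance.lock().await;      // check
|         if current < amount {
|             return false;
|         }
|         audit_log("withdrawing").await;           // other tasks run here
|         *balance.lock().await -= amount;          // act on possibly stale info
|         true
|     }
|
|     #[tokio::main]
|     async fn main() {
|         let balance = Arc::new(Mutex::new(100));
|         let ok = withdraw(balance.clone(), 80).await;
|         println!("ok = {ok}, balance = {}", *balance.lock().await);
|     }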
| mikepurvis wrote:
| But per the original function colouring article (and
| summarized at the top of this one) several major parallel-
| first languages already do this, notably Go and Erlang. And
| it seems to work great for them.
| lmm wrote:
| You can do it, but at a cost. Interop via the system ABI
| becomes much harder. You lose the ability to write high-
| performance blocking code, and reasoning about performance
| requires understanding a lot of details of the runtime. And
| to get to the grandparent's point, you have to find a way
| to protect yourself from the problems of shared-memory
| threading; Erlang does this by adopting a strict ownership
| model that's not directly tied to threads (actors), which
| forces you to structure your programs a particular way and
| makes certain things impossible. Go has a less strictly
| enforced model and ends up being quite prone to race
| conditions, but keeps it under control by discouraging
| writing large programs at all (in favour of the
| "microservice" style).
| shawnz wrote:
| > If you start over, there's no reason any blocking API should
| really BLOCK your program. Your functions can look completely
| synchronous, but yield to the event loop when they invoke a
| blocking API.
|
| As the article points out, you would then need green threads
| to do asynchronous work simultaneously. And if you are already
| using threads then there is no need for "async". The point of
| "async" is to give you the ability to efficiently manage the
| event loop WITHOUT threads.
| slver wrote:
| Green threads aren't actual threads. They're an abstraction
| letting you have an async event loop, but writing
| synchronous-like code. It's precisely what you do need.
| pornel wrote:
| Rust was conceived as an Erlang-like language, and had
| "colorless" I/O and green threads in its early implementations.
| This idea has been demonstrated to be unfit for a systems
| programming language, and scrapped before 1.0:
|
| https://github.com/rust-lang/rfcs/blob/0806be4f282144cfcd55b...
| SahAssar wrote:
| How do you feel about the example in the article? Is it
| unnecessary performance optimization to parallelize multiple
| adjacent asynchronous functions or should we just hope that the
| runtime/compiler gets smart enough to know what is safe to
| parallelize?
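|
| (For reference, the choice in the article's example looks
| roughly like this in Rust - made-up fetch functions, assuming
| tokio:)
|
|     async fn fetch_a() -> u32 { 1 }
|     async fn fetch_b() -> u32 { 2 }
|     async fn fetch_c() -> u32 { 3 }
|
|     #[tokio::main]
|     async fn main() {
|         // sequential: each await finishes before the next starts
|         let (a, b, c) = (fetch_a().await, fetch_b().await, fetch_c().await);
|
|         // concurrent: all three futures make progress at once
|         let (a2, b2, c2) = tokio::join!(fetch_a(), fetch_b(), fetch_c());
|
|         println!("{a} {b} {c} / {a2} {b2} {c2}");
|     }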
| spinningslate wrote:
| Not GP but:
|
| > Is it unnecessary performance optimization to parallelize
| multiple adjacent asynchronous functions?
|
| The important word in your question is "async". Take that out
| and the answer is "no" - it's not unnecessary to parallelise.
| But the example serves well to illustrate at least one aspect
| of why red/green is wrong in my opinion: who decides what
| should be able to run concurrently?
|
| Let's modify the example slightly, and assume we have a
| function `size(url) -> Int` that returns the size in bytes of
| a given URL. It's making network calls, so it'll have
| latency. Should the function be async?
|
| As the developer of that function, I can't reasonably make
| that choice beyond "probably". But it comes with a cost. The
| caller can't now just call it: they have to await, and
| perhaps unbox the answer. If they're only calling it once
| then you've just added unnecessary complexity to their code.
|
| The _caller_ is the one who's appropriately placed to decide.
| Using the example, we're calling it 3 times. So, as the
| developer writing the _calling_ function, I can decide that
| it's appropriate to parallelise.
|
| To use GP's example, Erlang gets that the right way round.
| Functions are functions are functions. If I want to call a
| function synchronously, I just call it. If I want to
| parallelise, then I simply spawn the function instead.
|
| > or should we just hope that the runtime/compiler gets smart
| enough to know what is safe to parallelize?
|
| That would be a step further still. It's definitely doable
| - compilers already do a bunch of pretty sophisticated
| dataflow analysis.
|
| But even today, the red/green async model is a very limited
| thing. With significant constraints. Other than perhaps rust
| - with its focus on zero-cost abstractions - it feels at best
| an interim step that asks programmers to compensate for a
| lack of good fine-grained concurrency support in language
| runtimes.
| gpderetta wrote:
| A Cilk-style depth-first work-stealing executor would block
| by default, but if any worker has free cycles it can
| steal the continuation and generate more work. You would
| need to wait only when the work is actually needed or when
| returning from the function.
|
| Cilk requires annotating async function calls at the call
| point (with the spawn keyword to make them async), but not
| the functions themselves, so there is no red/blue coloring.
|
| Synchronization is explicit via the sync keyword, but in
| principle I think a compiler could insert the sync point on
| first use of the value.
|
| I don't think a cilk style runtime would be heavier than an
| executor behind async/await.
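|
| (A rough Rust analog of the Cilk shape using rayon's
| work-stealing join - not Cilk itself: either closure can be
| stolen by an idle worker, and the caller only waits where the
| results are actually needed:)
|
|     fn fib(n: u64) -> u64 {
|         if n < 2 {
|             return n;
|         }
|         // "spawn"-like: the two halves may run in parallel
|         let (a, b) = rayon::join(|| fib(n - 1), || fib(n - 2));
|         a + b // implicit "sync": we block here because both values are needed
|     }
|
|     fn main() {
|         println!("fib(30) = {}", fib(30));
|     }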
| [deleted]
| singularity2001 wrote:
| Functions are functions are functions. If I want to call a
| function synchronously, I just call it. If I want to
| parallelise, then I simply spawn the function instead.
|
| I don't understand why this isn't the universally accepted
| correct answer.
|
| Why not put the async idea on the caller side alone, as
| suggested and as seen in go?
|
| fetch(url) // synchronous
|
| go fetch(url) // asynchronous
|
| if the compiled code of synchronous and async functions
| really has to look different for fundamental reasons, the
| compiler can still silently emit both versions (and gc
| unused variants)
|
| Someone please help me understand, or rid the world of this
| async-await mess.
| kortex wrote:
| `fetch (url) // synchronous`
|
| `go fetch(url) // asynchronous`
|
| This is not something you can readily abstract over. That
| depends on whether the language is compiled vs interpreted,
| has a runtime or not, is scheduled or not, its stack model,
| threading model, ownership model, and so on. `go
| myfunc()` implies something to dispatch, execute and reap
| that function and its metadata, and there's no one-size-
| fits-all solution.
|
| You can design a language that has this feature, but it's
| not without tradeoffs, and it's certainly not something
| you could just introduce to a language unless it already
| lends itself to it.
| dnautics wrote:
| This is what zig does, and I can attest it works
| fantastically. I use zig to dispatch C ABI FFI in the
| erlang vm, which uses an unfriendly tail-call system to
| yield from cooperative C ABI code into its preemptive
| scheduler. With zig async you can write traditional
| looping structures and yield back to the erlang vm inside
| your loop. Because zig has uncolored async you can call
| the same function using a different strategy (for example
| in a spawned os thread), and the yields will no-op
| without having to rewrite the function.
|
| > the compiler can still silently emit both versions
|
| I think that is zig's strategy under the hood (not 100%
| sure)
| AndyKelley wrote:
| Zig only emits one version. Either:
|
| 1. The function does not have a suspend point, so it is
| emitted as a normal function, and it just gets called
| like a normal function at the async callsite. Await is a
| no-op.
|
| or...
|
| 2. It has a suspend point. It is emitted with the async
| calling convention, and any normal callsites to it also
| become suspend points.
| dnautics wrote:
| Thanks for clarifying!! I had my understanding inverted!
| lmm wrote:
| > As the developer of that function, I can't reasonably
| make that choice beyond "probably". But it comes with a
| cost. The caller can't now just call it: they have to
| await, and perhaps unbox the answer. If they're only
| calling it once then you've just added unnecessary
| complexity to their code.
|
| > The _caller_ is the one who's appropriately placed to
| decide. Using the example, we're calling it 3 times. So, as
| the developer writing the _calling_ function, I can decide
| that it's appropriate to parallelise.
|
| The right answer is polymorphism, and it's easy in a
| language with HKT (I do it in Scala all the time). If you
| make every function implicitly async (or possibly-async at
| the runtime's discretion) then you lose the ability to know
| where the yield points are, to get higher performance from
| blocking where appropriate, but more than that you just
| don't get consistent performance behaviour.
| spinningslate wrote:
| that's interesting - can you expand? How do higher-kinded
| types address the sync/async question?
| simiones wrote:
| If you want to parallelize X requests, you should spawn X
| threads to handle those X requests, right? Depending on how
| large X is and how long you expect the requests to take, this
| may only be worth it if your threads are actually stack-less
| coroutines, of course, but why not be explicit about the
| fact that you want parallelism here?
|
| Thread-based code (CSP) is, after all, the most well studied
| model of parallel computation, and for a reason. Async code
| which achieves real parallelism may get much harder to
| interpret, especially if it's non-trivial (e.g.
| requestA.then(requestB).then(requestC); in parallel,
| requestD.then(requestE); and mix this up with mild processing
| between A and B and C, and between D and E - you can easily
| get to race conditions in this type of code even in single-
| threaded runtimes like NodeJS).
| lmm wrote:
| > Thread-based code (CSP) is, after all, the most well
| studied model of parallel computation, and for a reason.
| Async code which achieves real parallelism may get much
| harder to interpret, especially if it's non-trivial (e.g.
| requestA.then(requestB).then(requestC); in parallel,
| requestD.then(requestE); and mix this up with mild
| processing between A and B and C, and between D and E - you
| can easily get to race conditions in this type of code even
| in single-threaded runtimes like NodeJS).
|
| Huh? Traditional threading is not CSP; threads are
| generally defined as sharing memory. Async is much closer
| to the CSP model.
| simiones wrote:
| Threads + some kind of channel (a la Erlang processes or
| Go's goroutines) are pretty much direct programmatic
| representations of CSP. Shared memory does break the
| model to a great extent, but async code diverges from it
| even further.
|
| The main difference is that in CSP (and thread-based
| models) the thread of execution is explicitly represented
| as an entity in the code. In async models, the threads of
| execution are mostly abstracted away and implemented by a
| runtime scheduler, while communication is done purely by
| function return values.
|
| Of course, you can typically implement any of these
| models in terms of each other (e.g. you can easily
| implement CSP channels in a language which offers mutexes
| and shared data; or you can implement them with
| async/await primitives). But the base support in the
| language and the patterns you'll see emerge in idiomatic
| code will be different between goroutine/CSP, shared-
| memory parallelism, and async/await.
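|
| A minimal sketch of the thread-plus-channel (CSP-flavoured)
| style in Rust, using only the standard library:
|
|       use std::sync::mpsc;
|       use std::thread;
|
|       fn main() {
|           // The worker thread is an explicit entity in the
|           // code, and communication with it goes over the
|           // channel.
|           let (tx, rx) = mpsc::channel();
|           let worker = thread::spawn(move || {
|               tx.send("result from worker").unwrap();
|           });
|           println!("{}", rx.recv().unwrap());
|           worker.join().unwrap();
|       }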
| andrekandre wrote:
| Threads + some kind of channel (a la Erlang processes or
| Go's goroutines) are pretty much direct programmatic
| representations of CSP.
|
| sorry if i misinterpreted, but i dont think it is correct
| that erlang processes exemplify channels/csp
|
| afaiu erlang is based on non-sequential processes more
| like the actor model, there is no "communicating
| sequential" process at the base of it (though you can
| implement csp in actor languages) afaik
|
| here you can see joe armstrong and carl hewitt
| fundamentally disagree with sir tony hoare (csp) and
| sequential channels as a fundamental construct [1]
| https://www.youtube.com/watch?v=37wFVVVZlVU&t=720s
|
| again, apologies if im arguing something that wasnt
| implied, it was just something i noticed ^^
| yongjik wrote:
| Maybe it's me, but I don't see the benefits of async functions
| that much. The one big advantage vs. threads is that you don't
| have to worry about locks if everything is single-threaded, but
| if you have multiple levels of async functions then you still
| have to worry about variables changing between them.
|
| Another advantage would be that it's much cheaper to keep a
| million callbacks in memory than a million thread stacks, which
| is true, but unless your job is writing a load balancer or DB
| engine, few people need that much concurrency. I've seen
| production services that take _several_ concurrent requests on
| average.
|
| I feel that old-fashioned thread pools can be a more
| straightforward and manageable solution for most use cases.
| pornel wrote:
| In Rust I get two nice things:
|
| * Instant, reliable cancellation. You can drop a Future at any
| time, and it won't do any more work; it cleans everything up.
| Racing multiple things is easy (see the sketch below).
|
| * Ability to mix network-bound and CPU-bound tasks, and have
| both resources fully utilized at the same time. With threads, if
| I sized the thread pool for CPU cores, it'd underutilize the CPU
| when doing I/O. If I sized it for connections, it'd hammer the
| CPU. An async scheduler can pair a CPU-sized threadpool with any
| number of connections it wants.
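|
| Roughly, the cancellation point looks like this (a sketch
| assuming the tokio crate with its rt, time and macros
| features; slow_fetch is a stand-in):
|
|       use std::time::Duration;
|       use tokio::time::{sleep, timeout};
|
|       // stand-in for some long-running async operation
|       async fn slow_fetch() -> &'static str {
|           sleep(Duration::from_secs(10)).await;
|           "body"
|       }
|
|       #[tokio::main]
|       async fn main() {
|           // Race the fetch against a deadline. If the
|           // deadline wins, the slow_fetch future is dropped
|           // on the spot and does no further work.
|           let res = timeout(Duration::from_millis(100),
|                             slow_fetch());
|           match res.await {
|               Ok(body) => println!("got {body}"),
|               Err(_) => println!("timed out, future dropped"),
|           }
|       }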
| estebank wrote:
| > unless your job is writing a load balancer or DB engine, few
| people need that much concurrency. I've seen production
| services that take several concurrent requests on average.
|
| But Rust is aimed at exactly that space _and_ embedded, so it
| needs a way to express this kind of code in an efficient way.
| It is true that you can go use threads instead and it might be
| easier to get something working, but if async/await already
| exists, you might as well take advantage of it. This is why you
| will see libraries that don't _need_ to be async, written using
| async.
| bsder wrote:
| I remain unconvinced that red/blue functions are a good thing in
| a statically compiled language.
|
| If anything, the extent to which async/await spirals outward into
| other areas of Rust suggests that it isn't a well _contained_
| feature.
|
| In addition, async/await executors in Rust quite often fail on
| corner cases. If I want to only wait on a socket, stuff works
| great because that's what everybody wants.
|
| If, however, I want to do something like wait on sockets _and_
| wait on a character device file descriptor ... that tends to
| break in fairly mysterious ways.
| willtim wrote:
| The problem is more that Rust cannot (currently) abstract over
| these different function types; and that they are more limited
| (e.g. no async methods in traits). But the idea of effects
| tracking is a good thing and we are bound to see more of it in
| modern statically-typed languages.
| kevincox wrote:
| This is partly true. Unlike JS (where the colouring argument
| came from) you can always just block, so while you may be
| "stealing" an event dispatcher thread, nothing bad will
| happen; you just have to deal with the fact that the
| functions you are calling may take some time to return (as
| the article argues). There is no true function colouring in
| Rust. (Much like you can unwrap an Option or Result to handle
| that type of colouring.) In Rust an `async` function is just
| a fancy way to write a function that returns a
| `std::future::Future`. You can certainly write a trait with
| functions that return a `std::future::Future<Output=T>`,
| although you will probably have to box them at the moment.
|
| https://play.rust-
| lang.org/?version=stable&mode=debug&editio...
|
| I think this restriction is because they are trying to figure
| out how to handle dynamic vs static generics. Presumably they
| will add support for this in the future.
|
| The compiler also recommends https://docs.rs/async-trait
| which has a pretty clean solution.
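|
| The boxed-future pattern described above (which the
| async-trait crate automates) might look roughly like this
| sketch:
|
|       use std::future::Future;
|       use std::pin::Pin;
|
|       // A trait whose method "returns a Future" by boxing it.
|       trait Fetcher {
|           fn fetch(&self, url: String)
|               -> Pin<Box<dyn Future<Output = String> + Send>>;
|       }
|
|       struct MyFetcher;
|
|       impl Fetcher for MyFetcher {
|           fn fetch(&self, url: String)
|               -> Pin<Box<dyn Future<Output = String> + Send>>
|           {
|               Box::pin(async move { format!("fetched {url}") })
|           }
|       }
|
| A blocking caller can still consume the returned future with
| an executor's block_on, which is the "no hard split" point
| above.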
| coldtea wrote:
| > _Feel free to create and use languages with invisible program
| effects or asynchrony, and I 'll feel free to not pay any
| attention when you do._
|
| As if you matter? The conclusion could be stripped, and the
| article would still make its point.
| justin_oaks wrote:
| It probably would have been better phrased as "Feel free to
| create and use languages with invisible program effects or
| asynchrony, but beware of the downsides when you do."
|
| That way it doesn't sound as though only the author's opinion
| matters.
| q3k wrote:
| The snobbishness around Rust repeatedly makes me reconsider if
| I really should be investing even more time to continue
| learning the language.
| justin_oaks wrote:
| I'm wondering if people are looking for Rust snobbery and
| finding it where it doesn't exist. This is not a dig at the
| person I'm responding to, although their comment did trigger
| the thought.
|
| I interpreted the points in the article as fairly generic.
| The idea of "Inconvenient knowledge is better than convenient
| ignorance" goes well beyond Rust.
|
| The author of the article does imply that Rust does a lot of
| things right, but I'm curious how one could say "Hey, I like
| this language it does a lot of things well and fixes problems
| I see in other languages" without sounding like a snob.
| q3k wrote:
| > I'm curious how one could say "Hey, I like this language
| it does a lot of things well and fixes problems I see in
| other languages" without sounding like a snob.
|
| Just not writing "Feel free to create and use languages
| with invisible program effects or asynchrony, and I'll feel
| free to not pay any attention when you do." would do the
| trick in this case.
|
| Or, generally, show your reasoning, show your arguments,
| acknowledge any tradeoffs, contextualize against other
| languages as needed. And perhaps do not jump into every
| programming language discussion with "this wouldn't be an
| issue with Rust" unprompted, as that's generally not
| constructive, and does not acknowledge that there is more
| to software engineering than just what programming language
| you pick.
| [deleted]
| [deleted]
| whizzter wrote:
| The core innovations were brilliant, and yes, it does seem to
| attract a certain kind of developer. That said, there seem to
| be quite a few awesome people in the community as well, so I
| guess the strategy is to use it where it's the best tool and
| not get too invested in the rest.
| geodel wrote:
| > The conclusion could be stripped, and the article would still
| make its point.
|
| I'd say maybe the conclusion was the point. The author clearly
| said that others who wrote Rust articles were not forceful
| enough in arguing that Rust's colored functions are actually a
| good thing, so they needed to make their position clear.
| kristoff_it wrote:
| In Zig allocators are always passed around explicitly and yet we
| have a completely opposite stance on function coloring [1].
|
| In my opinion the argument doesn't really apply to async/await
| when you get down to the details, but I need to think about it
| more to be able to articulate why I think so with precision.
|
| [1] https://kristoff.it/blog/zig-colorblind-async-await/
| losvedir wrote:
| It's really interesting that Zig seems to have managed to avoid
| the "color" distinction for async. I'm curious to see how it
| works out in the end (and am a happy sponsor, to help that
| out!).
|
| However, and maybe this is what you're alluding to with your
| mention of allocators, it seems that Zig has "colored"
| functions in its own way, with regard to allocation! If a
| certain function allocates, then to use it the caller will need
| to pass in an allocator, and that caller will need to have been
| given an allocator, etc. In other words, explicit allocation
| "infects" the codebase, too, right?
|
| I can't help but see the parallels there. I think Zig's stance
| on explicit allocation is one of its strengths, but it does
| make me wonder if some compiler trickery can make it colorblind
| as well.
| AndyKelley wrote:
| I can see why you would think that but if you try out some
| Zig coding you will quickly realize there is no parallel
| here. It just doesn't make sense to try to compare them. I'm
| at a loss as to how even to explain it.
| dnautics wrote:
| Red/blue means that in one direction a seam is impossible
| to create. That's not generally the case for allocators in
| Zig.
| dnautics wrote:
| It's not true that in Zig allocators are always passed around
| explicitly. You can:
|
| - declare a global allocator for your program and call it
| implicitly everywhere. This is widely regarded by the community
| as bad practice for libraries, but if you are writing an end
| product, who cares. In the end, in most cases, you will have to
| bootstrap your allocator against a global allocator anyway,
| and usually that is, in some form or another,
| std.heap.page_allocator.
|
| - bind an allocator into your struct when you build it. This
| makes the allocator "implicit" instead of explicit.
|
| The important criterion about red/blue functions that a lot of
| people forget is that one of the seams between the two
| colors is missing. As far as I can tell, for all of the
| "red"/"blue"-ish things in Zig (async, allocators, errors),
| there are clear and documented ways of annealing those seams in
| your code.
| diragon wrote:
| First it was "Rust's async isn't f#@king colored!"
| (https://www.hobofan.com/blog/2021-03-10-rust-async-colored/)
|
| Then "Rust async is colored, and that's not a big deal"
| https://morestina.net/blog/1686/rust-async-is-colored
|
| And now it's a good thing.
|
| Honestly, this sounds like somebody struck Rust's Achilles' heel
| and it hurts. They're neatly covering 3 stages of grief over
| these 3 articles, and I wouldn't be surprised if we could uncover
| the other 2 in the blogosphere somewhere.
| pornel wrote:
| This is because the original article talked about a couple of
| different things under the same heading of a "coloring" problem:
|
| 1. JS has no way to wait for the results of async functions in a
| sync context. This creates a hard split between sync and async
| "colors". This is what most people agree is the major problem.
|
| 2. C# can avoid the split, but the syntax for using sync and
| async is different. The original article wasn't very clear
| about why this is bad, other than that it exists.
|
| The second case gets interpreted as either:
|
| 2a. Programming languages should be able to generically
| abstract over sync/async syntax.
|
| 2b. Programming languages shouldn't have a different sync/async
| syntax in the first place.
|
| Rust's case is consistently:
|
| - It generally doesn't have problem #1. Sync code can wait
| for async, and async code can spawn sync work on a threadpool
| (see the sketch below). There are some gotchas and boilerplate,
| but this kind of "color" isn't an _unsolvable dead end_ like it
| is in JS.
|
| - It has a distinct syntax and execution model for async, which
| still counts as a "color" (#2) per the original article. However,
| Rust can abstract over sync and async, so it doesn't have
| problem #2a. The syntactical distinction is very intentional,
| so #2b isn't considered a problem at all.
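|
| Concretely, the bridging in both directions might look
| roughly like this (a sketch assuming the tokio runtime;
| fetch_async and heavy_sync_work are hypothetical stand-ins):
|
|       // hypothetical async operation
|       async fn fetch_async() -> String {
|           "async result".to_string()
|       }
|
|       // hypothetical blocking operation
|       fn heavy_sync_work() -> String {
|           "sync result".to_string()
|       }
|
|       // Sync code waiting for an async result: build (or
|       // reuse) a runtime and block on the future.
|       fn sync_caller() -> String {
|           let rt = tokio::runtime::Runtime::new().unwrap();
|           rt.block_on(fetch_async())
|       }
|
|       // Async code pushing blocking work onto a threadpool.
|       async fn async_caller() -> String {
|           tokio::task::spawn_blocking(heavy_sync_work)
|               .await
|               .expect("blocking task panicked")
|       }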
| merb wrote:
| the problem is that if you introduce async, your code should
| preferably be async all the way down. of course you can trick
| yourself by using thread pools and moving long-running
| functions onto them, but not everybody knows that.
|
| of course sync functions have the same problem (well, not the
| same problem, but the same concept in the other direction),
| except that most frameworks just deal with it at the very
| front, so developers do not need to deal with it.
| skohan wrote:
| I actually have found all the angst and consternation around
| the "coloring" of Rust's async approach to be rather funny,
| because there are _heaps_ of concepts in Rust which introduce
| something analogous to function coloring.
|
| A good example would be lifetimes. Once you introduce lifetime
| parameters into a data type, complexity balloons in everything
| that touches it.
| orra wrote:
| Is that problematic in practice? Write functions that are
| pure or take references (not ownership), and for the rest,
| use a judicious .clone()?
| skohan wrote:
| Yeah sure it's problematic. For instance, if you have a
| deeply nested struct, and you have to add a lifetime
| somewhere in the bottom, now you have to propagate it all
| the way up to the top level. Also if you run into a case
| where you need an explicit lifetime in a function
| somewhere, this ends up propagating outward.
|
| I've run into cases with Rust where I refactored a
| significant portion of a program to avoid nested references
| and explicit lifetimes in a way I haven't had to in other
| languages.
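|
| A minimal illustration of that propagation:
|
|       // A borrowed field at the bottom forces a lifetime
|       // parameter onto every struct that (transitively)
|       // contains it.
|       struct Inner<'a> { data: &'a str }
|       struct Middle<'a> { inner: Inner<'a> }
|       struct Outer<'a> { middle: Middle<'a> }
|
|       fn main() {
|           let s = String::from("hello");
|           let outer = Outer {
|               middle: Middle { inner: Inner { data: &s } },
|           };
|           println!("{}", outer.middle.inner.data);
|       }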
| fauigerzigerk wrote:
| Yes, but you only have three choices basically:
|
| a) Accept the performance hit that comes with GC or
| defensive copying.
|
| b) Convince yourself that developers can keep all those
| potentially dangling references in their heads at all
| times.
|
| c) Use something like Rust lifetimes.
|
| My preference would be (a) whenever possible, (c) when
| absolutely necessary, and never (b)
| zozbot234 wrote:
| You don't need GC or copying. Rust supports reference-
| counted handles that will never dangle, and will only
| leak in very rare cases involving self-referential data
| structures.
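|
| A minimal illustration with std::rc::Rc (Node is a made-up
| type):
|
|       use std::rc::Rc;
|
|       struct Node { name: &'static str }
|
|       impl Drop for Node {
|           fn drop(&mut self) {
|               // runs exactly when the last handle goes away
|               println!("dropping {}", self.name);
|           }
|       }
|
|       fn main() {
|           let a = Rc::new(Node { name: "a" });
|           let b = Rc::clone(&a); // refcount = 2
|           drop(a);               // refcount = 1, nothing freed
|           println!("still alive: {}", b.name);
|           drop(b);               // last handle: Drop runs here
|       }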
| fauigerzigerk wrote:
| I consider reference counting a form of GC - a rather
| slow one with lots of potential pitfalls.
| nickez wrote:
| It might be slow, but it is deterministic, which is more
| important in systems programs.
| Joker_vD wrote:
| In what sense is it deterministic? Any refcount decrement
| may or may not be the last one, and that may or may not
| cause a huge graph of objects to be free()d, which (even
| without destructors) takes an unpredictable amount of time.
|
| Then again, malloc() itself is usually not deterministic,
| in the sense that it may take an arbitrary amount of time
| to make an allocation: many implementations shift work
| from free() to malloc() to get better asymptotic time
| complexity.
| pkolaczk wrote:
| The problem of destructor chains was solved a long time
| ago. You can deallocate on another thread (sketch below) or
| deallocate in small batches during allocations.
|
| Also, pauses from malloc/free are orders of magnitude
| smaller than pauses caused by tracing GC. Low-pause
| tracing GC creators make it big news when they manage
| to get pauses below 1 ms, while malloc/free operate in the
| range of tens or maybe hundreds of nanoseconds.
|
| Tracing GC can pause a fragment of code that does not
| make any allocations; refcounting can't. Refcounting has
| only local effects: you pay for it where you use it. But
| when you add tracing GC, its problems spread everywhere.
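|
| A minimal sketch of the "deallocate on another thread" idea
| (drop_in_background is a made-up helper):
|
|       use std::thread;
|
|       // Move a value to a background thread so a potentially
|       // long destructor chain runs off the hot path.
|       fn drop_in_background<T: Send + 'static>(value: T) {
|           thread::spawn(move || drop(value));
|       }
|
|       fn main() {
|           // something with a non-trivial drop
|           let big = vec![vec![0u8; 1024]; 1024];
|           drop_in_background(big); // returns immediately
|       }
|
| In practice you'd hand the value to one long-lived worker
| over a channel rather than spawn a thread per drop.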
| cesarb wrote:
| > In what sense it is deterministic?
|
| In at least two senses:
|
| 1. Memory usage: with reference counting, at any given
| moment, the amount of memory used is the minimum
| necessary for all live references.
|
| 2. Timing: with reference counting, the only moment when
| memory will be released (and their corresponding
| destructors called) is when a reference count is
| decremented.
|
| > Any refcount decrement may or may not be the last one
| and that may or may not cause a huge graph of objects
| being free()d which (even without destructors) takes
| unpredictable amount of time.
|
| That's actually predictable: it will always happen
| precisely at the last refcount decrement for that object,
| and will release only that object and other objects owned
| exclusively by that object, and nothing else. The size of
| that "huge graph of objects" is bounded, which makes the
| time taken to release it predictable.
| Joker_vD wrote:
| > The size of that "huge graph of objects" is bounded,
| which makes the time taken to release it predictable.
|
| By the same line of reasoning, since the total size of
| the heap is bounded, it makes the time taken to sweep
| through it predictable.
| cesarb wrote:
| Unfortunately, that's not the case, since you can't
| easily predict the number of passes over that heap, nor
| can you predict when they will happen.
| pkolaczk wrote:
| A "rather slow" is quite debatable. Tracing GC gets speed
| advantage at the cost of much higher memory and energy
| usage, unpredictable pauses, cache thrashing, etc. If you
| try to make these things equal, tracing can become
| embarrassingly slow, or even just crash with OOM.
| tadfisher wrote:
| These are three different opinions from three separate people.
| Please consider that the Rust community does not share one
| consciousness; framed as if it did, your post doesn't add
| anything useful to the conversation.
| darthrupert wrote:
| Is that why 5 different people answered the parent's comment
| dismissively over a 15-minute period?
| lukebitts wrote:
| So are you arguing that the Rust community shares one
| consciousness? Because otherwise I don't see your point.
| [deleted]
| wffurr wrote:
| The other links are interesting context for the debate over
| async in the Rust community. Too bad the GP's post didn't
| frame it that way.
| [deleted]
| mumblemumble wrote:
| > They're neatly covering 3 stages of grief over these 3
| articles
|
| No. It could only be something like three stages of grief if a
| single person were cycling among these positions. When it's
| just three different people writing three different blog posts,
| it's either three completely unrelated things, or a debate.
| pwdisswordfish8 wrote:
| Or maybe, just maybe, the whole 'stages of grief' concept
| is not actually predictive of which opinion is right, and
| it's just a cheap rhetorical trick providing an excuse to
| patronizingly dismiss someone else's scepticism towards an
| idea.
| [deleted]
| ghosty141 wrote:
| I doubt the commenter was 100% serious when applying the
| "stages of grief" to blogposts.
| mumblemumble wrote:
| True, but whether or not something is a joke and whether
| or not it is rhetorically misleading are orthogonal
| characteristics.
| clarge1120 wrote:
| Thank you. The Gamma radiation was starting to glow.
| [deleted]
| adonovan wrote:
| Exactly. Kübler-Ross, the author of the original "stages of
| grief" paper, has since tried to clarify her horribly
| misunderstood idea: the "stages" are neither essential nor
| sequential, but are merely some of the many ways in which
| people react to grief.
| Sacho wrote:
| The two articles you list actually agree with each other - the
| second one is basically a more thorough argument for the
| points raised in the first one.
|
| For example, the first article agrees that "async is
| colored"(despite the title), but that the big issue of
| "colored" functions, "You can only call a red function from
| within another red function", doesn't exist in Rust. This is
| also a major point in the second article.
|
| I think a more accurate description would be that Rust async is
| informed by painful async implementations (JavaScript) and has
| tried to avoid most of their shortcomings.
| jackcviers3 wrote:
| But they are a good thing: from a type perspective, they encode
| at the type level the idea that a side effect is going to
| happen, and then force you, as the caller, to handle that fact
| or pass the buck to your own caller.
|
| Effect types are a signal for all programmers that "Here be
| dragons". They outline parts of your code which may not be safe
| to reorder during refactoring. And they can cover so much more
| than just asynchronous I/O. [1]
|
| 1. https://en.wikipedia.org/wiki/Effect_system
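|
| In Rust terms, that type-level signal shows up in the return
| types; a small illustrative sketch (parse_count and
| read_input are made up):
|
|       use std::num::ParseIntError;
|
|       // The Result return type advertises fallibility;
|       // callers must handle it or pass it on.
|       fn parse_count(input: &str) -> Result<u32, ParseIntError> {
|           input.trim().parse()
|       }
|
|       // hypothetical async source of input; `async` advertises
|       // the suspension effect and `.await` marks where it may
|       // occur
|       async fn read_input() -> String {
|           "42".to_string()
|       }
|
|       async fn run() -> Result<u32, ParseIntError> {
|           let body = read_input().await; // visible suspend point
|           parse_count(&body) // Result handled or propagated
|       }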
___________________________________________________________________
(page generated 2021-04-26 23:01 UTC)