[HN Gopher] I'm Betting on Call-by-Push-Value
___________________________________________________________________
I'm Betting on Call-by-Push-Value
Author : todsacerdoti
Score : 161 points
Date : 2024-03-09 18:56 UTC (1 days ago)
(HTM) web link (thunderseethe.dev)
(TXT) w3m dump (thunderseethe.dev)
| falcor84 wrote:
| I really want to like this more, but need a better notation
| rezonant wrote:
| Can someone TLDR this? It's pretty thick, and I can't help but
| think this is a solved problem with the ability to pass functions
| as arguments. What's the new insight here?
| amluto wrote:
| I can't. I think it would have been nice if the article defined
| the terminology it used to describe its toy languages.
|
| But it really seemed like it was describing a language like C++
| or Rust where a "computation" is a value that happens to be a
| callable function (std::function<int ()> or Fn() -> int, for
| example). A thunk is a value of callable (computable? is this
| really different?) type, and Return is the magic operation that
| wraps a value into a trivial closure, like:
| [val]() { return val; }
|
| in C++.
|
| Maybe there's more to it than that.
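|
| A rough Haskell sketch of that reading (illustrative names, not
| the article's notation): a "computation" is something you run,
| a thunk suspends it as a value, and Return is the trivial
| computation that just hands a value back.
|
|     -- Sketch only: a suspended computation as a unit-accepting
|     -- function, mirroring the C++ lambda [val]() { return val; }.
|     newtype Thunk a = Thunk { runThunk :: () -> a }
|
|     -- CBPV-style "return": wrap a plain value in a trivial computation.
|     ret :: a -> Thunk a
|     ret v = Thunk (\() -> v)
|
|     -- Forcing the thunk runs the computation and yields the value.
|     example :: Int
|     example = runThunk (ret 41) () + 1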
| Dylan16807 wrote:
| It feels like half the text is dealing with arity confusion
| that only comes up because of a particular way of
| implementing functions.
|
| Because I agree, if you take a statically typed call-by-value
| language and "introduce little spurts of lazy evaluation"
| then don't you get the same flexibility? And you'll also know
| the arity at all times.
|
| There's something in the syntax I don't understand in the
| middle about unifying things, but then it later says you have
| to explicitly use "Return" on thunks in records to evaluate
| them, which sounds like the same thing a CBV language does.
|
| Are there other benefits I'm not understanding?
|
| Does the unification apply only to function parameters?
| amluto wrote:
| I found it almost impossible to understand. The syntax used
| for their examples was lisp-like, and the function arity
| seems unambiguous. There is no implicit currying in lisp as
| far as I know. Extra-confusingly, the definitions at the
| start were in a syntax resembling Haskell, but I think it
| has multiple mistakes.
| crq-yml wrote:
| If you're in eager evaluation land, everything evaluates as you
| come to it, so you can handle all arguments at runtime by using
| a stack and no problem is felt. Once we say "actually, we don't
| evaluate until it's used", ambiguity arises.
|
| So the context of this is to solve a problem-in-practice with
| arguments in a lazy evaluation environment, which is that you
| don't know how many arguments you need to supply without doing
| the evaluation.
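|
| A Haskell sketch of that unknown-arity issue (illustrative, not
| from the article): inside apply, nothing says whether k is
| "really" a one-argument function or a partial application of
| something bigger, so the runtime has to inspect the closure to
| decide what a call actually does.
|
|     add3 :: Int -> Int -> Int -> Int
|     add3 a b c = a + b + c
|
|     -- k's static type says one argument, but the closure behind it
|     -- may expect more; here it is a partial application of add3.
|     apply :: (Int -> Int) -> Int -> Int
|     apply k x = k x
|
|     demo :: Int
|     demo = apply (add3 1 2) 10   -- evaluates to 13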
| analognoise wrote:
| Is that what they mean by "calculating the arity"?
| crq-yml wrote:
| Yes, arity is "number of arguments". I don't have expertise
| in lazy evaluation myself, I've just been exposed to enough
| of it to explain the motivations.
| amluto wrote:
| I'm very far from an expert in these styles of programming
| languages, but I'm pretty sure I'm missing something, and
| it's not well explained in the OP.
|
| The OP uses a lisp-y syntax, fully parenthesized, for its
| code examples. But function arity isn't so much an issue in
| these languages -- you can infer it from the parentheses, and
| there's no implicit currying. (The OP uses a Haskell-like
| syntax for its definitions, but I _think_ it contains
| multiple mistakes, making it very hard to understand.)
|
| A language with implicit currying _and_ eager evaluation
| specifically of function arguments just seems overcomplicated
| to me. It sounds like a stack machine in which you pop an
| element off the stack, evaluate it enough to determine its
| arity, then pop that many more elements, evaluate _them_,
| and then apply the function. This sounds unpleasantly dynamic
| and quite hard to implement efficiently.
|
| Am I missing something here?
| bitwalker wrote:
| I wouldn't read too much into the syntax - the Haskell-like
| definitions are just a succinct way to describe the AST
| representation for the toy call-by-push-value language the
| post is talking about. Similarly, the syntax that resembles
| Lisp is actually (I believe) just shorthand for how the
| language would be represented in an extended lambda
| calculus with call-by-push-value semantics.
|
| I think it's important to note that the post is clearly
| written for someone working on the implementation of
| compilers/type systems. So for example, in a compiler IR,
| especially those based on an extended lambda calculus
| representation, functions/closures are often curried, i.e.
| only take a single argument. As a result it can be an issue
| at a given call site to know whether a function is
| "saturated", i.e. all of its arguments have been applied so
| it will do some work and return the actual result; or
| whether it is only partially saturated, in which case it
| returns a new function that needs one less argument than
| before. I gather that this interacts poorly with typing in
| some situations, but I'm not super familiar with the issues
| here as I haven't gone down the rabbit hole of building a
| compiler with this kind of IR.
|
| That said, as someone who works as a compiler engineer, I
| do recognize the potential of call-by-push-value, namely
| that it reifies evaluation semantics in the IR, rather than
| it being implicitly eager or lazy, and thus having to hack
| around the lack of one or the other, i.e. implementing lazy
| evaluation on top of eager semantics or vice versa. Making
| it a first-class citizen enables optimizations that would
| otherwise be impossible or very difficult to implement. To
| be clear though, you can obviously implement lazy on top of
| eager, or eager on top of lazy, without call-by-push-value
| - but I think if you anticipate needing to do that, and you
| are building a new compiler, using a call-by-push-value IR
| might be a worthwhile approach to take.
|
| There might be more to it that I'm not really seeing yet in
| my brief reading of it, but the post made sense to me, it's
| just assuming a lot about the reader (i.e. that you are
| deeply familiar with compiler implementation, type theory,
| etc.).
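|
| A small Haskell-flavoured sketch of the saturation question
| (illustrative, not from the post): in a curried IR every
| function takes one argument, so a call site has to know whether
| all arguments have been supplied.
|
|     add :: Int -> Int -> Int
|     add x y = x + y
|
|     partial :: Int -> Int   -- "add 1" is unsaturated: it builds a closure
|     partial = add 1
|
|     saturated :: Int        -- "add 1 2" is saturated: it does the work
|     saturated = add 1 2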
| charcircuit wrote:
| Imagine if functions were polymorphic over whether an argument
| is a value or a function that produces that value. So both
| f(1) and f(() => 1) would be valid for a function that takes a
| number. It would also apply to variables and such, so "const a
| = () => 1; const b = a + a;" would be possible.
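|
| One way to picture that in Haskell (a sketch with made-up
| names, not an actual CBPV encoding): make the function
| polymorphic over "already a value" versus "a suspended
| computation that yields the value".
|
|     class Forceable f where
|       force :: f a -> a
|
|     newtype Now a   = Now a               -- plain value
|     newtype Later a = Later (() -> a)     -- thunked computation
|
|     instance Forceable Now   where force (Now x)   = x
|     instance Forceable Later where force (Later k) = k ()
|
|     -- Accepts both f(1)-style and f(() => 1)-style arguments:
|     twice :: Forceable f => f Int -> Int
|     twice x = force x * 2
|
|     demo :: (Int, Int)
|     demo = (twice (Now 1), twice (Later (\() -> 1)))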
| rezonant wrote:
| This is a good explanation. So effectively it's making the
| lazy/eager distinction controlled by the caller and not the
| callee.
| Izkata wrote:
| Ahh k. So like Rebol and Red have worked for decades:
|     >> F: func [A][print type? :A print join "Arg: " A]
|     >> F "Foo"
|     string
|     Arg: Foo
|     >> F func [][return "Bar"]
|     function
|     Arg: Bar
|
| I really got spoiled learning Rebol as my first language; I'd
| already internalized a ton of concepts long before they were
| introduced during school and college. Higher-order functions
| were the first time I really realized it, and I guess here's
| another to add to the pile.
|
| Even knowing this though, that post is really obtuse. It
| definitely could have been better written.
|
| Basically in these languages, functions don't have a special
| syntax to be called, like F(), they just get called when
| used. So usage of a no-argument function is identical to a
| variable, and what happens depends on its datatype (function,
| string, whatever). The special :A syntax is to get the value
| without evaluating it if it is a function.
| vidarh wrote:
| That's already the case for any dynamic language, and it's
| just down to whether you write your code to be flexible about
| inputs. E.g.
|
|     def int(i) = i.respond_to?(:call) ? i.call : i.to_i
|     def f(i) = int(i) * 2
|
| f in this case can take both ints and lambdas or anything
| call-able.
|
| In practice we'd usually do this via any of the normal
| conversion methods like expecting the caller to support to_i
| etc., and bear the complexity for us as needed unless there's
| a really compelling reason to think the caller will often
| want to defer the evaluation past the call site and into the
| method itself.
|
| That's usually very obvious when you come across it, like
| e.g. if the argument is expected to come from a remote
| source, but as long as the API for how the method uses the
| value is clear, substituting a deferred calculation for the
| value is easy.
| burakemir wrote:
| It is nice to see CBPV on HN. It is a calculus that deals with
| a fundamental choice in PL of how to approach evaluation. For a
| different take, here is the first part of a mini-series of
| posts on CBPV that aims at working out the connection to logic
| ("polarised natural deduction"):
| https://burakemir.ch/post/cbpv-pt1-small-steps/
| zozbot234 wrote:
| Interesting to see (in the followup posts) that the structure
| of CBPV seems to lead to "lazy" records as a natural choice,
| contrasting with "strict" primitive values and sum types, and
| "lazy" function types. The connection to polarity and focusing
| is also worthy of note.
| p4bl0 wrote:
| This made me think of Kernel, a programming language of the Lisp
| family where the main primitive is vau rather than lambda,
| defining a fexpr rather than a function.
|
| Just like lambdas, fexprs are lexically bound closures. The
| difference is that a fexpr's arguments are not evaluated before
| it is called; instead, the arguments' _syntax_ is passed to the
| fexpr (so it's not call-by-name either, more like a macro),
| except that the fexpr also receives the dynamic environment
| (the one from the call site) so that it can eval its arguments
| in this context if it wants to.
|
| This makes the fexpr a kind of all-powerful form that can do
| both what macros and functions can do. Even primitives like
| lambda or quote can be written as fexprs. Quite fascinating.
|
| See https://web.cs.wpi.edu/~jshutt/kernel.html about Kernel :).
| dmux wrote:
| I'm not familiar with the theoretical aspects, but what you're
| describing reminds me of Tcl: arguments to procedures can be
| passed unevaluated (quoted, using Tcl terms) and the procedure
| itself can alter the caller's environment via uplevel.
| bradrn wrote:
| It's also extremely similar to R and Rebol.
|
| (In fact, R isn't just 'extremely similar': it _does_ use
| fexprs, though without mentioning the term.)
| Tyr42 wrote:
| It's what lets the tidy data tools take arguments like .x
| where x is a column name, not a local variable that's in
| scope.
| nerdponx wrote:
| R is even more wild because you can access the entire
| definition of a function from the function object itself.
| You can literally make a new function that's a copy of an
| existing function with some lines of code injected.
| bch wrote:
| In Tcl, [info body ...].
|
| https://www.tcl.tk/man/tcl/TclCmd/info.htm#M6
| hobs wrote:
| Powershell lets you do this too.
| FridgeSeal wrote:
| "Wow this is fun and amazingly useful"
|
| 8 months later.
|
| "I regret this decision, and a thousand curses on anyone
| who chooses to use this in a code base I have to work on"
|
| I've been at the receiving end of this particular
| "feature" and it was _awful_. Your logic is splayed all
| over the place, and you'll often get _mystery_ errors,
| because someone else's code makes some undocumented, un-
| typed assumption about the availability of some magically
| named variables. This is not a feature, this is bugs and
| misery in disguise.
| Certhas wrote:
| Isn't that a feature/problem of every dynamic language?
| bsder wrote:
| In Kernel, the environments are completely reified--they
| exist as a thing and you can pass them around.
|
| However, if you don't get an environment passed in, you _can't
| get access to it_. In addition, the environments are in many
| ways append-only: you can change what you see as "car" but you
| can't change what previous callers see as "car" unless you get
| a handle to their environment, and even that won't change
| previous calls.
|
| It's really quite tricky. It's doubly tricky because there are
| some environment mutations that you need to perform as the
| system but need to prevent as the user once everything is
| bootstrapped.
|
| I really wish Shutt had implemented a version in C/C++/Ada/any
| systems language and not just a metacircular Scheme
| implementation. The metacircularity
| obscures the management of the environments that is key to
| the implementation. It also obscures just how much garbage
| Kernel cons-es up in order to maintain all of the necessary
| housekeeping to pass all this stuff around.
|
| Alas, he is no longer with us to request anything of. RIP.
| PhilipRoman wrote:
| >The difference is that a fexpr arguments are not evaluated
| before it's called
|
| What's funny is that the actual lambda calculus allows multiple
| reduction strategies, including lazy evaluation. So I guess the
| only reason to introduce this distinction is to facilitate
| impure code, which cares about the difference.
| kaba0 wrote:
| I mean, there are observable differences between the two. An
| obvious one is to have an infinitely recursive function
| application as a parameter to a function. Depending on
| whether CBV or CBN is used, you either get non-halting code,
| or a meaningful result.
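|
| Concretely (a Haskell sketch): an unused diverging argument is
| never forced under call-by-name or call-by-need, whereas
| call-by-value would evaluate it first and loop forever.
|
|     loop :: Int
|     loop = loop              -- diverges if ever forced
|
|     constOne :: Int -> Int
|     constOne _ = 1
|
|     demo :: Int
|     demo = constOne loop     -- 1 under CBN/CBNeed; hangs under CBV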
| tonyg wrote:
| Kernel-style fexprs give macros plus a bunch of reflective
| abilities that even macros don't provide. Lazy evaluation
| (alone) doesn't do that.
| JonChesterfield wrote:
| The kernel thesis is worth reading if you haven't already.
| The fexpr/vau reduction semantics are (a subset of) the
| lambda calculus.
| kazinator wrote:
| > _makes fexpr a kind of all powerful form that can do both
| what macros and functions can do._
|
| A fexpr cannot do everything a macro can do.
|
| A macro can be expanded upfront, so that even code that is not
| reached during execution is expanded.
|
| The separate macro expansion pass can take place in a different
| host environment, such as a developer's build machine. The
| expanded code then executes elsewhere: a target machine
| different from the developer's machine.
|
| A macro can, for instance, grab a text file on the build
| machine, massage it and turn it into a literal (or even code,
| obviously).
|
| Macro expansion can perform basic checks on code. For instance,
| in TXR Lisp, unbound variables are reported by the macro
| expander, before the code is executed, and even if it isn't.
|
| This error is caught during expansion. The function is still
| interpreted:
|
|     1> (defun foo () (list a (cons)))
|     ** expr-1:1: warning: unbound variable a
|     foo
|     2> (fun foo)
|     #<interpreted fun: foo nil>
|
| Cons not being given enough arguments is caught in the
| compiler:
|
|     3> (compile 'foo)
|     ** expr-1:1: warning: cons: too few arguments: needs 2, given 0
|     #<vm fun: 0 param>
|     4> (fun foo)
|     #<vm fun: 0 param>
|
| Application-defined macros can implement their own static
| checks. Commonly, macros check arguments for validity and such,
| but more advanced checks are possible.
|
| For instance, in the _awk_ macro provided by the same Lisp
| dialect, it's possible to perpetrate a situation in which code
| that looks like it is scoped a certain way isn't. The macro
| diagnoses it:
|
|     5> (awk ((let ((x 1) (y 2)) (rng x y)) (prn)))
|     ** expr-5:1: warning: rng: form x is moved out of the
|        apparent scope and thus cannot refer to variables (x)
|     ** expr-5:1: warning: rng: form y is moved out of the
|        apparent scope and thus cannot refer to variables (y)
|     ** expr-5:1: warning: unbound variable x
|     ** expr-5:1: warning: unbound variable y
|
| The issue is that (rng ...) expressions, which test for a
| record being in a range indicated by two conditions, are not
| evaluated in the apparent scope where they are physically
| located. They are evaluated when a new record is read and
| delimited into fields, _before_ the ladder of condition/action
| pairs is evaluated. The information is recorded in a hidden
| vector of Boolean values, and when the (rng ...) expression is
| encountered, it simply retrieves its corresponding pre-computed
| Boolean.
|
| A fexpr could implement this check but (1) it would be
| expensive at run-time to be digging into scopes, and (2) it
| would have to be executed and (3) it would need some stateful
| hack to avoid diagnosing repeatedly.
|
| Another thing fexprs cannot do is to be debuggable via
| inspecting an expansion. To debug the form which uses the
| _fexpr_ we have to debug into the fexpr. To debug a form using
| a macro, we can inspect the expansion. We can understand what
| is wrong in terms of the behavior of the expansion, which can
| be very simple compared to the macro. Then work backwards to
| understand why the bad expansion was produced. At that point we
| can dig into the macro, and that is a different problem: a pure
| code generation problem we can likely debug in isolation in a
| development system.
|
| Yet another thing fexprs cannot do is simply go away entirely
| in a software build. We can stage macros such that their
| definitions are available at compile time, but are not
| propagated into the compiled program.
| skybrian wrote:
| I'm wondering what's gained over writing your thunks as zero-
| argument functions (etc) in an eager language.
| p4bl0 wrote:
| If I understand correctly, here it is transparent. In an eager
| language a function needs to know whether it is given a value
| or a thunk, but here it can be both interchangeably (the thunk
| only needs to have a type that says it returns a value of the
| same type as what could have been passed directly).
| soulbadguy wrote:
| In theory, yes, I would agree that there might be a difference
| somewhere. But it's not clear (at least to me) that this
| distinction matters in practice.
|
| I think the main point here would be that a thunk is also a
| value. And most type systems can already express "a computation
| which returns type X" as a value of type "() -> X". Adding
| side-effect annotations, it seems to me that a pure function of
| type "() -> X" gives you very nearly the same semantics.
|
| It's also unclear to me that one should want to allow both
| values and computations producing values to be used
| interchangeably. If the goal of CBPV is better performance,
| then being able to mix and match both "kinds" of value is
| probably not wise (at least without good compiler error
| messages), just as mixing types of values would break a type
| system.
| jayd16 wrote:
| Is it transparent? It seems like the goal is to be deliberate
| through explicit calls to thunk(). I guess you mean that a
| function parameter can be either a value or a lambda.
| tel wrote:
| There are a few differences, depending on how you evolve from
| the CBPV core, but basically other types may be eager or lazy.
| Functions can be eager (and thus statically known to represent
| an eager call tree) and sum types can be lazy (deferring
| choice). These distinctions are all available at the type
| level.
|
| To have thunking only arise from single unit-arg functions kind
| of depletes the logic of useful structure. But it's not wrong
| to see it as an approximation.
| samus wrote:
| Aren't thunks caching the result?
| erik_seaberg wrote:
| Not sure I see a difference between thunks and futures.
| iso8859-1 wrote:
| A thunk is a general concept: a small function which is used
| only to adapt a call or prepare/modify it in some way and then
| redirect to the proper function. Things like promises,
| futures, closures, wrappers, stubs or implementations of the
| concept of virtual function tables in some OO languages (like
| C++) are just special use cases of thunks (thunks are often
| used to implement them).
|
| Source: https://stackoverflow.com/a/22548372/309483
| soulbadguy wrote:
| The definition that the source is using seems wrong to me.
| Promises, futures, and closures are not used to redirect to
| anything or wrap around something else... They are proper
| computational objects with their own behaviors...
| Mathnerd314 wrote:
| I got really excited about call-by-push-value about 15 years ago,
| when I first encountered it, but at this point I think it is
| overhyped, particularly the presentation of "values" and
| "computations" as duals. For example see
| http://www.itu.dk/people/mogel/papers/eec.pdf, it is a CBPV-style
| lambda calculus except computations are a subset of values and
| there is no duality. Similarly in Levy's book, he discusses
| "complex values" like 1+2 which allow pure computations to happen
| inside the "value" type, more evidence that they are not duals at
| all.
|
| If you squint at CBPV, there is exactly one primitive that
| sequences computation, `M to x. N`. In Haskell this is just the
| monadic bind `M >>= \x -> N` or `do { x <- M; N}`. If I had to
| bet, I would place my bet on monads, not CBPV.
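|
| For concreteness, the claimed correspondence (a sketch, not
| taken from Levy's book): CBPV's "M to x. N" runs the
| computation M, binds its result to x, then runs N, which is
| exactly the shape of monadic bind.
|
|     -- In Haskell, with IO as the ambient monad:
|     sequenced :: IO ()
|     sequenced = getLine >>= \x -> putStrLn ("got: " ++ x)
|     -- equivalently:  do { x <- getLine; putStrLn ("got: " ++ x) }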
| compressedgas wrote:
| So what they have made would be a more complicated version of
| https://en.wikipedia.org/wiki/A-normal_form
| Mathnerd314 wrote:
| I hadn't thought about it that way, but it makes sense. I
| would say the main difference is that ANF is type-preserving,
| whereas CBPV introduces new types. But they are similar in
| that they are IRs with accompanying translations from
| standard lambda calculus. You can see the translations of
| CBV/CBN on page 11 of
| https://www.cs.bham.ac.uk/~pbl/papers/tlca99.pdf
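|
| For readers who haven't met it, a tiny illustration of what
| A-normal form does (a sketch, not from the linked paper): every
| compound argument gets a name, so calls only ever receive
| trivial operands.
|
|     -- Direct style:
|     direct :: Int -> Int
|     direct x = abs (negate x + 1)
|
|     -- Roughly the same function in A-normal form:
|     anf :: Int -> Int
|     anf x =
|       let t1 = negate x
|           t2 = t1 + 1
|       in  abs t2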
| 082349872349872 wrote:
| I was very relieved to discover from
| https://www.cs.bham.ac.uk/~pbl/papers/siglogcbpv2.pdf that
| PBL has learned how to explicitly _number pages_ in the
| intervening decades.
|
| (see Fig. 6 p.21)
| mrkeen wrote:
| I'm not so sure.
|
| I believe that by applying ANF, you're baking in either CBN
| or CBV at the time of writing the compiler.
|
| If I've read it right, the author is suggesting that CBPV
| evaluates (ordinary) values before passing them, but does not
| evaluate functions (even though they are also values!) before
| passing them.
| 082349872349872 wrote:
| If it doesn't evaluate functions, how do closures get their
| lexical environments?
| dfgdfg34545456 wrote:
| I thought monadic binds would typically be implemented _in
| terms_ of this intermediate language, not within it.
| gatlin wrote:
| Shameless self promotion: I made a toy CBPV language in
| TypeScript. It's purely for pedagogy. The idea is that the
| implementation is essentially an executable small step semantics,
| and the interop between language and machine is real purdy.
|
| https://niltag.net/code/precursor
| dfgdfg34545456 wrote:
| Is the toy language all self contained in that repo? I can see
| it pulls in some deps, wasn't sure if everything I need to see
| is in that precursor.controller.ts file or elsewhere.
| gatlin wrote:
| No, I'm annoying and split it up.
|
| https://github.com/gatlin/precursor-ts
|
| https://github.com/gatlin/precursor-
| ts/blob/main/src/ceskm.t...
| dfgdfg34545456 wrote:
| Cheers
| AtlasBarfed wrote:
| So you abstract a function argument behind an interface:
|
| .get() will evaluate the argument when the specific value is
| needed. It might be a literal wrapper.
|
| .set() will be called when the argument value is set. It either
| does nothing in the case of a wrapper that is call-by-value, or
| back-persists the mutation if the set() actually does something.
|
| I get that doesn't have the sexy lazy eval / late binding of
| functional langs, but in the imperative model, is that the basic
| idea?
|
| ------
|
| as an aside, I've found that in my (I'm a groovy programmer
| preferentially) scripts, where I'm doing bash-ish things with a
| lot of CLI execution, I do things like this:
|
|     "cd somedir; ls -la".with { <look for and parse out a file and its date> }
|     "df -k".with { <parse out some device we are interested in> }
|
| and I find it very nice to structure the readability, since the
| meat of these scripts is the command: I define the data, then
| "push" it to the processing I want to do to it.
|
| Is there any programming language name for this sort of "value
| first" model? I find it also flows well for chaining operations,
| whereas something like:
|
| saveResults(parseOutput(runCommand("df -k")))
|
| forces you to mentally pop the stack. As you read from right to
| left you have to push the operations in your mental model until
| you get to the core of what you're working on, while
|
|     "df -k".with{ runCommand(it) }.with{ parseOutput(it) }.with{ saveResult(it) }
|
| is more like the natural order
| Izkata wrote:
| I don't know groovy, but it looks a bit like method chaining,
| except using an external object/method "with" to chain
| standalone functions?
| kaba0 wrote:
| I think what you are getting at is usually called piping. Some
| languages have it inbuilt (usually as a |> operator).
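|
| In Haskell the same "value first" flow can be written with (&)
| from Data.Function, which is essentially |> (a sketch, with a
| trivial pure pipeline):
|
|     import Data.Function ((&))
|
|     pipeline :: Int
|     pipeline = 3 & (+ 1) & (* 2)   -- reads left to right: 8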
| jayd16 wrote:
| So in practice using a parameter could be surprisingly
| expensive? Are you supposed to thunk all your parameters before
| grabbing a resource?
|
| It seems like it would end up quite cumbersome. C# has object
| properties that can be values or functions. Etiquette dictates
| you don't query a database in an object property but I've seen it
| happen. I'm not even sure what the etiquette here would be. The
| entire point is to lazily evaluate slow things, right?
|
| But I guess it's all just for IRs and the idea is that a compiler is
| handling it all anyway?
| userbinator wrote:
| Who else looked at the title and thought "isn't that __cdecl?"
| and was expecting an article about calling conventions, but got
| something related yet completely different?
|
| For an article about efficiency, there seems to be a noticeable
| absence of actual instruction sequences for comparison.
| aranchelk wrote:
| Personally I like how PureScript has done it: as essentially
| an eager Haskell, it just implements laziness in a library.
| You lose some of the elegance of laziness by default, but most
| of the time I want eager expressions anyway, and when I don't,
| it's clear what I'm doing and easy enough to make stuff lazy.
|
| https://pursuit.purescript.org/packages/purescript-lazy/3.0....
| vidarh wrote:
| One of the things I realized from having support for lazy
| evaluation in Ruby's Enumerable is that while it's great on a
| very few occasions, as you say, most of the time I want eager
| evaluation, and even in most of the cases where I don't for a
| specific expression, it ends up being forced so close by
| anyway that in practice there is little difference.
|
| It changes how you think in a few very specific cases, but
| even in a lot of situations where e.g. infinite series are
| helpful, expressing the iteration via blocks that can
| terminate the iteration early removes any need for laziness.
| JonChesterfield wrote:
| The case analysis for this stuff often seems incomplete. Let's
| say you're going to "pass something to a function", what're the
| common options:
|
| 1. Evaluate it in the caller, pass the result. "Strict".
|
| 2. Build a thunk, pass the thunk, eval it on use. "Normal".
|
| 3. Build a thunk, pass the thunk, eval on first use, cache
| result. "Lazy".
|
| 4. Pass the value unchanged. "Macro?". Might be a better name.
|
| 5. Associate it with surrounding context, pass unchanged.
| "Hygienic macro".
|
| 6. Mangle things in a preprocessor, call the constructs "macro".
| Unrelated to the above but sometimes gets thrown in anyway.
|
| In particular people are really prone to conflating lazy
| evaluation with macros. I think it's because the standard example
| of "if" only evaluating two of the three arguments happens to
| work with lazy evaluation or macros. Passing a thunk representing
| the evaluated form of an argument is a different thing to passing
| the unevaluated data. I see a lot of Haskell people saying they
| have no use for macros since the language is lazy - that might be
| true, but lazy is a subset of macro semantics so it's not
| immediate.
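|
| The standard "if" example, for reference (a Haskell sketch,
| where laziness alone is enough and no macro is needed):
|
|     -- Because arguments arrive as thunks, an ordinary function can
|     -- behave like the if special form: only the chosen branch is
|     -- ever evaluated.
|     if' :: Bool -> a -> a -> a
|     if' True  t _ = t
|     if' False _ e = e
|
|     demo :: Int
|     demo = if' True 1 (error "never evaluated")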
|
| I'm not totally sure that being able to annotate a specific
| function as call-by-value or call-by-thunk (or call-by-name) is
| especially useful but you can certainly do it. It's a flag stored
| in the function that call dispatch checks, ideally at compile
| time. Annotating a function as either "strict" or "macro" works
| well in kernel (the lisp).
|
| The other dimension is whether individual arguments should get
| specific calling conventions. For example, "if" would do well
| with the predicate given strict semantics while the consequent
| and alternative have something else. Also interesting, not sure that's
| worth the bother either. I believe some prior art called this
| nlambda.
|
| The other thought is you can annotate the call site instead of
| the function. That could give you a single function
| implementation used for strict or lazy evaluation, though that's
| probably syntactically confusing.
|
| That's a rambling way to say I think the current setup of a
| language having a default calling convention and some way to
| choose a different one at function granularity seems fine.
___________________________________________________________________
(page generated 2024-03-10 23:02 UTC)