[HN Gopher] Faster JavaScript Calls
       ___________________________________________________________________
        
       Faster JavaScript Calls
        
       Author : feross
       Score  : 330 points
       Date   : 2021-02-15 15:33 UTC (7 hours ago)
        
 (HTM) web link (v8.dev)
 (TXT) w3m dump (v8.dev)
        
       | k__ wrote:
        | I read that TypeScript code is sometimes faster because it tends
        | to favor monomorphic functions.
       | 
       | Can I assume that V8 moved that gain to JS?
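        | 
        | (For anyone unfamiliar with the term: a call site is
        | "monomorphic" when it always sees values of a single shape. A
        | minimal sketch, assuming V8's usual shape-based inline caches;
        | the function name is made up:)
        | 
        |     function getX(point) {
        |       return point.x;
        |     }
        | 
        |     // Monomorphic: every call passes objects with the same layout.
        |     getX({ x: 1, y: 2 });
        |     getX({ x: 3, y: 4 });
        | 
        |     // Polymorphic: mixing layouts forces the engine to handle
        |     // several hidden classes at this call site.
        |     getX({ x: 1, y: 2, z: 3 });
        |     getX({ y: 2, x: 1 }); // different property order, different shape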
        
         | exyi wrote:
         | Even in TypeScript optional parameters are very common, so I
         | don't really think it's a JS specific optimization.
        
       | mromanuk wrote:
       | > "turns out that, with a clever trick, we can remove this extra
       | frame, simplify the V8 codebase and get rid of almost the entire
       | overhead."
       | 
       | A lost opportunity to mention "with this one weird trick"
        
       | The_rationalist wrote:
        | I wonder if OpenJDK or other VMs could benefit from a similar
        | optimization.
        
         | Gaelan wrote:
         | I'd imagine not. Assuming the JVM's semantics aren't too
         | different from those of the Java language, you always know at
         | compile time how many arguments a function takes[1], so this
         | optimization wouldn't be relevant.
         | 
         | [1]: Java does have variadic functions, but we also know at
         | compile time which functions have such a signature, so it
         | probably just desugars to a normal array as the final argument.
        
           | avereveard wrote:
            | well, there's also invokedynamic, but call site selection is
            | in user space, so one can implement this optimization for
            | their own dynamic language without violating the JVM's own
            | static invocation rules.
        
         | kevingadd wrote:
         | The whole problem this solves doesn't exist in the JVM.
        
           | The_rationalist wrote:
            | The JVM isn't limited to Java; for example, GraalJS is also
            | an ECMAScript-compliant JS engine.
        
             | pjmlp wrote:
                | Graal uses a completely different implementation model.
        
               | The_rationalist wrote:
                | Never heard of Nashorn, JRuby, or Groovy then? They all
                | use the same C2 JIT as Java.
        
               | pjmlp wrote:
               | None of them relate to Graal, they just pretend to be
               | Java to the JVM.
        
               | The_rationalist wrote:
                | _The whole problem this solves doesn't exist in the
                | JVM._
                | 
                | _The JVM isn't limited to Java, for example: nashorn,
                | jruby, groovy_
                | 
                | Hence the optimization would make sense for the JVM for
                | those client languages: nashorn, jruby, groovy, etc
                | 
                | Quite trivial to understand, isn't it?
        
         | PaulBGD_ wrote:
         | Java knows at compile time exactly how many arguments are
         | passed to a method, so presumably there's no adapter layer.
        
           | The_rationalist wrote:
            | Wrong, Java varargs functions can take a spread array as the
            | vararg parameter, hence the argument count for this
            | not-uncommon case (e.g. JDBC APIs, format(), etc.) is only
            | known at runtime
        
             | masklinn wrote:
             | > Java Varargs functions can take spreaded arrays as Var
             | parameters
             | 
             | That's syntactic sugar for an array, in the most literal
             | way possible: when you define `func(Object... args)` the
             | actual bytecode is that of `func(Object[] args)`, _and_ you
             | can pass in an array to stand in for the entire varargs.
             | 
              | So no, they're right. The varargs just counts as 1 formal
              | parameter.
        
               | jdmichal wrote:
               | Yep! One can even define the `main` method using either:
                | 
                |     public static void main(String[] args)
                |     public static void main(String... args)
        
             | colejohnson66 wrote:
              | But in Java (and similar), you have to be explicit about
              | varargs. In JavaScript, _every_ function supports varargs.
              | This will run just fine:
              | 
              |     function print(arg) {
              |       console.log(arg);
              |     }
              |     print("a", "b", "c");
              | 
              | It'll only output "a", obviously. In fact, this will run
              | just fine too:
              | 
              |     print();
              | 
              | You'll get "undefined" in the console, but it'll work.
              | 
              | JavaScript's nature is that arguments are in the
              | "arguments" pseudo-array, and the parameters are just fancy
              | names for different values in "arguments". See:
              | 
              |     function count() {
              |       console.log(arguments.length);
              |     }
              |     count();
              |     count("a");
              |     count("a", "b");
              | 
              | In order, you'll get: 0, 1, and 2. Despite the fact that
              | "count" has no parameters in the definition.
              | 
              | In the first function ("print"), "arg" is just syntax sugar
              | for "arguments[0]".
              | 
              | What I'm getting at is: in C, Java, etc., the compiler
              | knows what is a varargs function and what isn't. In
              | JavaScript, the interpreter/compiler doesn't and has to
              | assume everything is.
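              | 
              | (Side note, a sketch using standard rest parameters: modern
              | JS lets you make the variadic intent explicit and get a
              | real array instead of the "arguments" pseudo-array:)
              | 
              |     function count(...args) {
              |       // args is a genuine Array, unlike arguments.
              |       console.log(args.length);
              |     }
              |     count();          // 0
              |     count("a");       // 1
              |     count("a", "b");  // 2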
        
               | masklinn wrote:
                | > In the first function ("print"), "arg" is just syntax
                | sugar for "arguments[0]".
                | 
                | So much so that they reflect one another: you can set
                | `arguments[0]` and retrieve that value from `arg`, and
                | the other way around:
                | 
                |     function foo(arg0, arg1) {
                |       arg0 = "foo";
                |       arguments[1] = "bar";
                |       console.log(Array.from(arguments), arg0, arg1);
                |     }
                |     foo(1, 2)
                | 
                | will log
                | 
                |     [ "foo", "bar" ] foo bar
                | 
                | Although in reality optimising engines treat `arguments`
                | as a keyword of sorts: they will not reify the object if
                | they don't _have_ to. However this meant they had to
                | deoptimise a fair bit before the "spread" arguments
                | arrived, as that was how you'd call your "super".
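                | 
                | (Worth noting, as a sketch of the same mechanism: the
                | aliasing above only exists in sloppy mode; under "use
                | strict" the link is severed, which is part of why strict
                | functions are easier to optimise:)
                | 
                |     "use strict";
                |     function strictFoo(arg0) {
                |       arguments[0] = "bar";
                |       return arg0; // still the original value
                |     }
                |     console.log(strictFoo(1)); // 1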
        
               | desert_boi wrote:
               | https://developer.mozilla.org/en-
               | US/docs/Web/JavaScript/Refe...
        
               | colejohnson66 wrote:
               | Hence, "pseudo-array"
        
               | The_rationalist wrote:
               | Great explanation and I think you are right for Java,
               | Kotlin, etc. But what about dynamic languages on the JVM
               | such as GraalJS, groovy, etc?
        
             | [deleted]
        
             | bcrosby95 wrote:
              | Note that there's no such thing as spread arrays in Java.
              | Instead, varargs are just syntactic sugar for arrays.
        
       | bachmeier wrote:
       | "JavaScript allows calling a function with a different number of
       | arguments than the expected number of parameters, i.e., one can
       | pass fewer or more arguments than the declared formal
       | parameters."
       | 
        | I'm relatively new to JavaScript. I've been bitten by both of
        | these recently. I wasted an hour because I added a third
        | parameter to a function but missed a call site elsewhere that
        | was still passing only two arguments. That makes it harder to
        | change your code. Now I read that it's not only a good way to
        | introduce ugly bugs, but that this wonderful feature also makes
        | your code run slower. Genius.
        
         | Scarbutt wrote:
          | That feature is one of the reasons for JavaScript's
          | flexibility. Many hate JS in the beginning, but then they
          | start to appreciate the advantages of these features and start
          | to like the language.
        
           | erichocean wrote:
           | We (ab)used the shit out of that kind of thing in SproutCore
           | way back in 2008.
           | 
           | In situations like single-page app building, it's very
           | useful.
        
         | jchw wrote:
         | It has its issues, but the best use case is as a mechanism for
         | optional parameters, especially when adding new parameters to
         | existing functions while maintaining API compatibility, which
         | is especially important for standard functions.
         | 
         | If you want to reduce the amount of mistakes made, in
         | TypeScript accidental under-application becomes a type error. I
         | feel the experience of writing TypeScript is a lot smoother
         | than JS alone.
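          | 
          | For example (a hypothetical formatPrice function, just to
          | illustrate the pattern), an optional parameter can be added
          | without breaking existing callers:
          | 
          |     // v1 API: formatPrice(12.5) -> "$12.50"
          |     // v2 adds an optional currency; old call sites keep working.
          |     function formatPrice(amount, currency) {
          |       const symbol = currency === "EUR" ? "€" : "$";
          |       return symbol + amount.toFixed(2);
          |     }
          | 
          |     formatPrice(12.5);        // "$12.50" (currency is undefined)
          |     formatPrice(12.5, "EUR"); // "€12.50"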
        
           | Stratoscope wrote:
            | When you add a new optional parameter to a function, there is
            | no guarantee that you won't break code that calls the
            | function.
            | 
            | Here is one notorious example:
            | 
            |     [ '1', '1', '1', '1' ].map( s => parseInt(s) )
            | 
            | This returns the expected result:
            | 
            |     [ 1, 1, 1, 1 ]
            | 
            | Then you think, "I don't need that little wrapper, I can just
            | pass parseInt as the callback directly":
            | 
            |     [ '1', '1', '1', '1' ].map( parseInt )
            | 
            | And this returns something that you may not expect:
            | 
            |     [ 1, NaN, 1, 1 ]
            | 
            | That happens because parseInt() takes an optional second
            | parameter with the number base, and map() passes a second
            | parameter with the array index. Oops!
            | 
            | A similar situation can arise when you add a new optional
            | parameter. You still have to check all the call sites, and
            | for a public API you won't be able to do that.
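            | 
            | (The usual defence, for the record: keep the little wrapper
            | but pin the radix yourself, so the index map() passes can't
            | leak into parseInt:)
            | 
            |     [ '1', '1', '1', '1' ].map( s => parseInt(s, 10) )
            |     // [ 1, 1, 1, 1 ], regardless of the element's position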
        
             | jchw wrote:
              | This is true, I mainly didn't bother mentioning it since it
              | is still a fairly obscure edge case (relative to just
              | getting started in JS anyway), _and_ because TypeScript
              | catches that too, most of the time.
        
           | harikb wrote:
            | Other languages have solved it using better methods.
            | 
            | 1. Declare another function with the same name and make the
            | older function a wrapper with a default for the new argument
            | 
            | 2. Support for an explicit default argument
            | 
            | Either of these would have prevented GP from having their
            | bug/problem
        
             | jakelazaroff wrote:
             | Presumably OP didn't want to create a new function, or they
             | would have. JavaScript supports default parameters, but
             | again, presumably OP wanted the caller to supply all the
             | arguments.
             | 
             | If you're trying to change the signature of an existing
             | function, I don't think we've yet figured out a better
             | safeguard than static typing.
        
               | colejohnson66 wrote:
               | It's possible to use "arguments.length" to "change" the
               | function signature and execute different code for
               | different parameter counts. It's a hack, but it works.
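                | 
                | A minimal sketch of that hack (the area function is made
                | up for illustration):
                | 
                |     function area() {
                |       if (arguments.length === 1) {
                |         return arguments[0] * arguments[0]; // square
                |       }
                |       return arguments[0] * arguments[1];   // rectangle
                |     }
                |     area(5);    // 25
                |     area(5, 3); // 15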
        
             | jchw wrote:
              | The first solution can be done in JavaScript and it is done
              | sometimes, but other times it may be undesirable,
              | especially for refactoring. The second solution was done;
              | ECMAScript (since the 2015 edition) supports explicit
              | default arguments.
              | 
              | When you add TypeScript into the mix, it can help a lot as
              | it is often able to detect under- and over-application as
              | type errors (though not _always_, due to intentional
              | looseness in the language; TypeScript is not fully sound).
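              | 
              | For reference, ES2015 default arguments look like this
              | (greet is a made-up example):
              | 
              |     function greet(name, greeting = "Hello") {
              |       return `${greeting}, ${name}!`;
              |     }
              |     greet("Ada");            // "Hello, Ada!"
              |     greet("Ada", "Welcome"); // "Welcome, Ada!"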
        
         | dividuum wrote:
         | You'll love the output of ['1', '7', '11'].map(parseInt)
        
           | jannes wrote:
           | Number() only takes a single parameter. The correct version
           | is:
           | 
           | ['1', '7', '11'].map(Number)
        
             | davidmurdoch wrote:
             | Careful with that: https://jakearchibald.com/2021/function-
             | callback-risks/
             | 
              | Using `Number` is probably safe because it'd break much of
              | the web if it were changed. Also note that it is not
              | equivalent to parseInt, as Number and parseInt apply
              | different parsing rules.
        
               | dividuum wrote:
               | Thanks for linking that post. I meant to include it in my
               | comment but wasn't able to find it again.
        
             | dahfizz wrote:
             | If only the language did anything at all to help you
             | realize the wrong version is wrong and that this is somehow
             | different.
        
           | pmontra wrote:
            | One would assume it to be equivalent to (written in the most
            | old style possible way so everybody can understand it)
            | 
            |     ['1', '7', '11'].map(function(n) { return parseInt(n); })
            | 
            | but no, map has got an extra argument, the index of the array
            | 
            |     function traceParseInt(n, i) {
            |       console.log(i);
            |       return parseInt(n, i);
            |     }
            |     ['1', '7', '11'].map(traceParseInt)
            |     0
            |     1
            |     2
            |     [ 1, NaN, 3 ]
            | 
            | Nice way to wait for an integration test to complete. Thanks.
        
             | erichocean wrote:
              | And parseInt() also takes an extra argument, the "base" of
              | the number--hence the weird results.
              | 
              | (Which also means that a static type checker _would not_
              | have caught this "bug".)
        
           | nobleach wrote:
            | I've always found this example so frustrating. JavaScript
            | isn't curried. Plenty of languages are not. `parseInt` is
            | overloaded/variadic. What we know is that if one does not
            | read the documentation for `parseInt`, they might get
            | unexpected results... but whose fault is that? I don't see
            | this as a "hidden gotcha". Yes, if you came from, say,
            | Pascal, and you were used to `StrToInt` taking one arg, and
            | you refused to check into it, you'd get odd results. But who
            | among us, when learning a new language, doesn't look up
            | things like "how to convert a string to an integer in
            | <insert name of new language>"? That search will most likely
            | land you at some docs that explain that second arg.
        
         | dragonwriter wrote:
         | > I'm relatively new to Javascript. I've been bitten by both of
         | these recently. I wasted an hour where I added a third argument
         | to a function but missed a place elsewhere that was sending
         | only two parameters. That makes it harder to change your code.
         | 
         | How? Sure, if you make a breaking change it requires you to
         | hunt down all callers, but JavaScript's dynamic nature does
         | that anyway.
         | 
          | It makes a lot of things easier, too, though it's not without
          | gotchas (the most frequently encountered, IME, being functions
          | where you normally pass only a subset of the arguments
          | combining with callback-taking functions where you normally
          | use only a subset of the passed arguments, the two interacting
          | in surprising ways; the sibling comment on the map/parseInt
          | combo is a good example).
        
         | runarberg wrote:
          | May I recommend adding a static type checker to your tooling.
          | You can annotate function types with TypeScript using JSDoc
          | comments[1], or Flow[2], and transpile the annotations away
          | with Babel. Then you would run the type checker as you would
          | run your linter, as part of your CI.
         | 
         | 1. https://www.typescriptlang.org/docs/handbook/intro-to-js-
         | ts....
         | 
         | 2. https://flow.org/
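          | 
          | A minimal sketch of the JSDoc route from [1], assuming you run
          | tsc with checkJs enabled (or add a // @ts-check comment) over
          | plain .js files:
          | 
          |     // @ts-check
          | 
          |     /**
          |      * @param {number} a
          |      * @param {number} b
          |      * @param {number} c
          |      */
          |     function sum3(a, b, c) {
          |       return a + b + c;
          |     }
          | 
          |     sum3(1, 2); // error: Expected 3 arguments, but got 2.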
        
           | bachmeier wrote:
           | > May I recommend adding a static type checker to your
           | tooling.
           | 
           | I fully understand that the JS community has a thing for
           | dependencies, and views Amazon as its representative use
            | case, but I was hoping to add a few small JavaScript
            | functions to a static HTML page. JS loses its appeal quickly
            | when you start adding things like that - if I can't open an
            | HTML file in a text editor and add the functions I need, I
            | might as well use a different language.
        
             | jakelazaroff wrote:
              | I'm not sure what other language you could use to add
              | client-side functionality to a static webpage?
              | 
              | But anyway, if you want to avoid a compilation step, the
              | link GP mentioned shows how TypeScript can check JS files
              | annotated with normal comments. You _can_ run it through a
              | compiler to remove the annotations if you want, but it's
              | absolutely not required.
        
             | madeofpalk wrote:
             | I'm not really sure what Amazon has to do with this, but
             | the problem you described - "I changed the API and forgot
             | to update a part of the application" - is _exactly_ what
             | type checking is for.
             | 
             | This is not a "JS community has a thing for..." bit, this
             | applies to any programming language.
        
           | AgentME wrote:
           | As someone who has used both Flow and Typescript for years,
           | I'd recommend Typescript over Flow. You can use Babel to
           | compile Typescript code just like you can with Flow and have
           | Typescript do its type-checking in CI. Typescript has better
           | editor integration, more type-definitions for libraries, and
           | breaks compatibility much less often than Flow.
        
         | the_local_host wrote:
          | I was a late convert to "full stack" myself. Just go directly
          | to TypeScript and run a linter on the code before you execute
          | it.
          | 
          | It seems like more than a programmer should _have_ to do, but
          | eventually you'll forget about the extra tools and have decent
          | pseudo-compile-time error detection.
        
           | colejohnson66 wrote:
           | And if your IDE supports it, you can get informed of the
           | errors before a compile (what Microsoft calls "Intellisense")
        
       | brundolf wrote:
       | I've brought this up before, but I wonder if a "JS constraint
       | spec" could be added to the JS standard that guarantees against
       | this sort of (already uncommon) dynamic behavior in certain
       | cases, so that code can be more aggressively optimized when
       | possible
       | 
       | If you're already using TypeScript to constrain your code, why
       | couldn't it generate interpreter/compiler hints?
       | 
       | These could take the form of a language-agnostic metadata file,
       | similar to source maps, that optionally ships alongside the JS
       | bundle
       | 
       | Edit: Several people have misunderstood, so I must not have
        | articulated this well. What I meant was _not_ a new/stricter
       | subset of JS that forbids certain dynamic behaviors across the
       | board. What I'm talking about is having TypeScript (or
       | otherwise), which knows things at compile-time about how specific
       | pieces of code are and are not called, manifest this information
       | in a way that V8 can digest and act on directly, for that
       | specific codebase. V8 already tries to guess this information and
       | uses it to decide which things to optimize and how, but it's
       | treating the JS bundle as a black box even though in many cases,
       | an earlier stage of the pipeline already had this information on-
       | hand and threw it away.
       | 
       | In addition to being more granular, this (like TypeScript itself)
       | would allow parts of a codebase to continue being fully dynamic
       | while other parts are well-constrained.
        
         | DDSDev wrote:
         | In general I think this is an interesting idea, but I feel like
         | this has a lot of overlap with asm.js and WebAssembly.
         | 
         | With this standard we would have standard dynamic JavaScript as
         | the world knows it today, a restricted subset ('constraint
         | spec') that is still designed for human
         | readability/writability, and then asm.js/WebAssembly, which
         | would not be written directly but instead would be an output of
         | code written in other languages. Programmers will want
         | interoperability between all of these paths, and that is a lot
         | of complexity for these engines to manage.
        
         | [deleted]
        
         | Zababa wrote:
         | This may be a bit of a reach, but there's AssemblyScript [1],
         | which is a compiler from a strict variant of TypeScript to
         | WebAssembly. Since technically V8 is also a WebAssembly engine,
         | porting some part of your codebase from JS/TS to AssemblyScript
         | could improve your performance.
         | 
         | [1]: https://www.assemblyscript.org/
        
           | astrange wrote:
           | Does V8 actually run compiler optimizations on WebAssembly? I
           | thought it came pre-optimized and it just executed it.
           | 
           | My impression of WASM was it's ok but it's not very
           | expressive for an assembly language - at least it has
           | integers though. gcc.godbolt.org doesn't give you any library
           | functions for it, even memcpy(), so I couldn't do a lot of
           | testing.
        
             | Zababa wrote:
             | I think it does. Here's an article introducing Liftoff [1]
             | with the relevant parts:
             | 
             | > With Liftoff and TurboFan, V8 now has two compilation
             | tiers for WebAssembly: Liftoff as the baseline compiler for
             | fast startup and TurboFan as optimizing compiler for
             | maximum performance.
             | 
             | > Immediately after Liftoff compilation of a module
             | finished, the WebAssembly engine starts background threads
             | to generate optimized code for the module.
             | 
             | [1]: https://v8.dev/blog/liftoff
        
         | matsemann wrote:
         | There's already "use strict" that does some of this to enable
         | optimizations and other stuff. Maybe it should be even stricter
         | or something similar could be used.
         | 
         | https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
        
           | brundolf wrote:
           | Right but I'm talking about very specific hints for one
           | codebase
           | 
           | "Function X will only ever take two arguments, they will be
           | of types Y and Z"
           | 
           | "This object will only ever have these properties; it will
           | never have properties created or deleted"
           | 
           | "This prototype will never be modified at runtime"
        
             | hajile wrote:
             | It still has to verify that those assertions are true in
             | practice. Any random person at the console can call a
             | function and pass whatever they like.
             | 
             | The only real way to make that work would be a new `<script
             | type="typed-module" />` variant that implemented a subset
             | of JS in a typed environment.
             | 
              | My guess is that this could happen in the future, but
              | everyone is watching TypeScript as a testing ground to
              | find exactly what works, what doesn't, and what works but
              | isn't desirable.
        
               | brundolf wrote:
                | There's already a mechanism for bailing out when
                | optimization assumptions are violated; the optimizations
                | could be applied more eagerly while retaining the ability
                | to degrade
        
         | phpnode wrote:
         | The V8 team experimented with "Strong Mode" which was
         | approximately this - a fast subset of JS [0]. The experiment
         | was ended [1]
         | 
         | [0]
         | https://docs.google.com/document/d/1Qk0qC4s_XNCLemj42FqfsRLp...
         | 
         | [1] https://groups.google.com/g/strengthen-
         | js/c/ojj3TDxbHpQ/m/5E...
        
           | brundolf wrote:
           | From a quick glance this looks like a further subsetting a la
           | strict mode, which isn't really what I meant
           | 
           | I clarified in another thread, but what I meant to convey is
           | a way of giving very specific hints about specific pieces of
           | _one_ codebase:
           | 
           | "Function X will only ever take two arguments, they will be
           | of types Y and Z"
           | 
           | "This object will only ever have these properties; it will
           | never have properties created or deleted"
           | 
           | "This prototype will never be modified at runtime"
           | 
           | This could guarantee ahead-of-time that certain optimizations
           | - packing an object as a struct instead of a hashmap, for
           | example - can be done, so that V8 doesn't have to spend time
           | speculating about whether or not to JIT something
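            | 
            | As a rough illustration of the last two hints (this relies on
            | V8's well-known hidden-class behaviour, not on anything in
            | the article): objects that always receive the same properties
            | in the same order can stay in a struct-like representation,
            | while ad-hoc additions and deletions push them toward a more
            | generic, dictionary-like one:
            | 
            |     function Point(x, y) {
            |       this.x = x;
            |       this.y = y;
            |     }
            |     const a = new Point(1, 2);
            |     const b = new Point(3, 4); // same hidden class as `a`
            | 
            |     const c = new Point(5, 6);
            |     c.z = 7;     // new hidden class
            |     delete c.x;  // may fall back to dictionary (hashmap) mode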
        
             | k__ wrote:
             | Isn't that what asm.js did?
        
               | brundolf wrote:
               | Maybe from a certain perspective. But it worked by
               | conforming the bundle to a particular subset of JS that
               | the authors knew to be optimized, instead of expressing
               | information to the interpreter outright. That seems like
               | a comparatively limited channel for communication.
        
               | jacobolus wrote:
               | From what I understand, in Safari at least they tried to
               | make many of the optimizations general. So if you use
               | asm.js-style type indications in your code even without
               | following the full spec, you might see some performance
               | benefit.
               | 
                | I have sometimes found a speedup when adding an
                | apparently superfluous |0.
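                | 
                | (For context, the |0 idiom is the asm.js-style way of
                | asserting "this is a 32-bit integer"; a minimal sketch:)
                | 
                |     function addInt(a, b) {
                |       a = a | 0; // coerce to int32
                |       b = b | 0;
                |       return (a + b) | 0;
                |     }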
        
               | pornel wrote:
               | The highly optimizable, explicit and precise code for the
               | interpreter is WASM.
               | 
               | JS VMs don't need more _hints_ from the code. They need
               | _guarantees_. They already analyze and profile JS, but as
               | long as JS is allowed to be dynamic (and it has to, it
               | won 't be JS without it), then they have to keep the
               | complexity and cost of all the extra runtime checks and
               | possible deoptimizations.
        
             | bastawhiz wrote:
             | The problem is less about the annotations (you can already
             | infer most of this from a single pass over the code) than
             | it is about the calls. If you have a reference to a
             | function, you need that data handy. Unless you're copying
             | it around with your function reference, you don't have that
             | data (and it's prohibitively expensive to try).
             | 
             | JS is heavy on function references (closures are one of the
             | most popular JS idioms), so it's not easy to know _at call
             | time_ how you can optimize your code.
        
             | nine_k wrote:
              | AFAICT the V8 JIT collects such information from code's
              | behavior, and uses it to generate efficient machine code.
             | 
             | If v8 could accept Typescript directly, it probably could
             | pass this kind of information to the JIT directly, too. But
             | the input language is ES6 or something like it, and it has
             | to follow the calling conventions of it. I'm afraid that
             | adorning ES6 with this information would be either brittle
             | or backwards-incompatible.
        
               | brundolf wrote:
               | > I'm afraid that adorning ES6 with this information
               | would be either brittle or backwards-incompatible
               | 
               | Like I said above, it could be shipped as separate
               | (optional) metadata files associated with source files,
               | the same way that source maps already work
        
               | Jasper_ wrote:
               | It can't trust that all the metadata is correct,
               | otherwise security issues can happen. And if you need to
               | do that, why not just gather / regenerate the same
               | information at compile time.
               | 
               | Also, what do you do about code loading? e.g. scripts
               | loaded from other files at runtime, or eval? Does it
               | throw an error if a third-party script uses a function
               | incorrectly? Or do we assume that metadata is local-use
               | only?
               | 
               | There are a lot of things in JavaScript and also the
               | "browser environment" (e.g. ads, third-party scripts)
               | that can limit the utility of traditional compiler
               | techniques.
        
               | brundolf wrote:
               | There are already stepped levels of optimization going on
               | in the JS engine; it goes to great lengths to reverse-
               | engineer whether something can probably be optimized or
               | not, and to handle all the edge cases where it needs to
               | bail out into a less-efficient mode when some assumption
               | is violated. All I'm talking about is a way to give it
               | extra hints about what to eagerly optimize and how. All
               | of that other robustness can stay in place.
        
               | Groxx wrote:
               | Pretty much all that buys you is a small reduction in an
               | already-small warm-up phase before the jit chooses a
               | late-stage optimization (possibly with an increased cost
               | for loading that phase, so even that small gain may be
               | reduced). Only for code that uses this. And performs
               | worse if it proves incorrect, as bail-out logic is
               | generally a bit costly.
               | 
               | Browsers have experimented with hints quite a lot. Nearly
               | all of them have been rolled back, since adding the
               | general strategies to the jit is _vastly_ more useful for
               | everyone, and perform roughly as good _or better_ since
               | they identify the optimization everywhere, rather than
               | only in annotated code.
               | 
               | ---
               | 
               | The only ones I'd call "successful" so far have been 1)
               | `"use strict";` which is not really an optimization hint,
               | as it opts you into additional errors, and 2) asm.js,
               | which has been rolled back as browsers now just recognize
               | the patterns.
               | 
               | asm.js did at least have a very impressive run for a
               | while... but it's potentially _even more_ at risk of
               | disappearing completely than many, since it 's rather
               | strictly a compiler target and not something humans
               | write. wasm may end up devouring it entirely, and it
               | could happen rather quickly since asm.js degrades to
               | "it's just javascript" and it continues working if you
               | drop all the special optimization logic (which is to its
               | credit - it has a sane exit strategy!)
        
               | nine_k wrote:
               | Missed that, thanks. That could indeed work. It could
               | even nicely dovetail into the Typescript (or Reason, or
               | Elm) compilation process.
        
       | inglor wrote:
       | Very exciting!
       | 
       | This is super useful for Node.js - I've had PRs with features I
       | didn't land because of this and I intend to investigate again
       | e.g. https://github.com/nodejs/node/pull/35877
        
       | khalilravanna wrote:
       | This sort of low level stuff is so interesting to me but I feel
       | out of my depth having not looked at this sort of stuff since
       | undergrad. In a hypothetical world where someone wanted to
       | contribute to v8, what are some good resources to get up to
       | speed?
        
         | hashseed wrote:
         | There is a lot of content on V8's dev blog, with different
         | depth, all pretty well written: https://v8.dev/blog
        
           | khalilravanna wrote:
           | For sure the blog is great! But I'm thinking something more
           | along the lines of "here's how you'd build something like
           | this" or "here's the stuff to read to get started on a
           | project like this".
        
       | BenoitEssiambre wrote:
        | I once proposed to also speed up and simplify async/await
        | function calls by removing the extra caching layer and state
        | machine underlying all such calls that come from using promises:
       | 
       | https://es.discourse.group/t/callback-based-simplified-async...
        
         | cxcxcxcx wrote:
         | Have you considered that generators are basically state
         | machines?
        
           | BenoitEssiambre wrote:
            | Yes, but I meant not having an additional state machine and
            | caching layer. Promise-based async/await also has the
            | generators.
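            | 
            | For readers following along, a minimal sketch (the run helper
            | is hypothetical, but it is roughly what transpilers emit) of
            | how async/await layers a promise-driving loop on top of a
            | generator:
            | 
            |     function run(genFn) {
            |       return function (...args) {
            |         const gen = genFn.apply(this, args);
            |         return new Promise((resolve, reject) => {
            |           function step(method, value) {
            |             let result;
            |             try {
            |               result = gen[method](value);
            |             } catch (err) {
            |               return reject(err);
            |             }
            |             if (result.done) return resolve(result.value);
            |             Promise.resolve(result.value).then(
            |               v => step("next", v),
            |               e => step("throw", e)
            |             );
            |           }
            |           step("next", undefined);
            |         });
            |       };
            |     }
            | 
            |     // Roughly: async function getJson(url) { ... }
            |     const getJson = run(function* (url) {
            |       const res = yield fetch(url);
            |       return yield res.json();
            |     });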
        
       | gameswithgo wrote:
        | It is amazing the amount of brain power put into making this
        | language fast-ish. Too bad that brain power wasn't used to do
        | other things.
        
       | adzm wrote:
       | Reminds me of cdecl vs stdcall with the argument ordering. Really
       | interesting low level details here for sure.
        
       | pierrebai wrote:
       | I find it amusing that they basically "rediscovered" the C
       | printf() trick, AKA variadic function trick.
        
       | londons_explore wrote:
       | By putting the number of arguments onto the stack, we end up
       | making the stack larger for every single function call. We also
       | require every access to any stack variable to do math to figure
        | out where on the stack its arguments are.
       | 
       | That seems an insane overhead for the common case... If this
       | really does make benchmarks faster overall, it suggests an even
       | better solution might exist out there that does not add overhead
       | for the common case...
       | 
       | Perhaps it might be worth keeping track of how many arguments a
       | function is typically called with, and recompiling code for each
       | case? Then all the checks for argument counts can be entirely
       | removed in the optimized code.
        
         | zerovox wrote:
         | > We also require every access to any stack variable to do math
         | to figure out where on the stack it's arguments are.
         | 
         | In the article, it appears to be the opposite. Previously this
         | math was required (`[ai] = 2 + parameter_count - i - 1`), but
         | by reversing the arguments in the stack it's now always a
         | constant offset, and they prevent indexing out of the passed
         | arguments in the frame by ensuring there's at least as many
         | arguments as formal parameters by stuffing the call with extra
         | `undefined`s:
         | 
         | > But what happens if we reverse the arguments? Now the offset
         | can be simply calculated as [ai] = 2 + i. We don't need to know
         | how many arguments are in the stack, but if we can guarantee
         | that we'll always have at least the parameter count of
         | arguments in the stack, then we can always use this scheme to
         | calculate the offset.
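          | 
          | A toy model of the offset arithmetic (plain JS standing in for
          | stack slots; this is not V8's actual code):
          | 
          |     // Old layout: finding formal parameter i requires the
          |     // actual count of values pushed for this call.
          |     function oldOffset(i, pushedCount) {
          |       return 2 + pushedCount - i - 1;
          |     }
          | 
          |     // New layout: arguments are reversed and padded with
          |     // `undefined` up to the formal parameter count, so the
          |     // offset no longer depends on how many were passed.
          |     function newOffset(i) {
          |       return 2 + i;
          |     }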
        
         | IainIreland wrote:
         | One of the things I've realized after working on SpiderMonkey
         | for a few years is that simple solutions that accept a little
         | bit of overhead are often faster than more complicated
         | approaches that try to eke out every last cycle.
         | 
         | Math is fast. Stack space is rarely a key constraint. On the
         | other hand, tracking typical argument counts for a function
         | also requires overhead, and compiling multiple copies of a
         | function is also not free. If a particular call site is hot
         | enough that call overhead matters, it's probably hot enough
         | that inlining is the correct answer.
         | 
         | As one additional data point: V8's new approach appears to be
         | very similar to how SpiderMonkey's been implemented all along.
         | Convergent evolution is generally a good sign.
        
           | hinkley wrote:
           | YouTube fed me a video I'd watched a couple years ago about a
           | log searching tool. Compressed data, decompressed into cpu
           | cache, and then scanned. Almost no indexing at all, just cpu
           | cache for speed.
           | 
           | We spend a lot of time arguing with the cpu. If we give it
           | something it's actually happy doing, it almost doesn't matter
           | how stupid that thing is, because it's stupid fast.
        
             | owaislone wrote:
             | Interesting. Would really appreciate if you could share the
             | video with us. Thanks
        
               | e12e wrote:
               | I couldn't find the talk hinted at, but this blog post
               | seems to touch on some of the same ideas:
               | https://www.humio.com/whats-new/blog/how-fast-can-you-
               | grep/
        
               | posnet wrote:
               | This is one of the main selling points of Blosc
               | https://blosc.org/pages/blosc-in-depth/
        
         | simias wrote:
         | >We also require every access to any stack variable to do math
          | to figure out where on the stack its arguments are.
         | 
         | I suspect that on modern architectures these trivial additions
         | are effectively free most of the time. In my experience the
         | hard part of optimizing for modern architectures is
         | compensating for memory latency (cache optimization,
         | prefetching) and pipeline flushes (branch prediction).
         | 
         | An inline, unconditional add is as cheap as it gets these days.
         | A few additional bytes of stack that's (almost) always going to
         | be super hot in cache is not really significant.
         | 
          | I suppose it could pose an issue for super deep function call
          | trees where it would cause the stack to grow more than
         | necessary, but given the usual memory overhead of JS I doubt
         | that it's going to be very significant even then.
        
         | CyberRabbi wrote:
         | Stack space is virtually free and it's just a few instructions
         | to put the value on the stack. Regardless, this increased
         | overhead is very small compared to the total overhead of making
         | a function call in the first place. The place to eke out every
         | last cycle is the tight loop, which should not include function
         | calls anyway.
        
           | hinkley wrote:
           | > Stack space is virtually free
           | 
           | On machines with 64 bit addressing. There are lots of things
           | that don't work so well on 32 bit or 16 bit OSes that we have
           | to unlearn.
           | 
           | There have been a couple times where I thought about trying
           | to catalog all of the things I think I know about a piece of
           | hardware, software or even a single library, so that I can
           | challenge my own assumptions when a major release comes out.
           | Never have gotten around to it.
        
             | CyberRabbi wrote:
             | > On machines with 64 bit addressing. There are lots of
             | things that don't work so well on 32 bit or 16 bit OSes
             | that we have to unlearn.
             | 
             | Stack space is virtually free on 32 bit systems as well. V8
             | is not designed for or portable to 16-bit or 8-bit systems,
             | so while your point is interesting, it's irrelevant here.
        
               | hinkley wrote:
               | > Stack space is virtually free on 32 bit systems as
               | well.
               | 
               | How do you figure that, given we've had quite a bit of
               | time wrestling with 2G memory limitations and how those
               | interact badly with multithreading?
        
               | CyberRabbi wrote:
               | For general applications and as a matter of good
               | practice, stack depth practically never gets deep enough
               | to make a dent in 2GB.
               | 
               | If your stack is reaching a significant proportion of 2GB
               | you are seriously doing something wrong and it's likely
               | other parts of your application will start breaking as
               | well.
        
         | thechao wrote:
         | Is it better, or worse, than dumping the adapter object into
         | the stack? Clearly, their numbers show it's better. Also,
         | runtime variable calling conventions are a nightmare. Telling
         | the callee the calling convention (in some sort of mixed mode)
         | still requires a flag ... like an integer? So, it seems they
         | have to pay the cost of the integer _no matter what_.
         | 
         | Maybe I'm wrong -- could you elaborate how you'd do it?
        
           | josefx wrote:
           | > Clearly, their numbers show it's better.
           | 
            | Their numbers seem cherry-picked, or is TurboFan actually
            | unable to inline code?
        
       ___________________________________________________________________
       (page generated 2021-02-15 23:00 UTC)