[HN Gopher] LLRT: A low-latency JavaScript runtime from AWS
___________________________________________________________________
LLRT: A low-latency JavaScript runtime from AWS
Author : ascorbic
Score : 104 points
Date : 2024-02-08 16:46 UTC (6 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| wmf wrote:
| I wonder how this compares to QuickJS.
| readthenotes1 wrote:
| I wonder how popular "serverless" would be if it were called
| "depending on the kindness of strangers"
| politelemon wrote:
| Since we're paying for it, I'd call it more of "depending on
| the competence of service providers." The AWS Lambda experience
| has been great for us in that sense, so I'd look forward to
| more general availability of this for Lambdas.
| cmsparks wrote:
| I think this project runs on the QuickJS engine
| paulddraper wrote:
| It's built on QuickJS.
| syrusakbary wrote:
| This is awesome. I'm incredibly happy to see more alternatives
| in the JavaScript runtime space!
|
| In a nutshell: LLRT aims to have incredibly fast startup times
| compared to Node, Bun or Deno (which is critical in Lambda
| environments).
|
| LLRT is built using Rust, Tokio and QuickJS under the hood.
| Here's their compatibility with Node.js APIs [1]
|
| We (at Wasmer) have been working on a very similar approach:
| WinterJS, which uses SpiderMonkey instead (but is also built on
| top of Rust and Tokio) [2]. The HN announcement was just three
| months ago [3]
|
| Really excited to see how the ecosystem is evolving!
|
| [1] https://github.com/awslabs/llrt?tab=readme-ov-
| file#compatibi...
|
| [2] https://github.com/wasmerio/winterjs
|
| [3] https://news.ycombinator.com/item?id=38047872
| mdaniel wrote:
| curl https://github.com/wasmerio/winterjs | grep -i license #
| :-(
| syrusakbary wrote:
| Oops, will add the license right away!
|
| Edit: MIT license added! We had it in the Cargo.toml and
| wasmer.toml, but forgot to add the LICENSE file to the repo:
| https://github.com/wasmerio/winterjs/blob/main/LICENSE
| k__ wrote:
| Faster than Bun?
|
| Not bad!
| intelVISA wrote:
| Easier than you'd think!
| geysersam wrote:
| Are there any benchmarks comparing startup time of LLRT, Deno,
| and Bun?
|
| Edit: particularly curious about a comparison with Deno Deploy.
| ushakov wrote:
| Deno's and Bun's Isolates should spawn faster than a QuickJS
| process; they also use snapshots for further optimization.
| QuickJS is going to be faster at a cold start from zero,
| though.
| mmastrac wrote:
| Interesting project: they are trading lower raw throughput in
| compute-heavy applications for lower latency at startup and
| (likely) better behaviour w.r.t. memory and "random GC jank".
|
| If you are writing an I/O bound JS service that doesn't do a lot
| of thinking, but does a lot of "moving stuff around", this will
| probably be a win.
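|
| A sketch of that kind of handler (illustrative only; the
| upstream URL is made up). It spends its time awaiting I/O, so
| JIT throughput barely matters:
|
|     // No heavy compute: just forward the event and relay the
|     // response, so startup latency dominates total latency.
|     export const handler = async (event) => {
|       const res = await fetch("https://example.com/upstream", {
|         method: "POST",
|         body: JSON.stringify(event),
|       });
|       return { statusCode: res.status, body: await res.text() };
|     };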
| sfink wrote:
| Yes, though don't think of "traditional" JS engines as being
| all the way on the throughput side. One of the main
| distinguishing characteristics of JS engines, at least those
| that are used in web browsers, is that they have to care a lot
| about startup time and have to juggle a lot of tradeoffs that
| other managed runtimes typically don't need to bother with.
|
| A surprisingly high percentage of downloaded JS code is never
| run at all. (I can speculate: event handlers, unshaken trees,
| shaken trees with paths that happen to not be taken, ...) Of
| the rest, the vast majority is run once or a handful of times.
| That means startup time is a high percentage of total time, and
| therefore engines will JIT very little of that code, and the
| little that is JITted will use a baseline JIT that pretty much
| just translates opcodes to unoptimized machine code. The fancy
| optimization engineering that requires much cleverness and
| effort is actually applied to a very small percentage of
| incoming code. It will need to run something like 1000x before
| it is deemed worth bothering with. Latency is more important
| than throughput for almost all of the JS that the engine sees.
| (Note that throughput-sensitivity can easily dominate most of
| the _runtime_, but that varies widely depending on workload.)
|
| So browser-embedded engines already work hard at optimizing
| latency. We're not talking Python or Ruby here.
|
| That still leaves a lot of potential for improvement by
| stripping the engine down and prioritizing latency above all
| else, though. There's definitely a niche for things like LLRT.
| And WinterJS, too.
|
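| A toy model of that tier-up heuristic (thresholds and names are
| invented, purely illustrative - real engines are far more
| subtle):
|
|     // Interpret cold code, baseline-JIT warm code, and spend
|     // the expensive optimizing JIT only on genuinely hot code.
|     function makeRunner(interpret, baselineJit, optimizingJit) {
|       let count = 0;
|       return (fn) => {
|         count += 1;
|         if (count > 1000) return optimizingJit(fn); // hot
|         if (count > 10) return baselineJit(fn);     // warm
|         return interpret(fn);                       // cold
|       };
|     }
|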
| (Source: I work on SpiderMonkey)
| ushakov wrote:
| Hermes is a big one as well: low startup latency, low memory
| usage, pre-compiled, and the newest version comes with static
| optimization
|
| https://hermesengine.dev/
| hamoodhabibi wrote:
| > offers up to over 10x faster startup and up to 2x overall lower
| cost
|
| I've been burned by Amazon with these types of claims. Can we
| get confirmation that this is backed up by actual real-world
| testing from a non-Amazon-affiliated source?
| nextworddev wrote:
| It's because they never get sued for saying stuff like this
| spankalee wrote:
| I would love to see similarly low startup times with a real JS-
| to-WASM compiler (rather than today's approach of compiling a JS
| runtime to WASM).
|
| With WASM GC landed, and WASI preview 3 taking on async / event
| loops, it seems like this should be possible soon, except for the
| very large task of actually writing the compiler.
| whizzter wrote:
| There's a VERY good reason why JS runtimes use JITs: it's more
| or less impossible to properly and fully type a JS program
| without leaving runtime holes.
|
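| A tiny illustration of such a hole, using nothing but standard
| JS semantics:
|
|     // The same call site can mean numeric addition, string
|     // concatenation, or a user-defined coercion; an AOT
|     // compiler can't pin one type down without a runtime path.
|     function add(a, b) { return a + b; }
|     add(1, 2);                    // 3    (number + number)
|     add("1", 2);                  // "12" (string concatenation)
|     add(1, { valueOf: () => 2 }); // 3    (runtime coercion hook)
|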
| If you restrict yourself to a subset, or are willing to risk
| incompatibilities, then it will work.
|
| There are some TypeScript-to-Wasm compilers that should work
| reasonably well (if you're breaking TS rules, you've got
| yourself to blame, a bit more than with JS).
|
| Ref: My thesis was on JS AOT, and though I had the idea of
| making it commercial, I didn't pursue it in the end.
| ushakov wrote:
| AssemblyScript works fine too! WASM and JS aren't a good
| combination atm.
| jitl wrote:
| This seems ideal for places where you use Lambda as glue, like
| for routing requests or authorization policy decisions. But for
| applications doing a lot of thinking, keep in mind V8 jitless
| is about 3x faster than QuickJS, and with JIT it's about 30x
| faster than QuickJS. I'm curious to see where the break-even
| point is for various workloads comparing the two, and also the
| numbers versus Bun, which starts quicker than Node but has
| comparable top-end JIT performance.
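|
| Back-of-the-envelope on that break-even (all numbers are
| illustrative, taken from figures mentioned elsewhere in this
| thread):
|
|     // Node cold start ~1500 ms vs. LLRT ~100 ms, with LLRT
|     // compute ~30x slower than V8 with JIT:
|     //   1500 + c = 100 + 30 * c  =>  c = 1400 / 29
|     const breakEven = (1500 - 100) / (30 - 1);
|     console.log(breakEven); // ~48 ms of V8 compute per call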
| hamoodhabibi wrote:
| One good thing Lambda provides is a contractual guarantee: if
| you want 100,000 instances spun up, you can put it in writing
| and hold AWS to it.
|
| But then you realize you can achieve the same thing at a 98%
| discount with a traditional "monolith" setup with a load
| balancer soaking up all the requests.
|
| Serverless is a major failure
| intelVISA wrote:
| The Cloud(tm) experience
| SahAssar wrote:
| Serverless has two things that I really like about it:
|
| 1. Scale to zero, which allows me to do things like
| automatically deploy a full environment for each branch in a
| project, with a full infra, frontend, backend, database, etc.
|
| 2. Good integration with IaC tools, so that I can define my
| infra and my runners in a single language/tool, be it
| Terraform/CDK or something else (see the sketch below). Most
| "monolithic" setups split configuring the infra and what runs
| in it into two tools/steps (please let me know of ones that
| don't!).
|
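| As a sketch of point 2, with the AWS CDK in JavaScript the
| function's infra and its code live in one tool (names and paths
| here are placeholders):
|
|     const cdk = require("aws-cdk-lib");
|     const lambda = require("aws-cdk-lib/aws-lambda");
|
|     // One declaration covers both the infrastructure and the
|     // code that runs in it.
|     const app = new cdk.App();
|     const stack = new cdk.Stack(app, "BranchPreviewStack");
|     new lambda.Function(stack, "ApiFn", {
|       runtime: lambda.Runtime.NODEJS_18_X,
|       handler: "index.handler",
|       code: lambda.Code.fromAsset("lambda"),
|     });
|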
| But if I actually run an application for a long time with
| somewhat consistent load, there are always cheaper, more
| performant, and more flexible solutions than a serverless setup.
| SaltyBackendGuy wrote:
| We've found it really nice for worker queues for specific
| workloads that fire infrequently-ish.
| mlhpdx wrote:
| That's an "it depends" situation. If you're using direct service
| integration instead of Lambda, it can be cheaper. And if
| you use VMs only as the front-end to push batches of
| requests to serverless compute you get the best of both
| worlds and the cheapest by a long shot. Not everything can
| fit in that model, but the things that do are magically
| inexpensive.
|
| I'm actually working on a product to make that architecture
| more approachable (and cheaper yet). I'd be happy to hear
| from folks running network services on VMs and wishing
| there was a better way.
| SahAssar wrote:
| Do you mean that you can get actual scale-to-zero using
| that approach? I think the cost of the VM would not scale
| down like that, right?
| ushakov wrote:
| Faster at what? Runtime performance or startup performance? I
| bet QuickJS would be the fastest to start up from zero (no JIT
| = no warm-up).
| hendry wrote:
| Wonder how it compares to https://aws-lite.org/
| rdtsc wrote:
| https://github.com/awslabs/llrt/blob/e2cc8237f4bd04e161d46b3...
|
| This wraps QuickJS, yet there is not a single mention of it on
| their front page. At least a hello or shout-out to Fabrice
| Bellard would have been nice.
|
| But I guess it's AWS and they do this with other projects, so not
| too surprising.
| ignoramous wrote:
| > _At least a hello or shout-out to Fabrice Bellard would have
| been nice._
|
| Mentioned in the third-party file:
| https://github.com/awslabs/llrt/blob/630ec79/THIRD_PARTY_LIC...
| / https://archive.is/0NUAa
|
| > _AWS and they do this with other projects_
|
| I remember similar concerns being raised vis-a-vis Firecracker
| (which is a hard fork of Google's _crosvm_:
| https://news.ycombinator.com/item?id=22360232), but I don't
| recall it being a pattern.
| wswope wrote:
| > AWS and they do this with other projects
|
| Probably a nod to the ElasticSearch licensing debacle.
| gfodor wrote:
| Gross.
| drschwabe wrote:
| Amazon's repo here is open source, at least, so perhaps this
| could be remedied with an edit to their README and a PR - go
| for it!
|
| Agree that there should be credit where credit is due - I have
| been using QuickJS for some time and it's awesome. For the cost
| of about 1 MB you can get all of modern JS in your C/C++
| binary.
| darknavi wrote:
| We use QuickJS in Minecraft for our scripting engine. It's nice
| that he [Bellard] has gotten back into it recently and has been
| adding ES2023 support.
|
| I also had to double-check, but we do indeed list it on our
| website, albeit in a sea of others.
|
| https://www.minecraft.net/en-us/attribution
| maxloh wrote:
| They just added the reference to QuickJS [0].
|
| > LLRT is built in Rust, utilizing QuickJS as JavaScript
| engine, ensuring efficient memory usage and swift startup.
|
| [0]:
| https://github.com/awslabs/llrt/commit/054aefc4d8486f738ed3a...
| syrusakbary wrote:
| It seems they just added the mention of QuickJS, I assume
| based on your feedback:
|
| https://github.com/awslabs/llrt/commit/054aefc4d8486f738ed3a...
|
| Props to them on the quick fix!
| intelVISA wrote:
| Not a good look tbh - you could glue this together in an
| evening, yet they're ...proud?
| hamoodhabibi wrote:
| My only wish for the AWS team is that they measure success
| based on how complete a product is. There are still 4-year-old
| outstanding critical GitHub issues around Cognito. There are a
| lot of things that are advertised but aren't delivered.
|
| In fact, our trust in AWS and the team has plummeted so much
| that we ultimately ditched serverless and went straight to a
| VPS.
|
| The days of zero-interest-rate series B startups buying up
| reserved EC2 instances by the millions are gone. As capital
| gets expensive, we see more reluctance to move to the cloud.
|
| AWS's claim of "10x faster startup @ half the cost" is
| confirmation that the serverless movement is DOA
| kentonv wrote:
| The benchmark shows Node.js taking 1.5s(!) to start vs. LLRT
| taking under 100ms.
|
| But the benchmark appears to be a very small program that imports
| the AWS SDK.
|
| Later on in the readme, it's mentioned that LLRT embeds the AWS
| SDK into the binary.
|
| So... How much of what the benchmark is showing here is just the
| fact that the embedded SDK loads much faster than application
| code?
|
| I mean, it's certainly a valid optimization for Lambdas that are
| primarily making AWS API calls. But it'd also be interesting to
| know how much faster LLRT is at loading arbitrary code. As well
| as interesting to know if the embedding mechanism could be made
| directly available to applications to make their own code load
| faster.
|
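| For reference, the benchmark program described above is roughly
| this shape (my reconstruction, not the actual benchmark source;
| the table name is made up):
|
|     // Importing the SDK dominates the cold start, which is why
|     // bundling it into the runtime binary helps so much.
|     import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";
|
|     const client = new DynamoDBClient({});
|     export const handler = async () => {
|       const res = await client.send(
|         new ScanCommand({ TableName: "my-table" }));
|       return { statusCode: 200, body: JSON.stringify(res.Count ?? 0) };
|     };
|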
| (Disclosure: I work on Cloudflare Workers, a competitor. But,
| honestly interested in this.)
| ushakov wrote:
| Even faster would be just having FFI, so you could skip the
| Rust <> C <> JS conversion every time you call a function.
|
| This is what Hermes is currently doing.
| kentonv wrote:
| True but probably very tedious to achieve full compatibility
| with the existing JS API that way, which seems like it was a
| big goal here.
| ushakov wrote:
| Not really. I have built a fetch-like API for Hermes on top of
| libcurl using generated FFI bindings (they have a bindgen tool
| in the repo). All I had to do was write a JS wrapper that calls
| the underlying C lib and returns a Promise.
|
| It feels wild; people have been calling UI libs from JS as
| well: https://twitter.com/tmikov/status/1720103356738474060
|
| I can imagine someone making a libuv binding and then just
| recreating Node.js with the FFI.
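|
| Roughly the shape of that fetch-like wrapper (the __curl_get
| binding name is invented for illustration; the real one comes
| from the generated FFI bindings):
|
|     // Adapt a callback-style native libcurl binding, exposed
|     // through FFI, into a Promise-returning fetch-like API.
|     function fetch(url) {
|       return new Promise((resolve, reject) => {
|         __curl_get(url, (err, status, body) => {
|           if (err) return reject(err);
|           resolve({ status, text: () => Promise.resolve(body) });
|         });
|       });
|     }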
| rafetefe wrote:
| Wish someone would pay me to build stuff like this; such a cool
| project to work on
| ushakov wrote:
| Fastly, Cloudflare, Vercel, Fermyon, Wasmer, Azion, Deno,
| Netlify, SecondState, and Cosmonic are paying people to build
| stuff like this, just to name a few.
| jerrysievert wrote:
| pljs (https://github.com/plv8/pljs) is a Postgres language
| plugin that also uses QuickJS. Given how much faster its
| startup is compared to V8, and that most Postgres functions
| are very small and do very little compared to a Node.js
| program, it is quite good.
|
| I can definitely see AWS's Lambda operations gaining quite a
| bit from this.
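|
| A sketch of what such a function looks like, assuming pljs
| follows plv8's conventions (I haven't verified the exact
| syntax):
|
|     -- The body between the $$ markers is plain JavaScript,
|     -- executed by QuickJS inside Postgres.
|     CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
|       return i + 1;
|     $$ LANGUAGE pljs;
|
|     SELECT add_one(41); -- 42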
___________________________________________________________________
(page generated 2024-02-08 23:00 UTC)