[HN Gopher] RPython-Based VMs: Fast Enough in Fast Enough Time
___________________________________________________________________
RPython-Based VMs: Fast Enough in Fast Enough Time
Author : pcr910303
Score : 74 points
Date : 2021-04-26 15:11 UTC (7 hours ago)
(HTM) web link (tratt.net)
(TXT) w3m dump (tratt.net)
| augusto-moura wrote:
| Today a big contender for a VM to rule them all is WebAssembly.
| Even though the primary motivation at the start was the web, the
| design of the intermediate language (.wasm) is splendid: it hits
| a sweet spot between a LISP-like language, static types, and
| low-level coding.
| chrisseaton wrote:
| Do you think WebAssembly is well-suited to dynamic languages
| with a lot of dynamic dispatch and dynamically generated code?
| samth wrote:
| It is not yet, and while some people working on Wasm think
| they will eventually accomplish this, there's not that much
| evidence that it will happen.
| augusto-moura wrote:
| Of course! In the same way as Jython, wasm can still be
| JIT-compiled and get close to native performance.
| chrisseaton wrote:
| > can still be JIT-compiled and get close to native performance
|
| Has anyone achieved this yet? I haven't seen any results
| myself but I haven't been looking carefully.
|
| Also wouldn't you have to build that JIT yourself, plus all the
| primitives you need to implement a dynamic language, like the
| caches and deoptimisation? Seems like it's not really providing
| the needed tools and you'd have to build a lot of basics on top
| of it.
| augusto-moura wrote:
| Wasm is just the spec; it's the browsers and standalone wasm VM
| implementations that provide the JIT. And yes, JITs are already
| being implemented by some vendors, though nothing is final yet.
|
| Wasm is not as mature as the other big VMs _yet_, but it is
| still a big contender.
| rgrmrts wrote:
| wasmtime[0] is a standalone WebAssembly runtime that does JIT
| compilation. Anecdotally it's quite performant, and I think
| some benchmarks may be available.
|
| [0] https://github.com/bytecodealliance/wasmtime
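|
| A minimal embedding sketch, assuming the Rust `wasmtime` and
| `anyhow` crates: the module source is given in WebAssembly's
| s-expression text format, which wasmtime compiles to native
| code with its default Cranelift backend before the call.
|
|     use wasmtime::{Engine, Instance, Module, Store};
|
|     fn main() -> anyhow::Result<()> {
|         // Compile a tiny module from the text format; the
|         // default Cranelift backend emits native code here.
|         let engine = Engine::default();
|         let module = Module::new(&engine, r#"
|             (module
|               (func (export "add") (param i32 i32) (result i32)
|                 local.get 0
|                 local.get 1
|                 i32.add))
|         "#)?;
|
|         // Instantiate and call the exported, statically typed
|         // function from the host.
|         let mut store = Store::new(&engine, ());
|         let instance = Instance::new(&mut store, &module, &[])?;
|         let add = instance
|             .get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
|         println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
|         Ok(())
|     }
|
| The (param ...) / (result ...) clauses are what the comment
| upthread means by the text format being LISP-like yet
| statically typed.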
| chrisseaton wrote:
| I mean WASM as a JIT _target_, not a JIT _source_ - unless
| that's what you mean and Wasmtime does this?
| rgrmrts wrote:
| Ah my bad, I think I misunderstood your comment. I was
| reaffirming this:
|
| > wasm can still be JIT-compiled and get close to native
| performance
|
| I'm also not super familiar with this subject, so I
| unfortunately don't understand the distinction you're
| making between source and target.
|
| > Seems like it's not really providing the needed tools
| and you'd have to build a lot of basics on top of it
|
| My understanding is that this is accurate, and folks like
| the Bytecode Alliance[0] are working on building that
| ecosystem of tools on top of WASM, and I see wasmtime as
| being one of those tools.
|
| [0] https://bytecodealliance.org/
| jxy wrote:
| If you want to look at the JIT overhead, for the stone and
| Richards benchmarks, multiply the first number by ten and
| subtract the second number. You can do it in your head: CPython
| is the fastest if you're only going to launch the interpreter
| and run the program once.
|
| The sorting benchmark is more interesting. One could estimate
| where the breakeven point is, assuming the sort scales as
| exactly n*log(n). Though the timings could be considerably
| different once everything fits in L2 or L1 cache.
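|
| A sketch of that breakeven estimate, with assumed symbols rather
| than figures from the article: let w be the JIT VM's fixed
| warmup/compilation overhead, and c_i and c_j the per-comparison
| costs of the interpreter and of JIT-compiled code. Then
|
|     t_{interp}(n) = c_i \, n \log n
|     t_{jit}(n)    = w + c_j \, n \log n
|
|     c_i \, n \log n = w + c_j \, n \log n
|     \;\Longrightarrow\; n \log n = \frac{w}{c_i - c_j}
|
| so the JIT VM pays off once n*log(n) exceeds w / (c_i - c_j),
| cache effects aside.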
| gandalfgeek wrote:
| Graal and Truffle also let one do something like this: write an
| interpreter for a language and get an optimizing JIT compiler for
| "free".
| ciupicri wrote:
| > February 8 2012, last updated February 7 2013
___________________________________________________________________
(page generated 2021-04-26 23:01 UTC)