[HN Gopher] The V8 Sandbox
       ___________________________________________________________________
        
       The V8 Sandbox
        
       Author : todsacerdoti
       Score  : 201 points
       Date   : 2024-04-04 14:24 UTC (8 hours ago)
        
 (HTM) web link (v8.dev)
 (TXT) w3m dump (v8.dev)
        
       | pjmlp wrote:
        | The most relevant part for the TL;DR folks:
       | 
       | > The V8 Sandbox has already been enabled by default on 64-bit
       | (specifically x64 and arm64) versions of Chrome on Android,
       | ChromeOS, Linux, macOS, and Windows for roughly the last two
       | years.
        
         | rzzzt wrote:
         | Electron is also using V8's sandboxed pointers:
         | https://www.electronjs.org/blog/v8-memory-cage
        
       | vlovich123 wrote:
        | It's interesting that it spends a lot of time talking about how
        | memory safe languages don't help V8 because of the JIT (true) &
        | then talks about a hardening technique that does get helped by
        | Rust. Why not just be honest and say that the cost of switching
        | to a new language is too expensive and error prone to be worth
        | it, rather than play these games? Similarly the discussion about
        | memory tagging - I know that it would harden the security of
        | things like Cloudflare workers. And if most of the exploits are
        | because it's in-process, why not work on isolating it behind its
        | own process (there must be ways to do this securely while not
        | giving up too much performance)?
        
         | olliej wrote:
         | ...because the overwhelming majority of memory safety bugs in
         | js engines (v8, JSC, and spider monkey) are in operations that
         | would be in unsafe blocks in rust as well?
         | 
         | In multiple decades I can think of a handful of engine bugs
         | that would have been prevented by rust - and those were largely
         | preventable (and now are) in c++ as well.
         | 
         | It is possible for rust to be a safer language than c++ and to
         | also not meaningfully change the security profile of the
         | language.
         | 
          | It's not just the jit: the interpreter and GCs are also
          | largely - necessarily - no more protected by rust than c++.
        
           | vlovich123 wrote:
           | Did you read the article? It has nothing to do with the JIT.
            | It uses the JIT as a smokescreen to talk about sandbox
            | hardening, & the issues within the sandbox are definitely
            | not in "unsafe" code & would be 100% mitigated by Rust. Take
            | a look at the relevant quotes I extracted in another comment
            | to draw your
           | attention to what the article is actually talking about
           | (sandbox hardening).
        
         | tux3 wrote:
         | I don't think it's fair to call them dishonest here. It's
         | pretty clear they've heard about memory safe languages, they've
          | thought about it, they've considered in detail the pros and
         | cons.
         | 
         | >why not work on isolating it behind it's own process (there
         | must be ways to do this securely while not giving up too much
         | performance)?
         | 
         | Well, you make it sound like the easy answer. A good exercise
         | would be to try implementing what you're proposing in a
         | comment. Not necessarily going all the way, but enough to know
         | why it might not be as straightforward as you think.
         | 
         | The people working on V8 are not completely clueless, the
         | concept of moving things out of process or using a memory safe
         | language is not going to be a novel idea that they'll just
         | start working on now that someone clever thought of it.
        
           | vlovich123 wrote:
           | The dishonest piece is that the first part talks about why
           | Rust doesn't help with the JIT (true) but then really talks
           | about the V8 sandbox & hardening techniques they're applying
           | to it where Rust would help 100%.
           | 
           | > However, assuming these numbers are simply stored as
           | integers somewhere in the JSObject, an attacker could corrupt
           | one of them to break this invariant. Subsequently, the access
           | into the (out-of-sandbox) std::vector would go out of bounds.
           | Adding an explicit bounds check, for example with an
           | SBXCHECK, would fix this.
           | 
           | Or use Rust
           | 
           | > Encouragingly, nearly all "sandbox violations" discovered
           | so far are like this: trivial (1st order) memory corruption
           | bugs such as use-after-frees or out-of-bounds accesses due to
           | lack of a bounds check
           | 
           | Or use Rust
           | 
           | > Contrary to the 2nd order vulnerabilities typically found
           | in V8, these sandbox bugs could actually be prevented or
           | mitigated by the approaches discussed earlier. In fact, the
           | particular bug above would already be mitigated today due to
           | Chrome's libc++ hardening
           | 
           | Or use Rust
           | 
            | I'm not saying rewrite the entire thing in Rust - that's too
            | expensive & would introduce new bugs in the JIT for
            | questionable benefit. But at least mention that & also
            | discuss the technical reasons why the sandbox mechanism
            | isn't written in Rust & what it would take to address those.
           | 
           | Look, I'm not saying the V8 team is making the wrong
           | decisions. My questions are an indication of the shallowness
           | of the blog write-up - why not explain some obvious questions
           | that come up for someone who reads it?
        
             | mandarax8 wrote:
             | How would rust help you when you're executing jitted code
             | (ie assembly)? The fizzbuzz code would run in rust but the
             | event handler would still be unsafe jitted code.
        
               | rcxdude wrote:
               | It doesn't: but it helps with the stuff around it. The
               | article talks about 3 locations for bugs
               | 
               | 1) jitted javascript code with subtle bugs due to logic
               | errors in the compiler (Rust's memory safety can't really
               | help here)
               | 
               | 2) Bugs in surrounding utility code and the interpreter
               | (Rust can help, but running without a JIT entirely is too
               | slow. Still, it's part of the attack surface either way)
               | 
               | 3) Bugs in the sandbox implementation which helps
               | mitigate bugs of the first kind (Rust can help)
               | 
                | AFAIK the main objection raised here is that the article
                | dismisses moving to a memory safe language because it
                | doesn't help with 1, but then discusses 2 and 3 where in
                | fact the issues are exactly the kind that memory safety
                | _can_ help with.
        
             | IainIreland wrote:
             | The whole point is that the sandbox is an approach that can
             | be used in JIT code, where Rust doesn't help.
             | 
             | Take the fizzbuzz example with a missing bounds check. Rust
             | can't prevent you from generating JIT code that omits a
             | bounds check on an array and reads/writes out-of-bounds.
             | The sandbox doesn't prevent out-of-bounds reads/writes, but
             | it guarantees that they will only be able to access data
             | _inside_ the sandbox.
             | 
             | This means that logic bugs in the JIT compiler are no
             | longer immediately exploitable. They must be combined with
             | bugs in the sandbox implementation. The article's claim is
             | that, unlike compiler bugs, sandbox bugs tend to be
             | amenable to standard mitigation techniques.
             | 
             | This article isn't dismissing the value of memory-safe
             | languages. It's identifying a problem space where current
             | memory-safe languages can't help, and providing an
             | alternative solution. Currently, every browser JS engine is
             | written in C++, in part because Rust doesn't solve the big
             | correctness problems. If the sandbox approach works, then
             | using Rust for other parts of the engine becomes more
             | appealing.
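              | 
              | To make that concrete, here is a toy sketch of my own (not
              | V8's actual code; the real sandbox reserves a large
              | virtual-address region and encodes in-sandbox pointers as
              | offsets from its base rather than masking an array index):
              | 
              | ```rust
              | // Toy model: heap references are offsets into one region,
              | // never raw pointers, so even a corrupted reference can
              | // only address memory inside the sandbox.
              | const SANDBOX_SIZE: usize = 1 << 20; // toy size
              | 
              | struct Sandbox {
              |     memory: Vec<u8>, // stands in for the reserved region
              | }
              | 
              | #[derive(Clone, Copy)]
              | struct SandboxedPtr(u32); // an offset, not a pointer
              | 
              | impl Sandbox {
              |     fn new() -> Self {
              |         Sandbox { memory: vec![0; SANDBOX_SIZE] }
              |     }
              | 
              |     fn read_u8(&self, p: SandboxedPtr) -> u8 {
              |         // Any offset, however corrupted, resolves to a
              |         // location inside the sandbox region.
              |         self.memory[p.0 as usize % SANDBOX_SIZE]
              |     }
              | 
              |     fn write_u8(&mut self, p: SandboxedPtr, v: u8) {
              |         let idx = p.0 as usize % SANDBOX_SIZE;
              |         self.memory[idx] = v;
              |     }
              | }
              | 
              | fn main() {
              |     let mut sb = Sandbox::new();
              |     // Even a fully attacker-controlled "pointer" lands
              |     // inside the sandbox, never in arbitrary memory.
              |     let evil = SandboxedPtr(u32::MAX);
              |     sb.write_u8(evil, 42);
              |     assert_eq!(sb.read_u8(evil), 42);
              | }
              | ```
              | 
              | An OOB write produced by buggy JIT code still corrupts
              | data, but only data that lives inside the sandbox.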
        
         | wavemode wrote:
         | > talks about a hardening technique that does get helped by
         | Rust
         | 
         | What hardening technique discussed in this article would be
         | helped by Rust, and what specific feature of Rust would help?
        
           | vlovich123 wrote:
           | > This code makes the (reasonable) assumption that the number
           | of properties stored directly in a JSObject must be less than
           | the total number of properties of that object. However,
           | assuming these numbers are simply stored as integers
           | somewhere in the JSObject, an attacker could corrupt one of
           | them to break this invariant. Subsequently, the access into
           | the (out-of-sandbox) std::vector would go out of bounds.
           | Adding an explicit bounds check, for example with an
           | SBXCHECK, would fix this.
           | 
           | > Encouragingly, nearly all "sandbox violations" discovered
           | so far are like this: trivial (1st order) memory corruption
           | bugs such as use-after-frees or out-of-bounds accesses due to
           | lack of a bounds check. Contrary to the 2nd order
           | vulnerabilities typically found in V8, these sandbox bugs
           | could actually be prevented or mitigated by the approaches
           | discussed earlier. In fact, the particular bug above would
           | already be mitigated today due to Chrome's libc++ hardening.
           | As such, the hope is that in the long run, the sandbox
           | becomes a more defensible security boundary than V8 itself
        
             | wavemode wrote:
             | It's still not clear to me what Rust feature would prevent
             | what specific vulnerability here. Rust has bounds-checked
             | and non-bounds-checked array accesses depending on the
             | developer's preference, and so does C++. If there's some
             | point you're making with these quotes you're going to need
             | to simplify it for me since I'm not following.
        
               | rcxdude wrote:
               | The defaults are switched though: C++ is unchecked by
               | default, Rust is checked by default.
        
               | tubthumper8 wrote:
               | > Rust has bounds-checked and non-bounds-checked array
               | accesses depending on the developer's preference, and so
               | does C++
               | 
               | You're making it sound like these are the same, the
               | difference is the defaults
               | 
               | Unsafely accessing an element in C++
               | vec[i]
               | 
               | Unsafely accessing an element in Rust
               | unsafe { vec.get_unchecked(i) }
               | 
               | One of these is screamingly obvious that something
               | potentially unsafe is happening and should be audited
               | more closely, that's the real difference. The cause of
               | potential memory issues is isolated and searchable in
               | `unsafe` blocks rather than being potentially anywhere
        
               | wavemode wrote:
               | So the Rust feature is "screaming obviousness"? Your
               | argument is that the advantage of rewriting the module in
               | Rust is that finding array accesses is visually easier?
               | 
               | Why not just use grep?
        
               | tubthumper8 wrote:
               | I'm not on the V8 team, so can't say why grep didn't find
               | the vulnerabilities. That would perhaps be a good
               | suggestion to make to them!
        
               | olliej wrote:
               | Well that's just false in this context.
               | vec[i]
               | 
               | Is safe in v8, blink, jsc, webkit, etc. Rust has a huge
               | number of benefits over c++, but it hurts your argument
               | if you refuse to acknowledge the actual environment the
               | C++ is being used in and make objectively incorrect
               | statements. It implies a lack of understanding of C++ and
               | sounds like all you're doing is parroting other people's
               | critiques without understanding the core issues, which
               | undermines your message.
               | 
               | That said it's still not particularly relevant here,
               | because the issues being presented are bugs in the
               | runtime. e.g. the runtime logic and state results in
               | erroneous behavior. The bugs being discussed are not "you
               | did not use a safe vec" or "you did not use Rc", it's
               | "the size or bounds check in vec is incorrect" or "the
               | ref counting in Rc is incorrect". Rust does not
               | inherently stop those the runtime from having bugs, it
               | simply statically limits where the exposure to unsafe
               | operations can occur.
               | 
               | That's super relevant to program safety, but it's not
               | relevant to safety in the JS VM runtime, where they're
               | performing the operations that would be unsafe{} in rust
               | as well.
        
         | azakai wrote:
         | The technique here keeps a large set of objects from escaping
         | the sandbox. Those objects are accessed both by C++ and JIT
         | code. You are right that using Rust instead of C++ would help
         | on the C++ side, but it would not help at all on the JIT code
         | side, and that is by far the major source of exploits.
         | 
         | In other words, even if you write a JS engine in Rust you could
         | benefit greatly from this technique.
        
       | egnehots wrote:
       | They say that Rust is not enough and dismiss it quickly:
       | 
       | > V8 vulnerabilities are rarely "classic" memory corruption bugs
       | (use-after-frees, out-of-bounds accesses, etc.) but instead
       | subtle logic issues which can in turn be exploited to corrupt
       | memory. As such, existing memory safety solutions are, for the
       | most part, not applicable to V8. In particular, neither switching
       | to a memory safe language, such as Rust, nor using current or
       | future hardware memory safety features, such as memory tagging,
       | can help with the security challenges faced by V8 today.
       | 
       | But looking at the awesome list they provided:
       | 
       | https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtu...
       | 
        | There are a lot of use-after-frees, out-of-bounds accesses, and
        | buffer overflows in there...
        
         | kevingadd wrote:
         | Type confusion is also a very common attack against JS runtimes
         | and V8 specifically. Of course, it's not trivial to build a
         | high-performance JS runtime without playing around with pointer
         | types pretty liberally, so I can understand saying "Rust won't
         | fix this" in regards to those attacks.
         | 
         | But those attacks would basically not be possible against a
         | runtime built on top of Java or C#.
        
           | sroussey wrote:
           | The attacks would not be possible against a runtime written
           | in JavaScript as well, by that reasoning.
        
             | olliej wrote:
             | Haha, I wish I had come up with that response :)
        
             | kevingadd wrote:
             | That's called self-hosting, and it's widely used in JS
             | runtimes to implement various built-ins instead of writing
             | them in C++. It provides superior safety and the ability to
             | inline builtins into their callers.
        
           | olliej wrote:
           | Yes because the attack would be against the .net or Java VM.
           | 
           | The JVM - especially in the era of applets - had an
           | illustrious history of VM bugs. We don't know how bad they
           | would have been because in the era of extremely complex
            | exploits, applets essentially do not exist. Neither .net nor
           | the jvm are exposed to the degree of attacks the js engines
           | are, and there's no strong reason to believe they don't have
           | similar bugs today.
        
             | kevingadd wrote:
             | I'm not singing the praises of the JVM here, it's just a
             | simple fact that if you implement your runtime in a higher
             | level language you're exposed to a smaller number of
             | potential vulnerabilities. Unchecked array dereferences
             | turn into bounds-checked array dereferences; unchecked
             | typecasts turn into checked typecasts. Null pointer
             | dereferences turn into null reference exceptions. Etc.
             | 
             | Of course once you start jitting native code, all of that
             | is off the table. Unless you jit to java/.net bytecode, I
             | guess.
        
               | olliej wrote:
               | No, you're missing the point. The whole point is you're
               | implementing the runtime that defines the safety
               | semantics. Your proposal is essentially "implement your
               | JS engine GC on top of the JVM by just using the JVM's
               | GC", i.e. don't implement the GC yourself. The unsafe
               | code is now the JVM GC, and you've just moved the problem
               | from "implement the JS engine's GC" to "Implement the
               | JVM's GC", and they same problems continue to exist.
               | 
               | I am really struggling to understand where this gap in
               | understanding is occurring. It does not matter what
               | environment or language you implement a JS engine (or
               | whatever) in. The attacker is going to attack the unsafe
                | portion of the runtime. If you build your JS engine on top
               | of the JVM, then the attacker is not going to attack your
               | JS engine's runtime, they attack the JVM's.
               | 
               | The JVM, .NET, etc runtimes are not doing anything
               | different to what the JS engine runtimes are doing, and
               | aren't magically free of the same bugs. If anything
               | they're probably doing less to protect from or prevent
               | attacks, because they have a much much smaller attack
               | surface (because they aren't generally exposed to
               | everything on the internet) and the reason attackers
                | _have_ to target the JS engine runtime is because the JS
                | sandbox does not allow the general system access that
                | "correct" and completely uncompromised .NET or JVM code
                | has. Attacks on the JVM and .NET generally mean
               | "convince the VM to load correct code that does something
               | that a specific app/service is not meant to do but the VM
               | generally allows applications to do", whereas a JS VM
               | does not allow an attacker to do anything outside of the
               | JS sandbox, so they _must_ compromise the runtime.
               | 
               | It may be easier to understand if we try to present this
               | in a different way:
               | 
               | JSC can be compiled as an interpreter for any cpu
                | architecture because there is a fallback C backend for
                | the interpreter code generator, so you can compile JSC to
                | WASM. Then you could make a version of webkit that
                | executes all JS through the WASM build of JSC running
               | under the native JSC runtime. You've now built your JS
               | engine on top of a safe runtime (WASM), but it should
               | hopefully be obvious that an attacker is simply going to
               | continue targeting the native JSC runtime.
        
           | vlovich123 wrote:
           | Would be interesting if they took hardening ideas from
           | kernels that try to solve this (e.g.
           | https://security.apple.com/blog/towards-the-next-
           | generation-...).
        
           | ngrilly wrote:
           | V8 is a runtime for JS exactly like the JVM is a runtime for
           | Java and CLR is for C#. Which means that whatever sandboxing
           | V8 needs, the JVM and the CLR would need it as well. I don't
           | know what makes you think that the JVM and the CLR have
           | already solved the problem, but not V8.
        
         | olliej wrote:
         | > There are a lot of use-after-frees and out-of-bounds
         | accesses, buffer overflow in there.
         | 
          | Yes, and they're in the runtime itself, which rust cannot
          | protect you from. Rust cannot enforce lifetimes for GC objects
          | any more than C++ already does, and it can't protect you
          | against OoB when the reason for the OoB is that the runtime is
          | wrong about the object size, etc.
          | 
          | Rust does not magically make it impossible to have errors, it
          | makes them harder by default, but the places where these
          | engines go wrong are already largely using c++ in ways that
          | provide the same level of memory safety rust can in this
          | environment.
         | 
         | The easiest way to understand this is if you use `vec` you
         | won't get unsafe oob, but if there's a bug in `vec` rust (or
         | any language) cannot protect you. Eg if there's a JVM bug that
         | breaks arrays then the fact that Java is memory safe isn't
         | relevant.
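          | 
          | A concrete (hypothetical) sketch of what I mean - every caller
          | here is in safe Rust, but the abstraction's own unsafe
          | internals carry a logic bug in their size metadata, which is
          | exactly the position a JS engine's runtime is in:
          | 
          | ```rust
          | struct RuntimeVec {
          |     data: Box<[u64]>,
          |     len: usize, // bookkeeping maintained by "trusted" code
          | }
          | 
          | impl RuntimeVec {
          |     fn new(n: usize) -> Self {
          |         RuntimeVec {
          |             data: vec![0u64; n].into_boxed_slice(),
          |             // BUG: records one element too many - a logic
          |             // error the type system cannot catch.
          |             len: n + 1,
          |         }
          |     }
          | 
          |     // An ordinary-looking "safe" accessor.
          |     fn get(&self, i: usize) -> u64 {
          |         assert!(i < self.len); // checks the *wrong* length
          |         unsafe { *self.data.get_unchecked(i) }
          |     }
          | }
          | 
          | fn main() {
          |     let v = RuntimeVec::new(4);
          |     // Safe Rust from the caller's point of view, yet this
          |     // reads past the allocation.
          |     let _ = v.get(4);
          | }
          | ```
          | 
          | Rust confines the blast radius to the unsafe{} internals, but
          | the engine's runtime _is_ those internals.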
        
           | ajross wrote:
            | Also worth pointing out that in the specific problem area -
            | highly optimized runtimes for interpreted/JIT-compiled
            | languages - the borrow checker doesn't really have much to
            | offer. Rust's safe memory paradigm more or less requires an
           | "owner" for every pointer, and by definition the arbitrary
           | graph of pointers in the executed code aren't going to have
           | that. Any such runtime is going to be built on a metric ton
           | of unsafe.
        
           | vlovich123 wrote:
           | Sure, but a bug in Rust's vec is unlikely at this point &
           | thus as long as you're in safe Rust you have no possibility
           | of a memory error, which isn't the case for C++ vectors.
           | 
           | It can't protect you from lifetime issues with GC objects,
           | but it can for almost everything else you're doing. They
            | indicate 50% of vulns are JIT and 50% are memory safety issues
           | in the runtime, where GC is only part of it. If the bulk of
           | the runtime issues are around GC lifetime confusion, I agree
           | that Rust maybe wouldn't help. It might help to make sure you
           | don't misuse the GC machinery which might be a significant
           | mitigation, but given the bugs I've seen in the field around
           | integrating with the GC, I doubt Rust would help with that
           | class of bugs.
        
             | tptacek wrote:
             | The problem is closer to that of an attacker that can write
             | unsafe{} blocks than it is to attackers finding a bug in
             | Vec.
        
             | johnnyjeans wrote:
             | >Sure, but a bug in Rust's vec is unlikely at this point &
             | thus as long as you're in safe Rust you have no possibility
             | of a memory error
             | 
             | It has nothing to do with the built-in data structures
             | because it doesn't even exist in the same space as them.
             | The flaws themselves are in an algorithm's reasoning, it's
             | not an issue that exists because somewhere in the codebase
             | there's an out-of-bounds access on a vector. The issues are
             | caused by said flawed reasoning generating bad machine code
              | with erroneous pointer arithmetic. Note that it's the
             | reasoning itself generating bad pointer arithmetic, not
             | pointer arithmetic that exists explicitly in the codebase.
             | 
             | It's the kind of problem you need proof systems to solve. A
             | substructural type system (or a near-approximation like
             | Rust's ownership semantics) is simply not robust enough for
             | the problem domain, you need full blown dependent types for
             | this kind of thing, something that can guarantee logical
             | safety.
             | 
             | ATS can handle the job, but Rust can't.
        
             | olliej wrote:
             | You're missing the point. You're right there are unlikely
             | to be bugs in vec, but there are also unlikely to be bugs
             | in std::vector or WTF::Vector both of which error out on
             | OoB (chrome/v8 uses hardened libc++).
             | 
             | I was using `vec` as an example of runtime code that is
             | fundamentally implemented in unsafe code. The errors that
             | are being discussed are errors in the runtime - eg the
             | unsafe{} blocks of rust. It's very difficult to write code
             | in v8/blink (or JSC/webkit) that interacts with the
             | relevant JS runtime in ways that make the code unsafe -
             | just as you cannot normally interact with `vec` in a way
             | that causes a memory safety error - however the runtime's
              | implementation of the safe interface still has to
             | eventually perform unsafe operations. The bugs that you see
             | in V8, JSC, etc are almost invariably in code that would
             | necessarily be unsafe region in rust that would not be
             | preventable _in rust_.
             | 
             | Another example: `Arc`, `Rc`, and `Box` etc all allocate
             | memory, and all your rust code can be built on those, and
             | be safe (assuming no bugs in the refcounting, no compiler
             | lifetime errors, etc), but the allocator beneath them still
             | has to do everything correctly and the operations it
             | performs are largely unsafe. There's nothing rust can do to
             | prevent a logic error from returning overlapping pointers.
             | You can create lots of abstractions to make it harder to
             | screw up, but you are the runtime at this point so the code
             | that is requiring safety rules is also the thing specifying
              | those rules. Eg the erroneous state/logic that leads to an
              | incorrect allocation may be the very same state/logic you
              | are testing against to ensure you aren't making an
              | erroneous allocation. You can see how that impacts the
              | safety profile of the code.
             | 
             | When JSC or V8 have a use after free vulnerability it's
             | almost always a runtime error because the overwhelming
             | majority of allocations made by both engines are via their
             | own GCs, and so definitionally should be sound. But if
             | there's a bug in the runtime (a missing barrier, or a
             | scanning error in JSC), then objects can be erroneously
             | collected and that's how a UaF happens. There's nothing
             | rust or any safe language can do to make those errors
             | impossible or unexploitable. All the runtime can do is
             | structure the code to make errors as hard as possible, in
             | rust that means minimizing the amount of time in unsafe{},
             | and add mitigations such that any error that does happen is
             | hard to exploit.
             | 
             | When V8 and JSC have buffer overflows it's because the
             | metadata for an object says "there is this much memory
             | available" but that is incorrect. Again rust cannot protect
             | against this: you're in the position of a `vec` with
             | incorrect bounds information.
             | 
              | And that goes for all of the bug classes. The vast
             | majority of the security benefits rust offers for a
             | language and vm _runtime_ are available - and used - in
             | c++. The bugs are in the code that would necessarily be
             | unsafe{}.
             | 
             | Now in blink/webkit the moment you get beyond the relevant
             | JS runtime you run straight into the standard C++ nightmare
             | that rust, swift, JS, C#,... prevent so that's another
             | thing altogether.
        
         | wavemode wrote:
         | You didn't read the article, then. They clearly explain how
         | even if Rust were used for the entirety of v8, there would
         | still be memory corruption, because the memory corruption is
         | happening in code that is JIT compiled.
        
           | vlovich123 wrote:
            | I think they did, because all the vulnerabilities they talk
            | about in the hardening section are C++ memory safety issues
            | & would be fixed by Rust (i.e. their hardening technique
            | doesn't target JIT exploits themselves).
        
             | tptacek wrote:
             | The whole article is about exploits that leverage the
             | compiler itself, with details.
        
               | rcxdude wrote:
               | No, it mentions that as an introduction, and then talks
               | about the system for mitigating them, which also has bugs
               | which they admit are of the simple kind that a memory-
               | safe language would prevent.
        
             | azakai wrote:
             | No, this very much does help protect against JIT exploits.
             | 
             | JIT code contains code that accesses the data structures
             | they are sandboxing. By sandboxing those objects, the JIT
             | code is limited in what it can do.
             | 
             | This might help you understand: An example the article
             | gives is if an optimization pass has a bug that forgets a
             | check. Then it may emit JIT code that will access a data
             | structure that it should not. But, thanks to this
             | sandboxing, that object cannot be outside the sandbox, nor
             | refer to anything outside the sandbox, so a JIT exploit is
             | limited in what it can achieve.
        
       | andy_xor_andrew wrote:
       | I'm confused about the fizzbuzz example they provide.
       | 
        | ```js
        | let array = new Array(100);
        | let evil = { [Symbol.toPrimitive]() { array.length = 1; return 15; } };
        | array.push(evil);
        | // At index 100, the @@toPrimitive callback of |evil| is invoked in
        | // line 3 above, shrinking the array to length 1 and reallocating its
        | // backing buffer. The subsequent write (line 5) goes out-of-bounds.
        | array.fizzbuzz();
        | ```
       | 
       | I'm probably missing the point, but I thought indexing into an
       | array in Javascript outside its bounds would result in an
       | exception or an error or something?
        
         | rafram wrote:
         | The example fizzbuzz() function is implemented in C++. (And
         | out-of-bounds indexing in JS actually doesn't generate an
         | exception/error; it just returns undefined. Great language!)
        
           | olliej wrote:
           | Not to be confused with undefined in c++! :D :D the best!
        
         | csjh wrote:
         | OOB indexing from the Javascript side would return undefined,
         | but OOB indexing on the engine side (lines 5/7/9 of
         | JSArray::fizzbuzz()) is the same as OOB indexing a pointer
        
         | eyelidlessness wrote:
         | Disclaimer: I have very little experience with C++, a bit more
         | with Rust code that bridges with JS in a manner similar to the
         | example, and zero experience with V8 dev. All of that said...
         | 
         | I think the technically correct responses you've gotten so far
         | _may be_ missing an insight here: wouldn't the V8 example code
         | be just as safe as the equivalent JS if it used the JS array's
         | own semantics? More to the point: presumably those JS semantics
         | are themselves implemented in C++ _somewhere else_ , and this
         | example is reimplementing an incorrect subset of them. While
         | it's likely inefficient to add another native/JS round trip
         | through JSValue to get at the expected JS array functionality,
         | it seems reasonably safe to assume the _correct behavior_ could
         | be achieved with predictable performance by calling into
         | whatever other part of V8 would implement those same JS array
         | semantics.
         | 
         | In other words, it doesn't _seem like_ you're missing the
         | point. It seems like this kind of vulnerability could be at
         | least isolated by applying exactly the thinking you've
         | expressed.
        
           | professoretc wrote:
           | You're essentially correct; in JS, if you write
            |     for (var i = 0; i < arr.length; ++i)
            |         ...
            | 
           | 
           | then the array's length is _not_ read once at the beginning
           | of the loop; it is read _every_ loop iteration. So if the
           | code inside the loop (or inside the getter for `length`
           | itself) modifies the length of the array, then it will be
           | caught when the condition is evaluated. The problem is that
           | the C++ code makes assumptions about JS (reading the length
            | of an array cannot change the array's length) that don't
           | hold. But it's an easy mistake to make.
        
           | azakai wrote:
           | > wouldn't the V8 example code be just as safe as the
           | equivalent JS if it used the JS array's own semantics?
           | 
           | Yes, but imagine that the code we are talking about here is
           | JIT code that the compiler emitted. If the compiler JITed
           | code that was safe Rust then it could be safe. But JITs emit
           | machine code and a big part of their performance is exactly
           | in the "dangerous" areas like removing unneeded bounds
           | checks.
           | 
           | Say you have a bounds check in a loop. In some cases a JIT
           | can remove it, if it can prove it's the same check in each
           | iteration. Never removing the check would be safer, of
           | course, but also slower.
           | 
           | The point of the article here is that a lower-level
           | sandboxing technique can help such JIT code: even if a pass
           | has a logic bug (a bug Rust would not help with) and removes
           | a necessary check then the sandboxing limits what can be
           | exploited.
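            | 
            | As a rough sketch (mine, not an actual V8 pass) of that kind
            | of transformation, written in Rust only to show the shape of
            | it:
            | 
            | ```rust
            | // What the JIT starts from: a check on every iteration.
            | fn sum_checked(a: &[f64], n: usize) -> f64 {
            |     let mut s = 0.0;
            |     for i in 0..n {
            |         s += a[i]; // bounds-checked each time
            |     }
            |     s
            | }
            | 
            | // What it wants to emit: one hoisted check, then unchecked
            | // accesses. Correct only if the "proof" behind the hoist
            | // (every i in 0..n is in bounds) actually holds.
            | fn sum_hoisted(a: &[f64], n: usize) -> f64 {
            |     assert!(n <= a.len()); // the hoisted check
            |     let mut s = 0.0;
            |     for i in 0..n {
            |         s += unsafe { *a.get_unchecked(i) };
            |     }
            |     s
            | }
            | 
            | fn main() {
            |     let data = vec![1.0, 2.0, 3.0];
            |     assert_eq!(sum_checked(&data, 3), sum_hoisted(&data, 3));
            | }
            | ```
            | 
            | If the optimizer's reasoning for the hoist is subtly wrong,
            | the emitted machine code performs an out-of-bounds access
            | with no backstop - and since a JIT emits raw machine code
            | rather than Rust, there isn't even an unsafe block to audit.
            | The sandbox is what bounds the damage in that case.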
        
         | IainIreland wrote:
         | Those are the intended semantics of JS, but that doesn't help
         | you when you're the one implementing JS. Somebody has to
         | actually enforce those restrictions. Note that the code snippet
         | is introduced with "JSArray::buffer_ can be thought of as a
         | JSValue*, that is, a pointer to an array of JavaScript values",
         | so there's no bounds checking on `buffer_[index]`.
         | 
         | It's easy enough to rewrite this C++ code to do the right
         | bounds checking. Writing the code in Rust would give even
         | stronger guarantees. The key point, though, is that those
         | guarantees don't extend to any code that you generate at
         | runtime via just-in-time compilation. Rust is smart, but not
         | nearly smart enough to verify that the arbitrary instructions
         | you've emitted will perform the necessary bounds checks. If
         | your optimizing compiler decides it can omit a bounds check
         | because the index is already guaranteed to be in-bounds, and
         | it's wrong, then there's no backstop to swoop in and return
         | undefined instead of reading arbitrary memory.
         | 
         | In short, JIT compilation means that it's ~impossible to make
         | any safety guarantees about JS engines using compile-time
         | static analysis, because a lot of the code they run doesn't
         | exist until runtime.
        
           | foldr wrote:
           | >Those are the intended semantics of JS
           | 
           | They're actually not. Out of bounds indexing is fine, you
           | just get undefined as the result.
        
       | jiripospisil wrote:
       | > Similarly, disabling the JIT compilers would also only be a
       | partial solution: historically, roughly half of the bugs
       | discovered and exploited in V8 affected one of its compilers
       | while the rest were in other components such as runtime
       | functions, the interpreter, the garbage collector, or the parser.
       | Using a memory-safe language for these components and removing
       | JIT compilers could work, but would significantly reduce the
       | engine's performance (ranging, depending on the type of workload,
       | from 1.5-10x or more for computationally intensive tasks).
       | 
       | If you're willing to take the performance hit, Chromium actually
       | allows you to disable JIT easily in the Settings and add
       | exceptions for certain sites. Open the Settings and search for V8
       | Optimiser.
        
         | vlovich123 wrote:
         | > while the rest were in other components such as runtime
         | functions, the interpreter, the garbage collector, or the
         | parser
         | 
          | Notably, memory safe languages wouldn't really help with the
          | garbage collector, since it would have to use unsafe Rust & the
          | confusion about lifetimes would still exist, or you'd be using
          | something like Java/C# where you're just relying on the
          | robustness of that language's runtime GC.
         | 
         | However, the runtime functions, interpreter and parser would be
         | secured by something like Rust & I fail to see how well-written
         | Rust would introduce a 1.5-10x overhead.
        
           | fngjdflmdflg wrote:
           | I think the key is:
           | 
           | >and removing JIT compilers
           | 
           | If you read the article it makes more sense.
        
           | kaba0 wrote:
            | That's not quite true. It depends on what level of
            | abstraction you're willing to accept -- you can write a
            | runtime with GC entirely in safe rust (or a managed
            | language).
        
             | dadrian wrote:
             | It doesn't matter if the JIT itself is written in a memory-
             | safe language or not if you're exploiting miscompiled JIT
             | output. If the machine code emitted by a JIT is wrong, it
             | can be exploited regardless of if the JIT itself is memory
             | safe or not.
        
               | kaba0 wrote:
               | I thought the abstraction I wrote sort of implied an
               | interpreted mode for the runtime, no JIT compilation.
               | Apologies for not being clear.
        
               | The_Colonel wrote:
               | That's just trivially true given all these languages are
               | Turing-complete.
        
             | chlorion wrote:
             | Yup you can write a garbage collected interpreter for a
             | programming language with no unsafe code at all, even for
             | languages that have complex data structures like doubly
             | linked lists in them.
             | 
              | Using something like a slotmap to store the language's
             | objects in is what I would do, and your GC would just
             | involve removing values from the map after marking
             | everything that's reachable.
             | 
             | The popular slotmap crate on crates.io does contain unsafe
             | code but nothing about the data structure inherently
             | requires unsafe.
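              | 
              | Something like this minimal sketch (my own toy, not the
              | slotmap crate's actual API, and without generations to
              | catch stale handles):
              | 
              | ```rust
              | use std::collections::HashSet;
              | 
              | // "Pointers" are plain indices into a slot table, so a
              | // stale or corrupted handle can at worst reach the wrong
              | // slot, never out-of-bounds memory.
              | #[derive(Clone, Copy, PartialEq, Eq, Hash)]
              | struct Handle(usize);
              | 
              | enum Value {
              |     Num(f64),
              |     Pair(Handle, Handle), // references to other objects
              | }
              | 
              | struct Heap {
              |     slots: Vec<Option<Value>>,
              | }
              | 
              | impl Heap {
              |     fn new() -> Self {
              |         Heap { slots: Vec::new() }
              |     }
              | 
              |     fn alloc(&mut self, v: Value) -> Handle {
              |         // Reuse a freed slot if possible, else grow.
              |         match self.slots.iter().position(|s| s.is_none()) {
              |             Some(i) => { self.slots[i] = Some(v); Handle(i) }
              |             None => {
              |                 self.slots.push(Some(v));
              |                 Handle(self.slots.len() - 1)
              |             }
              |         }
              |     }
              | 
              |     fn get(&self, h: Handle) -> Option<&Value> {
              |         self.slots.get(h.0).and_then(|s| s.as_ref())
              |     }
              | 
              |     // Mark everything reachable from the roots, then free
              |     // every unmarked slot.
              |     fn collect(&mut self, roots: &[Handle]) {
              |         let mut marked = HashSet::new();
              |         let mut stack = roots.to_vec();
              |         while let Some(h) = stack.pop() {
              |             if !marked.insert(h) { continue; }
              |             if let Some(Value::Pair(a, b)) = self.get(h) {
              |                 stack.push(*a);
              |                 stack.push(*b);
              |             }
              |         }
              |         for (i, slot) in self.slots.iter_mut().enumerate() {
              |             if !marked.contains(&Handle(i)) {
              |                 *slot = None;
              |             }
              |         }
              |     }
              | }
              | 
              | fn main() {
              |     let mut heap = Heap::new();
              |     let a = heap.alloc(Value::Num(1.0));
              |     let b = heap.alloc(Value::Num(2.0));
              |     let pair = heap.alloc(Value::Pair(a, b));
              |     let garbage = heap.alloc(Value::Num(3.0));
              |     heap.collect(&[pair]);
              |     assert!(heap.get(a).is_some());       // still reachable
              |     assert!(heap.get(garbage).is_none()); // collected
              | }
              | ```
              | 
              | A GC bug here (say, a missed root) still frees a live
              | object, but a later use of its handle is a logic error,
              | not memory corruption.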
        
           | azakai wrote:
           | The 1.5-10x overhead part is not talking about Rust, but
           | about disabling JITs.
        
           | o11c wrote:
           | Many memory-safe languages would be fine. Only languages with
           | the particular narrow opinions of Rust would likely be
           | vulnerable to this.
        
         | orangepanda wrote:
         | Does Safari in Lockdown Mode do anything more than just
         | disabling JIT?
        
           | madars wrote:
           | Yes, it also disables WASM, MP3 decoding, gamepad API, JPEG
           | 2000, SVG fonts, PDF previews, WebGL, Speech Recognition API,
           | Web Audio API. Pretty much web as it was meant to be ;-)
           | 
           | https://9to5mac.com/2022/07/25/lockdown-mode-
           | ios-16-restrict...
        
             | thefounder wrote:
             | I think you mean the html not the web
        
               | pjmlp wrote:
               | I still remember when that was all the Web offered, the
               | glory days of HTML 2.0.
        
         | grishka wrote:
         | > would significantly reduce the engine's performance (ranging,
         | depending on the type of workload, from 1.5-10x or more for
         | computationally intensive tasks).
         | 
         | And the downside being?
         | 
         | Seriously, JS was never meant to be performant. In the real
         | world, it's very rarely used for anything computationally
         | intensive.
        
           | zamadatix wrote:
           | Power usage
        
           | troupo wrote:
           | Google already says that 2.5 seconds to Largest Contentful
           | Paint is fast: https://blog.chromium.org/2020/05/the-science-
           | behind-web-vit...
           | 
           | Now multiply that by 1.5x.
        
           | jrajav wrote:
           | Very curious what your unique definition of 'computationally
           | intensive' is, that manages to not include one of the most
           | significant computational workloads worldwide, both in terms
           | of absolute volume and impact on human productivity. Namely,
           | web browser rendering performance.
        
           | eyelidlessness wrote:
           | > Seriously, JS was never meant to be performant.
           | 
           | If you mean "wasn't originally meant", that might be true.
           | But it's been meant to be performant for quite a long time,
           | with huge investments behind the realization of that intent.
           | 
           | It's fine if you have nostalgia for whatever you think was
           | the original vision behind JS. But that hasn't been the
           | operating vision for it for many years.
        
       | sanxiyn wrote:
       | Why so late? I think WebKit had this like five years ago, aka
       | Gigacage.
        
       | pciexpgpu wrote:
       | This is splendid work by Google that will benefit the rest of the
       | ecosystem - especially with the reward program.
       | 
       | I wonder how this impacts (positively) Cloudflare Workers/Fly.io-
       | style isolation (both use very different isolation mechanisms I
       | guess).
       | 
       | Perhaps, thinking out loud, CF Workers had the right level of
       | isolation to begin with (i.e. pure V8 isolation)?
        
         | throwitaway1123 wrote:
         | Fly uses Firecracker micro VMs rather than V8 isolates. Two of
         | the engineers behind both services had a friendly discussion
         | about it a few years ago:
         | https://news.ycombinator.com/item?id=31759170
        
       | winrid wrote:
       | Someday it would be really cool to execute NodeJS code in a
       | sandbox with a timeout without having to throw the work at a
       | subprocess.
        
       | amelius wrote:
       | > neither switching to a memory safe language, such as Rust, nor
       | using current or future hardware memory safety features, such as
       | memory tagging, can help with the security challenges faced by V8
       | today.
       | 
       | And here I thought Rust would fix my security issues ...
        
         | conradludgate wrote:
          | JIT engines are fundamentally unsafe since they produce unsafe
          | machine code directly. And for performance's sake the runtime
          | should use tactical unsafe. So memory safety is definitely
          | still something to worry about, even if it's less likely to
          | occur in my experience.
        
       | elwell wrote:
       | Fizzbuzz with three conditions... criminal.
        
       | ladzoppelin wrote:
        | How is this different from the "sandboxing" advertised in Chrome
        | from the beginning? What am I missing?
        
       ___________________________________________________________________
       (page generated 2024-04-04 23:00 UTC)