[HN Gopher] Out of bounds memory access in V8 in Google Chrome p...
       ___________________________________________________________________
        
       Out of bounds memory access in V8 in Google Chrome prior to
       120.0.6099.224
        
       Author : keepamovin
       Score  : 117 points
       Date   : 2024-01-24 14:46 UTC (8 hours ago)
        
 (HTM) web link (nvd.nist.gov)
 (TXT) w3m dump (nvd.nist.gov)
        
       | tester756 wrote:
        | Where can the git diff be found?
        
         | mccr8 wrote:
         | Here's the diff:
         | https://chromium.googlesource.com/v8/v8/+/389ea9be7d68bb189e...
        
       | eimrine wrote:
       | Does it work with Chrome 60?
        
         | 15457345234 wrote:
         | "Showing 1000 of 9099 matching CPE(s) for the range"
         | 
         | Yes it seems to go back to Chrome version 9 lol
         | 
         | uhoh
        
           | Manouchehri wrote:
           | Impacted versions in CVE listings are normally not validated.
           | 
           | For example, my CVE-2022-2007 in WebGPU also supposedly goes
           | back to Chrome version 9. That's impossible, as WebGPU wasn't
           | even a concept back then.
           | 
           | https://nvd.nist.gov/vuln/detail/CVE-2022-2007
           | 
           | It's relatively easy to find the offending commit by creating
           | a unit test and using git bisect. I usually don't do it for
           | public Chromium bug reports since it's extra work and $0 in
           | extra rewards.
        
         | themerone wrote:
         | Why are you concerned about a specific version released 7 years
         | ago?
        
           | olliej wrote:
            | Yet another unupdated Electron app?
        
             | timcobb wrote:
             | Or a joke on this subject
        
           | mschuster91 wrote:
           | A lot of stuff embeds Chromium in one way or another.
        
       | masklinn wrote:
        | The Fedora mailing list message actually lists 3 different CVEs:
       | https://lists.fedoraproject.org/archives/list/package-announ...
       | 
       | - a type confusion
       | 
       | - an out of bounds _read_
       | 
       | - an out of bounds _write_
       | 
       | All three are in V8, so node might also be affected?
        
         | mccr8 wrote:
         | If you are running untrusted code in Node, subtle JIT bugs are
         | probably the least of your problems.
        
           | christophilus wrote:
           | Everyone who runs Node runs untrusted code (depending on your
           | definition of untrusted). No one I've ever worked with made
           | an effort to review the source of the thousands of
           | dependencies they were slinging around.
        
             | GabrielTFS wrote:
             | I would expect "untrusted code" to mean "code in a sandbox"
             | or "code I'm not gonna run at all anytime soon", so running
             | code from thousands of dependencies in node is effectively
             | trusting all of that code, unless it is your direct
             | expectation that it is malicious (and even then, aren't you
              | trusting it to be malicious?).
             | 
             | The trust we give to random dependencies like that is quite
             | arguably unwarranted to a large degree, but it doesn't mean
             | the code isn't trusted.
        
             | bee_rider wrote:
             | I'm pretty sure untrusted code means code you can't trust,
             | which includes any code that you haven't either analyzed
             | yourself or put through some sort of institutional review
             | with auditable sign-offs.
             | 
              | This is how these conversations always go:
             | 
             | There's a hole in the sandbox.
             | 
             | If you were trusting the sandbox, you were already doomed.
             | 
             | Nobody validates their code well enough to trust it. (we
             | are here)
             | 
             | The ecosystem and industry is just irreparably damaged.
             | 
             | What am I supposed to do about that?
             | 
             |  _Non-solutions, because it is an impossible problem to
             | actually fix_
        
               | graemep wrote:
               | Web browsers rely on the sandbox. Almost everyone runs
                | untrusted code every single day. There are very few people who
               | do not trust the sandbox.
               | 
               | It does not directly affect servers if one rejects your
               | rather broad definition of untrusted, but does
               | indirectly.
               | 
               | > I'm pretty sure untrusted code means code you can't
               | trust, which includes any code that you haven't either
               | analyzed yourself or put through some sort of
               | institutional review with auditable sign-offs.
               | 
                | That is so broad that very few people are running trusted
               | code. You would need to ensure your entire stack down to
               | the firmware it runs on had been analysed.
        
               | bee_rider wrote:
               | Sounds like the ecosystem and industry is just
               | irreparably damaged.
        
           | NotSammyHagar wrote:
           | What does trusted really mean? If you use node (or other JS
           | packaging systems), you are running code that someone else
           | wrote, that almost certainly you didn't review as it's huge,
           | changing, etc. How about companies that use v8 to run
           | JavaScript extensions to their app that their customers
            | wrote? That covers many apps. Are you saying they are all
           | vulnerable?
           | 
           | The answer is they are all vulnerable, just because of
           | problems like this. Any user code (js in this case) is
           | untrustworthy, and everything has js extensions. What's the
            | safe way to run user JS? Running v8 in its own separately
            | limited process is, I think, what people do.
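            | 
            | As a toy sketch of that last point (illustrative only; it
            | assumes a node binary on PATH and a hypothetical
            | untrusted.js, and real deployments would still wrap the
            | child in OS-level sandboxing such as seccomp, namespaces
            | or a container), a host can keep the JS engine in its own
            | process and enforce a hard time budget:
            | 
            |   // Rust host: run user JS out-of-process with a timeout.
            |   use std::process::{Command, Stdio};
            |   use std::thread;
            |   use std::time::{Duration, Instant};
            |   
            |   fn main() -> std::io::Result<()> {
            |     let mut child = Command::new("node")
            |       .arg("untrusted.js")   // user-supplied script
            |       .stdin(Stdio::null())
            |       .stdout(Stdio::piped())
            |       .spawn()?;
            |   
            |     // Kill the child if it overruns its wall-clock
            |     // budget, so a hostile script cannot spin forever.
            |     let deadline = Instant::now() + Duration::from_secs(2);
            |     loop {
            |       match child.try_wait()? {
            |         Some(status) => {
            |           println!("exited: {status}");
            |           break;
            |         }
            |         None if Instant::now() > deadline => {
            |           child.kill()?;
            |           break;
            |         }
            |         None => thread::sleep(Duration::from_millis(50)),
            |       }
            |     }
            |     Ok(())
            |   }
            | 
            | The process boundary is what limits the blast radius of a
            | V8 escape; the timeout is just resource hygiene.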
        
             | Dalewyn wrote:
             | >What does trusted really mean?
             | 
             | I agree; "trusting" third-party/remote code or not frankly
             | went out the window with the bathwater including baby when
             | we moved on to Web "JavaShit" 2.0, or was it 3.0.
             | 
             | Feels like we're in a worse security hellhole than we ever
             | were with Flash or ActiveX back in the day, frankly.
        
               | refulgentis wrote:
               | For context, assuming by JavaShit Web 3.0 you meant
               | JavaScript:
               | 
               | Flash and ActiveX were proprietary technologies that
               | often required users to install plugins or additional
               | software on their devices. These plugins operated with
               | high levels of access to the system, which made them a
               | significant security risk. They were notorious for being
               | frequently exploited by attackers as a way to run
               | malicious code on users' machines.
               | 
               | In contrast, JavaScript is a core part of the web and is
               | executed within the browser in a sandboxed environment.
               | This means that JavaScript operates with limited access
               | to the system's resources, reducing the risk of system-
               | level security breaches.
        
               | bee_rider wrote:
               | Either "sandbox" was the funniest and most appropriate
               | name ever chosen in tech, or the person who came up with
               | it has never actually seen a sandbox.
        
               | Amigo5862 wrote:
               | > In contrast, JavaScript is a core part of the web and
               | is executed within the browser in a sandboxed
               | environment. This means that JavaScript operates with
               | limited access to the system's resources, reducing the
               | risk of system-level security breaches.
               | 
               | Flash (and probably ActiveX) were also executed in a
               | "sandboxed environment", including "limited access to the
               | system's resources". All 3 have (or well, had, in the
               | case of Flash and ActiveX) regular vulnerabilities -
               | including JavaScript. JavaScript is not any better than
               | Flash or ActiveX and I really don't understand why people
               | pretend it is.
               | 
               | BTW, Flash was definitely a core part of the web in its
               | heyday, too.
               | 
               | ETA: Oh, and Java was also executed in a sandbox (and a
               | virtual machine!) and had plenty of vulnerabilities back
               | when applets were a thing.
               | 
               | At least with Flash, ActiveX, and Java you could choose
               | not to install them and most sites would continue
               | working. For JavaScript you have to install (and trust)
               | some third party extension to block it and then no sites
               | work...
        
               | cesarb wrote:
               | > Flash (and probably ActiveX) were also executed in a
               | "sandboxed environment", including "limited access to the
               | system's resources".
               | 
               | IIRC, the main issue with ActiveX was that it did _not_
               | execute in a sandboxed environment, unlike Flash and
               | Java. With ActiveX, all you had was a cryptographic
               | signature saying it came from a trusted publisher; past
               | that, the full Win32 API was available, with complete
               | access to the operating system.
        
               | Amigo5862 wrote:
               | That wouldn't particularly surprise me. I never used
               | ActiveX, so I can't really speak to that one. But then,
               | there also weren't many (public) websites that I ever ran
               | into that wanted to use it.
        
               | technion wrote:
               | As someone who still has to support users of several
               | ActiveX apps, turning off the "block unsigned ActiveX"
               | setting goes with the territory of using it.
        
               | josephg wrote:
               | > But then, there also weren't many (public) websites
               | that I ever ran into that wanted to use it.
               | 
               | As I understand it there were weird pockets where
                | organisations went hard into activeX. IIRC it was used
               | heavily by the South Korean government, and a lot of
               | internal corporate intranet projects for all sorts of
               | things.
               | 
               | That obviously caused massive problems a few years later
               | when Microsoft tried to discontinue activex and make
               | IE/Edge a normal web browser.
        
               | acdha wrote:
               | Flash was never a core part of the web. That was the
               | problem: it was loosely bolted onto browsers but the
               | company behind it didn't understand or care about the
               | web, spent their time inventing random new things for
               | demos trying to get you to build on top of their platform
               | INSTEAD of the web, and was never willing to spend time
               | on support.
               | 
               | > JavaScript is not any better than Flash or ActiveX and
               | I really don't understand why people pretend it is.
               | 
               | Because it is. Both of those were hard to use without
               | crashing the browser - the primary selling point for
               | Chrome originally was that it used process sandboxing and
               | so when Flash crashed you wouldn't lose every open window
               | - whereas what we're seeing now are complex attacks
               | requiring considerable investment finding ways to get
               | around the layers of precautions. It's like saying that
               | there's no difference between leaving your money under
               | the mattress and putting it in the bank because banks
               | still get robbed.
        
             | Spivak wrote:
             | Whether or not you review your deps code is on you, it
             | doesn't make it untrusted. You're trusting them whether you
             | do the due diligence to see if that trust is warranted or
             | not. Untrusted means code that comes from outside your
             | system, like 3rd party extensions to your app and is
             | presumed to be completely broken and actively malicious and
             | nonetheless shouldn't crash your own program or reveal
             | sensitive data.
        
             | rezonant wrote:
             | There is a massive difference between the supply chain
             | risks of open source packages and actively fetching and
             | executing remote code provided as user input like the
             | browser inherently does.
             | 
             | The case of user provided extensions definitely falls a lot
             | closer to the supply chain threat model.
        
           | lq9AJ8yrfs wrote:
           | Serverless and CDN style edge compute are two scenarios that
           | this may be relevant to, where untrusted or semi-trusted code
           | may run in some construction on top of V8. Especially
           | providers of those services are probably tuned in right now
           | or ought to be.
        
       | mccr8 wrote:
       | There's not a lot of context in this submission, but presumably
        | it is being linked because the release notes for this CVE say
       | "Google is aware of reports that an exploit for CVE-2024-0519
       | exists in the wild."
       | 
       | https://chromereleases.googleblog.com/2024/01/stable-channel...
        
         | bri3d wrote:
         | https://blog.exodusintel.com/2024/01/19/google-chrome-v8-cve...
         | 
         | Here's the exploit writeup.
        
       | olliej wrote:
       | Oof, another reminder that it does not matter how aggressively
       | you fuzz, it's still fundamentally stochastic and you can miss
       | things like this for years :-/
        
         | pjmlp wrote:
         | Yeah, but apparently we keep getting the feedback that with
         | good engineering practices and good strong developers, this
         | never comes to be.
        
           | josephg wrote:
           | Most hackers aren't finding their own 0days like this. If a
           | motivated state level attacker targets you, you're in
           | trouble.
           | 
           | But most companies that get hacked fail in much dumber ways -
           | like failing to update chrome long after a bug like this has
           | been fixed. Or using stupid, guessable passwords.
           | 
           | Of course we should have sympathy for the 1% of companies who
           | fall victim to 0days from state level attackers. But I have
           | no sympathy for the other 99% of data leaks where the devs
           | forgot to put a password on a public mongodb instance, or let
           | their login cookies be enumerable or something equally
           | ridiculous. Just because we can't make security perfect
           | doesn't mean we shouldn't make it _good_.
        
             | mandevil wrote:
             | James Mickens gave a lecture at a conference (I think it's
             | this one: https://www.youtube.com/watch?v=tF24WHumvIc but I
             | don't have time to actually listen through it) about how
             | there are two different threat models for computer
             | security: bored five year olds and the Mossad. And bored
             | five year olds just try the same three things again and
             | again, and as long as you take basic precautions you can
             | protect yourself against them. You cannot protect yourself
             | against the Mossad, don't even try.
             | 
             | When I first started paying attention to computer security
             | I was at a start-up and tasked to rewrite their PCI
             | infrastructure for credit card processing, so I did some
             | research into the state of the art. That week news of a
             | casino hack came out. Billionaire casino/media magnate and
             | conservative donor Sheldon Adelson gave a speech in October
             | 2013 about how the US should threaten to use nuclear
             | weapons to destroy Iran's nuclear program. In February 2014
             | a 150 line VB virus was installed on his casino's network
             | which wiped all of the hard drives it could find, costing
             | the Sands hundreds of millions of dollars. This was at a
             | casino, some place which absolutely cares about their
             | computer security and presumably spends a lot on that
             | security, and they failed to successfully protect
             | themselves against a nation-state level threat.
             | 
             | So whatever the hell startup I was working at had no chance
             | if someone at that level wanted it. We could stop the five
             | year olds. Just not the nation-states.
        
               | tialaramex wrote:
               | None of these entities have unlimited resources or
               | unlimited political cover. At some point you're making
               | too big of a hole in the budget, too many people must be
               | sacrificed and it's not worth it.
        
           | olliej wrote:
           | There's also the component of "there are millions of lines of
           | existing C and C++ that continue to exist and the majority of
           | new 'safe' languages are not designed to support gradual
           | adoption/replacement". So you get gradual improvements to C++
           | that ostensibly make correct code easier (lets ignore
           | string_view, etc) but no real path to actual safety without
           | just rewriting everything all at once, which is an approach
           | with a long and illustrious history of failure and major
           | regressions even in "successful" attempts.
        
           | deschutes wrote:
           | Is there a better way for something like an optimizing (jit)
           | compiler? My barely informed understanding is that many of
           | these bugs aren't soundness issues in the compilation process
            | but instead in the resulting executable code. I don't see how
           | rust meaningfully helps with this problem.
           | 
           | Formal verification is often thrown around but my
           | understanding is it can't scale to the complexity of
           | something like an optimizing compiler.
        
         | Voultapher wrote:
         | I see no reason we would expect them to succeed at something no
          | one ever has: https://alexgaynor.net/2020/may/27/science-on-
         | memory-unsafet...
        
           | fleventynine wrote:
           | If you're generating machine code at runtime (JIT), you can
           | have memory bugs no matter what language your JIT is written
           | in.
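            | 
            | A minimal sketch of why (illustrative only; it assumes the
            | libc crate, x86-64 Unix and the System V calling
            | convention): even a JIT written in Rust has to leave the
            | language's guarantees behind the moment it runs its own
            | output, so a bug in whatever produced those bytes is
            | invisible to the compiler.
            | 
            |   use std::{mem, ptr};
            |   
            |   fn main() {
            |     // mov rax, rdi ; add rax, rdi ; ret  => 2 * arg
            |     let code = [0x48u8, 0x89, 0xF8, 0x48, 0x01, 0xF8, 0xC3];
            |     unsafe {
            |       // One read/write/execute page (real JITs use W^X).
            |       let prot = libc::PROT_READ | libc::PROT_WRITE |
            |         libc::PROT_EXEC;
            |       let flags = libc::MAP_PRIVATE | libc::MAP_ANONYMOUS;
            |       let page = libc::mmap(ptr::null_mut(), 4096, prot,
            |         flags, -1, 0);
            |       assert_ne!(page, libc::MAP_FAILED);
            |   
            |       // Copy the "generated" machine code into the page.
            |       ptr::copy_nonoverlapping(code.as_ptr(),
            |         page as *mut u8, code.len());
            |   
            |       // Jump into it: safety ends at this boundary.
            |       let double: extern "C" fn(u64) -> u64 =
            |         mem::transmute(page);
            |       assert_eq!(double(21), 42);
            |       libc::munmap(page, 4096);
            |     }
            |   }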
        
             | Voultapher wrote:
             | Well maybe such a JIT is an inappropriate choice for a
             | sandbox. There are under-explored safe alternatives such as
             | isomorphic interpreters e.g.
             | https://blog.cloudflare.com/building-fast-interpreters-in-
              | ru... Personally, I've written one such implementation that
              | could hold its own compared to an LLVM-based JIT.
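              | 
              | For a flavour of that approach (a toy sketch only, not
              | the Cloudflare design): the dispatch loop below is
              | entirely safe Rust, so every memory access the guest
              | program can provoke is bounds-checked by the host
              | language rather than by hand-written JIT plumbing.
              | 
              |   #[derive(Clone, Copy)]
              |   enum Op { Push(i64), Add, Mul, Halt }
              |   
              |   // Out-of-range pc or stack underflow yields None,
              |   // never undefined behaviour.
              |   fn run(program: &[Op]) -> Option<i64> {
              |     let mut stack: Vec<i64> = Vec::new();
              |     let mut pc = 0usize;
              |     loop {
              |       match *program.get(pc)? {
              |         Op::Push(v) => stack.push(v),
              |         Op::Add => {
              |           let (b, a) = (stack.pop()?, stack.pop()?);
              |           stack.push(a.wrapping_add(b));
              |         }
              |         Op::Mul => {
              |           let (b, a) = (stack.pop()?, stack.pop()?);
              |           stack.push(a.wrapping_mul(b));
              |         }
              |         Op::Halt => return stack.pop(),
              |       }
              |       pc += 1;
              |     }
              |   }
              |   
              |   fn main() {
              |     // (2 + 3) * 7 = 35
              |     let prog = [Op::Push(2), Op::Push(3), Op::Add,
              |                 Op::Push(7), Op::Mul, Op::Halt];
              |     assert_eq!(run(&prog), Some(35));
              |   }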
        
               | infamouscow wrote:
               | There's a lot of hobbyists that can compete with LLVM's
               | JIT, and they do it without any formal training in PLT or
               | compilers.
        
               | olliej wrote:
                | The whole point is that the implementation and runtime for
                | any safe language _require_ code that is fundamentally
               | unsafe.
               | 
               | If you write a safe interpreter your safety is still
               | dependent on the compiler and runtime for your safe
               | language having no errors.
               | 
               | This specific bug is in the implementation of a safe
               | language, and the same can happen in any other (safe)
               | language. People file bug reports on rustc, clang, llvm,
               | etc all the time, and the only difference is that
                | targeting them as an attack vector is not
               | particularly valuable as it's much harder to get a target
               | to run your attack code, and the methods for doing so are
               | much harder to hide. Note that these errors do not have
               | to be any kind of safety error in the compiler for those
               | languages, they can easily be erroneous codegen. These
               | things happen.
               | 
               | As another example, I've seen use after free bugs in
               | WebKit and JSC come up on HN over the years, and people
                | blame C++ memory management, when the use-after-free is
                | actually incorrect management of the JS GC state in the GC
               | runtime. If you have a GCd language, then code written in
               | that language is definitionally incapable of making
               | memory management errors, but if you are implementing the
               | runtime that provides that safety you are inherently and
               | unavoidably doing things that are "unsafe" under the
               | definition of the hosted language.
               | 
               | The decision of many people is that JS needs to be fast,
               | but if you disagree you can always see what the perf
               | delta is. IIRC for non-hot code the JSC baseline JIT is
               | maybe 10x faster than the interpreter (which is written
               | in a pseudo assembly, so it's not taking any C/C++
               | overhead), and if the code is hot enough that it starts
               | getting optimized that number just gets larger.
               | 
               | I'm not sure what language you were compiling where your
                | interpreter was holding its own against LLVM, but the options
               | that would make that plausible all imply the dominant
               | component of the runtime was not the code being
               | interpreted. e.g. along the same lines of people
               | comparing python performance by benchmarking C and
               | fortran math libraries.
        
               | Voultapher wrote:
               | Equating the provided memory safety of a Rust program
               | without direct unsafe usage to a complex JIT built in C++
               | seems willfully ignorant. The attack surface would still
               | be the mostly formally verified Rust type system and
               | standard library https://plv.mpi-sws.org/rustbelt/.
        
             | technion wrote:
             | Interestingly Edge gives you the option to disable the JIT
             | wholesale.
             | 
             | https://support.microsoft.com/en-us/microsoft-
             | edge/enhance-y...
             | 
             | I've had this in place on some workstations and I've never
             | actually noticed a performance difference.
        
           | vlovich123 wrote:
           | Is your implication that the problem is that v8 is written in
            | c++? I can't find the details of the vulnerability, and while
            | it's possible that this is the problem, the exploit could live
            | in the JIT itself (that's common for vulnerabilities within
            | v8, since v8 has a very thin runtime). We don't have
           | any memory safe techniques for the output of JITs (or
           | compilers by the way) - even if it were written in Rust or
           | Java it's possible the vast majority of security exploits in
           | this space would still exist.
        
             | mike_hearn wrote:
             | _> We don't have any memory safe techniques for the output
             | of JITs (or compilers by the way)_
             | 
             | That's not quite true.
             | 
             | BTW there is a writeup by Exodus Intelligence here:
             | 
             | https://blog.exodusintel.com/2024/01/19/google-
             | chrome-v8-cve...
             | 
             | It's a horrifyingly complicated vulnerability that takes 28
             | printed pages to explain and one line of code to fix. I
             | don't even want to know what kind of effort it took to
             | discover this one.
             | 
             | The bug is not _directly_ caused by V8 being written in
              | C++. It's a miscompilation, and those can happen whatever
             | language your JIT is written in.
             | 
             | But. But but but. It could be argued that the problem is
             | indirectly related to it being written in C++. It's a
             | tricky argument, but it might be worth trying. I will now
             | try and make it. Please be aware though, that I am
             | commercially biased, being that I work part time at Oracle
             | Labs on the Graal team who make a JS engine (open source
             | though!). It should hopefully be obvious that these words
             | and thoughts are my own, not indicative of any kind of
             | party line or anything. So, yeah, take it all with a solid
             | dash of table salt.
             | 
             | One of the reasons a higher level language like Java or
             | even Rust is more productive than C++ is the capacity for
             | building higher level abstractions. Specifically, Java has
             | very good reflection capabilities at both runtime and also
             | (via annotation processors) compile time. High level and
             | safe abstractions are one of the key tools used to make
             | software more robust and secure.
             | 
             | If you wade through Exodus' explanation, the core of the
             | bug is that whilst implementing an optimization in very
             | complex code, someone forgot to update some state and this
             | causes the generated IR to be incorrect. This sort of bug
             | isn't surprising because V8 has several different
             | compilers, and they are all working with complex graph-
             | based IR in a low level and fairly manual way.
             | 
             | In the beginning the GraalJS engine worked the same way. JS
             | was converted into bits of compiler IR by hand. It was
             | laborious and slow, and unlike the Chrome team, the engine
             | was being written by a little research team without much
             | budget. So they looked for ways to raise the level of
             | abstraction.
             | 
             | There are very few fast JS engines. There's V8,
             | JavaScriptCore, whatever Mozilla's is called these days,
             | and GraalJS. The latter is unique because it's not written
             | in C++. It's not even just written in plain Java (if it
             | was, this argument wouldn't work, because the vuln here
             | isn't a memory corruption directly in V8's own code).
             | GraalJS is written in something called the "truffle dsl"
             | which is basically using Java syntax to describe how to
             | create fast JIT compiling virtual machines. You don't
             | directly implement the logic of a VM or compiler for your
             | language in this DSL. Instead you express the language
             | semantics by writing an interpreter for it in Java whilst
             | also using annotations, a class library, a code generator
             | that comes with the framework and a set of compiler
             | intrinsics to define how your language works and also
             | (crucially) how to make it fast through tricks like
             | specialization. The code of the interpreter is then
             | repeatedly fused with the data/code of the program being
             | interpreted, partially evaluated and emitted as compiled
             | machine code to execute.
             | 
             | This process doesn't guarantee the absence of
             | miscompilations. Obviously, there's still a compiler
             | involved (which is also written in Java and also uses some
             | self-reflection tricks to be written in a high level way).
             | But fundamentally, you aren't manually manipulating low
             | level IR to implement a high level language. Nor are you
              | writing several compilers. There's _one_ compiler that's
             | general enough to compile any language.
             | 
             | There is a bit of a performance sacrifice from doing things
             | this way, in particular in terms of warmup time because
             | partial evaluation isn't free.
             | 
             | But this does let you minimize the amount of code that's
             | doing risky IR transformations! It's about as close as you
             | can get to a "memory safe technique for the output of JIT".
             | A JS engine built using this technique cannot corrupt
             | memory in the JS specific code, because it's not emitting
             | machine code or IR to begin with. That's all being derived
             | from a reflection of the interpreter, which is itself
             | already memory safe.
             | 
             | Also, obviously, working with complex graph structures in
             | C++ is super risky anyway, that's why Chrome uses a garbage
             | collected heap. V8 is super well fuzzed but if you don't
             | have Chrome's budget, implementing a JIT in standard C++
             | would be quite dangerous, just from ordinary bugs in the
             | parser or from UAFs in graph handling.
             | 
             | Like I said, this is kind of a tricky and perhaps weak
             | argument, perhaps analogous to arguing about safe Rust
             | (which still has unsafe sections), because there are still
             | graph based optimizations in Graal, and they can still go
             | wrong. But you can at least shrink those unsafe sections
             | down by a lot.
        
           | olliej wrote:
           | My comment was more in the vein of every security bug that
           | gets reported is followed with comments along the lines of
           | "don't they fuzz their software", or languages like Zig that
           | aren't memory safe but claim "debug allocators" and "support
           | for fuzzing" obviate the need for memory safety.
           | 
           | A memory safe language does not need fuzzing to try and
           | prevent memory safety errors from causing security
           | vulnerabilities, as definitionally such errors are not
           | exploitable. Obviously fuzzing is still valuable in such an
           | environment because crashing at runtime is a bad user
           | experience, but it's still superior to "trampling through
           | memory with wanton freedom and continuing to do whatever
           | attackers want".
        
           | pornel wrote:
           | Chrome had many vulnerabilities that could be avoided with a
           | safer language, but complex bugs in JIT are not one of them.
           | 
           | I think you could argue that the VM should be implemented in
           | a less dangerous way that can be formally verified or better
           | isolated, but that's not as simple as a rewrite in Rust.
        
       | ck2 wrote:
        | btw the last version on Windows 7 is 109
       | 
       | so maybe millions of vulnerable installs out there
       | 
       | there are 3rd party patches to install 120 on w7 but who would
       | trust them?
        
         | bee_rider wrote:
         | Windows 7 is like 15 years old.
        
           | toast0 wrote:
           | Age is irrelevant. Market share is. The first search result
           | [1] places Windows 7 at 3.34% of Windows marketshare
           | currently, which is significant, IMHO, but your market may
           | vary from whatever this statistic tracks. 8.1 at 1.66% and XP
           | at 0.64% could be significant, too, depending on your market.
           | If you're going to support 7, you may as well support 8 and
           | 8.1 too because it may not be any more work.
           | 
           | [1] https://gs.statcounter.com/windows-version-market-
           | share/desk...
        
             | Workaccount2 wrote:
             | I have the great honor of not only commandeering a windows
             | 7 laptop at work, but also a windows XP one.
        
               | thfuran wrote:
               | Can you yet get a laptop that fits XP all in cache?
               | 
               | Edit: It looks like you could probably pull it off with
               | some trimming and a sufficiently expensive Epyc, but I
               | think you'll have to wait a few years for laptops to
               | manage. Hopefully by then you can take XP out back and
                | put it down once and for all.
        
               | Workaccount2 wrote:
               | It's used in an offline testing setup using a long
               | forgotten serial interface scripting language.
               | 
               | It's one of those "it works and there is no need to
               | change it so we just leave it"
        
         | znkynz wrote:
         | I would be a bit more worried about the last 3 years of no
         | security updates from Microsoft, if you're still running
         | Windows 7.
        
       | jiripospisil wrote:
       | > [$16000][1515930] High CVE-2024-0517: Out of bounds write in
       | V8. Reported by Toan (suto) Pham of Qrious Secure on 2024-01-06
       | 
       | > [$1000][1507412] High CVE-2024-0518: Type Confusion in V8.
       | Reported by Ganjiang Zhou(@refrain_areu) of ChaMd5-H1 team on
       | 2023-12-03
       | 
       | > [$TBD][1517354] High CVE-2024-0519: Out of bounds memory access
       | in V8. Reported by Anonymous on 2024-01-11
       | 
       | Damn. Google should really hire HN's C++ experts because they
       | would never make errors like this.
        
         | zeusk wrote:
         | Ironically they need all this complexity to make an interpreted
         | script run faster.
        
         | fwsgonzo wrote:
         | I wonder if any of these are avoided by disabling JIT, which I
         | have done. And also, just how much of the codebase is JIT
         | related? I also don't think C++ is always the problem. This is
         | a complex language run-time, and there's all sorts of issues
         | that can be language and language run-time related, regardless
         | of it being written in Rust or C++, that manifest as out-of-
         | bounds errors.
         | 
         | I couldn't find any information about work-arounds.
        
       | nolist_policy wrote:
       | It says severity high everywhere, but doesn't the attacker also
       | need to break out of the sandbox to do actual damage? With this
       | vulnerability alone you only gain remote code execution inside
       | the sandbox.
        
       ___________________________________________________________________
       (page generated 2024-01-24 23:01 UTC)