[HN Gopher] A buffer overflow in the XNU kernel
       ___________________________________________________________________
        
       A buffer overflow in the XNU kernel
        
       Author : jprx
       Score  : 94 points
       Date   : 2024-06-20 17:24 UTC (5 hours ago)
        
 (HTM) web link (jprx.io)
 (TXT) w3m dump (jprx.io)
        
       | bartvk wrote:
       | If you're still running the affected kernel, what are the
       | possible consequences?
       | 
       | Also, this has been public for months:
       | 
        | - February 17, 2024: I posted the hash of TURPENTINE.c to X.
       | 
       | - May 13, 2024: macOS Sonoma 14.5 (23F79) shipped with
       | xnu-10063.121.3, the first public release containing a fix.
        
         | axoltl wrote:
         | The syscalls involved are in a lot of sandboxes, so worst (or
         | best, depending on your point of view) case scenario it's a
          | pretty universal privesc. There are a lot of steps to get there
         | though. I'm not super familiar with the mbuf subsystem
         | specifically but I'm going to guess mbufs are in their own
         | allocator zone. That means you're guaranteed to overwrite an
          | adjacent m_hdr structure. Those contain pointers that form a
         | linked list and at first glance I don't see linked list
         | hardening or zone checks in the MBUF macros. One could envision
         | being able to turn this bug into a kASLR leak as well as a
         | kernel r/w primitive and while that isn't the silver bullet it
         | used to be on XNU (because of a whole host of hardening Apple
         | put in) it's still pretty powerful.
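A minimal sketch of the adjacent-slot corruption axoltl describes, simulated with Python ctypes rather than kernel C. The MbufSketch layout and field names are invented stand-ins (XNU's real struct m_hdr/mbuf differs); only the mechanism is the point: fixed-size slots in a dedicated zone mean an overflow runs straight into the neighbor's linked-list pointer.

```python
import ctypes

# Hypothetical, heavily simplified stand-in for XNU's mbuf: the real
# struct m_hdr layout differs, but it likewise begins with linked-list
# pointers, and mbufs sit in fixed-size slots in their own zone.
class MbufSketch(ctypes.Structure):
    _fields_ = [
        ("mh_next", ctypes.c_uint64),    # linked-list pointer (as in m_hdr)
        ("mh_len", ctypes.c_uint32),     # payload length
        ("data", ctypes.c_char * 52),    # inline payload; 64-byte slot total
    ]

# Two mbufs adjacent in the same zone, as axoltl describes.
zone = (MbufSketch * 2)()
zone[1].mh_next = 0x4141414141414141     # pretend this points somewhere real

# Overflow the first slot's payload by 8 bytes: the write runs off the
# end of zone[0] straight into the neighbor's mh_next.
payload = b"B" * 52 + b"\x00" * 8        # 52 bytes fit; 8 bytes do not
dst = ctypes.addressof(zone[0]) + MbufSketch.data.offset
ctypes.memmove(dst, payload, len(payload))

# Absent linked-list hardening in the MBUF macros, the corrupted
# pointer gets followed on the next list operation.
print(hex(zone[1].mh_next))  # → 0x0 (attacker-controlled; zeroed here)
```

In the real kernel the overwritten pointer would be dereferenced by the mbuf list-walking macros, which is the step an attacker would aim to turn into a kASLR leak or r/w primitive.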
        
       | throwaway71271 wrote:
       | I can't wait for gpt5 to be finetuned on all CVEs and their fixes
       | so that you can just give it the linux source and it spits out
       | exploits.
       | 
       | It will be absolute chaos.
       | 
       | Like when you can just send one icmp packet with `+++ath0` and
       | just disconnect someone's modem, or open people's cdroms randomly
       | at night haha.
       | 
       | edit: it was a joke :)
        
         | saagarjha wrote:
         | It's definitely nowhere near capable of doing that.
        
           | favorited wrote:
           | Is "with a sufficiently smart LLM" the new "with a
           | sufficiently smart compiler?"
        
             | st_goliath wrote:
             | "imagine feeding this into an LLM/ChatGPT" is the new
             | "imagine a Beowulf cluster of these"
        
           | brcmthrowaway wrote:
           | Have you used GPT-5?
        
             | JSDevOps wrote:
             | If you aren't using GPT-6a then you are years behind.
        
               | exe34 wrote:
               | you need to wake up at 4am and have a cold shower!
        
               | JSDevOps wrote:
               | 12 before 12, 12 cold showers before 12 allowing GPT-7 to
               | take care of my daily needs.
        
               | vips7L wrote:
               | GPT-69 is already far ahead of 6a.
        
               | JSDevOps wrote:
               | Been using 73 for months now.
        
           | speed_spread wrote:
           | GPT-5, maybe not. But somebody somewhere is building
            | something that can do that. And if they can't do it _now_,
            | they have a plan that tells them what's missing. TL;DR: it's
            | coming, soon.
        
             | TylerE wrote:
             | and lots of people are spending lots of time and money on
              | AI Coding Assistants... which is more or less the knowledge
             | base you need.
             | 
             | If they could use that structural training to answer
             | queries like "Is there any code path where
              | some_dangerous_func() is called without its return value
             | being checked"...
        
               | axoltl wrote:
               | You can do this today by querying the AST output by a
               | compiler. Regardless, the parent comment was talking
               | about exploits, not vulnerabilities/bugs. Vulns are a
               | dime-a-dozen compared to even PoC exploits let alone
               | shippable exploits.
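As a rough illustration of the kind of query axoltl means, here is a sketch using Python's own ast module to flag calls whose return value is discarded. A real check on C code would query the compiler's AST instead (e.g. via clang tooling); `risky()` is a hypothetical stand-in for the `some_dangerous_func()` in the comment above.

```python
import ast

# Toy input; risky() is a hypothetical stand-in for some_dangerous_func().
SRC = """
risky()          # return value silently discarded
ok = risky()     # return value bound
if risky():      # return value checked
    pass
"""

def unchecked_calls(source: str, func_name: str) -> list[int]:
    """Line numbers where func_name is called as a bare expression
    statement, i.e. its return value is thrown away."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        # An ast.Expr wrapping an ast.Call is a call whose result is unused.
        if (isinstance(node, ast.Expr)
                and isinstance(node.value, ast.Call)
                and isinstance(node.value.func, ast.Name)
                and node.value.func.id == func_name):
            hits.append(node.lineno)
    return hits

print(unchecked_calls(SRC, "risky"))  # → [2]
```

Static analyzers (clang-tidy, Coverity, CodeQL) ship essentially this check for C; finding the bug this way is the easy part, which is axoltl's point about vulns being cheaper than exploits.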
        
               | TylerE wrote:
               | Ok, so add "and generate a C program to exploit it" to
               | the prompt.
        
               | axoltl wrote:
               | You're either being sarcastic or wildly underestimating
               | how hard it is to write an exploit. I haven't written
               | about exploit dev publicly for a _long_ time, but I
               | invite you to read
               | https://fail0verflow.com/blog/2014/hubcap-chromecast-
               | root-pt... for what I consider to be a pretty trivial
               | exploit of a very "squishy" (industry term) target.
               | 
               | XNU isn't the hardest target to pop but it is far from
               | the easiest.
        
             | axoltl wrote:
             | Writing exploits is a bit of an art-form. Current
             | incarnations of GPT have trouble writing code at a level
             | more advanced than a junior developer.
        
           | sillywalk wrote:
            | Apparently GPT-4 has some capacity to conduct exploits by
            | "reading" CVE reports. I don't know if it can autonomously
           | create exploits though:
           | 
           | GPT-4 can exploit vulnerabilities by reading CVEs
            | (theregister.com) 81 points by ignoramous 60 days ago | 29
            | comments
           | 
           | https://news.ycombinator.com/item?id=40101846
           | 
           | which links to a Register Article[0], which links to a
           | paper[1]:
           | 
           | "In this work, we show that LLM agents can autonomously
           | exploit one-day vulnerabilities in real-world systems. To
           | show this, we collected a dataset of 15 one-day
           | vulnerabilities that include ones categorized as critical
           | severity in the CVE description. When given the CVE
           | description, GPT-4 is capable of exploiting 87% of these
           | vulnerabilities compared to 0% for every other model we test
           | (GPT-3.5, open-source LLMs) and open-source vulnerability
           | scanners (ZAP and Metasploit). Fortunately, our GPT-4 agent
           | requires the CVE description for high performance: without
           | the description, GPT-4 can exploit only 7% of the
           | vulnerabilities."[1]
           | 
           | [0] https://www.theregister.com/2024/04/17/gpt4_can_exploit_r
           | eal...
           | 
           | [1] https://arxiv.org/pdf/2404.08144
        
             | saagarjha wrote:
             | Yes, that sounds about right. LLMs aren't quite good enough
             | to find novel bugs and exploit them like a human would.
        
         | lpapez wrote:
         | Writing an exploit is usually much more difficult than patching
         | the underlying bug.
         | 
         | Half of the work in fixing a bug report is getting a
         | reproducible example. Nay, more than half.
         | 
         | If there was a magic AI which could generate exploits, I'd
         | imagine there would be an equally magic AI patching the holes
         | right out.
        
           | vlovich123 wrote:
           | Maybe but keep in mind that there's often a substantial lag
           | in practice between a fixed vulnerability and its deployment
           | into production.
           | 
            | That said, I'm quite skeptical there are any AIs on the
            | horizon that can autogenerate exploits from CVEs.
        
         | chad1n wrote:
          | There is a bigger chance of a toddler smashing a keyboard
          | finding a bug than of GPT-5 finding one. LLMs can't understand
          | intent, so they work like `grep` with little to no
          | understanding of the context, and most of the time they will
          | falsely flag good code.
         | 
          | There are already a lot of tools to find bugs, like fuzzers,
          | but I am sure that LLMs won't be one of them.
        
           | exe34 wrote:
            | They don't need to understand intent, they just need to find
            | exploits. They don't even need to do it by reading code alone
            | - give them a VM running the code and let them throw
            | excrement at it until something sticks!
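A toy sketch of the "throw excrement at it" approach: blind random fuzzing of a hypothetical target with one planted bug. Everything here (toy_parser, the magic byte) is invented for illustration, and the single-byte trigger is deliberately easy to hit.

```python
import random

def toy_parser(data: bytes) -> int:
    """Hypothetical target with one planted bug: it falls over on a
    single magic byte. (Stand-in for a VM or program under test.)"""
    if data and data[0] == 0x41:
        raise RuntimeError("parser crashed")  # the planted bug
    return len(data)

def dumb_fuzz(rounds=200_000, seed=1234):
    """Blind random fuzzing: hurl random bytes at the target until
    something sticks, then report the crashing input."""
    rng = random.Random(seed)
    for i in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            toy_parser(blob)
        except RuntimeError:
            return i, blob  # iteration count and crashing input
    return None

found = dumb_fuzz()
print(found is not None)  # → True (a 1-in-256 bug falls fast)
```

With even a two-byte magic number the hit rate drops to 1 in 65,536 per attempt, which is why coverage-guided fuzzers like AFL and libFuzzer track which branches each input reaches instead of guessing blindly.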
        
           | barkingcat wrote:
           | Llm powered / guided fuzzer would be pretty cool though.
        
             | zX41ZdbW wrote:
             | https://github.com/google/oss-fuzz-gen
        
         | jiveturkey wrote:
         | the real win will be when it can also generate the codename for
         | the exploit. FATEFATAL
        
       | lgdskhglsa wrote:
        | In case people missed it, the name of the exploit is a blink-182
        | song released around the time it was discovered.
        
         | jprx wrote:
         | You get it!!
        
       ___________________________________________________________________
       (page generated 2024-06-20 23:00 UTC)