[HN Gopher] The Dirty Pipe Vulnerability
___________________________________________________________________
The Dirty Pipe Vulnerability
Author : max_k
Score : 531 points
Date : 2022-03-07 12:01 UTC (10 hours ago)
(HTM) web link (dirtypipe.cm4all.com)
(TXT) w3m dump (dirtypipe.cm4all.com)
| Dowwie wrote:
| >Memory bandwidth is saved by employing the splice() system call
| to feed data directly from the hard disk into the HTTP
| connection, without passing the kernel/userspace boundary ("zero-
| copy").
|
| What are the memory savings of this splicing approach as compared
| to streaming [through userspace]?
| max_k wrote:
| What does "streaming buffers" mean? splice() avoids copying
| data from kernel to userspace and back; it stays in the kernel,
| and often isn't even copied at all, only page references are
| passed around.
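|
| A minimal sketch of that splice() pattern, assuming file_fd and
| sock_fd are already-open descriptors (illustrative only, not the
| article's actual web server code):
|
|     #define _GNU_SOURCE
|     #include <fcntl.h>   /* splice(), SPLICE_F_MOVE */
|     #include <unistd.h>  /* pipe(), close(), size_t, ssize_t */
|
|     /* Move len bytes from a file to a socket without copying the
|      * payload through userspace: file -> pipe -> socket.  Only
|      * page references travel through the pipe. */
|     static int send_file_zero_copy(int file_fd, int sock_fd, size_t len)
|     {
|         int p[2];
|         if (pipe(p) < 0)
|             return -1;
|
|         while (len > 0) {
|             ssize_t n = splice(file_fd, NULL, p[1], NULL, len,
|                                SPLICE_F_MOVE);
|             if (n <= 0)
|                 break;
|             for (ssize_t left = n; left > 0; ) {
|                 ssize_t m = splice(p[0], NULL, sock_fd, NULL, left,
|                                    SPLICE_F_MOVE);
|                 if (m <= 0)
|                     goto out;
|                 left -= m;
|             }
|             len -= (size_t)n;
|         }
|     out:
|         close(p[0]);
|         close(p[1]);
|         return 0;
|     }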
| MayeulC wrote:
| Wouldn't this allow modifying a cached version of /sbin/su to nop
| the password check? This seems really easy to exploit for
| privilege escalation.
| max_k wrote:
| Yes. But you can also inject code into libc.so.6, and all
| running processes will have it.
| staticassertion wrote:
| Or /etc/passwd
| freemint wrote:
| Yes it would. That is implied, because writing arbitrary files
| means you can also edit the permission system.
| staticassertion wrote:
| Another example of a vulnerability that is purposefully
| obfuscated in the commit log. It is an insane practice that needs
| to die. The Linux kernel maintainers have been doing this for
| decades and it's now a standard practice for upstream.
|
| This gives attackers an advantage (they are incentivized to read
| commits and can easily see the vuln) and defenders a huge
| disadvantage. Now I have to rush to patch whereas attackers have
| had this entire time to build their POCs and exploit systems.
|
| End this ridiculous practice.
| rocqua wrote:
| What is your threat model / situation that you care about
| attackers who reverse engineer patches, but are not in the
| small circle of people who would be informed beforehand?
|
| To me, it seems like the average corporate security team is not
| going to worry about these kinds of attackers. Security for
| state secrets might, but they seem likely to be clued in early
| by Linux developers.
|
| I'm probably missing something tho.
| staticassertion wrote:
| > What is your threat model / situation that you care about
| attackers who reverse engineer patches, but are not in the
| small circle of people who would be informed beforehand?
|
| Virtually every single Linux user. I think what you're
| missing is how commonplace and straightforward it is for
| attackers to review these commits and how _uncommon_ it is
| for someone to be on the receiving end of an embargo.
|
| Most exploits are for N days, meaning that they're for
| vulnerabilities that have a patch out for them. Knowing that
| there's a patch is universally critical for all defenders.
|
| For context, my company will be posting about a kernel (then)
| 0day one of our security researchers discovered. You can read
| other Linux kernel exploitation work we've done here:
| https://www.graplsecurity.com/blog
| rocqua wrote:
| By threat model I mean, who are you worried about attacking
| you.
|
| I get that every linux user could be attacked. But why
| would someone with the relevant knowledge that could pull
| this off attack a given linux user? Why are you worried
| about it? (Not trying to be sarcastic, trying to get a
| sense of what threats you are worried about).
| hvidgaard wrote:
| What should they do instead? You have to rush to patch in any
| case. If the maintainers start to label commits with "security
| patch" the logical step is that it doesn't require immediate
| action when the label is not there. Never mind that the bug
| might actually be exploitable but undiscovered by white hats.
|
| If you do not want to rush to patch more than you have to, use
| a LTS kernel and know that updates matter and should be applied
| asap regardless of the reason for the patch.
| gwd wrote:
| > What should they do instead?
|
| Well Xen for instance includes a reference to the relevant
| security advisory; either "This is XSA-nnn" or "This is part
| of XSA-nnn".
|
| > If the maintainers start to label commits with "security
| patch" the logical step is that it doesn't require immediate
| action when the label is not there. Never mind that the bug
| might actually be exploitable but undiscovered by white hats.
| If you do not want to rush to patch more than you have to,
| use a LTS kernel and know that updates matter and should be
| applied asap regardless of the reason for the patch.
|
| So reading between the lines, there are two general
| approaches one might take:
|
| 1. Take the most recent release, and then only security
| fixes; perhaps only security fixes which are relevant to you.
|
| 2. Take all backported fixes, regardless of whether they're
| relevant to you.
|
| Both Xen and Linux actually recommend #2: when we issue a
| security advisory, we recommend people build from the most
| recent stable tip. That's the combination of patches which
| has actually gotten the most testing; using something else
| introduces the risk that there are subtle dependencies
| between the patches that haven't been identified.
| Additionally, as you say, there's a risk that some bug has
| been fixed whose security implications have been missed.
|
| Nonetheless, that approach has its downsides. Every time you
| change anything, you risk breaking something. In Linux in
| particular, many patches are chosen for backport by a neural
| network, without any human intervention whatsoever. Several
| times I've updated a point release of Linux to discover that
| some backport actually broke some other feature I was using.
|
| In Xen's case, we give downstreams the information to make
| the decisions themselves: If companies feel the risk of
| additional churn is higher than the risk of missing potential
| fixes, we give them the tools to do so. Linux more or less
| forces you to take the first approach.
|
| Then again, Linux's development velocity is _way_ higher;
| from a practical perspective it may not be possible to catch
| the security angle of enough commits; so forcing downstreams
| to update may be the only reasonable solution.
| staticassertion wrote:
| > What should they do instead?
|
| When someone submits a patch for a vulnerability, label the
| commit with that information.
|
| > You have to rush to patch in any case.
|
| The difference is how much of a head start attackers have.
| Attackers are incentivized to read commits for obfuscated
| vulns - asking defenders to do that is just adding one more
| thing to our plates.
|
| That's a huge difference.
|
| > the logical step is that it doesn't require immediate
| action when the label is not there.
|
| So I can go about my patch cycle as normal.
|
| > Never mind that the bug might actually be exploitable but
| undiscovered by white hats.
|
| OK? So? First of all, it's usually really obvious when a bug
| _might be_ exploitable, or at least it would be if we didn't
| have commits obfuscating the details. Second, I'm not
| suggesting that you only apply security labeled patches.
| sirdarckcat wrote:
| for what is worth, the link gregkh pointed you to explains
| the answer for your first 2 points.
|
| Your last point is wrong. Simple example, which of the
| following thousand bugs are exploitable?
| https://syzkaller.appspot.com/upstream
|
| If you can exploit them, you can earn 20,000 to 90,000 USD
| on https://google.github.io/kctf/vrp
| staticassertion wrote:
| I've read the post before, I've seen the talk, and
| frankly it's been addressed a number of times. It's the
| same silly nonsense that they've been touting for decades
| ie: "a bug is a bug".
| roddux wrote:
| Don't know why your other comment got downvoted. Silently
| patching bugs has left many LTS kernels vulnerable to _old_
| bugs, because they weren't tagged as security fixes. This also
| leads to other issues:
| https://grsecurity.net/the_life_of_a_bad_security_fix
|
| See also: https://twitter.com/spendergrsec
| staticassertion wrote:
| Not just downvoted. Flagged lol
| marbu wrote:
| Are you saying that you are able to read all incoming Linux
| patches and easily identify changes which fix a security
| problem, so that you can come up with a POC by the time the
| security issue is announced?
|
| If the patch was flagged as a security problem from the
| beginning, it would give advantage to attackers, since they
| would know that the particular patch is worth investigating,
| while the defenders would have to wait for the patch to be
| finalized and tested anyway.
| staticassertion wrote:
| You have it completely backwards.
| amluto wrote:
| Do you have actual evidence of that in a case like this?
|
| (This is not a rhetorical question. I can possibly influence
| this policy, but unsubstantiated objections won't help.)
| staticassertion wrote:
| Evidence of what, exactly? I can find you lots of evidence
| for hiding vulns, they don't even hide it - I'm sure Greg
| will admit to as much.
|
| Evidence of this being helpful to attackers and not
| defenders? IDK, talk to anyone who does Linux kernel exploit
| development.
|
| edit: There you go, Greg linked his policy, which explicitly
| notes this.
| rfoo wrote:
| Not OP, but please do try to influence this policy if you
| can:
|
| 1. The commit message [1] does not mention any security
| implication. This is reasonable, because the patch is usually
| released to the public earlier and it makes sense to do some
| obfuscation, to deter patch-gappers. But note that this
| approach is not a controversy-free one.
|
| 2. But there is also no security announcement in stable
| release notes or any similar stuff. I don't know how to
| provide evidence of "something simply does not exist".
|
| 3. Check the timeline in the blog post. The bug being fixed
| in a stable release (5.16.11 on 2022-02-23) marks the end of
| upstream's handling of this bug. Max then had to send the bug
| details to linux-distros list to kick off (another separate
| process) distro maintainers' response. If what you are
| maintaining is not a distro, good luck.
|
| Is this wrong-sounding enough?
|
| [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/
| lin...
| amluto wrote:
| #1 is intentional, for better or for worse. It's certainly
| well-intentioned too, although the intentions may be based
| on wrong assumptions.
|
| #2: upstream makes no general effort to identify security
| bugs as such. Obviously this one was known to be a security
| bug, but the general policy (see #1) is to avoid announcing
| it.
|
| #3: In any embargo situation, if you're not on the
| distribution list, you don't get notified. This is
| unavoidable. oss-security nominally handles everyone else,
| but it's very spotty.
|
| Sometimes I wish there was a Linux kernel security advisory
| process, but this would need funding or a dedicated
| volunteer.
| mjw1007 wrote:
| If only there was some kind of foundation with a revenue
| of $177 million last year which had an interest in
| Linux's success.
| nisa wrote:
| they are busy doing blockchain projects :)
| sirdarckcat wrote:
| > Sometimes I wish there was a Linux kernel security
| advisory process, but this would need funding or a
| dedicated volunteer.
|
| This is already happening https://osv.dev/list?q=Kernel&a
| ffected_only=true&page=1&ecos...
| rfoo wrote:
| TBH the thing that annoyed me most in this story is the
| "Someone had to start the disclosure process on linux-
| distros again and if they didn't no one would know" part.
| There are certainly silent bug fixes where the author
| intentionally (or not) does not post to linux-distros or
| any other mailing lists even after the stable release. It
| would take an hour to dig up a good example tho. (Okay, maybe 10
| minutes if I'm going to read Brad Spengler's rants)
|
| I guess a Linux kernel security advisory process is
| needed to fix this, but yeah :(
| amluto wrote:
| For what it's worth, linux-distros has its own opinions
| that are not necessarily compatible with those of the
| upstream kernel.
| rocqua wrote:
| This is about the commit that fixed the bug, not the commit
| that introduced the bug. The accusation is not that linux
| developers intentionally introduced a vulnerability. Instead
| it is that linux developers hid that a commit fixed a
| vulnerability. Linux does this to prevent people from
| learning that the vulnerability exists.
| titzer wrote:
| This is why stable branches are a thing. I don't know the
| branching scheme that the Linux kernel uses, but the idea is
| that for the oldest (most stable) branch, _everything_ is a
| (sometimes backported) bugfix with security implications.
| gregkh wrote:
| I've described how we (the kernel security team) handle this
| type of thing many times, and even summarized it in the past
| here: http://www.kroah.com/log/blog/2018/02/05/linux-kernel-
| releas... Scroll down to the section entitled "Security" for
| the details.
|
| If you wish to disagree with how we handle all of this,
| wonderful, we will be glad to discuss it on the mailing lists.
| Just don't try to rehash all the same old arguments again, as
| that's not going to work at all.
|
| Also, this was fixed in a public kernel last week, what
| prevented you from updating your kernel already? Did you need
| more time to test the last release?
|
| Edit: It was fixed in a public release 12 days ago.
| staticassertion wrote:
| bell-cot wrote:
| Attackers with the resources and patience to read and deeply
| analyze all the commits, over time... those guys were fairly
| likely to notice the bug back when it was introduced. Plain vs.
| obscure comments on the _patch_ don't much matter to them.
| Low-resource and lower-skill attackers - "/* fix vuln.
| introduced in prior commit 123456789 */" could be quite useful
| to them.
| staticassertion wrote:
| I don't think you understand how attackers work.
|
| Attackers don't just crawl code at random. Starting with
| known crashes or obfuscated commits is always faster.
| camgunz wrote:
| Can you say what you're hoping to do? LK devs tag security
| fixes with "[SECURITY]" and then what? You would merge
| individual [SECURITY] commits into your tree?
|
| Currently the situation is that you can just follow
| development/stable trees right (e.g. [0])? Why would you only
| want the security patches (of which there look to be _a lot_
| just in the last couple weeks)? Are you looking to not apply a
| patch because LK devs haven't marked it as a security patch?
|
| [0]:
| https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
| staticassertion wrote:
| Assume I patch my Linux boxes once a month. I see a commit
| where an attacker has a trivial privesc. I read the commit,
| see if it's relevant to me, and potentially decide to do an
| out of cycle patch. As in, instead of updating next month
| I'll just update now.
| camgunz wrote:
| Gotcha. Yeah it does seem like there's some space between
| the overpromising "I am a Linux Kernel Dev and I proclaim
| this patch is/is not a security patch" and the
| underpromising "I am a Linux Kernel Dev and have no
| knowledge of whether or not this is a security patch". It
| doesn't seem unreasonable to mark it somehow when you know.
|
| On the other hand, just on that page I linked, there's... a
| lot of issues in there I would consider patching for
| security reasons. I don't know how reasonable it is, given
| the existing kernel development model, to tag this stuff in
| the commit. The LTS branches pull in from a lot of other
| branches, so like, which ones do you follow? When
| Vijayanand Jitta patches a UAF bug in their tree, it might
| be hanging out on the internet for a while for hackers to
| see before it ever gets into a kernel tree you might
| consider merging from.
|
| I guess what I'm saying here is that it seems like a lot to
| ask that if I find a bug, I:
|
| - don't discuss it publicly in any way
|
| - perform independent research to determine whether there
| are security implications
|
| - if there are, ask everyone else to keep the fix secret
| until it lands in the release trees with a [SECURITY] tag
|
| - accept all the blame if I'm ever wrong, even once
|
| That too is a lot of overhead and responsibility. So I'm
| sympathetic to their argument of "honestly, you should just
| assume these are all security vulns".
|
| So maybe this is just a perspective thing? Like, there are
| a lot of commits, they can't all be security issues right?
| Well of course they can be! This is C after all.
|
| Like in that list, there's dozens of things I think should
| probably have a SECURITY tag. Over 14 days, let's just call
| that 2 patches a day. I'm not patching twice a day; it's
| hard for me to imagine anyone would, or would want to
| devote mental bandwidth to getting that down to a
| manageable rate ("I don't run that ethernet card", etc.)
|
| So for me, I actually kind of like the weekly batching? It
| feels pragmatic and a pretty good balancing of kernel
| dev/sysadmin needs. Can I envision a system that gave end-
| users more information? Yeah definitely, but not one that
| wouldn't ask LK devs to do a lot more work. Which I guess
| is a drawn out way of saying "feel free to write your own
| OS" or "consider OpenBSD" or "get involved in Rust in the
| kernel" or "try to move safer/microkernel designs forward"
| :).
| staticassertion wrote:
| I think some important context here is that the people
| who want commits obfuscated are never the ones making a
| decision about the security label. The people writing the
| commit already know it's a security issue.
| mltony wrote:
| 10 years ago I found an even more outrageous bug in Windows 8.
|
| I was working at MSFT back then and I was writing a tool that
| produced 10 GB of data in TSV format, which I wanted to stream
| into gzip so that later this file would be sent over the network.
| When the other side received the file they would gunzip it
| successfully, but inside there would be mostly correct TSV data
| with some chunks of random binary garbage. Turned out that the
| pipe operator was somehow causing this.
|
| As a responsible citizen I tried to report it to the right team
| and there I ran into problems. Apparently no one in Windows wants
| to deal with bugs. It was ridiculously hard to report this while
| being an employee; I can't imagine anyone being able to report
| similar bugs from outside. And even though I reported that bug, I
| saw no activity on it by the time I left the company.
|
| However I just tried to quickly reproduce it on Windows 10 and it
| wouldn't reproduce. Maybe I forgot some details of that bug or
| maybe indeed they fixed this by now.
| orwin wrote:
| Does this have a CVSS score yet? It seems really powerful and easy to
| exploit. And by easy to exploit I'm talking beginner CTF easy.
| cookiengineer wrote:
| Amazing write-up! This is a super example of a responsible
| disclosure.
|
| I mean, compiling 17 kernels alone takes so long that most people
| would've given up in between.
| Unklejoe wrote:
| > most people would've given up in between
|
| Nah, that's the most fun part. Once you have one kernel that
| works and one that doesn't, you can be pretty sure that you'll
| eventually find the cause of the bug. The part where I would
| have given up is the "trying to reproduce" part.
| mort96 wrote:
| Depends entirely on what sort of hardware you have. IIRC, I
| usually spend around 5 minutes when compiling Linux on my
| desktop, so not instant but not horrible. The agonizing part
| would be to have to manually install, boot and test those
| kernels, or to create a setup involving virtual machines which
| does that automatically to use `git bisect run`.
|
| But yeah, incredibly impressive persistence.
| blinkingled wrote:
| The offending commit was authored by Christoph Hellwig and
| possibly reviewed by Al Viro, both of whom combined are close to
| 100% of Linux filesystems and VFS knowledge. Point being, with the
| level of complexity you're just going to have to live with the fact
| that there'll always be bugs.
|
| VFS/Page Cache/FS layers represent incredible complexity and
| cross dependencies - but the good news is the code is very mature
| by now and should not see changes like this too often.
| xcambar wrote:
| > Point being, with the level of complexity you're just going to
| have to live with the fact that there'll always be bugs.
|
| I'd like to add, for the less tenured developers around: "with
| the level of _experience_ you're just going to have to live with
| the fact that there'll always be bugs."
| nomel wrote:
| Do you have an example in mind, for complex software that is
| bug free?
| dncornholio wrote:
| And the reason for the commit was to have 'nicer code'. The
| code was working perfectly fine before someone decided it was
| not nice enough?
| pengaru wrote:
| if it ain't broken, fix it til it is
| 0xbadcafebee wrote:
| Very good point. Often developers talk in cargo-cult
| terminology like "beautiful" or "nice" or "elegant" code, but
| there is no definition of what that even means or whether it
| empirically leads to better (or worse) outcomes. We know
| people _like it more_, but that doesn't mean we should be
| doing it. A true science would provide hypotheses,
| experiments, and repeated evidence, rather than anecdotes.
|
| (from the downvotes it seems like some people don't want
| software to be a science)
| vanviegen wrote:
| Easy: simpler is better. What is considered simple varies
| wildly based on one's experience, though.
| theamk wrote:
| While a scientific approach would be nice, it is hard to do,
| and even harder to do correctly in the way applicable to
| the specific situation. And in the absence of the research,
| all we have is intuition and anecdotes.
|
| And they work both ways -- there are anecdotes that making
| code beautiful leads to better outcomes, and there are
| anecdotes that having ugly code leads to better outcomes.
|
| This means you cannot use lack of scientific research to
| give weight to your personal opinions. After all, that
| argument works in either direction ("There is no evidence
| that leaving duplicate code in the tree leads to worse
| outcomes... A true science would provide...")
| max_k wrote:
| Your post sounds like it's a bad thing, but "nicer" code is
| easier to maintain, i.e. there will be fewer bugs (and fewer
| vulnerabilities). This bug is an exception to the rule - shit
| happens. But refactoring code to be "nicer" prevents more
| bugs than it causes. Two patches were involved in making this
| bug happen, and minus the bug, I value both of them (and
| their authors).
| Cthulhu_ wrote:
| "There are two ways of constructing a software design: One
| way is to make it so simple that there are obviously no
| deficiencies and the other way is to make it so complicated
| that there are no obvious deficiencies."
| -- C.A.R. Hoare, The 1980 ACM Turing Award Lecture
| dncornholio wrote:
| I might have sounded harsh, but I think "shit happens" is not
| the way to look at this. I don't claim I'm a better
| developer, but I always try to shy away from making things
| look nicer.
|
| Experience has taught me: deal with problems when they are
| problems. Dealing with could-be problems can be a deep, very
| deep rabbit hole.
|
| The commit message gave me the feeling that we should just
| trust the author.
|
| https://github.com/torvalds/linux/commit/f6dd975583bd8ce088
| 4...
| max_k wrote:
| There's no bug in that commit, the commit is correct, it
| only makes the bug exploitable. The buggy commit is
| older, it's https://github.com/torvalds/linux/commit/2416
| 99cd72a8489c944... but not exploitable.
|
| > I always try to shy away from making things look nicer
|
| That's understandable, though from my experience, lots of
| old bugs can be found while refactoring code, even at the
| (small) risk of introducing new bugs.
| ahartmetz wrote:
| As (almost) always, the expert's answer is: "It depends".
| How risky is the change, how big the consequences, how
| un-nice is the code before, how easy is it to test that
| the code still works afterwards, etc...
|
| FWIW, I tend to err on the side of "do it", and I usually
| do it. But I have been in a situation where a customer
| asked for the risk level, I answered to the best of my
| knowledge (quite low but it's hard to be 100% sure), and
| they declined the change. The consequences of a bug would
| have been pretty horrible, too. Hundreds of thousands of
| (things) shipped with buggy software that is somewhat
| cumbersome to update.
| Cthulhu_ wrote:
| While true, it's important to ensure there is adequate
| test coverage before trying to refactor, in case you miss
| something.
|
| Also, try to avoid small commits / changes; churn in code
| should be avoided, especially in kernel code. IIRC the
| Linux project and a lot of open source projects do not
| accept 'refactoring' pull requests, among other things
| for this exact reason.
| max_k wrote:
| Agree, but even 100% test coverage can't catch this kind
| of bug. I don't know of any systematic testing method
| which would be able to catch it. Maybe something like
| valgrind which detects accesses to uninitialized memory,
| but then you'd still have to execute very special code
| paths (which is "more" than 100% coverage).
| aaronmdjones wrote:
| Valgrind cannot be used for/in the kernel. However, the
| kernel has an almost-equivalent use-of-uninitialized-
| memory detector;
| https://www.kernel.org/doc/html/v4.14/dev-
| tools/kmemcheck.ht...
| cwilkes wrote:
| > try to avoid small commits / changes
|
| Not sure what you mean by that
| Traubenfuchs wrote:
| > I always try to shy away from making things look nicer
|
| Anyone who doesn't, hasn't been burnt enough so far, but
| will be burnt in the future.
| IshKebab wrote:
| Nonsense. It's just easy to blame refactoring when it
| breaks something. "You fool! Why did you _change_ things?
| It was perfectly fine before." Much harder to say "Why
| has this bug been here for 10 years? Why did nobody
| refactor the code?" even when it would have helped.
|
| Not refactoring code also sacrifices long term issues in
| return for short term risk reduction. Look at all of the
| government systems stuck on COBOL. I guarantee there was
| someone in the 90s offering to rewrite it in Java, and
| someone else saying "no it's too risky!". Then your
| ancient system crashes in 2022 and nobody knows how it
| works let alone how to fix it.
| theamk wrote:
| A lot of times, this is just shifting the problem to the
| future and making life harder.
|
| We have a team like this -- their processes often
| fail, and their error reporting is lacking in
| important details. But they are not willing to improve
| reporting / make errors nicer (=with relevant details),
| instead they have to manually dig into the logs to see
| what happens. They waste a lot of time because they "shy
| away from making things look nicer."
| Supermancho wrote:
| Caveat - I know this doesn't directly apply to the
| vulnerability at hand, but is a discussion of a
| tangential view.
|
| > Experience has taught me: deal with problems when they
| are problems.
|
| Experience has taught me that disparate ways of doing the
| same thing tend to have bugs in one or more of the
| implementations. Then trying to figure out if a specific
| bug exists other places requires digging into those other
| places.
|
| Make it work. Make it good. Make it faster (as necessary)
| is the way my long-lived code tends to evolve.
| throwawayboise wrote:
| Unless the code is very well covered by unit tests, any
| refactoring can introduce bugs. If the code is well
| established and no longer changing, there is no ease of
| maintenance to be gained. There is only downside to
| changing it.
|
| If the code is causing more work for maintenance and new
| development, sure it may make sense to refactor it.
| Otherwise, like the human appendix, just leave it alone
| until it causes a problem.
| olliej wrote:
| My reading of the write up was that the new code didn't
| introduce the bug, but merely exposed a latent uninitialised
| memory bug?
| ho_schi wrote:
| I've the impression that most maintainers and project
| founders care about the project and the source. Contrary to
| what in industry happens often, where other things are more
| important {sales, features, marketing, blingbling}.
|
| One of the prevailing features of well driven open-sources
| project is - you're encouraged to improve the code i.e. make
| it better {readable, maintainable, faster, hard}. You're not
| encouraged to change it for the sake of change i.e impress
| people.
|
| I've the feeling it is the first case because it reduced the
| number of lines and kept source readable. Aside from that, I
| don't think good developers want to impress others.
| sylware wrote:
| This will be worst over time until "more planned obsolescence
| than anything else" code is committed into the linux kernel.
| Many parts of the linux kernel are "done", but you will have
| always some ppl which will manage to commit stuff in order to
| force people to upgrade. This is very accute with "backdoor
| injectors for current and futur CPUs", aka compilers: you
| should be able to compile git linux git with gcc 4.7.4 (the
| last C gcc which has beyond than enough extensions to write a
| kernel), and if someting needs to be done in linux code closely
| related to compiler support, it should be _removing_ stuff
| without breaking such compiler support, _NOT_ adding stuff
| which makes linux code compile only with a very recent
| gcc/clang. For instance, in the network stack, tons of
| switch/case and initializer statements don't use constant
| expressions. Fixing this in the network stack was refused, I
| tried. Lately, you can see some linux devs pouring code using
| the toxic "_Generic" c11 keyword, instead of using type
| explicit code, or new _mandadory_ builtins did pop up (I did
| detect them is 5.16 while upgrading from 5.13) which are
| available only in recent gcc/clang. When you look at the
| pertinence of those changes, those are more "planned
| obsolescence 101" than anything else. It is really
| disappointing.
| charcircuit wrote:
| >you should be able to compile linux git with gcc 4.7.4
| (the last C gcc, which has more than enough extensions to
| write a kernel)
|
| By this logic why not write the entire kernel in assembly?
| Tools evolve and improve over time and it makes sense to
| migrate to better tools over time. We shouldn't have to live
| in the past because you refuse to update your compiler.
| gmfawcett wrote:
| That's obviously not their logic at all. Trying to diminish
| this to "OP refuses to update compiler" is frankly
| disrespectful of them & their actual point.
| pjc50 wrote:
| Their claim is "you should be able to compile git linux
| git with gcc 4.7.4" which is a completely arbitrary
| requirement.
| mwcampbell wrote:
| It's not completely arbitrary. Notice that they said "the
| last C GCC". After that version, GCC started using C++ in
| the compiler itself. I can see why some people would see
| that as a complexity line that must not be crossed, as it
| makes bootstrapping harder.
| arghwhat wrote:
| What GCC is written in only matters if you intend to
| write your own compiler to compile it - which as you have
| no compiler yet would likely have to be written in
| assembly.
|
| Otherwise you need to download a prebuilt compiler
| anyway, and whether that is C11 or C++11 is rather
| unimportant.
| charcircuit wrote:
| To me their logic is that their old tool works just fine
| so they shouldn't have to upgrade it. He essentially said
| that having a plan to upgrade to a newer version of the
| language or to a more up to date toolchain is planned
| obsolescence. He seems to want to be able to use his
| specific version of his compiler to the end of time. To
| me I don't quite get the justifications of this
| perspective as GCC is free software and it is simple to
| upgrade.
| gmfawcett wrote:
| Thank you, that's a great reply to his comment. My first
| impression of his comment was that the kernel project
| shouldn't chase the latest-and-best compiler releases --
| or similarly the most recent C language changes; rather,
| a boring-technology approach is sensible for such a
| foundational project as Linux. I see your point, though,
| that GCC is simple to upgrade. (If I were making the tech
| decision here, I'd want to ensure that newer GCC's didn't
| introduce features that I thought were too risky for my
| project, or at least that I could disable/restrict those
| features with flags.)
| pm215 wrote:
| GCC 5.1 (released in 2015) is hardly latest-and-best,
| though: moving the version bar up only very slowly and
| with an eye to what distros are using as their compiler
| version is a pretty solid boring-technology approach, in
| my view.
| arghwhat wrote:
| This kind of argument is hypocritical: You want to use newer
| versions of the Linux kernel yourself (otherwise you could
| just stick to whatever builds with your toolchain!), but say
| that the Linux kernel must not use newer versions of things.
|
| The GCC version requirement is 5.1 (which is 7 years old).
| Before that, it was 4.9, 4.8, 4.6 and 3.2. It has never been
| 4.7.
|
| Use of newer versions of C than C89 that provide solutions
| to _actual issues_ is perfectly fine. C11 was picked because
| it does not require an increase in minimum GCC version to use
| it, making your entire argument pointless.
|
| The Linux kernel is already pretty lenient, as many
| alternatives have their compiler in the tree and target
| only that.
| jwilk wrote:
| > A ZIP file is just a container for .gz files
|
| That doesn't sound right.
| zenexer wrote:
| GZIP (.gz) and PKZIP (.zip) are both containers for DEFLATE.
| GZIP is barely a container with minimal metadata, whereas PKZIP
| supports quite a bit of metadata. Although you can't quite
| concatenate GZIP streams to get a PKZIP file, it's pretty close
| -- if I recall correctly, you just chop off the GZIP header.
| zenexer wrote:
| I'm past the edit period, but:
|
| > if I recall correctly, you just chop off the GZIP header.
|
| ...to get the raw DEFLATE stream, that is. You still need to
| attach any necessary metadata for PKZIP, which Max mentions.
| Their approach for converting between the two is pretty
| clever: it's so elegant and simple that it seems obvious, but
| I never would have thought of it. Very nifty, @max_k!
| greyface- wrote:
| Both PKZIP and gzip use DEFLATE:
| https://en.wikipedia.org/wiki/Deflate
| jl6 wrote:
| Yeah, a gzip file is itself a container for a DEFLATE stream.
| Gzip files can contain metadata such as timestamps, and
| comments.
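|
| A small zlib sketch of that relationship (my illustration; assumes
| only zlib): the windowBits argument selects the wrapper, so the same
| DEFLATE data can be emitted with a gzip header/trailer (15 + 16) or
| as the raw stream (-15) that a .zip entry stores behind its own
| local file header.
|
|     #include <string.h>
|     #include <zlib.h>
|
|     /* Compress in[0..in_len) into out, either gzip-wrapped or raw
|      * DEFLATE depending on "raw".  Returns the compressed size or
|      * -1 on error; assumes out_cap is large enough. */
|     static long compress_deflate(const unsigned char *in, size_t in_len,
|                                  unsigned char *out, size_t out_cap,
|                                  int raw)
|     {
|         z_stream s;
|         memset(&s, 0, sizeof(s));
|         int window = raw ? -15 /* raw DEFLATE */ : 15 + 16 /* gzip */;
|         if (deflateInit2(&s, Z_DEFAULT_COMPRESSION, Z_DEFLATED, window,
|                          8, Z_DEFAULT_STRATEGY) != Z_OK)
|             return -1;
|         s.next_in = (unsigned char *)in;
|         s.avail_in = (uInt)in_len;
|         s.next_out = out;
|         s.avail_out = (uInt)out_cap;
|         int rc = deflate(&s, Z_FINISH);
|         long produced = (long)s.total_out;
|         deflateEnd(&s);
|         return rc == Z_STREAM_END ? produced : -1;
|     }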
| abofh wrote:
| Wow, awesome debugging - very impressed.
| egberts1 wrote:
| Extreme Debugger Par Excellence!
|
| What a superioritegrandeur!
| bananabiscuit wrote:
| I'm curious how git bisect was applied here. Wouldn't you have to
| compile the whole kernel somehow and then run your test program
| using that kernel? Is that really what was done here?
| CasperDern wrote:
| The kernel is relatively easy to compile and install, so I
| would think that's exactly what they did.
| rfoo wrote:
| Yes? This is faster and easier than you may think it to be.
| Building a reasonably small kernel only takes ~a minute. People
| usually have fully automated git-bisect-run scripts for build &
| test in qemu.
| bananabiscuit wrote:
| Oh, interesting, did not know it could be so fast.
| gengkev wrote:
| For me, at least, there's an important difference missing
| from the debate over the term "C/C++": compiling C code is
| always _much_ faster than you would expect, but compiling
| C++ code is always _much_ slower than you would expect...
| max_k wrote:
| Yes, that's what I did.
| gaius_baltar wrote:
| The fix was already merged into Android; however, there are millions of
| devices that will never be updated. The nice question: can this
| be used for temp-rooting? Vulnerabilities can be a blessing
| sometimes...
| rfoo wrote:
| > there are millions of devices that will never be updated
|
| Luckily, almost all (if not just all) these millions of devices
| which will never be updated never ever received the vulnerable
| version in the first place. The bug was only introduced in 5.8
| and, due to how hardware vendors work, phones are still stuck in
| the 4.19 age (or at best 5.4, but no 5.10 besides the Pixel 6).
| max_k wrote:
| Yes. I have a working exploit, but haven't published it (yet).
| reasonabl_human wrote:
| Crazy. Just successfully pwnd my homelab box in the garage..
|
| Exciting for the implications of opening many locked down
| consumer devices out there.
|
| Nightmare for the greater cyber sec world...
| alanbernstein wrote:
| Dirty pipe.. how about "sewerbleed"?
| mjevans wrote:
| The exploit involves DIRTY (should be written back to disk)
| memory pages attached to a PIPE between processes.
| qwertox wrote:
| What a poster child. Deserves some kind of award.
| pantalaimon wrote:
| I like how they casually mention that they have basically
| written their entire stack themselves.
| ZYinMD wrote:
| I think you're quite gifted in storytelling; you could be a
| thriller book writer.
| kgraves wrote:
| Google's Fuchsia/Zircon cannot come fast enough.
| k4rli wrote:
| I wouldn't expect additional security from introducing an
| entirely new OS/kernel. Just unknown RCEs and other
| vulnerabilities waiting to be discovered.
| kgraves wrote:
| Just like Linux, so no change there. We still need to move on.
| amelius wrote:
| Because they use formal methods preventing this kind of thing
| from happening?
| kgraves wrote:
| Even with your formal methods strawman, vulnerabilities like
| these are still possible in Linux and C. We need to move on.
| freemint wrote:
| You can formally verify C code against a spec though.
| nickelpro wrote:
| Excellent work and excellent write up Max. A feather in your cap
| to be proud of for sure.
| itvision wrote:
| This is f*cking scary. Such simple code, so dangerous, and it
| works. You can trivially add an extra root user via
| /etc/{passwd|shadow}. There are tons of options for how to p0wn a
| system.
|
| Please update your devices ASAP!
| [deleted]
| pabs3 wrote:
| Those unsupported devices probably don't run Linux 5.8 or
| later, they are likely on older versions. It would be really
| useful to have this vuln on them though, it would help with
| getting root so you can get control of your own device and
| install your own choice of OS.
| itvision wrote:
| You're right, I thought kernel 5.8 was a lot older than it
| actually is. I've edited my post.
|
| Sorry!
| lazide wrote:
| Eh, it's a limited subset of kernel versions (ones unlikely to
| be used in those devices), and requires local execution
| privileges and access to the file system. Linux in general has
| had numerous security issues (as has every other OS), often
| requiring far less access.
|
| Does it need patching? Of course. It's not a privilege
| escalation remote code execution issue though, and even if it
| was, it would be on a tiny fraction of running devices right
| now.
| itvision wrote:
| > and even if it was, it would be on a tiny fraction of
| running devices right now.
|
| That's correct and I misjudged the situation. Sorry!
| piratejon wrote:
| Wow, almost 10 months from the first reported file corruption
| until identification as an exploitable bug.
| Ensorceled wrote:
| I'll bite, why the "Wow"?
|
| It was a random, intermittent file corruption that didn't cause
| real harm to the author's organization and was, clearly, very
| tricky to track down.
| piratejon wrote:
| I don't have a basis for how long this might take. As the
| author mentions "All bugs become shallow once they can be
| reproduced.", but only after spending probably the largest
| amount of time waiting for new incident reports to come in,
| and then analyzing the reports (e.g. to determine most
| incidents occurred on the last day of the month), and hours
| staring at application and kernel code. It's very impressive,
| but certainly the largest amount of time in the 10 month
| duration was not actually debugging. The "moment of
| extraordinary clarity" probably sprung out of years of
| experience.
| silverfox17 wrote:
| Agreed, about 99% of admins I know would not be able to
| identify this error, and most likely neither would most Hacker
| News readers. The last sentence of your post is very true.
| lazide wrote:
| If not 99.999%
|
| I've worked with (and been) a dev for several decades,
| and I can count on one hand the number of folks who would
| have a chance of figuring this out, and on 2 fingers the
| number of folks who WOULD.
|
| Of course, most never try to optimize or go so deep like
| this that they would ever need to, so there is that!
| Ensorceled wrote:
| Ah, I guess my thinking is that they didn't really focus on
| it. It was annoying but not high priority ... until they
| started to get an inkling of what was actually going on.
| AviationAtom wrote:
| Since so many distros seem to lag a good ways behind on packages,
| and this vulnerability (in its most easily exploited form) was
| introduced in kernel 5.8, it would seem a fair amount of Linux
| installs wouldn't actually be vulnerable to this. Is that
| somewhat correct?
| cryptonector wrote:
| There was a never-shipped bug in Solaris back around.. I want to
| say 2006? I don't remember exactly when, but there was a bug
| where block writes in a socketpair pipe could get swapped. I
| ended up writing a program that wrote entire blocks where each
| block was a repeated block counter, that way I could look for
| swapped blocks, and then also use that for the bug report. The
| application that was bit hard by this bug was ssh.
|
| Writing [repeated, if needed] monotonically increasing counters
| like this is a really good testing technique.
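|
| A sketch of that technique (my illustration, not the original
| Solaris repro): the writer fills every block with its own block
| number, so the reader can pinpoint exactly which block got swapped
| or corrupted. Assumes full reads/writes for brevity.
|
|     #include <stdint.h>
|     #include <stdio.h>
|     #include <unistd.h>
|
|     #define BLOCK_WORDS 1024   /* 4 KiB blocks of repeated counters */
|
|     /* Writer: every 32-bit word in block N holds the value N. */
|     static int write_blocks(int fd, uint32_t nblocks)
|     {
|         uint32_t buf[BLOCK_WORDS];
|         for (uint32_t n = 0; n < nblocks; n++) {
|             for (size_t i = 0; i < BLOCK_WORDS; i++)
|                 buf[i] = n;
|             if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
|                 return -1;
|         }
|         return 0;
|     }
|
|     /* Reader: any mismatch identifies the offending block and word. */
|     static int check_blocks(int fd, uint32_t nblocks)
|     {
|         uint32_t buf[BLOCK_WORDS];
|         for (uint32_t n = 0; n < nblocks; n++) {
|             if (read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
|                 return -1;
|             for (size_t i = 0; i < BLOCK_WORDS; i++)
|                 if (buf[i] != n) {
|                     fprintf(stderr, "block %u, word %zu: got %u\n",
|                             (unsigned)n, i, (unsigned)buf[i]);
|                     return -1;
|                 }
|         }
|         return 0;
|     }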
| DonHopkins wrote:
| Once I fell victim to The Dirty Bong Vulnerability, when the cat
| knocked the bong over onto my Dell laptop's keyboard. Fortunately
| I had the extended warranty, and the nice repairwoman just
| smelled it, laughed at me, and cheerfully replaced the keyboard
| for free. No way Apple would have ever done that.
| db48x wrote:
| C needs to die. Pro tip for language designers: require all
| fields to be initialized any time an object is created.
|
| Really impressive debugging too.
| turminal wrote:
| > Pro tip for language designers: require all fields to be
| initialized any time an object is created.
|
| This proposal sounds great until you find out that this is a
| hard problem to solve reasonably well in the compiler and no
| matter what you do there will be valid programs that your
| compiler will reject.
| pjc50 wrote:
| No? This seems to work perfectly fine for other languages.
|
| > valid programs that your compiler will reject.
|
| Do you mean "valid programs where everything is initialized
| when an object is created will somehow fail to detect that"?
| Arnavion wrote:
| >Do you mean "valid programs where everything is
| initialized when an object is created will somehow fail to
| detect that"?
|
| Valid programs where a field is left uninitialized at
| creation time, but the programmer makes sure it's
| initialized before it's used.
| pjc50 wrote:
| Sure, so we can easily change the definition of "valid"
| so that you have to initialize them at creation time.
| _shrug_
| max_k wrote:
| > require all fields to be initialized any time an object is
| created
|
| I'm not a fan of such a policy. That usually leads to people
| zero-initializing everything. For this bug, this would have
| been correct, but sometimes, there is no good "initial" value,
| and zero is just another random value like all the 2^32-1
| others.
|
| Worse, if you zero-initialize everything, valgrind will be
| unable to find accesses to uninitialized variables, which hides
| the bug and makes it harder to find. If I have no good initial
| value for something, I'd rather leave it uninitialized.
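|
| A toy illustration of that trade-off (my example, not from the
| article): valgrind's memcheck flags the branch below only while the
| field is genuinely uninitialized; switch malloc() to calloc() and
| the same logic bug silently reads 0 with no warning.
|
|     #include <stdio.h>
|     #include <stdlib.h>
|
|     struct fragment {
|         int offset;
|         int flags;   /* caller is supposed to set this */
|     };
|
|     int main(void)
|     {
|         struct fragment *f = malloc(sizeof(*f));
|         if (!f)
|             return 1;
|         f->offset = 42;
|         /* f->flags left unset: memcheck reports "Conditional jump or
|          * move depends on uninitialised value(s)" here.  With
|          * calloc(), the bug would be hidden from the tool. */
|         if (f->flags & 1)
|             puts("flag set");
|         free(f);
|         return 0;
|     }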
| lmm wrote:
| > I'm not a fan of such a policy. That usually leads to
| people zero-initializing everything. For this bug, this would
| have been correct, but sometimes, there is no good "initial"
| value, and zero is just another random value like all the
| 2^32-1 others.
|
| So use a language that has an option type, we've only had
| them for what, 50 years now.
| dundarious wrote:
| I think https://news.ycombinator.com/item?id=30588362 has
| shown that this wouldn't solve anything for this particular
| case.
| andrewzah wrote:
| Why can't things like option types be used? That solves the
| issue as you'd either have `Some<FooType>` or `None`, which
| could be dealt with separately.
| dundarious wrote:
| Mandatory explicit initialization, _plus_ a feature to
| explicitly mark memory as having an undefined value, is a
| great way to approach this problem. You get the benefit in
| the majority of cases where you have a defined value you just
| forgot to set and the compiler errors until you set it, and
| for the "I know it's undefined, I don't have a value for it
| yet" case you have both mandatory explicit programmer
| acknowledgement and the opportunity for debug code to detect
| mistaken reads of this uninitialized memory.
|
| But I think it would be troublesome to use such a
| hypothetical feature in C if it's only available in some
| compiler-specific dialect(s), because you need to coerce to
| any type, so it would be hard to hide behind a macro.
| What should it expand to on compilers without support? It
| would probably need lots of variants specific to scalar
| types, pointer types, etc., or lots of #if blocks, which
| would be unfortunate.
|
| Zig is a nice language with this feature, and it fits into
| many of the same use cases as C:
| https://ziglang.org/documentation/0.9.1/#undefined
| dundarious wrote:
| Actually, https://news.ycombinator.com/item?id=30588362 has
| convinced me this wouldn't necessarily solve the bug in
| question either, since it's a bug caused by (quite
| legitimately) re-using an existing value. Though it would
| be easy to implement a "free" operation by just writing
| `undefined`, so it would still help quite a bit, and more
| than suggestions like "just use an Optional/Maybe type".
| gpderetta wrote:
| GCC has recently introduced a mode (-ftrivial-auto-var-init)
| that will zero initialize all automatic variables by default
| while still treating them as UB for sanitize/warning
| purposes.
|
| The issue is with dynamic memory allocation as that would be
| the responsibility of the allocator (and of course the kernel
| uses custom allocators).
| max_k wrote:
| Interesting compiler feature to work around (unknown)
| vulnerabilities similar to this one. However in this case,
| it wouldn't help; the initial allocation is with explicit
| zero-initialization, but this is a circular buffer, and the
| problem occurs when slots get reused (which is the basic
| idea of a circular buffer).
| abbeyj wrote:
| Would this get caught by KMSAN
| (https://github.com/google/kmsan)? Maybe the circular
| buffer logic would need to get some calls to
| `__msan_allocated_memory` and/or
| `__sanitizer_dtor_callback` added to it? If this could be
| made to work then it would ensure that this bug stays
| fixed and doesn't regress.
| max_k wrote:
| Yes, but as you said, it works only after adding such
| annotations to various libraries. A circular buffer is
| just a special kind of memory allocator, and as such,
| when it allocates and deallocates memory, it needs to
| tell the sanitizer about it.
|
| What bothers me about the Linux code base is that there
| is so much code duplication; the pipe doesn't use a
| generic circular buffer implementation, but instead rolls
| its own. If you had the one true implementation, you'd
| add those annotations there, once, and all users would
| have it, and would benefit from KMSAN's deep insight.
|
| Every time I hack Linux kernel code, I'm reminded how
| ugly plain C is, how it forces me to repeat myself
| (unless you enter macro hell, but Linux is already
| there). I wish the Linux kernel would agree on a subset
| of C++, which would allow making it much more robust
| _and_ simpler.
|
| They recently agreed to allow Rust code in certain tail
| ends of the code base; that's a good thing, but much more
| would be gained from allowing that subset of C++
| everywhere. (Do both. I'm not arguing against Rust.)
| max_k wrote:
| btw. this is how I would make the code more robust:
| https://lore.kernel.org/lkml/20220225185431.2617232-4-max.ke...
|
| I'm a C++ guy, and the lack of constructors is one of many
| things that bothers me with C.
| kevincox wrote:
| I love Rust, but would it have prevented this problem? IIUC
| there was no memory corruption at the language level here. This
| was really just a logic error.
| db48x wrote:
| Yes, it would have. Some code creates an instance of some
| struct, but doesn't set the flags field to zero. It thus
| keeps whatever value happened to be in that spot in memory,
| an essentially random set of bits. Rust would force you to
| either explicitly name the flags field and give it a value,
| or use `..Default::default()` to initialize all remaining
| fields automatically. Anything else would be a compile-time
| error.
|
| The fix:
|
|     +++ b/lib/iov_iter.c
|     @@ -414,6 +414,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t bytes,
|             return 0;
|         buf->ops = &page_cache_pipe_buf_ops;
|     +   buf->flags = 0;
|         get_page(page);
|         buf->page = page;
|         buf->offset = offset;
| amelius wrote:
| Wouldn't Lint have caught the error too?
| kevincox wrote:
| Ah thanks for explaining. I misunderstood the root cause
| and didn't read the patch. Rust definitely would have
| helped here. Or even just enforcing modern C practices such
| as overwriting the whole struct so that non-specified
| values would have been set to zero (although explicit is
| better than zero).
| rfoo wrote:
| No. Rust can not prevent this bug.
|
| The bug is that they are reusing (or, repurposing) an
| already-allocated-and-used buffer and forgot to reset
| flags. This is a logic bug, not a memory safety bug.
|
| In fact, this might be a prime example of "using Rust does
| not magically eliminate your temporal bugs because
| sometimes they are not about memory safety but logical".
| Before that my favorite such bug is a Use-After-Free in
| Redox OS's filesystem code.
|
| Pro tip for random HN Rust evangelist: read the fucking
| code before posting your "sHoUlD HAVe uSED A BeTTER
| lANGUAGE" shit.
| gpderetta wrote:
| I agree with your sentiment. Only the most strict pure
| functional languages will prevent you from reusing
| objects.
|
| You could argue that some languages distinguish raw
| memory from actual objects and even when reusing memory
| you would still go through an initialization phase (for
| example placement new in C++) that would take care of
| putting the object into a known default state.
| deredede wrote:
| This is only partially fair. In Rust you would probably
| have assigned a new object into *buf here instead of
| overwriting the fields manually. It is good practice to
| do this (if the code is logically an object
| initialization, it should actually be an object
| initialization, not a bunch of field assignments), but
| it's clunky to do so in C because you can't use
| initializers in assignments.
| lanstin wrote:
| People writing C don't re-use allocated objects because
| it's clunky; they do it to improve performance. The general
| purpose allocators are almost always much slower than
| something where you know the pattern of allocations. I've
| no idea if Rust has a similar issue. I would think that
| most kernel code, whether C or Rust, would need to handle
| "allocation fails" case and not depend on language
| constructs to do allocations, but that's just a guess.
| deredede wrote:
| I'm not saying you shouldn't reuse allocated objects. I'm
| talking about building a local object (no dynamic
| allocation) and assigning it to the pointer at once. This
| has the same runtime behavior (assuming -O1) as assigning
| the fields one by one.
|
| See https://godbolt.org/z/Wh5KcTaGY for what I'm talking
| about, the local allocation is easily eliminated by the
| compiler.
|
| The equivalent in C is to create a temporary local
| variable with an initializer list then write that
| variable to the pointer.
| dzaima wrote:
| you can assign a new object into *buf in C just fine,
| with "*buf = (struct YourType){.prop1 = a, .prop2 = b}";
| it even zero-initializes unset fields! So C and Rust give
| you precisely the same abilities here.
|
| edit: the "struct pipe_buffer" in question[1] has one
| field that even the updated code doesn't write -
| "private". Not sure what it's about, but it's there. Not
| writing unrelated fields like that is probably not much
| of an issue now, but it certainly can add up on low-power
| systems. You might also have situations where you'd want
| to write parts of an object in different parts of the
| code, for which you'd have to resort to writing fields
| manually.
|
| [1]: https://github.com/torvalds/linux/blob/719fce7539cd3
| e186598e...
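|
| A self-contained check of that claim (hypothetical struct, not
| kernel code): assigning a compound literal overwrites the whole
| object, so fields you don't name come back as zero instead of
| keeping stale values.
|
|     #include <assert.h>
|
|     struct buf_like {
|         int page;
|         int offset;
|         unsigned flags;
|         unsigned long private_;  /* stands in for pipe_buffer's "private" */
|     };
|
|     int main(void)
|     {
|         struct buf_like b = { .flags = 0xff, .private_ = 123 };
|
|         /* Field-by-field reuse keeps the stale flags... */
|         b.page = 1;
|         b.offset = 0;
|         assert(b.flags == 0xff);
|
|         /* ...compound-literal assignment resets everything unnamed. */
|         b = (struct buf_like){ .page = 1, .offset = 0 };
|         assert(b.flags == 0 && b.private_ == 0);
|         return 0;
|     }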
| deredede wrote:
| Oh I was not aware of this syntax in C, thanks for
| bringing it up! I still think the pattern is more common
| and known in Rust but I might be wrong :)
|
| Re: your other points, "reusing a pre-allocated struct
| from a buffer" is basically object initialization, which
| is different from other times you want to write fields.
| In general an object initialization construct should be
| used in those cases, this whole thread being an argument
| why. Out-of-band fields such as the "private" field are a
| pain I agree, but they can be separated from an inner
| struct (in which case the inner struct is the only field
| that gets assigned for initialization).
|
| Taking a step back, the true solution is probably to have
| a proper constructor... And that can be done in any
| language, so I'll stand corrected.
| TonyTrapp wrote:
| The point is: You _could_ have done this in Rust, but you
| wouldn't have been _required_ to do so, so the exact
| same logic bug could have emerged. Maybe it would be more
| _Rust-like_ to write the code like that, but it would
| have also been possible to write the code like that in C
| - and since we're talking about the kernel here, even if
| this code was written in Rust a developer might have
| written it in the more C-like way for performance
| reasons.
| pdw wrote:
| You can't accidentally leave a field of a struct
| uninitialized in Rust or in other sane languages.
| parmezan wrote:
| It has been less than a month since fixes emerged for kernels, and
| your PoC exploit has already been released to the public.
| Should you not have waited at least a bit longer (for example 2
| months) before disclosing this vulnerability so that
| people/companies can keep up with patching? Don't they need more
| time to patch their servers, legacy systems, etc. before this
| becomes yet another log4j exploitation fest? That is, if this
| really is the new dirty cow vuln.
|
| I get responsible disclosure is important, but should we not give
| people some more opportunity to patch, which will always take
| some time?
|
| Just curious.
|
| Also, nice work and interesting find!
| nickelpro wrote:
| Once the commit is in the kernel tree it's effectively public
| for those looking to exploit it. Combing recent commits for bug
| fixes for the platform you're targeting is exploitation 101.
|
| The announcement only serves to let the rest of the public know
| about this and incentivize them to upgrade.
| staticassertion wrote:
| It's the absolute opposite. It's insane that this commit wasn't
| flagged as a patch for a major vulnerability. Why am I finding
| out about this now? Why is it now my job to comb through
| commits looking for hidden patches?
|
| It puts me, as a defender, at an insane disadvantage. Attackers
| have the time, incentives, and skills to look at commits for
| vulns. I don't. I don't get paid for every commit I look at, I
| don't get value out of it.
|
| This backwards process pushed by Greg KH and others upstream
| needs to die ASAP.
| amluto wrote:
| Max did everything right here, and in this case I'm not sure
| the distribution process exists to have done better.
|
| (Thanks Max for handling this well and politely and for putting
| up with everyone's conflicting opinions.)
| staticassertion wrote:
| FWIW, if it in any way comes off like I'm blaming _Max_ for
| this, I 'm not. Anyone blaming Max for how vulnerabilities
| are disclosed is completely ignorant of the kernel reporting
| process.
| MauranKilom wrote:
| Just wanted to note that your replies come off as quite
| confrontational/aggressive. I think you have valid points,
| and it's clear that this topic is important to you, but
| you're heating up the atmosphere of the thread more than
| necessary.
| staticassertion wrote:
| That part I'm ok with. Upstream has treated security
| researchers with contempt for decades.
| jesprenj wrote:
| This affects kernels from 5.8 and was fixed in 5.16.11, 5.15.25
| and 5.10.102. Exploit code is public and available on the linked
| page.
| baggy_trough wrote:
| It's disturbing that despite prior disclosure on distro lists,
| Ubuntu doesn't have an update available, with public exploits
| circulating now.
| [deleted]
| raesene9 wrote:
| < 5.8 not being affected is probably a saving grace for quite a
| few enterprises, as I'd expect that LTS distributions may not
| have included that version yet.
| gchamonlive wrote:
| CentOS 7 is already at 5.10 so it should affect lots of
| production systems
| LinuxBender wrote:
| Are you by chance using a 3rd party kernel repo such as
| ElRepo to work around a limitation? Or could someone at
| your org be compiling a custom kernel?
| emrvb wrote:
| *blinks*
|
| *stares at kernel-3.10.0-1160.59.1.el7*
| samus wrote:
| Is this a smartphone? I'm on 3.18!
| baggy_trough wrote:
| How about Ubuntu?
| zenexer wrote:
| The relevant CVE page returns a 500 error:
| https://github.com/canonical-web-and-
| design/ubuntu.com/issue...
|
| 21.10 appears to be lacking the patch.
| baggy_trough wrote:
| The CVE page returns now, with a whole bunch of "needs
| triage".
|
| https://ubuntu.com/security/CVE-2022-0847
| greyface- wrote:
| Debian stable (bullseye) is still vulnerable: https://security-
| tracker.debian.org/tracker/CVE-2022-0847
| deng wrote:
| That page is not up-to-date, fix was released today:
|
| https://lists.debian.org/debian-security-
| announce/2022/msg00...
| greyface- wrote:
| It wasn't available via `apt-get update && apt-get dist-
| upgrade` as of when I drafted that comment, but I confirm
| that 5.10.92-2 seems to be released now.
| deng wrote:
| Well the fix was released ~30 minutes ago, so that checks
| out. ;-)
|
| The security-tracker site is now updated as well.
| sublimefire wrote:
| An example that needs to be in the textbooks. A detailed
| explanation and a timeline along with the code snippets. It
| succinctly shows you the complexities involved. Kudos to Max for
| putting it all into the post.
|
| > Blaming the Linux kernel (i.e. somebody else's code) for data
| corruption must be the last resort.
|
| ^^^ I can only imagine the stress levels at this point.
| xcambar wrote:
| The magic for me was the two little C programs that
| demonstrated the bug.
|
| Circa 10 lines of C. Beautiful.
| misnome wrote:
| Yes, this is an extremely well written and to the point
| writeup.
| deutschew wrote:
| usually I don't read too deeply into CVEs because they are too
| complex, but this article made me go holy sh-
|
| wish more would be written like this
| moltke wrote:
| I've personally found bugs in unpopular kernel APIs. I spent
| days thinking it was my code until I went and read the Linux
| implementation.
| girvo wrote:
| ESP-IDF has so many bugs that it's often the first thing to
| blame when we hit issues, even if it is our code after all
| haha
| aetherspawn wrote:
| The sort of bug that could have been caught by unit tests I
| suppose.
___________________________________________________________________
(page generated 2022-03-07 23:00 UTC)