[HN Gopher] Dear Linux Kernel CNA, what have you done?
       ___________________________________________________________________
        
       Dear Linux Kernel CNA, what have you done?
        
       Author : odood
       Score  : 62 points
       Date   : 2024-03-07 10:27 UTC (12 hours ago)
        
 (HTM) web link (amanitasecurity.com)
 (TXT) w3m dump (amanitasecurity.com)
        
       | bhaney wrote:
       | > Known vulnerabilities are in practice defined as 'something
       | with a CVE'
       | 
       | Then change that definition and stop operating off of it. It has
       | never been correct.
        
         | Retr0id wrote:
         | Most people don't get a choice of which legislation/regulations
         | to comply with.
        
           | michaelt wrote:
           | There's plenty of people in that position, but they're all
           | working at huge corporations. Nobody ends up having to chase
           | things like SOC2 and PCI-DSS without getting paid for it.
           | 
           | Why should unpaid volunteers working on the Linux kernel do
           | compliance work for FAANG-sized companies without getting
           | paid for it? If these companies want the reports carefully
           | triaged, they can send some employees to carefully triage
           | them.
        
             | Retr0id wrote:
              | Nobody (at least, not me) is calling for the Linux
              | Foundation to do additional work.
             | 
             | They've taken it upon themselves to assign a CVE to every
             | bugfix, and it's being pointed out that that doesn't seem
             | to be helping anyone.
        
       | viraptor wrote:
       | While I understand the problems raised in this post, I think
       | they're going a bit too far. The CVEs assigned to the kernel were
       | already specific to various parts of it. You're not running
       | linux-x.y.z, but rather linux-x.y.z + specific config. That means
       | vendors already needed to look at CVEs and decide what applies to
       | them and what doesn't. It's up to NVD records to include how
       | likely something is to be a problem and give it some description
       | / score.
       | 
       | Choosing a random selection of CVEs posted so far... they look
       | reasonable. They're actual issues and they'll potentially affect
       | someone.
       | 
        | This reminds me of the cookie banners situation. Many people
        | complain about the cookie banners being visible rather than about
        | the companies doing things that require them to notify you. Now
        | if you say you care about the published vulnerabilities, you get
        | to actually see them all. And potentially change the policies
        | around how you work with them. (Yes, it's not a great analogy;
        | I'm not blaming Linux for having each of those vulnerabilities.)
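
        A minimal sketch of the applicability check described above,
        assuming a hypothetical CVE-to-CONFIG mapping kept as part of
        your own triage notes (the CVE IDs and option names are made up
        for illustration, not taken from any real advisory):

            # Keep only the kernel CVEs whose affected code is actually
            # built into this kernel, judging by CONFIG_* options.
            def load_config(path=".config"):
                """Set of CONFIG_* options enabled as =y or =m."""
                enabled = set()
                with open(path) as f:
                    for line in f:
                        line = line.strip()
                        if not line.startswith("CONFIG_"):
                            continue
                        if "=y" in line or "=m" in line:
                            enabled.add(line.split("=", 1)[0])
                return enabled

            # Hypothetical triage notes: CVE -> guarding CONFIG option
            cve_to_config = {
                "CVE-2024-00001": "CONFIG_KSMBD",  # ksmbd built?
                "CVE-2024-00002": "CONFIG_BT",     # Bluetooth built?
            }

            def relevant_cves(config_path=".config"):
                enabled = load_config(config_path)
                return [cve for cve, opt in cve_to_config.items()
                        if opt in enabled]

            if __name__ == "__main__":
                print(relevant_cves())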
        
         | Avamander wrote:
         | > That means vendors already needed to look at CVEs and decide
         | what applies to them and what doesn't.
         | 
         | So many vendors don't and it's tedious to say the least.
        
         | throwaway11460 wrote:
         | Perhaps people don't care about companies doing it and they
         | don't want to be notified about it?
        
       | phh wrote:
       | This article largely misses the point from the Linux kernel's
       | point of view.
       | 
        | They have always said "every bug is a security bug". I don't know
        | about more global examples, but at Kernel Recipes (2019?) gregkh
        | took a Pixel that was running the latest Google security patches,
        | i.e. had all CVE fixes applied, then looked at the non-CVE
        | patches he had merged into his LTS tree. It took him less than an
        | hour to find a DoS vulnerability.
        | 
        | I understand the author's frustration with the Linux kernel
        | community not wanting to classify bugs, but the reality is that a
        | huge portion of the bug fixes are actually security fixes [1], so
        | between being required to merge 20% of the patches and being
        | required to merge 100% of them, is there really much difference?
       | 
        | The author mentions the Cyber Resilience Act, and I believe that
        | the Linux kernel team created this CNA /on purpose/ to have an
        | impact on the CRA. They believe that the only way to have a
        | secure Linux kernel is to have an up-to-date Linux kernel. (cf
        | https://social.kernel.org/notice/ARWvggnOvXny0CUCIa ). With the
        | CRA enacted, running such an every-bug-is-a-security-bug CNA is a
        | way for them to enforce their view.
       | 
        | [1] FWIW, my personal opinion is that this shows that Linux's
        | monolithic architecture is getting old, but I see nothing that
        | could reasonably replace it. I think that "the dream" would be to
        | have an LKL-like Linux "arch" that compiles every driver as an
        | independent process, Hurd-style, with a GKI-like stable-ish ABI.
        
         | Avamander wrote:
         | > They have always said "Every bug is a security bug".
         | 
         | If you can't reason about your codebase to a sufficient extent
         | to actually determine that then something is very wrong.
         | 
          | If everything is a CVE, nothing is. That approach just wastes a
          | lot of time and effort by making people far less familiar with
          | the codebase than the maintainers do the triage.
          | 
          | I hope they get burnt quickly by this approach.
        
           | viraptor wrote:
           | > If you can't reason about your codebase to a sufficient
           | extent to actually determine that then something is very
           | wrong.
           | 
            | The environment in which we write critical code the way we do
            | now is very wrong. It's actually not that easy to figure out
            | whether something is exploitable or not. What if you add heap
            | grooming? What if you enable another specific feature? What
            | if an application fights for the same lock? What if measuring
            | the time it takes to fail lets you defeat ASLR? People use
            | exploit chains rather than single independent bugs these
            | days, and there are examples of clever single-byte overflows
            | being turned into RCE.
           | 
           | Sure, there are going to be cases where you're really really
           | sure something can't be used, because for example the bug
           | only produces a null dereference and an oops. Then someone
           | else comes along and proves you wrong
           | https://googleprojectzero.blogspot.com/2023/01/exploiting-
           | nu...
        
             | Avamander wrote:
             | > The environment where we write critical code the way we
             | do now is very wrong. It's actually not that easy to figure
             | out if something is exploitable or not.
             | 
              | Then the correct approach is not to cause "CVE fatigue",
              | which can have significant second-order effects. Not to
              | mention: who else is better suited to make that assessment?
              | An assessment still has to be made somewhere, because there
              | are use-cases where touching a working system needs a
              | really good reason. This will result in actually important
              | things not getting patched, because non-kernel-experts had
              | to make that decision.
             | 
              | I also can't imagine large vendors that are forced onto a
              | significantly more frequent update cadence choosing to
              | retain their current level of QA. Best case, we're going to
              | get more frequent, less tested updates; worst case, we're
              | going to deploy an actual vulnerability introduced by some
              | low-importance bugfix (with an assigned CVE).
        
               | mnau wrote:
                | A CVE is just an identifier. CVSS should assign a score.
                | 
                | I would require every CVE to have exploit demo code
                | attached. Otherwise it shouldn't be a CVE.
        
           | eqvinox wrote:
           | > If you can't reason about your codebase to a sufficient
           | extent to actually determine that then something is very
           | wrong.
           | 
           | Linux kernel developers are entirely capable of assessing
           | this. They're just refusing to do it for someone else's
           | definition of a "security bug".
           | 
           | "Every bug is a security bug" means "we fix things when they
           | need fixing, categorizing the fixes is not our job and you'll
           | need to do that yourself".
           | 
            | As such, the new approach is in fact a concession: there's
            | now a broad pre-categorization of fixes you can work off of.
           | 
           | > making people not so familiar with the codebase (as the
           | maintainers) do triage
           | 
            | You seem to be under the impression that you didn't need to
            | do that before. Which, to be fair, worked for a long time.
           | From an engineering perspective this was always a case of
           | "skipping inspections and verification", because the Linux
           | community never agreed to do that work on top of providing
           | the system.
           | 
           | > I hope they get burnt quick by this approach.
           | 
           | How would they get burnt by this? Social pressure from other
           | kernel developers (or even outside) isn't going to have that
           | effect. The only possible influence would be from employers
           | paying for Linux work -- in which case it's a perfectly
           | reasonable discussion about spending paid time on security
           | issues.
        
             | Avamander wrote:
             | > Linux kernel developers are entirely capable of assessing
             | this. They're just refusing to do it for someone else's
             | definition of a "security bug".
             | 
             | Then instead of this, don't? It's utterly childish.
             | 
             | > How would they get burnt by this? Social pressure from
             | other kernel developers (or even outside) isn't going to
             | have that effect.
             | 
              | Fewer organisations willing to cooperate with them, for
              | one? Social pressure comes in many shapes and forms;
              | there's no way it won't have any effect.
             | 
             | > The only possible influence would be from employers
             | paying for Linux work
             | 
             | They're going to be paying someone else to provide a clean
             | feed instead of the organization that deliberately hinders
             | these efforts.
        
               | eqvinox wrote:
               | > They're going to be paying someone else [...]
               | 
                |  _And that's perfectly fine, it's open source software_.
               | Either way someone gets paid to look at the patches,
               | _which is my point_.
               | 
               | If you want to do it in a cost-effective manner, you'll
               | find other people with the same requirements, since the
               | work result is "shareable".
               | 
               | > [...] instead of the organization that deliberately
               | hinders these efforts.
               | 
               | There is no such organization, and it feels like you have
               | very little understanding of the organizational (and
               | funding) structures behind the Linux kernel. I really
               | can't extend my comments into a full-blown explanation of
               | this, sorry.
               | 
               | (No, the Linux Foundation does not perform the role
               | you're implying: they don't currently and likely never
               | will sell a "clean feed".)
               | 
               | > Fewer organisations willing to cooperate with them[...]
               | 
               | I have no data on this but it is entirely reasonable (and
               | I believe it likely) that the current behavior was
               | requested (or encouraged) of involved organisations and
               | people by cooperating organisations and people.
        
               | Avamander wrote:
               | I think you got a bit confused.
               | 
               | > There is no such organization
               | 
                | There is such an organization: the Linux Foundation is
                | the CNA, and it is the hindrance to these efforts. And
                | yes, they won't perform that role; someone else will, and
                | they will be paid for it.
                | 
                | For some that's fine; I see it as a significant amount of
                | wasted effort, confusion and potential for issues.
        
               | eqvinox wrote:
               | >> They're going to be paying someone else to provide a
               | clean feed instead of the organization that deliberately
               | hinders these efforts.
               | 
               | You were implying the Linux Foundation is attempting to
               | get paid for providing said "clean feed".
               | 
               | Anyway, this has devolved far enough.
               | 
               | [Ed.: the Linux Foundation isn't even the CNA, shame on
               | me for accepting that without verifying. The actual CNA
               | is kernel.org. https://www.cve.org/Media/News/item/news/2
               | 024/02/13/kernel-o... ]
        
       | mnau wrote:
        | CVE DoS - i.e. denial of service through legislative/regulatory
        | requirements instead of a technical attack - is going to be fun.
        | 
        | Edit: by that I mean filing bogus reports or just non-security-
        | related CVEs. That is also the reason why a lot of projects are
        | trying to register themselves as CNAs (see curl etc).
        
         | Avamander wrote:
          | It's going to be fun when companies pick Windows instead of
          | Linux because it doesn't cause an awful-to-handle patch cycle
          | in contexts where things have to work within some regulatory
          | bounds (where pointless updates cost a lot of time, effort and
          | money, and may even pose a risk to human life).
        
           | eqvinox wrote:
           | > It's going to be fun when companies pick Windows instead of
           | Linux [...] work within some regulatory bounds
           | 
            | You can get FuSa (functional safety) certified Linux; to my
            | knowledge this just does not exist for Windows. There may be
            | other situations where the choice does exist, but treating
            | Windows and Linux as broadly equivalent in this context is
            | not possible.
           | 
           | > maybe even cause risk to human life
           | 
           | Neither Windows nor Linux are, to my knowledge, certified for
            | SoL (safety-of-life) applications. And that's no surprise,
            | considering it requires something close to (but not quite) a
            | mathematical proof that your system can't hang/crash/starve,
            | which is pretty much impossible for anything beyond an RTOS
            | with current tooling.
        
             | Avamander wrote:
             | > You can get FuSa (functional safety) certified Linux;
             | 
              | And how much are they going to ask for recertification for
              | each CVE fixed? I doubt that'd be cheap.
             | 
             | > Neither Windows nor Linux are, to my knowledge, certified
             | for SoL (safety-of-life) applications.
             | 
              | I didn't have SoL applications in mind exactly; there are
              | plenty of other situations where an unstable system could
              | pose a risk. Be it an emergency call center server or a
              | field laptop for looking up license plates - you can't
              | leave them unpatched (especially with some of the new
              | legislation), but downtime from poor updates could also be
              | really bad.
        
               | eqvinox wrote:
               | > And they're going to ask how much for the
               | recertification for each CVE fixed? I doubt that'd be
               | cheap.
               | 
               | FIPS has created an off-kilter perception about
               | "recertification" because they require essentially the
               | entire process when you change a single bit somewhere.
               | Most certifications are not that harebrained.
               | 
               | Also if you need "certified" Linux, you are either
               | already spending resources on it yourself, or paying
               | someone else to do it. This might need adjusting for this
               | new CVE practice, but it's going to be an _adjustment_
               | and not a _reset_.
               | 
               | > [...] can't leave them unpatched (especially with some
               | of the new legislation) but also downtime from poor
               | updates could be really bad.
               | 
               | Then pay someone to test and deliver.
        
               | Avamander wrote:
               | > Then pay someone to test and deliver.
               | 
                | That's the thing: resources aren't infinite. Linux
                | offloading that work elsewhere will not have a net
                | positive effect.
                | 
                | The path of least resistance will be taken, which means
                | proportionally less QA, if there was any to begin with.
        
       | raesene9 wrote:
       | For another opinion on this topic
       | https://jericho.blog/2024/02/26/the-linux-cna-red-flags-sinc...
       | 
        | Having a large number of new, unscored CVEs in the Linux kernel
        | is going to make things... interesting. Looking at their list,
        | https://lore.kernel.org/linux-cve-announce/ , these just have a
        | CVE ID and not really enough detail for anyone to assign a score
        | without a lot of additional analysis, which reduces their
        | usefulness.
        | 
        | To an extent it could be argued they're just exposing an existing
        | flaw in the system (CVSS scores, which may be taken to be
        | scientifically applied, are actually just matters of opinion in
        | many cases), but it will cause a lot of problems with automated
        | tooling and compliance.
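
        The announcement list can at least be consumed mechanically; a
        rough sketch, assuming lore.kernel.org exposes its usual
        public-inbox Atom feed at /new.atom (check the list's front page
        if that path has changed):

            # List recent subjects from linux-cve-announce.
            import re
            import urllib.request
            import xml.etree.ElementTree as ET

            FEED = "https://lore.kernel.org/linux-cve-announce/new.atom"
            NS = {"a": "http://www.w3.org/2005/Atom"}

            with urllib.request.urlopen(FEED) as resp:
                root = ET.parse(resp).getroot()

            for entry in root.findall("a:entry", NS):
                title = entry.findtext("a:title", "", NS)
                # Subjects look like "CVE-2024-NNNNN: description"
                m = re.match(r"(CVE-\d{4}-\d+):\s*(.*)", title)
                if m:
                    print(m.group(1), "-", m.group(2))
                else:
                    print(title)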
        
         | the8472 wrote:
          | > Notably, SyzScope has classified 183 bugs out of 1,170
          | fuzzer-exposed bugs as high-risk. KOOBE has managed to generate
          | 6 new exploits for previously non-exploitable bugs.
          | 
          | While the rate is low, it does show that some bugs were indeed
          | exploitable without that being known to the kernel devs. If an
          | attacker is willing to invest more time than the kernel devs in
          | combing through commits to find vulnerabilities in some older
          | stable kernel, then a big unlabeled pile saying "there's
          | probably a vulnerability in there, go update" is correct.
        
         | aryca wrote:
          | This way of thinking is how almost everyone approaches CVEs,
          | but it is also out of date now. There are millions of open
          | source projects (tens of millions, really). This attitude of
          | treating security bugs as some sort of special snowflake isn't
          | realistic.
          | 
          | There are easily hundreds of thousands of security
          | vulnerabilities fixed every year that get no IDs, because the
          | current process is rooted in the security world of 1999 (the
          | number is probably way, way higher, but you get the idea).
          | 
          | Rather than obsessing over individual vulnerability IDs, we
          | should be building systems that treat this data as one of many
          | inputs to determining risk.
        
           | raesene9 wrote:
           | Accurately determining risk relies on decent starting data,
           | otherwise you run the risk of Garbage-in, Garbage-out. Whilst
           | things like VEX and EPSS can help, they are based on the
           | starting point that is CVE assignment and CVSS score.
           | 
           | I don't particularly think that CVE+CVSS has been the "right"
           | way to do things ever (definitely not in the last 10 years)
           | but my thoughts don't really matter whilst regulators and
           | governments apply special significance to them, which they
           | do.
           | 
           | Security bugs _are_ special if a regulator can deem you in
           | non-compliance if you have too many of them.
           | 
           | This is of course leaving the whole area of attackers who
           | actively try to exploit them to one side :).
        
         | Arch-TK wrote:
          | It's possible to take a somewhat unopinionated approach to
          | CVSS; the issue is that such CVSS scores exist in a vacuum,
          | while vulnerabilities exist in environments. It's not really
          | possible to apply a CVSS score to a vulnerability in a specific
          | environment without understanding the vulnerability and more or
          | less ignoring the CVSS score.
          | 
          | In summary, CVSS scores can be very objective, but in those
          | cases they're also worthless.
        
       | gtirloni wrote:
       | _> Because of this, the CVE assignment team is overly cautious
       | and assign CVE numbers to any bugfix that they identify_
       | 
       | Shouldn't this strategy lead to the opposite? By being overly
       | cautious they should only assign CVEs for real demonstrable
       | security issues.
        
         | martijnvds wrote:
         | You can think of it as a "fail-safe" situation.
         | 
         | Being cautious here means "it's better to assign a CVE when
         | it's not a vulnerability, than to NOT assign a CVE when it's
         | actually a vulnerability"
        
       | michaelt wrote:
       | _> Typically, security researchers are held to higher standards
       | when disclosing vulnerabilities. The expectation is that CVEs are
       | assigned for 'meaningful' security vulnerabilities, and not for
       | any software fixes that 'might' be a security vulnerability._
       | 
       | Maybe that's the aspiration, but it's clearly not the case in
       | practice.
       | 
        | I reported a Firefox bug 12 years ago where a malicious SVG could
        | cause a hang - basically a 22-year-old XML bomb, adapted to SVG
        | patterns. My bug turned out to be a duplicate of a 16-year-old
        | Firefox bug.
        | 
        | No way of stealing user data. No sandbox escape. Not a crash that
        | might indicate a buffer overrun. With a process per tab, it
        | doesn't even crash the browser. It's just a file that takes a
        | very long time to load - and it's not even an image type that
        | user-generated-content sites like Facebook and Reddit allow you
        | to upload. Reasonably enough, 12 years ago it was triaged as a
        | minor performance issue.
        | 
        | Apparently in 2023, this counts as a CVE.
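
        For anyone unfamiliar with the class of bug being described: it's
        plain nested entity expansion, and the arithmetic is the whole
        trick. A sketch that only builds the payload as a string and
        prints the sizes (never feed something like this to a real XML
        parser):

            # Ten layers of ten entity references each expand to 10**9
            # copies of the base string inside an otherwise tiny SVG.
            DEPTH, WIDTH, BASE = 10, 10, "lol"

            entities = ['<!ENTITY e0 "%s">' % BASE]
            for i in range(1, DEPTH):
                refs = ("&e%d;" % (i - 1)) * WIDTH
                entities.append('<!ENTITY e%d "%s">' % (i, refs))

            doc = ('<?xml version="1.0"?>\n<!DOCTYPE svg [\n'
                   + "\n".join(entities) + "\n]>\n"
                   + '<svg xmlns="http://www.w3.org/2000/svg">'
                   + "<text>&e%d;</text></svg>" % (DEPTH - 1))

            expanded = (WIDTH ** (DEPTH - 1)) * len(BASE)
            print("payload: %d bytes, expands to ~%d bytes"
                  % (len(doc), expanded))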
        
         | gertop wrote:
          | 12 years ago, Firefox wasn't multi-process. So your bug would
          | likely freeze the entire browser, including the UI. Considering
          | that, back then, Firefox reloaded all tabs when you reopened
          | it, it would keep freezing even if you force-closed it. Fun
          | times.
        
           | toast0 wrote:
           | > Considering that, back then, Firefox reloaded all tabs back
           | when you reopened it, it would keep freezing even if you
           | force closed it.
           | 
            | That was always an option, as I recall. I think a non-default
            | option, too. Not sure when they started adding the question
            | about whether you wanted to restore when you started up after
            | a crash/unsafe shutdown.
        
         | Arch-TK wrote:
         | CVSS 3.1 score is 4.3 (AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L).
         | (You can somewhat argue UI:N but I don't think it applies in
         | this case.)
         | 
         | Lots of corps would spend a non-trivial amount of effort to
         | remediate something with such a score.
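
          For the curious, that 4.3 drops straight out of the published
          CVSS v3.1 base-score equations; a rough re-implementation
          covering only the Scope:Unchanged case, with weight constants
          taken from the spec:

              # AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L, Scope Unchanged
              import math

              AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62
              C, I, A = 0.0, 0.0, 0.22

              iss = 1 - (1 - C) * (1 - I) * (1 - A)   # = 0.22
              impact = 6.42 * iss                     # Scope Unchanged
              exploitability = 8.22 * AV * AC * PR * UI

              def roundup(x):
                  """CVSS 'round up to one decimal' (simplified)."""
                  return math.ceil(x * 10) / 10

              base = (0.0 if impact <= 0
                      else roundup(min(impact + exploitability, 10)))
              print(base)   # -> 4.3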
        
       | kuschku wrote:
       | Right now, the vast majority of CVEs reported are bullshit filed
       | by wannabe security researchers for resume padding. Look at all
       | the useless CVSS 9.8's filed against curl. With LLMs, even more
       | bogus reports get filed every single day.
       | 
       | CVEs assigned to every linux commit are more valid than each and
       | every one of those bogus CVEs. Each and every one of them is
       | associated with an actual change in a security-critical project.
       | 
       | If you want the flood of useless CVEs to stop, you have to clean
       | your own house first.
        
         | Avamander wrote:
         | Bad CVEs elsewhere aren't an excuse.
        
           | kuschku wrote:
           | It's not elsewhere, it's bad CVEs _everywhere_. Curl is just
           | a particularly good example because they document it so well.
        
             | Avamander wrote:
             | There are many more good and useful CVEs. I'd also kindly
             | request you to suggest a better system.
        
               | kuschku wrote:
               | Filing a CVE used to be a dialog between the researcher,
               | developers, and third-party domain experts. Accepting
               | every random LLM-generated report and granting it a 9.8
               | score is not useful in any way.
               | 
                | I have to patch hundreds of CVEs in a month, and only a
                | handful are actually valid. The vast majority are "CVSS
                | 9.8: regex complexity explosion in $library", where my
                | project only uses that library during the build. But I've
                | got to patch it, because it's definitely absolutely
                | critical.
                | 
                | Meanwhile, the standard library bug that causes SSL
                | connections to fall back to TLS 1.1 instead of TLS 1.3 by
                | default is considered WONTFIX and gets REJECTED for a
                | CVE.
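
                The "regex complexity explosion" class is real enough in
                isolation; the textbook demonstration of catastrophic
                backtracking, with pattern and input sizes chosen purely
                for illustration:

                    # Nested quantifier: the engine tries exponentially
                    # many ways to split the run of a's, so runtime
                    # roughly doubles per extra character.
                    import re
                    import time

                    pattern = re.compile(r"^(a+)+$")

                    for n in range(18, 26):
                        text = "a" * n + "!"   # "!" forces a failed match
                        t0 = time.perf_counter()
                        pattern.match(text)
                        print(n, round(time.perf_counter() - t0, 3), "s")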
        
       | denton-scratch wrote:
       | Is it true that the Linux Kernel has traditionally deprecated the
       | idea of "security bugs"? I thought the kernel crew took the view
       | that a bug is a bug.
       | 
       | So perhaps this policy is a kind of spoiler response to efforts
       | to require all security bugs to have a CVE allocated.
        
       | eqvinox wrote:
       | Good. Forcing downstream consumers of open source projects to
       | spend resources on identifying and fixing security issues is not
       | just entirely appropriate, but direly needed.
       | 
       | If you're already paying someone to maintain Linux for you, this
       | shouldn't be causing that much trouble; it might need some
       | contractual adjustments but you're already set up to get a stream
       | of "good" updates. The patch frequency may be higher, but other
       | people already do the majority of the work for you.
       | 
       | If you were just ingesting Linux "for free"... well, tough luck.
       | You're profiting from the work of others already, you don't get
       | to complain about not being spoon fed exactly what you need.
       | 
       | In practice, a small number of commercial entities (likely a mix
       | of commercial distributions and designated security companies)
       | will probably offer "Linux as a service". People _could_ do the
        | same work on their own, but that's not cost-effective.
       | 
       | Either way, this shift in responsibilities has been long overdue.
        
         | Retr0id wrote:
          | Linux as a service is most of Red Hat's and Canonical's
          | business models.
         | 
         | grsecurity does this from a security angle specifically - in
         | fact they're boasting about it on their homepage right now
         | (fair enough!)
         | 
         | >Are Your Products Drowning in Linux Kernel CVE Noise?
         | 
         | >We know your products can't be updated every week based off
         | unverified CVE information. Address true risk by protecting
          | against entire classes of vulnerabilities and exploitation
         | techniques. Our Pro Support ensures you make the most of attack
         | surface reduction and our proactive defense in your products.
         | 
         | https://grsecurity.net/
        
       | dang wrote:
       | Recent and related:
       | 
       |  _Linux Is a CVE Numbering Authority (CNA)_ -
       | https://news.ycombinator.com/item?id=39406088 - Feb 2024 (10
       | comments)
       | 
       |  _The Linux kernel project becomes a CVE numbering authority_ -
       | https://news.ycombinator.com/item?id=39361511 - Feb 2024 (24
       | comments)
        
       | tptacek wrote:
       | The purpose of CVEs is to ensure that people discussing
       | vulnerabilities are talking about the same thing. CVEs aren't a
       | checklist, they aren't a perfect enumeration, and it shouldn't
       | matter if a CVE is issued for a nonissue.
       | 
       | People who are burdened by requirements to ship (or produce
       | rolling updates) to address every Linux kernel CVE are living in
       | a state of sin. It doesn't make sense for the kernel CNA to alter
       | its behavior to accommodate them.
        
         | tsujamin wrote:
          | It does matter, because they are inputs into other processes,
          | and the signal-to-noise ratio has gone down. There'll be a lot
          | more time wasted in orgs triaging non-security-relevant bugs in
          | the future.
        
           | tptacek wrote:
           | It does not matter, because the signal was never there or
           | meant to be there to begin with. CVEs solve a problem of
           | multiple researchers and developers talking past each other
           | about the same vulnerability (or vulnerable subsystem or line
           | of code). It has never been a reliable enumeration of
           | vulnerabilities. Organizations triaging CVEs line-by-line are
           | abusing the system. The system should not bend itself to
           | accommodate that abuse; that just harms everybody else who
           | isn't abusing it.
        
             | gnfargbl wrote:
              | Let's continue your reductive line of thinking. If the only
              | purpose is to ensure that developers are talking about the
              | same issue, then why does the kernel need CVEs at all?
              | Their existing bug tracking mechanisms should be entirely
              | adequate, no?
        
               | saagarjha wrote:
               | Some platforms do in fact do this. If you run the entire
               | stack, more power to you. But the kernel is forked by a
               | hundred different people and everyone has their own bug
               | trackers for that, so having an identifier for a security
               | bug is actually useful to unify those.
        
             | tsujamin wrote:
              | You're missing the CVSS aspect, which is intrinsically tied
              | to CVE issuance (at least when issued through the CNA-LR).
              | It's not _just_ an identifier; it's an entirely valid and
              | useful tool for triage and classification.
        
           | mike_d wrote:
           | > because they are inputs into other processes
           | 
           | CVEs should never be the input to anything except a triage
           | pipeline, which in turn feeds other processes. If you don't
           | have a competent pair of eyeballs (either internally or from
           | a vendor) looking at CVEs with the context of how the
           | impacted product is used in your organization, all you are
           | doing is busy work.
           | 
           | Almost all end user organizations (not software vendors, OS
           | distributors, etc) should pretend CVEs don't exist. Blindly
           | apply all your OS and software patches within 24 hours of
            | them being available and be done with it. You are much more
            | likely to suffer a business loss as the result of a
            | vulnerability than as the result of applying a patch.
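
            A minimal sketch of the triage step being described, assuming
            a hypothetical internal record shape (id, component, score)
            rather than any real feed format:

                # Only CVEs that survive context-aware triage feed the
                # patching/ticketing process; the rest is noise here.
                from typing import Optional
                from dataclasses import dataclass

                @dataclass
                class CveRecord:
                    cve_id: str
                    component: str         # e.g. "linux-kernel:ksmbd"
                    cvss: Optional[float]  # None: unscored, needs a look

                DEPLOYED = {"linux-kernel:netfilter", "openssl"}

                def triage(records):
                    for r in records:
                        if r.component not in DEPLOYED:
                            continue      # we don't ship that code
                        if r.cvss is not None and r.cvss < 4.0:
                            continue      # below internal threshold
                        yield r           # unscored or relevant

                feed = [
                    CveRecord("CVE-2024-0001", "linux-kernel:ksmbd", 7.8),
                    CveRecord("CVE-2024-0002", "linux-kernel:netfilter",
                              None),
                    CveRecord("CVE-2024-0003", "openssl", 3.1),
                ]
                for r in triage(feed):
                    print(r.cve_id, r.component, r.cvss)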
        
       | throwawaaarrgh wrote:
       | I just assume that there is always a 0day lurking in my kernel.
        | If you can execute any code on my system, I assume it's game
        | over.
        
       | ch33zer wrote:
       | This article does a good job explaining the Linux kernel position
       | on cves: https://lwn.net/Articles/961978/
       | 
       | The relevant part:
       | 
        | > Kroah-Hartman put up a slide showing possible "fixes" for CVE
        | numbers. The first, "ignore them", is more-or-less what is
        | happening today. The next option, "burn them down", could be
        | brought about by requesting a CVE number for every patch applied
        | to the kernel.
        | 
        | They intend to burn down the CVE system, and complaining about it
        | is not a plan to stop it.
        
       | codedokode wrote:
        | It is often difficult to assess the consequences of a bug,
        | especially in a large and complicated project like an OS kernel.
        | It could take a lot of time, and it is easier to just fix the bug
        | and err on the safe side by calling it a potential
        | "vulnerability". Especially when nobody pays a bounty for a
        | proof-of-concept.
        
       ___________________________________________________________________
       (page generated 2024-03-07 23:01 UTC)