[HN Gopher] CVE Stuffing
___________________________________________________________________
CVE Stuffing
Author : CapacitorSet
Score : 214 points
Date : 2021-01-02 12:41 UTC (10 hours ago)
(HTM) web link (jerrygamblin.com)
(TXT) w3m dump (jerrygamblin.com)
| easterncalculus wrote:
| Lots of CVEs are illegitimate. You have people creating whole
| "vulnerabilities" that are just long-known features of various
| technologies. The worst I remember are the "discoveries" of
| "Zip Slip" and "ZipperDown", both just gotchas in the zip
| format that have been known about for decades. Both got trendy
| websites just like Spectre and Meltdown, and loads of
| headlines. ZipperDown.org is now an online slots website.
|
| - https://snyk.io/research/zip-slip-vulnerability
|
| - http://phrack.org/issues/34/5.html#article
|
| - https://www.youtube.com/watch?v=Ry_yb5Oipq0
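|
| For anyone unfamiliar, the gotcha is plain path traversal:
| archive entries can contain "../" components, so naive
| extraction writes outside the target directory. A minimal
| defensive sketch in Python (the helper name is illustrative,
| not any particular library's API):
|
|     import os
|     import zipfile
|
|     def safe_extract(archive_path, dest_dir):
|         """Extract a zip, refusing entries that escape."""
|         dest_dir = os.path.realpath(dest_dir)
|         with zipfile.ZipFile(archive_path) as zf:
|             for name in zf.namelist():
|                 target = os.path.realpath(
|                     os.path.join(dest_dir, name))
|                 # "../../etc/cron.d/evil" resolves outside
|                 # dest_dir; reject instead of writing it.
|                 if not target.startswith(dest_dir + os.sep):
|                     raise ValueError("traversal entry: " + name)
|             zf.extractall(dest_dir)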
| [deleted]
| eyeareque wrote:
| MITRE is a US-government-supported team, and previously they
| could not scale to meet demand. They did the best they could,
| but they still had a lot of angry people out there. The whole
| world uses CVEs, but the system is US-funded, by the way.
|
| In come the new CNAs to scale the effort through trusted
| teams, which makes sense. The MITRE team can only do so much
| on their own.
|
| Unfortunately I don't think anyone will be as strict and
| passionate about getting CVEs done right as the original MITRE
| team has been.
|
| Here's hoping they can revoke CNA status from teams that
| consistently fail to meet a quality bar.
| tamirzb wrote:
| The problem though is that issues with CVEs are not caused
| only by bad CNAs. MITRE (understandably) doesn't have the
| resources to verify every CVE request it receives, which has
| resulted in bad CVE details being filed on multiple occasions.
|
| I wonder if maybe, instead of trying to fix CVEs, we could try
| to think about creating alternatives? I know some companies
| already use their own identifiers (e.g. Samsung with SVE), so
| perhaps a big group of respected companies can come together to
| create a new unified identifier? Just an idea though.
| tptacek wrote:
| I understand the frustration, and I'm pretty sure the root cause
| is straightforward ("number of CVEs generated" is a figure of
| merit in several places in the security field, especially
| resumes, even though it is a stupid metric).
|
| But the problem, I think, contains its own solution. The
| purpose of CVEs is to ensure that we're talking about the same
| vulnerability when we discuss a vulnerability; to canonicalize
| well-known vulnerabilities. It's not to create a reliable feed
| of all vulnerabilities, and certainly not to serve as an
| awards system for soi-disant vulnerability researchers.
|
| If we stopped asking so much from CVEs, stopped paying attention
| to resume and product claims of CVEs generated (or detected, or
| scanned for, or whatever), and stopped trying to build services
| that monitor CVEs, we might see a lot less bogus data. And,
| either way, the bogus data would probably matter less.
|
| (Don't get me started on CVSS).
| currymj wrote:
| This sounds similar to the problems with peer review in
| academia. It mostly works fine as a guardrail to enforce
| scholarly norms.
|
| However, many institutions want to outsource responsibility
| for their own high-stakes decisions to the peer review system,
| whether it's citing peer-reviewed articles to justify policy
| or counting publications to make big hiring decisions.
|
| That introduces very strong incentives to game the system --
| now getting any paper published in a decent venue is very
| high-stakes, and peer review just isn't meant for that -- it
| can't really be made robust enough.
|
| I don't know what the solution is in situations like this,
| other than what you propose -- get the outside entities to
| take responsibility for making their own judgments. But that's
| more expensive and risky for them, so why would they do it?
|
| It feels kind of like a public-good problem, but I don't know
| what kind exactly. The problem isn't that people are overusing
| a public good, but that just by using it at all they introduce
| distorting incentives which ruin it.
| tptacek wrote:
| My basic take is: if "CVE stuffing" bothers you, really the
| only available solution is to stop being bothered by it,
| because the incentives don't exist to prevent it. People
| submitting bogus or marginal CVEs are going to keep doing
| that, and CNAs aren't staffed and funded to serve as the
| world's vulnerability arbiters, and even if they were, people
| competent to serve in that role have better things to do.
|
| The problem is the misconception ordinary users have about
| what CVEs are; the abuses are just a symptom.
| currymj wrote:
| I suspect for both peer review and CVEs, and probably some
| similar situations I'm not thinking of, it's not just a
| misconception, it's often more like wishful thinking.
|
| People really want there to be a way of telling what's good
| and important that doesn't cost them any money or effort.
| Ironically, these systems can sort of work for that purpose --
| but only if people don't try to use them for that purpose.
| astrobe_ wrote:
| I think both are instances of Goodhart-Campbell-
| Strathern's law: "When a measure becomes a target, it
| ceases to be a good measure."
| fractionalhare wrote:
| That sucks. Perhaps the most annoying part of modern infosec
| is the absolute deluge of noise you get from scanning tools.
| Superfluous CVEs like this contribute to the sea of red that
| security engineers wake up to when they look at their
| dashboards. Unsurprisingly, these are eventually mostly
| ignored.
|
| Every large security organization requires scanning tooling
| like Coalfire, Checkmarx, Fortify and Nessus, but I've rarely
| seen it used in an actionable way. Good security teams come up
| with their own (effective) ways of tracking new security
| incidents or aggressively filtering the output of these tools.
|
| The current state of CVEs and CVE scanning is that you'll have
| to wrangle with bullshit security reports if you run any
| nontrivial software. This is especially the case if you have
| significant third-party JavaScript libraries or images. And
| unfortunately you can't just ignore it all, because every so
| often one of those red rows in the dashboard will actually
| represent something like Heartbleed.
| futevolei wrote:
| The non-stop stream of emails every day certainly sucks, but
| it falls far short of my employer's false-positive process,
| which requires several emails explaining why something is a
| false positive, plus follow-up to make sure the waiver is
| applied so it doesn't impact our security rating -- instead of
| just reassigning the Jira ticket and adding a false-positive
| label.
| bartread wrote:
| We use Nessus and it's not too bad on the false positive front.
| I usually check the scan results every week or two to see if it
| finds anything new, and I know our Head of IT also keeps an eye
| on them. In an ideal world we'd automate this away but have a
| raft of more pressing priorities.
|
| We also use tools like Dependabot to keep an eye out for
| vulnerabilities in our dependencies, and update them to patched
| versions. This is genuinely useful and a worthwhile timesaver
| on more complex projects.
|
| It's easy to be cynical about automated scanning (and pen-
| testing, for that matter) but, although it's often needed as a
| checkbox for certification, it can certainly add value to your
| development process.
| mnd999 wrote:
| > The current state of CVEs and CVE scanning is that you'll
| have to wrangle with bullshit security reports if you run any
| nontrivial software.
|
| Especially if you have customers who outsourced their infosec
| to the lowest bidder who insist every BS CVE is critical and
| must be fixed.
| whydoyoucare wrote:
| This ^^^. I have experienced it first hand for the last year
| or so, and it gets really annoying!
| hendry wrote:
| Communication breakdown.
|
| It's a bit naughty how "security researchers" don't appear to
| make a good effort to communicate upstream.
|
| And the fact that Jerry has problems reaching out to NVD or Mitre
| is worrying.
| dx87 wrote:
| I think this goes hand-in-hand with people naming security
| vulnerabilities and trying to make it a big spectacle. Sometimes
| it is a legit serious vulnerability, like shellshock or
| heartbleed, but a lot are just novices trying to get their 15
| minutes of fame. I remember a few years back there was a
| "vulnerability" named GRINCH, where the person who discovered it
| claimed it was a root priviledge escalation that worked on all
| versions of Red Hat and CentOS. They made a website and
| everything for it, and tried to hype it up before disclosing what
| it was. Turns out the "vulnerability" was members of the wheel
| group being able to use sudo to run commands as root.
| tptacek wrote:
| It's hard for me to think of a serious downside for named
| vulnerabilities. People who try to name sev:lo bugs get made
| fun of; it backfires.
| dx87 wrote:
| It just causes extra annoyance at work. There have been a few
| times when some named vulnerability gets covered by a generic
| tech website, and the next day at work my inbox has 2-3
| meeting invites from non-technical project managers to
| discuss what needs to be done to mitigate the vulnerability,
| regardless of its severity, and without even knowing if our
| organization is vulnerable to it.
| brohee wrote:
| I didn't check who filed those bugs, but I've seen companies
| requiring a discovered CVE to apply for some jobs, and the
| natural consequence is gaming the system...
| [deleted]
| sanxiyn wrote:
| I checked, it seems to be a student of Seoul National
| University, South Korea. https://github.com/donghyunlee00/CVE
| rvp-x wrote:
| Huh. I wonder if it's a student doing an assignment and not
| realizing they're submitting to a real database.
|
| Their other GitHub work is following tutorials, labs and
| courses.
| hoppla wrote:
| A second person is also doing this. Their CVEs reference
| third-party advisories such as
| https://github.com/koharin/koharin2/blob/main/CVE-2020-35185
|
| This repository no longer exists.
| bregma wrote:
| I'm a command-line development tools maintainer for an OS. I am
| not unfamiliar with high-level CVEs in my inbox with the likes of
| "gdb crashes on a handcrafted core file causing a DoS". I am
| unfamiliar with a real world in which a simple old-fashioned
| segfault in a crash analysis tool is truly a denial of service
| security vulnerability, but our security department assures us we
| need to drop all revenue work and rush out a fix because our
| customers may already be aware that our product is shipping with
| a known CVE.
|
| There are occasions in which I recognize a CVE as a legitimate
| possible threat to an asset. By and large, however, they seem
| to be marketing material, either for organizations offering
| "protection" or for academics seeking publication.
|
| I think that, like anything else of value, inflation will eat
| away at the CVE system until something newer and once again
| effective comes along.
| raverbashing wrote:
| Ah yes, this also fits with the famous "no insecure
| algorithms" rule, where an auditor will check a box if you use
| MD5, even for a feature totally unrelated to security.
| Macha wrote:
| The security team at a previous employer added a system-wide
| checker to our GitHub Enterprise installation that would spam
| comments on any change to a file in which Math.random is used.
| The idea was that anyone using random numbers must be
| implementing a cryptographic protocol and therefore should not
| be using Math.random, as it's not a CSPRNG.
|
| So all the A/B tests, percentage rollouts, etc. started
| getting spam PR comments until the team was made to turn it
| back off again.
|
| Frankly, if a teammate was writing their own crypto algorithm
| implementation in the bog-standard web app we were working on,
| that would be more concerning than which RNG they were using.
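|
| To make the distinction concrete, here is the same split in
| Python terms (an analogue, since the checker was JavaScript-
| specific; the variable names are illustrative):
|
|     import random   # fast PRNG: fine for A/B tests, rollouts
|     import secrets  # CSPRNG: for tokens, keys, secrets
|
|     # Non-security decision: a 10% percentage rollout.
|     in_rollout = random.random() < 0.10
|
|     # Security decision: an unguessable session token.
|     session_token = secrets.token_urlsafe(32)
|
| A checker that flags the first import but can't spot a hand-
| rolled cipher is testing for the wrong thing.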
| consp wrote:
| I've seen exactly this many times in audits (it gets them a
| high score!). If they flag it and don't check the usage, I
| know they didn't bother putting anyone good on the audit, or
| only ran automated tools, and it's pretty much useless. The
| same can now be said for SHA-1: it gets them results quickly
| and looks good in the final report.
| beardedwizard wrote:
| Maybe you can get on the phone with your customers, their
| security teams, and their compliance teams, and explain every
| single day why these known vulnerabilities are not serious and
| can never be leveraged. You can convince them all that these
| latent bugs will never pose a serious risk. You can do this
| all day, every day. Or you can just patch, and maintain the
| capability to do so quickly, because bugs don't just affect
| security, and the inability to update dependencies is really a
| reflection of awful development practices.
| nullc wrote:
| It's sad, however, when a highly non-exploitable crash is
| treated as a five-alarm fire while a "silently corrupts user
| data" bug falls by the wayside, because people don't generally
| write security vulnerability reports for those.
|
| I've heard from some people that they have considered filing
| security CVEs against non-security but high user impact bugs
| in software that they're working on, just to regain control
| of priorities from CVE spammers.
| beardedwizard wrote:
| Agree, but having to make these judgement calls at all is a
| mistake. We need to get to 'just fix it'.
| bartread wrote:
| > really a reflection of awful development practices.
|
| You don't know a thing about GP's development practices so
| perhaps you should be a bit slower to hurl accusations.
| arp242 wrote:
| "Never fix it" is one extreme.
|
| "Drop all revenue work and rush out a fix" is another.
|
| The previous poster didn't say it should never get fixed, but
| rather that there's some nuance to be had in these things,
| and that fixing it in e.g. the next release is usually just
| fine too.
| beardedwizard wrote:
| No disagreement here. What is dangerous to me is the idea that
| difficulty upgrading for security fixes does not predict the
| same difficulty for other fixes. It's not that security bugs
| are uniquely hard to patch; it's that dependency management on
| the whole is neglected and security gets the blame.
|
| Those crusty old dependencies and the processes around them
| are an operational risk; we should be lowering the bar to just
| patching, rather than picking and choosing.
| tsimionescu wrote:
| You are assuming that this is about dependencies. OP's example
| is explicitly "gdb crashes when opening a malformed core dump
| and can be used for DoS". If you were working on GDB and got
| this bug report, would you consider it a fire to be put out
| immediately? Or would it be a low-impact bug to be looked at
| when someone gets some free time?
|
| The OP is complaining that, if there is a CVE associated
| for whatever stupid reason, the bug suddenly jumps from a
| "might fix" to "hot potato".
| beardedwizard wrote:
| That's fair
| arp242 wrote:
| Who is talking about "crusty old dependencies"? Or
| processes which are an "operational risk"? The previous
| poster never mentioned any of those things.
| beardedwizard wrote:
| They get old and crusty when you have to choose not to patch,
| or deprioritize those not-so-serious bugs, because the
| operational cost is too high.
|
| Developers shouldn't have to make this call; the cost should
| be zero.
| hoppla wrote:
| It will probably be less effort to patch (increment the
| version number for) a non-existent vulnerability than to
| explain it to every customer who comes in with a report from a
| third-party auditor.
|
| CVEs for non-vulnerabilities are like corporate trolling.
| [deleted]
| smsm42 wrote:
| I feel this is the consequence of paying people for reporting
| security bugs (and only _security_ bugs). People start to
| inflate the number of reports and no longer care about proper
| severity assignment, as long as it gets them that coveted
| "security bug" checkbox. I mean, I can see how bounty programs
| and projects like HackerOne can be beneficial, but this is one
| of the downsides.
|
| The CNA system actually is better, since it at least puts some
| filter on things. Before, it was the Wild West: anybody could
| assign a CVE to any issue in any product, without any feedback
| from anybody knowledgeable in the code base, and assign any
| severity they liked, which led to wildly misleading reports. I
| think the CNA system at least provides some sourcing
| information and order.
| lmilcin wrote:
| CVE DoS -- post so many CVEs that the system is completely
| paralyzed.
| [deleted]
| TimWolla wrote:
| See additional context in this issue in docker-library/memcached:
| https://github.com/docker-library/memcached/issues/63#issuec...
|
| And this issue in my docker-adminer:
| https://github.com/TimWolla/docker-adminer/issues/89
| jart wrote:
| I remember when people in the security community started
| filing CVEs against the TensorFlow project, claiming that code
| execution was possible with a handcrafted TensorFlow graph,
| and the team would have to try and explain, "TensorFlow
| GraphDefs _are_ code".
| belval wrote:
| The whole situation around CVEs in TensorFlow is very painful:
| you get GitHub security notifications for any public
| repository using TF because of a "known CVE", even though it's
| basically just a train.py script that isn't deployed anywhere.
| jebronie wrote:
| A security auditor once reported an Adobe generator comment in
| an SVG file to me as a moderate "version leak vulnerability".
| smsm42 wrote:
| This is a staple of audit-report stuffing. Somebody got the
| idea that disclosing the version of anything, anywhere, is a
| huge security hole, so now any publicly visible version string
| generates a "moderate" (they are usually not so brazen as to
| call it "critical") security report.
| MrStonedOne wrote:
| Way back, I saw a report on Hacker News about secret exposure
| from websites that deployed directly via a git repo as a
| webroot and didn't block access to .git/
|
| I added a cheeky message to my site's .git/ folder for anyone
| who attempted to view it.
|
| About 2 or 3 months later I started getting "security reports"
| to the catch-all address about an exposed git folder that was
| leaking my website's secrets.
|
| Apparently because my site didn't return 404, their script
| assumed I was exposed, and they _oh so helpfully_ reported it
| to me.
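|
| A scanner that wanted to avoid exactly this false positive
| could validate the response body instead of trusting the
| status code. A hypothetical sketch, not what any particular
| scanner does:
|
|     import requests
|
|     def git_head_exposed(base_url):
|         """Report only if .git/HEAD really looks like git."""
|         r = requests.get(base_url.rstrip("/") + "/.git/HEAD",
|                          timeout=10)
|         body = r.text.strip()
|         # A real HEAD is "ref: refs/heads/..." or a raw SHA-1;
|         # a cheeky custom page or a soft 404 won't match.
|         return r.status_code == 200 and (
|             body.startswith("ref: refs/")
|             or (len(body) == 40 and
|                 all(c in "0123456789abcdef" for c in body)))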
|
| I got like 4 or 5 of these before I decided to make it 404 so
| they would stop, mainly because I didn't want to bring false-
| positive fatigue onto "security exploit" subject-line emails.
|
| I have a feeling CNAs are bringing this kind of low-effort,
| zero-regard-for-false-positive-fatigue bullshit to CVEs. Might
| as well just rip that bandaid off now and stop trusting
| anything besides the debian security mailing list.
| seanwilson wrote:
| > Apparently because my site didn't return 404, their script
| assumed I was exposed, and they oh so helpfully reported it to
| me.
|
| There's no good reason that folder should exist except as a
| joke, so how is this not a helpful message in the vast
| majority of cases? All lint rules have exceptions; that
| doesn't make them useless.
| arp242 wrote:
| I didn't ask you to lint my code (or server) though.
|
| There's plenty of cases where a .git directory is just
| harmless; I've deployed simple static sites by just cloning
| the repo, and this probably exposed the .git directory. But
| who cares? There's nothing in there that's secret, and it's
| just the same as what you would get from the public GitHub
| repo, so whatever.
|
| That some linting tools warn on this: sure, that's reasonable.
|
| That random bots start emailing me about this without even the
| slightest scrutiny, because it _might_ expose my super-duper-
| secret proprietary code: that's just spam, and rude.
| seanwilson wrote:
| > That some linting tools warn on this: sure, that's
| reasonable.
|
| To clarify, I'm not condoning annoying spam, but if, say,
| Netlify or GitHub added a ".git folder should not exist on a
| public site" lint rule when you personally deploy your site, I
| would say it would be a net benefit.
|
| > There's plenty of cases where a .git directory is just
| harmless
|
| Pretty much all lint rules have false positives, so this isn't
| a good yardstick. Can it potentially cause harm when you do
| it, and is there no beneficial reason to do it? If yes to
| both, then it's an ideal candidate for a lint rule.
| Kalium wrote:
| > Pretty much all lint rules have false positives, so this
| isn't a good yardstick. Can it potentially cause harm when
| you do it, and is there no beneficial reason to do it? If
| yes to both, then it's an ideal candidate for a lint rule.
|
| A responsible person running such a linter does a sanity
| check before taking their positive and bugging someone
| else with it. An irresponsible one potentially causes
| harm by assuming every single hit is a major finding that
| should turn into a bounty payout.
| seanwilson wrote:
| > A responsible person running such a linter does a
| sanity check before taking their positive and bugging
| someone else with it. An irresponsible one potentially
| causes harm by assuming every single hit is a major
| finding that should turn into a bounty payout.
|
| I already tried to clarify that I was talking about the
| general concept of good lint rules, not about people
| emailing for bounty payouts. We're in agreement.
| waihtis wrote:
| Well, according to the post, the OP returned a cheeky message,
| and any Mk I Eyeball would clearly spot it as an intended
| condition. Automated scan-spam gets on your nerves pretty
| quickly.
|
| I run a small vulnerability disclosure program and receive a
| ton of it - people clearly run automated scanners, which I
| presume create automated vulnerability reports, on things
| that are not even remotely dangerous AND have been
| specifically ruled out of scope for the program.
|
| It's not helpful, it's time-consuming, and often people
| complain if you don't answer their reports.
| ryanlol wrote:
| This is not a helpful message in the vast majority of cases.
| Lots of servers out there that always return 200
| seanwilson wrote:
| > Lots of servers out there that always return 200
|
| For most public websites that you want indexed by search bots,
| that's a poor configuration worth fixing. It's called a soft
| 404, and it makes it troublesome to detect when links are
| invalid, broken, or have moved. Google will even warn you
| about it:
| https://developers.google.com/search/docs/advanced/crawling/...
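|
| A cheap way crawlers (and better-behaved scanners) detect soft
| 404s is to probe a path that cannot exist and see what comes
| back. A rough sketch, not any particular crawler's logic:
|
|     import uuid
|     import requests
|
|     def serves_soft_404s(base_url):
|         """Probe a random, definitely-nonexistent path."""
|         probe = base_url.rstrip("/") + "/" + uuid.uuid4().hex
|         r = requests.get(probe, timeout=10)
|         # A healthy server answers 404 here; a 200 means any
|         # "finding" on this host needs content validation.
|         return r.status_code == 200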
| csnover wrote:
| Be thankful you only receive automated security reports about
| an open .git directory. There is some guy/company who goes
| around running a web spider connected to some shitty antivirus
| which automatically submits false abuse reports to site _ISPs_
| claiming that their customers are hosting viruses. This
| happened to me twice; I think after the second time my ISP
| started rejecting these reports outright since I haven't seen
| any new ones for a few years now, even though they're clearly
| still at it (or, maybe, finally stopped last year after getting
| DDoSed?)[0].
|
| Automated security scanning by people who don't know what they
| are doing has become an enormous hassle in so many ways and
| really is damaging the ability to find and handle true threats.
|
| [0]
| https://twitter.com/badlogicgames/status/1267850389942042625
| cperciva wrote:
| Speaking of "security exploits" consisting of reading publicly
| available information: Tarsnap has public mailing lists with
| public mailing list archives, and at least once a month I get
| an email warning me that my "internal emails" are accessible.
| pixl97 wrote:
| Is there a way to return a custom 404 error handler for .git
| and a different one for a regular 404 in Apache? Never tried
| that before.
| TonyTrapp wrote:
| Check the ErrorDocument directive for .htaccess files.
| hnlmorg wrote:
| That directive doesn't have to reside in .htaccess files.
| It works just as well inside a Directory, Virtual Host and
| Server contexts as well. ErrorDocument
| 404 /404.php <Directory "/.git">
| ErrorDocument 404 "Ah ah ah! You didn't say the magic word"
| </Directory>
|
| https://httpd.apache.org/docs/2.4/mod/core.html#errordocume
| n...
| cipherboy wrote:
| > I have a feeling CNAs are bringing this kind of low-effort,
| zero-regard-for-false-positive-fatigue bullshit to CVEs. Might
| as well just rip that bandaid off now and stop trusting
| anything besides the debian security mailing list.
|
| Red Hat (my employer), Canonical, and SUSE are also CNAs. I can
| only speak to ours, but I think our prodsec team does a great
| job with the resources they've been given. Nobody is perfect,
| but if you take the time to explain the problem (invalid CVE,
| wrong severity, bad product assignment, ...) they consistently
| take the time to understand the issue and will work with
| whatever other CNA or reporter to fix it. Generally we have a
| public tracker for unembargoed CVEs, so if it affects us and
| isn't legitimate or scoped correctly, you might get somewhere
| by posting there (or the equivalent on Ubuntu/SUSE's tracker).
|
| Perhaps it is just the nature of the open-source community
| that Linux distros are part of, though, that lets them bring
| that diligence to CVEs as well.
|
| Doesn't help with personal reports though. :-)
|
| Curious, did you get CVE assignments against your personal
| site? 0.o
| Shank wrote:
| This is quite common. If you run a security@ mailbox at a
| company, you're bound to receive hundreds of bug
| bounty/responsible disclosure requests because of known
| software quirks or other design choices. They'll cite precisely
| one CVE or HackerOne/BugCrowd report, and then proceed to
| demand a huge payment for a critical security flaw.
|
| I've seen reports that easily fail the airtight hatchway [0]
| tests in a variety of ways. Long cookie expiration? Report.
| _Any_ cookie doesn't have `Secure`, including something like
| `accepted_cookie_permissions`? Report. Public access to an
| Amazon S3 bucket used to serve downloads for an app? Report.
| WordPress installed? You'll get about 5 reports for things like
| having the "pingback" feature enabled, having an API on the
| Internet, and more.
|
| The issue is that CVEs and prior-art bug bounty payments seem
| "authoritative" and once they exist, they're used as reference
| material for submitting reports like this. It teaches new
| security researchers that the wrong things are vulnerabilities,
| which is just raising a generation of researchers that look for
| the entirely wrong things.
|
| [0]:
| https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...
| bostik wrote:
| Yup, according to these "researchers" having robots.txt on
| your website is enough to warrant a CRITICAL vulnerability.
|
| No, I'm not joking. That's one of the reports I saw in
| November. I've also had to triage the claim that our site
| supposedly has a gazillion *.tar.xz files available at the
| root. All because the 404 handler for random [non-production
| relevant] paths is a fixed page with 200 response.
|
| As far as I'm concerned, running a bulk vulnerability scanner
| against a website and not even checking the results has as
| much to do with security research as ripping wings off of
| flies has to do with bioengineering.
| wiredfool wrote:
| Oh god. One client I work for does automated scans, and we
| had an s3 bucket set up as a static site.
|
| They freaked out when /admin/ returned permission errors,
| essentially a 404, because it was information leakage about
| admin functions of the website.
| VectorLock wrote:
| You're absolutely right; I get a barrage of these. I've got to
| think someone out there is selling software to scan for these
| and spam them around.
| STRML wrote:
| Can confirm this; we've gotten more than 20 reports and
| demands for bounties for "public access" on our open data
| subdomain (backed by S3), which literally is `public.`.
|
| Then they beg to have the report closed as "informative". We
| don't comply unless it really is an honest mistake; I don't
| like the idea of low-quality reporters evading consequences
| again and again, sending scattershot bug reports in a
| desperate attempt to catch a team not paying attention.
| thaumasiotes wrote:
| > I have a feeling CNAs are bringing this kind of low-effort,
| zero-regard-for-false-positive-fatigue bullshit to CVEs.
|
| Yes, being the discoverer of a CVE is a major resume item. Pen
| testers who have a CVE to their name can charge more. Companies
| can charge more for sending them.
| RyJones wrote:
| We get dozens of "high-priority" security issues filed that are
| resolved with "we're an open-source project; this information is
| public on purpose".
|
| Our bug bounty clearly outlines that chat, Jira, Confluence, our
| website - all out-of-bounds. Almost all of our reports are on
| those properties.
| hannob wrote:
| The whole problem is that at some point people started seeing
| CVEs as an achievement, as "if I get a CVE it means I found a
| REAL VULN", when really a CVE should just be seen as an
| identifier. It means multiple people talking about the same
| vuln know they're talking about the same vuln. It means that
| if you read an advisory about CVE-xxx-yyy, you can ask the
| vendor of your software if they already have a patch for it.
|
| It says nothing about whether a vuln is real, relevant or
| significant.
___________________________________________________________________
(page generated 2021-01-02 23:01 UTC)