[HN Gopher] The security scanner that cried wolf
       ___________________________________________________________________
        
       The security scanner that cried wolf
        
       Author : feross
       Score  : 90 points
       Date   : 2021-03-10 16:03 UTC (2 days ago)
        
 (HTM) web link (pythonspeed.com)
 (TXT) w3m dump (pythonspeed.com)
        
       | carlosf wrote:
       | Great read. I believe that's an issue not just with container
       | security scanners, but with automated security in general.
       | 
        | One example: I enabled AWS Security Hub to have some sort of
        | security score on my account, and although it found a few
        | interesting things, it generates way too much noise to be
        | usable.
       | 
       | Microsoft also has this sort of security score for Azure and
       | Office 365 environments and they have the same issues.
        
       | rurban wrote:
        | I find the reasoning weird: it opens with the highly disputed
        | claim that Debian is a "well trusted and secure" distro, then
        | explains away the complete lack of a security process in
        | Debian. The Debian lack of security is legendary.
       | 
       | Redhat came up with 0 security issues, whilst Debian came up with
       | 63 illegal/wontfix issues. Just do your homework, Debian!
        
       | glsdfgkjsklfj wrote:
        | On a related note, most antivirus companies get paid to flag
        | things such as open source printer driver hacks that allow you
        | to use refillable cartridges and whatnot. They will show up on
        | scans with names like "malware.generic.12345".
        
       | angry_octet wrote:
       | There's no way to specify what you'll be doing with the
       | container, or how the vuln is triggered, so of course it can't
        | provide useful guidance. The whole CVE report mechanism would
        | have to change to allow low false-alarm-rate (FAR) scanning.
        
       | aidenn0 wrote:
        | Why am I not surprised that the glibc maintainers dispute a
        | security vulnerability? They might actually be right in this
       | case, but glibc has a long history of breaking programs and not
       | caring, which combined with strongly preferring dynamic linking
       | makes it a mess to deal with.
        
       | raesene9 wrote:
       | This article shows an interesting contrast between how different
       | types of scanning tools address the issue of valid CVEs for which
       | there are no updates.
       | 
        | The question is: "do you want to know about a valid CVE if
        | there's no patch available?"
       | 
        | For high security environments, the answer might be yes (you
        | want to evaluate individual vulnerabilities to see if they
        | impact your application).
        | 
        | For many environments the answer is likely (as is the case with
        | OP) no; you only want to know about issues for which there is
        | an updated package available.
       | 
       | Interestingly (well to me :) ) more established scanning tools
       | (e.g. Tenable Nessus) default to "no" on this question and
       | container scanning tools mostly default to "yes". (I did an
       | example comparison
       | https://raesene.github.io/blog/2020/11/22/When_Is_A_Vulnerab...)
       | 
       | The important part is that your organization makes an informed
       | choice about which works best for them.
       | 
       | As a sidenote there are also differences in severities and how
       | vulnerabilities are counted (e.g. if you have multiple issues in
       | a package do you count each one or just count once for that
       | package) which can lead to different numbers of issues being
       | reported by different scanners.
       | https://raesene.github.io/blog/2020/06/21/Container_Vulnerab...
        
         | jascii wrote:
          | Just because there are no patches available doesn't
          | necessarily mean no mitigating actions can be taken. So yeah,
          | I do want to know about them.
        
         | thaeli wrote:
         | The logic I really want is "hide old unfixed but show me
         | unfixed that's less than X days old" so new stuff surfaces
         | briefly but these zombie vulns aren't constant noise.
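The rule described above is simple to sketch. The field names and records below are hypothetical (real scanners expose CVE id, fix status, and publication date in their own formats):

```python
from datetime import datetime, timedelta

def should_alert(finding, now, window_days=30):
    """Alert on anything fixable, plus unfixed vulns newer than the window."""
    if finding["fix_available"]:
        return True
    return now - finding["published"] <= timedelta(days=window_days)

# Hypothetical findings; a real scanner would supply these fields itself.
findings = [
    {"cve": "CVE-2021-0001", "fix_available": True,  "published": datetime(2021, 1, 5)},
    {"cve": "CVE-2019-9999", "fix_available": False, "published": datetime(2019, 6, 1)},
    {"cve": "CVE-2021-0002", "fix_available": False, "published": datetime(2021, 3, 9)},
]

now = datetime(2021, 3, 12)
alerts = [f["cve"] for f in findings if should_alert(f, now)]
# The old unfixed "zombie" vuln is suppressed; the recent unfixed one
# still surfaces briefly, and anything with a fix always alerts.
```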
        
           | raesene9 wrote:
           | I don't think I've seen that exact logic, but it probably
           | could be implemented via scanners that have a "white-list"
            | feature. You could build lists of things you know about but
           | don't want flagged, but I'd guess the maintenance could be a
           | pain.
        
             | wizzwizz4 wrote:
             | Aptitude's "new packages" feature works fine.
        
         | WrtCdEvrydy wrote:
          | Here's the question: if you default to "yes", wouldn't you
          | want to throw "there might be vulnerabilities in this image
          | that don't have a CVE assigned" for every container...
        
           | raesene9 wrote:
           | well I'd say there's a difference there, I guess between "no
           | patch but CVE" and "no known CVE" :)
        
         | acdha wrote:
         | The big thing for me is _why_ no patch is available: the
         | maintainer concluding there's no meaningful impact is different
         | than nobody actually working on the project.
         | 
         | I'm especially thinking about things like (IIRC) that old
         | Debian man packaging issue where the exploit only affected the
         | post-install script and since it was so late in the life of
         | that release they basically said they wouldn't ship an update
         | unless there was another issue forcing it.
        
       | collsni wrote:
        | This isn't only a problem with Docker security scanners; it's a
        | problem with Linux security scanners in general. There are
        | vulnerabilities out there that just aren't patched, and they
        | show up on scans, so you need to be able to differentiate what
        | you can mitigate from what you can't.
        
       | _wldu wrote:
       | This is one reason I switched to Alpine from Debian as a base.
        | It has literally 0 security issues. The auditors love it and I
       | love the small size and speed. Win win.
        
         | eeZah7Ux wrote:
          | It has 0 issues that you know of. Yet the kernel is just the
          | same...
        
         | cratermoon wrote:
         | From the article:
         | 
         | > this image has every security update put out by Debian
         | 
         | But the scanner still flagged it.
        
           | _wldu wrote:
           | Right, but audit doesn't care about that. They just see the
           | number of reported vulnerabilities and flip out. Make that
           | better! Fix that! Alpine to the rescue.
           | 
           | Do you want to spend time in meetings explaining why 100+
           | CVEs are nonsense or would you rather get Alpine up and
           | running? I know which I would have more fun doing ;)
           | 
           | I know, it's not really an issue (from a security
           | perspective) but try explaining that to an audit manager who
           | only wants to get a 'good' report for the board.
           | 
           | This is really more of an indication of what is wrong with
           | the security industry.
        
             | cratermoon wrote:
             | That's fair, but you might run into another wall if the
             | people worrying about the nonsense CVEs are also resistant
             | to Alpine for other reasons.
        
             | 0xbadcafebee wrote:
             | Every industry and workplace has robotic people who just
             | want to check off a form. In each case you have to engage
             | with them (or somebody else) to address the immediate
             | problem, like "this report is inaccurate". Usually I send
             | an email explaining my position and CC my boss, my boss's
             | boss, and the robot's boss, and if they're good managers,
             | they'll fix it.
        
         | raesene9 wrote:
          | Alpine is useful, but there are some things to watch.
          | Specifically, it has quite a short support lifecycle, so if
          | you don't bump the tags it'll fall out of support quite
          | quickly.
        
       | cratermoon wrote:
       | > If everything has security vulnerabilities, nothing has
       | security vulnerabilities
       | 
       | This reminds me of the era when everyone thought that the way to
       | prevent users from doing things that permanently delete data and
       | can't be undone was to throw up a big red "Are you sure?"
        | confirmation. After a while users got habituated to clicking
        | "Yes", which made the warnings useless.
        
         | genewitch wrote:
          | I was attempting to reboot a VM inside of a proxmox "HA"
          | cluster, and somehow I rebooted one of the hypervisors. There
          | was no "are you sure?" for rebooting the metal.
          | 
          | I revoked my own "mess with the hyper" credentials that time.
          | I didn't even know I could do that.
        
       | williesleg wrote:
       | We used to call them script kiddies, now they're the ISO
        
       | jascii wrote:
       | I can't help but feel the author is a little misguided about the
       | purpose of security scanners.
       | 
       | Security is a dynamic process, and scanners are a tool to help
       | compare the current state of your systems with the policies you
        | have set. Out of the box, scanners cannot be aware of your
        | policies and will by necessity produce false positives.
        
         | whydoyoucare wrote:
          | Most of the time these scanners are used to set the current
          | state.
        
         | vbezhenar wrote:
         | In my experience those scanners were used for software
         | certification. You're paying money, they run those scanners on
         | your system, they spew meaningless warnings, you're doing
         | meaningless actions (hopefully harmless) to mitigate those
         | warnings and you've passed a certification, congratulations.
          | Nobody checks whether those warnings make any sense, nobody
          | checks what exactly you did to mitigate those warnings; it's
          | security theater. And that was an extremely expensive
          | scanner; I checked their website and they're selling it for
          | big money.
         | 
         | Probably other people find it more useful.
        
           | brendoelfrendo wrote:
           | > Nobody checks whether those warnings make any sense
           | 
           | I mean, a professional does. I'm not sure who you're working
           | with but mapping an automated finding to a real risk is par
           | for the course. If anyone ever just hands you the output of a
           | Nessus scan with no context, then you just wasted your money
           | on their services.
        
           | dumpsterdiver wrote:
           | I've heard this sentiment repeated quite a bit by non-
           | technical people who hover around the fringes of the security
          | field (compliance officers and the like), that no one
          | actually cares and we're just checking off boxes. That's not
          | fair to those of us who really do care. As someone in
          | offensive
           | security, I can assure you that there's a huge difference
           | between forgetting to check off a box to pay lip service and
           | leaving a critical server wide open to hackers. The former
           | will simply bounce back until you check it, the latter might
           | put your company in the news or even out of business.
        
             | jascii wrote:
             | In my experience with compliance (mostly NIST 800-53)
             | auditors will not just want to see scans, but more
             | importantly an assessment of whether found issues were
             | applicable and what actions are being taken to mitigate.
             | Maybe I have just been lucky to work with particularly good
             | auditors though.
        
       | tptacek wrote:
       | I think this is directionally right about security scanners.
       | 
       | But while "no patch available" is a problem, I think it's the
       | wrong problem to think about. One obvious concern about that line
       | of thinking is that an exposure is an exposure regardless of
       | whether there's a patch available.
       | 
       | The bigger problem, not just with container scanners but with all
       | sorts of scanners (Node dependency scanners are another good
       | example) is that there's a huge incentive to "enumerate"
       | vulnerabilities with CVEs, and most of these vulnerabilities are
       | meaningless.
       | 
       | I spent a couple years at my previous gig triaging security
       | scanner results. Like pretty much everyone who does this
       | professionally, I got so I could do this pretty quickly, and
        | without much thought about my actual environment; whole classes
        | of
       | vulnerabilities are pretty much garbage, and most people would be
       | better off if their scanner required a special flag to alert on
       | them.
       | 
       | What you really need to know is, given a vulnerability, is it
       | likely to constitute an exposure in my environment? Clearly, a
       | scanner vendor can't do this analysis perfectly. But they can do
       | more than the zero they do now.
       | 
       | I started my computer security career at a firm called Secure
       | Networks. We sold a security scanner called Ballista. We competed
       | with a much-better-known firm called ISS, who sold the eponymous
       | ISS scanner. The big debate between us and in the market was how
       | to "count" vulnerabilities; ISS, for instance, had a huge
       | collection of Windows registry best-practices rules, and claimed
       | every one of them as (in effect) a vulnerability, while we tried
       | to claim only what our scanner could actually exploit.
       | 
       | We lost that debate, obviously, and the CVE system followed
       | shortly thereafter.
       | 
       | I think the general thing to keep in mind is that, for the most
       | part, modern security scanner teams are pretty small. They're
       | driven off vendorsec and CVE feeds because that's automatable.
       | Like ISS vs SNI, they compete with each other, and their figure
       | of merit is often "how many", not "how important" --- a metric
       | they're not staffed to generate anyways. I wouldn't put much
       | stock in their results.
        
       | franciscop wrote:
        | Note: sorry if I sound harsh, but these companies are raising
        | millions to basically add a burden to the backs of hard-working
        | OSS authors instead of helping the ecosystem; it's really
        | shameless IMHO. Positive note at the end!
       | 
        | I've seen multiple well-known OSS authors speaking publicly
        | unfavorably about these security scanners. It seems everyone is
        | offering a "solution" to the vulnerability issue with Open
        | Source, but they do it in a way that at best is a reminder to
        | update your dependencies (as noted in the article) and at worst
        | is a burden to OSS authors.
       | 
        | Their interests are also very misaligned with everyone else's
        | in the field; best security practice is to communicate security
        | issues to the author privately, have you or them make a fix on
        | a reasonable timeline, and then publish the fix for everyone
        | (or reveal the issue publicly after X days). That would help
        | both authors and users, but unfortunately it doesn't help sell
        | the security tools.
       | 
        | Instead, due to this misalignment of incentives, the current
        | players' workflows include bad practices like:
       | 
        | - The first time someone notices a potential vulnerability they
        | will register it in their system, and then if you are lucky
        | you'll get a public issue. Responsible disclosure sounds like
        | "no publicity for us, so nah".
       | 
       | - Paint features as security vulnerabilities. Example: your
       | "arbitrary command execution tool" can "execute arbitrary
       | commands" or "your JSON reading tool can read JSON". Well,
       | thanks, that's totally useless.
       | 
        | - Tell users that your dev libraries, which never end up in
        | production, have critical vulnerabilities. Sure, my CSS
        | compilation tool might have one, but it won't matter since it
        | doesn't end up on the production server (unless it's a vuln in
        | the output).
       | 
       | Edit: for a positive note, these are the current champions IMHO
       | that _are_ helping the ecosystem in this field (besides authors
       | themselves ofc):
       | 
        | - GitHub's free Actions for OSS. This has been amazingly
        | helpful for testing for free and easily, and was basically the
        | turning point in my believing that Microsoft has evolved for
        | the better.
       | 
        | - Hundreds or thousands of volunteer fellow OSS authors
        | disclosing vulnerabilities privately. They are mostly
        | anonymous, but definitely a huge force.
       | 
       | - Companies releasing open source, since they are in a better
       | position to offer big well-rounded libraries that reduce your
       | dependency graph.
        
         | px43 wrote:
         | > Paint features as security vulnerabilities. Example: your
         | "arbitrary command execution tool" can "execute arbitrary
         | commands"
         | 
         | This is pretty funny to me.
         | 
         | I live on the other side of this, and let me tell you, what you
         | don't often hear, and what we legally can't tell you, is that
         | there are dozens of financial institutions running your
         | "command execution as a service" tool open to the internet with
         | default admin creds because they think it's a simple uptime
         | monitor.
         | 
         | I have broken into multiple Fortune 500 companies using these
         | sorts of bugs, and because I actually do want to live in a
         | world where people can trust their technology, I do the right
         | thing and report them. These bugs often get de-prioritized, or
         | closed as WONTFIX because they're only applicable in really
         | obscure situations that would never happen in the real world.
         | 
          | My favorite example of this happening is when Homakov was
          | trying to get mass assignment disabled by default, and his
         | issues on GitHub kept getting closed because it was a feature,
         | and only an idiot would screw up mass assignment, so he
         | exploited a mass assignment bug in GitHub to create an issue
         | from 1000 years in the future warning the past about letting
         | mass assignment bugs persist. Since it was from the future, it
          | was always on top of the issues list, serving as a constant
          | reminder to the Rails developers that their security posture
          | is a joke.
        
           | franciscop wrote:
            | Well, SQL injection is still the #1 security vulnerability
            | AFAIK. Do we mark any script that can run an arbitrary SQL
            | query as "vulnerable"? Or do we educate devs not to
            | concatenate strings from untrusted inputs, and instead
            | either validate the text or use named parameters?
           | 
           | If you want powerful tools these are also dangerous. Sure we
           | should try to make our libraries as safe as possible and pick
           | the safer version everything being the same, but everything
           | is not the same.
           | 
            | "there are dozens of financial institutions running your
            | "command execution as a service" tool open to the internet
            | with default admin creds because they think it's a simple
            | uptime monitor" - that's the companies' fault 9 times out
            | of 10 in my experience (except for the "default admin
            | creds", which we all know should be prompted/created on the
            | first run).
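The SQL-injection point above can be made concrete with a minimal sqlite3 sketch; the table, data, and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "nobody' OR '1'='1"  # classic injection string

# Vulnerable: untrusted input concatenated into the SQL text.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + payload + "'"
).fetchall()  # matches every row, despite no user named "nobody..."

# Safe: the value is bound as a parameter and never parsed as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (payload,)
).fetchall()  # matches nothing
```

The query text is identical either way; only the second form keeps the attacker's input out of the SQL grammar.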
        
             | cratermoon wrote:
             | > educate the devs to not concatenate strings from
             | untrusted inputs
             | 
             | What if the dev is the CTO?
             | https://arstechnica.com/gadgets/2021/03/rookie-coding-
             | mistak...
        
               | PradeetPatel wrote:
                | The CTO is still susceptible to mistakes; when that
                | happens they must also be challenged.
                | 
                | Of course, you can't simply reject a PR from the CTO.
                | Getting them to change their ways often requires
                | delicate skill in key stakeholder management and re-
                | aligning them with the core company vision.
        
         | WrtCdEvrydy wrote:
         | It's because the risk of a failure is high.
         | 
          | If you end up not flagging something and it causes a
          | vulnerability to become a data leak, you will end up getting
          | blamed as a classic "no one could have seen this coming".
        
           | aequitas wrote:
           | The risk of a failure is _sometimes_ high and this will lead
           | to something called alert fatigue:
           | https://en.wikipedia.org/wiki/Alarm_fatigue
        
             | WrtCdEvrydy wrote:
             | The risk of failure to the scanner company is high.
             | 
             | The risk of failure to you is low.
             | 
             | That's the difference, they are shifting the burden of
             | security to you... since if you don't patch the "exploit"
             | which may not be an exploit, they can always point back to
             | you in case there is a security breach because you didn't
             | listen to the "magical scanner".
             | 
             | It's a clear conflict of interest from twisted incentives.
             | Additionally, if the thing never throws up any exploit
             | warnings, you might wonder why you're paying so much for
             | this fancy alerting thing.
        
               | aequitas wrote:
                | The value of a scanner with this kind of s/n ratio is
                | very low, because it would still cost me a lot of time
                | (and thus money) to investigate every alert every time.
                | What I would pay for is a scanner that is contextually
                | aware and alerts when needed, give or take a few false
                | positives - not one that has false positives as its
                | default.
        
               | WrtCdEvrydy wrote:
               | Now you're getting it.
               | 
                | Now think about the fact that most auditors want you
                | running a scanner on your network (and it's required
                | under certain legislation, like GDPR).
        
               | px43 wrote:
               | Sounds like what you're looking for is a scanner with
               | more false negatives than false positives. One that won't
               | alert you about a certain percentage of actual critical
               | issues that it detects. You're not alone here, which is
               | why things are so insecure.
               | 
               | Enterprise security tools care mostly about catering to
               | people like you who can't be bothered to investigate
               | security issues.
               | 
               | There are a totally different set of tools for those of
               | us who actually want to find bugs, and the false positive
               | rate (false:true ratio) on the best of those tools is
               | more like 100:1. Often those tools have false positive
               | rates of 1000:1 or even 10000:1, and people have built
               | good machine learning models to sift through the findings
               | automatically and sort them by likelihood of
               | exploitability.
               | 
               | It blows me away to hear people complain about the
               | existence of false positives at ratios like 1:1. If only
               | 50% of the findings in your tool are false positives, you
               | are absolutely missing a _ton_ of real bugs that will
               | eventually be found by people who get paid to find real
               | bugs.
        
           | yardstick wrote:
           | When we review vulnerabilities in our dependencies we
           | classify their likelihood of being exploited, and their
           | potential impact on the business if they were exploited.
           | Based on these we have a matrix that outputs an overall
           | low/medium/high risk and that lets us prioritise reporting
           | and fixing on medium/high issues. Works well for us. YMMV.
           | 
           | (Making something up here)
           | 
           | CVE: OpenSSL: RSA is broken
           | 
           | Likelihood of exploitation: high
           | 
           | Impact on our business: none. We migrated off RSA years ago.
           | 
           | Overall risk: Low
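One way to encode such a matrix is below; the level names and the "take the lower of the two" combination rule are assumptions for the sketch, since real matrices vary by organization:

```python
# Hypothetical ordered risk levels, lowest to highest.
LEVELS = ["none", "low", "medium", "high"]

def overall_risk(likelihood: str, impact: str) -> str:
    """Overall risk is capped by the lower of likelihood and impact."""
    return min(likelihood, impact, key=LEVELS.index)

# The made-up OpenSSL example: exploitation is likely, but the business
# migrated off RSA years ago, so impact - and thus overall risk - is nil.
risk = overall_risk("high", "none")
```

Under this rule the made-up RSA example comes out as "none" rather than "low", which is exactly the distinction debated in the replies.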
        
             | WrtCdEvrydy wrote:
             | Yeah, we've had this as well... for example, if you're
             | running a certain version of node, you might have a CVE on
             | the crypto module but if your API just accepts some data,
             | processes some data and writes it to a local file without
             | using crypto... are you really vulnerable?
        
               | Sebb767 wrote:
                | It depends. Of course, if, e.g., a malicious ciphertext
                | can do an RCE, the app right now is not vulnerable. But
                | at some point you might let the user back up his
                | encrypted data and restore it. Your colleague knows you
                | already have the code, it's only decrypting the data,
                | you checked it for security - what could go wrong?
               | 
               | Of course, you shouldn't panic just because this issue
               | exists, especially if you know it does not affect you
                | right now. But leaving this unattended, especially
                | without the _huge_ disclaimer that you cannot accept
                | untrusted ciphertext, can easily come back to bite you.
               | Maybe it was exactly what caused this (theoretical) issue
               | in the first place.
        
             | thaeli wrote:
             | An overall risk of "low" is incorrect here. The risk is
             | "none" because it's not applicable to your environment.
             | 
             | Or do you consider "low" a won't fix / not applicable
             | categorization?
        
           | astrobe_ wrote:
           | That's risk aversion, and that's why this domain is so
           | unhealthy.
        
       | scjody wrote:
       | We've been shipping a Docker-based app to customers for years,
       | and every now and then one of them runs a security scanner on our
       | images. I have yet to see a scan that isn't a disaster of false
       | positives (for the reasons outlined in the article and more!)
       | 
       | One of the craziest recent examples was a scan using a tool
       | called Twistlock. Many of our images are built from an upstream
       | image that may have outdated apt dependencies, so one of the
       | first things we do is upgrade them. Twistlock flagged _every
       | instance_ of this because "Package binaries should not be
       | altered" (in other words, between subsequent layers in an image).
       | I am baffled how anyone at Twistlock decided that this was a
       | useful thing for their product to detect, or why any Twistlock
       | customer trusts it given issues like this.
        
         | Kalium wrote:
         | > I am baffled how anyone at Twistlock decided that this was a
         | useful thing for their product to detect, or why any Twistlock
         | customer trusts it given issues like this.
         | 
         | If I was injecting something malicious into your containers via
         | updates, this is exactly how I would go about doing it and
         | exactly what would catch it.
         | 
          | What I'm seeing here is that Twistlock and other tools don't
          | reliably do a good job of explaining _why_ something is
          | flagged in a way that's understandable and accessible to
          | developers.
         | Though honestly I've yet to find _any_ approach to informing
         | developers that actually works.
         | 
         | My favorite was giving them a clear link in the error message
         | about why the build was failing and how to fix it.
        
         | nonameiguess wrote:
         | It flags that because it could indicate someone got onto your
         | system and injected their own code or changed machine
         | instructions at the binary level, which is a pretty common way
         | to get a remote shell.
         | 
         | It is annoying to have to mark false positives, but that's just
         | the nature of the beast when it comes to being thorough about
          | security. More annoying than this check firing when you
          | update packages in a container image instead of starting
          | clean is that the same technique is often used to compare
          | hashes of packages managed by an installer against what is
          | actually on disk, and thus flags every single package in a
          | JIT-compiled language that caches byte code on disk as
          | altered.
        
           | CameronNemo wrote:
           | I have used Prisma Cloud / twistlock. The tampering detection
            | is only useful for detecting changes to _running
            | containers_, not for changes to binaries between layers.
            | The latter is just dumb and causes counterproductive false
            | positives like the one above.
        
           | acdha wrote:
           | It's because they're implementing the feature so they can
           | show a CISO a big scary report and say "good thing you paid
           | us - otherwise you wouldn't have known!"
           | 
           | If they were serious about build errors they could use the
           | built-in features of APT, YUM, etc. to only report binaries
           | which don't match the canonical distribution's hashes, as has
           | been standard sysadmin practice for aeons.
        
       | dec0dedab0de wrote:
       | So the same CVEs show up for debian but not redhat even though
       | they're not being fixed upstream.
       | 
       | Is IBM/redhat paying someone here? I don't understand why it
       | would be different.
        
         | raesene9 wrote:
         | AFAIK the reason it's different is that Debian/Ubuntu publish
         | lists of unpatched CVEs but I don't believe that RH do.
         | 
          | What scanners are often doing is pulling the published
          | security database for a distribution and using that as their
          | data source. So as a result they can only report on things
          | that are in that data source.
        
           | dec0dedab0de wrote:
           | Maybe I just don't understand CVEs. I thought it would be
           | published at a third party and linked to glibc, not to a
           | specific distro.
        
             | CameronNemo wrote:
             | Trivy is open source. You can check where they get their
             | data from. But I imagine they use this for Debian:
             | https://security-tracker.debian.org/tracker/
        
             | raesene9 wrote:
              | There is a list of CVEs available via NVD or similar;
              | however, the challenge for a scanner is "how do I know if
              | the binaries I'm seeing have been patched?"
             | 
             | The common way to do that is to lean on the linux
             | distribution's package management system, which is why the
             | way it works varies from distro to distro.
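A toy sketch of that approach is below. The security-tracker entry, CVE id, and installed-package data are all made up, and the version comparison is deliberately naive; real tools defer to dpkg's or rpm's own comparison algorithm:

```python
import re

# Hypothetical security-tracker entry and installed-package snapshot.
secdb = {"libgnutls30": {"cve": "CVE-EXAMPLE-0001", "fixed_in": "3.6.7-4+deb10u6"}}
installed = {"libgnutls30": "3.6.7-4+deb10u5"}

def version_key(v):
    # Split "3.6.7-4+deb10u5" into comparable chunks: [3, 6, 7, 4, "deb", 10, "u", 5].
    # Real scanners must use the distro's version-ordering rules instead.
    return [int(p) if p.isdigit() else p for p in re.findall(r"\d+|[a-zA-Z]+", v)]

def is_vulnerable(pkg):
    entry = secdb.get(pkg)
    if entry is None:
        return False  # nothing in the data source -> the scanner reports nothing
    return version_key(installed[pkg]) < version_key(entry["fixed_in"])
```

This also illustrates why results differ per distro: the answer depends entirely on what the distribution publishes in its security database and how its package versions compare.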
        
         | [deleted]
        
       | c7DJTLrn wrote:
       | My impression of container scanners has always been that they are
       | security theatre. Something that lets companies say 'look how
       | security conscious we are'. Same for code scanners.
       | 
       | If an automated process really could find vulnerabilities to the
       | same fidelity as a human pentester, that would be groundbreaking.
       | In most cases, companies don't want to fork out the cost for
       | _real_ security, so they run these useless scanners instead.
        
         | cube00 wrote:
          | I lost all respect for those scanners after I found one was
          | flagging libraries (even the latest versions) with an open-
          | ended version range. Upon digging into why, it was because
          | "the library has an insecure mode in it", and the warning
          | will remain until the library authors remove that mode.
         | 
          | Here's a better idea: how about you flag it when I actually
          | use the mode? If I don't use it, don't flag it and let me get
          | on with my real job.
         | 
          | To top it off, the scanner didn't support suppressing single
          | vulnerabilities, so our options were (1) the build remained
          | "unstable" forever or (2) we ignore the scanning result so we
          | can get back to a stable build.
        
         | brendoelfrendo wrote:
         | I mean, plenty of human pen-testers also use automated tools.
         | It's common knowledge (or at least I thought it was) that these
         | scanners produce contextless output that is going to require
         | analysis in order to be useful. The scanner is just looking for
         | known vulnerabilities; that's a useful tool, because you don't
         | necessarily need to know that a CVE was published to know that
         | it exists in your environment. A human still needs to step in
         | and match that finding to a real risk of exploitation.
        
       ___________________________________________________________________
       (page generated 2021-03-12 23:01 UTC)