[HN Gopher] Ridiculous vulnerability disclosure process with CrowdStrike Falcon Sensor
       ___________________________________________________________________
        
       Ridiculous vulnerability disclosure process with CrowdStrike Falcon
       Sensor
        
       Author : detaro
       Score  : 291 points
       Date   : 2022-08-22 08:12 UTC (14 hours ago)
        
 (HTM) web link (www.modzero.com)
 (TXT) w3m dump (www.modzero.com)
        
       | guardiangod wrote:
       | > The PoC that has been sent to CrowdStrike was flagged as
       | malicious. The msiexec call of the deinstaller was also flagged
       | as malicious.
       | 
       | As someone who was once part of an endpoint security team, I
       | wouldn't be so quick to judge.
       | 
       | If CrowdStrike operates like any anti-virus software company,
       | there are multiple teams. There would be a small engine team, a
       | team that deals with Windows integration, then there would be a
       | much much bigger malware analyst team(s), then another team that
       | deals with 'active machine learning' (CrowdStrike's bread and
       | butter). Then some senior managers oversee them all.
       | 
        | It's possible that the engine team and the analyst team have a
        | case of 'left hand doesn't know what the right hand is doing.'
        | They both got the report, but they behaved differently, as the
        | engine team has a different goal from the analyst team.
        | 
        | From the analyst team's point of view, their job is to detect
        | all potentially malicious threats, sources be damned. While the
        | engine team takes its sweet time, the analyst team just figures
        | "hey, this binary attacks our software. We don't know if the
        | engine team will have fixed the bug by then, or if the bug is
        | even real. We should blacklist it just in case, or else when
        | the binary is out in public, some joker with an auto scanner
        | will use it to show they got around our detection. Then our
        | team would get blamed for it. Better safe than sorry."
        | 
        | Yes, we all know detecting binary PoCs is close to useless, but
        | if you don't do it, you'd get a flood of (useless) reports
        | later...
        
         | detaro wrote:
         | This doesn't really address the part where they then said the
         | issue doesn't exist.
        
           | guardiangod wrote:
            | I am not defending CrowdStrike here (my work laptop, which
            | I am typing this on, is molasses-like thanks to them), but
            | their PSIRT team (they have one, right?) is just another
            | team in the corp machinery. How the PSIRT team and the
            | engine team decide to respond has no effect on what the
            | analyst team decides to do.
            | 
            | Do no harm and cover your ass. No one is going to complain
            | about a false positive on a custom PoC binary... right?
           | 
           | Everything else, yeah those are pretty shitty.
        
             | dont__panic wrote:
             | Seriously, what does CrowdStrike Falcon _do_ to slow down
             | work computers so much? I recently switched gigs and no
              | longer use CrowdStrike, and I didn't realize just how bad
             | it was until I no longer had to deal with it.
             | 
             | I've heard there are workarounds to disable or remove
             | CrowdStrike, but I was too concerned that the IT overlords
             | would come after me at my previous employer.
        
             | ImPostingOnHN wrote:
              | you're dancing around the issue though, which is
              | CrowdStrike lying by saying there's no vulnerability when
              | they clearly tested it and found there was one
        
               | guardiangod wrote:
               | I didn't address anything except for the quoted line from
               | the blog post, which is the binary PoC detection part.
               | 
                | From the blog post: > The PoC that has been sent to
               | CrowdStrike was flagged as malicious. The msiexec call of
               | the deinstaller was also flagged as malicious.
               | 
                | On everything else, I have no opinion I wish to share.
                | There are many posters in this thread who would satisfy
                | your desire to engage on the other issues.
        
       | josephcsible wrote:
       | I hate that the component the vulnerability was in even exists.
       | As far as I'm concerned, if a program tries to keep a local
        | administrator from uninstalling it, _for any reason_, it's
       | malware.
        
       | mrjin wrote:
       | Somehow I feel those security companies are the source of the
       | security problems.
        
         | Thorrez wrote:
         | Well in this case the vulnerability is the ability to uninstall
         | the program when you're not supposed to be able to uninstall
         | it.
         | 
          | So yes, if the program didn't exist at all, there would be no
          | way to uninstall it in an unauthorized manner, so the
          | vulnerability wouldn't exist. You wouldn't necessarily be any
          | more secure, though.
         | 
         | If you have 10 layers of security and 5 have holes in them, you
         | have 5 vulnerabilities, but you're reasonably secure. If you
         | have 0 layers of security and thus 0 holes in them, you might
         | arguably say you have 0 vulnerabilities, but you would be less
         | secure than the 5 vulnerability system. In the early days of
         | computing you would log in with your username only, no
         | password. Their threat model didn't consider intentional
         | attacks, thus there were no vulnerabilities, but anyone could
         | use anyone else's account.
        
           | fisf wrote:
           | What is the vulnerability here? An admin user can do admin
           | stuff. Shocking.
        
             | Thorrez wrote:
             | Yeah, I'm not 100% sure. I'm not sure what the threat model
             | for Uninstall Protection is. The official description page
             | isn't completely clear.
             | 
             | >Uninstall protection prevents unauthorized users from
             | uninstalling the Falcon Agent
             | 
             | >The "Maintenance Manager" role is available which grants
             | permission to access the maintenance tokens. This role must
             | be enabled against the Falcon user's account in order to
             | obtain maintenance tokens or manage policy related to
             | Uninstall Protection.
             | 
             | Putting those 2 sentences together seems to lead to the
             | conclusion that if someone doesn't have the "Maintenance
             | Manager" role, that person will be prevented from
             | uninstalling the Falcon Agent. It's unclear to me if all
             | admin users are considered to have the Maintenance Manager
             | role.
             | 
             | https://www.crowdstrike.com/blog/tech-center/uninstall-
             | prote...
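              | 
              | To make the mechanism concrete: uninstalling an MSI-
              | packaged agent normally comes down to an msiexec call,
              | and uninstall protection means that call should fail
              | without a valid token. A minimal sketch of the idea (the
              | product GUID and the MAINTENANCE_TOKEN property name here
              | are hypothetical, not CrowdStrike's actual interface):
              | 
              |     import subprocess
              | 
              |     # Hypothetical product code for the installed agent.
              |     PRODUCT = "{00000000-0000-0000-0000-000000000000}"
              | 
              |     # Without uninstall protection, any local admin can:
              |     subprocess.run(["msiexec", "/x", PRODUCT, "/qn"])
              | 
              |     # With protection enabled, the uninstall should fail
              |     # unless a maintenance token is supplied (property
              |     # name is illustrative):
              |     subprocess.run(["msiexec", "/x", PRODUCT, "/qn",
              |                     "MAINTENANCE_TOKEN=<from console>"])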
        
             | ImPostingOnHN wrote:
             | if you believe you know the answer to the question, why ask
             | it?
        
         | nopcode wrote:
         | There's very little incentive to actually "solve" the problem.
        
       | jeroenhd wrote:
        | Sounds like they expect everyone to be out for a bounty rather
        | than to improve someone else's software, so they probably have
        | a contract with HackerOne to let them do all the annoying hard
        | work of dealing with security researchers.
       | 
       | Personally, I would've released the PoC back in July when they
       | said the problem was resolved. No need to ask if the quote can be
       | used, it's exactly what they told the security researchers after
       | all.
        
         | lazyier wrote:
         | Sounds more like these companies are using bounty programs to
         | get researchers to sign NDAs so that they can control public
         | perception of their products.
         | 
         | Effectively paying people (and heaping ego gratification on top
         | of that) to be silent about found vulnerabilities.
        
           | tptacek wrote:
           | Well, sure. That's essentially the trade you're making with a
           | bounty program: you pay people for finding stuff, and get to
           | establish terms for disclosure. If you're not OK with those
           | terms, you can almost always just ignore the bounty and
           | publish directly.
        
             | jcims wrote:
             | Is there still a safe harbor if you go that route?
        
               | tptacek wrote:
               | You don't need "safe harbor" to test software you install
               | on your own machine (which is what Crowdstrike is), and
               | if you're testing someone else's server, you'd better
               | have permission already.
        
               | roblabla wrote:
               | The problem is with the publishing part. It's pretty
               | unclear - to me at least - what the legal status of
               | publishing 0days is around the world. In the USA, I'd
               | expect it to be protected by free speech, but even then I
               | wouldn't be 100% sure.
        
               | tptacek wrote:
               | Publishing vulnerabilities in the US is protected speech.
               | You get in trouble with disclosing vulnerabilities in the
               | US in four ways, ordered from most to least common:
               | 
               | 1. You tested someone else's servers, and not software
               | running on your own computer, and you didn't get
               | permission or adhere to the rules of engagement the
               | target established. Now you're not a researcher, you're
               | an intruder, subject to CFAA. There's a bright line in US
               | law between your computer and someone else's computer.
               | 
               | 2. You tested software running on your own computer, but
               | you acquired that software by agreeing to a contract
               | prohibiting research, reverse engineering, or disclosure
               | (ie: any NDA). Now you've violated a contract, and can be
               | sued civilly for doing so. This comes up a fair bit when
               | testing stuff on behalf of big enterprises, where all the
               | software acquisitions come with big, enforceable
               | contracts. I've had to cave a bunch of times on
               | disclosures because of this; most memorably, I got locked
               | in a vendor's suite at Black Hat an hour before my talk
               | redacting things, because that vendor had a binding
               | contract with my consulting client.
               | 
               | 3. You were wrong about the vulnerability, or it could
               | plausibly be argued that you were wrong, and you made a
               | big stink about it. You're still a researcher, but you've
               | also possibly defamed the target, which is a tort that
               | you can be sued for.
               | 
               | 4. You disclosed a vulnerability that you'd previously
               | leaked, or provided exploit tooling regarding, to a
               | criminal enterprise. Now you're not a researcher, you're
               | an accomplice to the criminal enterprise. This has come
               | up with people writing exploits for carding rings ---
               | they weren't (or couldn't be proved to be) carders
               | themselves, but they explicitly and knowingly enabled the
               | carding.
               | 
               | As you can see, disclosing vulnerabilities isn't the
               | _least_ scary thing you can do with speech in the US, but
                | it's not that much more scary than, say, leaving a nasty
               | Yelp review for a dentist's office (something that also
               | gets people sued). Just (a) don't test servers and (b)
               | don't give secret bugs to organized criminals.
        
         | ajross wrote:
         | > Personally, I would've released the PoC back in July when
         | they said the problem was resolved.
         | 
         | Dropping a zero day on the public is never acceptable,
         | regardless of how disingenuous a device manufacturer is being.
         | Bug disclosure without a known remedy has to be an absolute
         | last resort kind of thing, and it's actually a little upsetting
         | that modzero used that tactic as a kind of threat ("As the
         | issue was not considered valid, we informed CrowdStrike that we
         | would release the advisory to the public.")
        
           | vvillena wrote:
           | I'd argue any kind of vulnerability announcement, even a
           | zero-day exploit, is better than having the vulnerability be
           | exploited under the radar by malicious actors. The existence
           | of an exploit allows people affected by the vulnerability to
           | know there's a threat, and to act accordingly.
        
           | buscoquadnary wrote:
           | Releasing information increases the transparency of the
           | market, allowing customers to make informed decisions. To
           | hide things is not beneficial to the customers.
           | 
           | Always assume a bad guy has the 0-day before a security
           | researcher.
        
           | ndsipa_pomu wrote:
           | I think it's a reasonable progression if the company refuses
            | to accept that the exploit is valid and won't open up an
            | equitable discussion about it. Trying to force someone to
            | sign an NDA
           | is not really acceptable behaviour when someone is going out
           | of their way to help the company (and their customers).
        
           | outworlder wrote:
           | > Bug disclosure without a known remedy has to be an absolute
           | last resort kind of thing, and it's actually a little
           | upsetting that modzero used that tactic as a kind of threat
           | 
           | I don't think it is upsetting at all.
           | 
           | "We found a vulnerability"
           | 
           | "There's no vulnerability"
           | 
           | "No, you misunderstand, here's how it works and how to
           | exploit"
           | 
           | "Naah, no vulnerability"
           | 
           | "Ok, if there's no vulnerability as you claim, you don't mind
           | us releasing our findings to the public, right?"
        
             | ajross wrote:
             | The thing is, "releasing our findings to the public" puts
              | the vendor's _customers_ at risk, it's not just some
             | imagined Just Punishment For The Guilty, innocents get
             | hurt.
             | 
             | Imagine if you took a new job and they had a bunch of
             | hardware sitting around from such a vendor. Would you be OK
             | if someone published an exploit for _your_ systems?
             | 
             | (In this case, the vulnerability seems minor, so it's sort
             | of academic. But I'm not understanding the mindset of the
             | people here who want to see this as a two party adversarial
             | kind of relationship between Modzero and CrowdStrike. It's
             | not!)
        
               | rodgerd wrote:
               | The vendor's customers are already at risk. It's a
               | peculiar arrogance to imagine otherwise.
        
               | detaro wrote:
               | You can't let vendors hide security problems by just not
               | doing anything about them. People deserve to know if the
                | product they rely on has vulnerabilities: just because
                | you aren't exploiting one doesn't mean nobody else will
                | find it.
               | 
               | Modzero was even following a more conservative playbook
               | here: not setting a deadline from the start, but only
               | talking about release once the vendor indicated there was
               | no issue (anymore).
        
               | ajross wrote:
               | > You can't let vendors hide security problems by just
               | not doing anything about them.
               | 
               | Telling people that the product they rely on has
               | vulnerabilities is clearly not the same thing as "release
               | the vulnerability report" though, is it? I still remain
               | amazed at the absolutism in the arguments here. There is
               | a spectrum of responses that can be explored before
               | dropping bombs. But everyone wants to see stuff burn?
        
               | jeroenhd wrote:
               | Telling the customers that there's a vulnerability will
               | make them turn to their vendor. The vendor will obviously
               | lie and say there's no vulnerability, everything is fine,
               | ignore the panic.
               | 
                | Without the necessary details, telling the public about
                | a vulnerability is like shouting at a wall. Publishing
                | the
               | details forces vendors to release fixes when they deny
               | the existence of the vulnerability.
               | 
               | This isn't some hidden password or evil DoS attack, this
               | is an attack only processes with admin access can
               | leverage on infected machines. This command is either
               | executed by a computer user (which should flag a warning
               | in the management software) or it's executed by a virus
               | with admin permissions that went undetected by the
               | antivirus solution on the machine. The stakes are low
               | enough and the vendor is irresponsible enough that I
               | don't see a problem with publishing a PoC when vendors
               | lie about these kinds of bugs.
               | 
               | Of course remote exploitation and auth bypass PoCs
               | shouldn't be released into the wild without trying to get
                | a patch out first, but even then vendors like D-Link
               | just don't seem to care if you don't at least threaten
               | them with releasing the details to the public.
        
               | xphos wrote:
               | This kind of thinking does not protect customers though.
               | We can all pretend there is no vulnerability but that
               | does not mean there isn't a vulnerability. This is kind
               | of "Don't look up" thinking. The spectrum here was they
               | tried with the OEM and the OEM said let it burn by
               | snubbing the researchers. The researchers than attempted
               | to let customers know since the OEM did not want to
               | protect them.
        
               | paulryanrogers wrote:
               | So what's the intermediate step on the spectrum? Publicly
               | call out the vendor and model, yet vaguely enough that
               | others will have to find the specific gap(s)? Tell the
               | press?
               | 
               | If one publicly discloses some mitigation it's usually
               | enough to give malicious actors enough to go on anyway.
        
               | matheusmoreira wrote:
               | > puts the vendor's _customers_ at risk
               | 
               | They were already at risk and didn't even know before
               | disclosure. If anyone's to blame for anything, it's the
               | corporation. They were told a vulnerability existed. If
               | it got to the point people are releasing things to the
               | public, it's certainly because they neglected to do what
               | needed to be done.
        
               | xphos wrote:
                | I don't think it's the job of security researchers to
                | defend an OEM's customers. That's the OEM's job. If the
                | OEM does not want to protect its customers after a
                | researcher has literally given the vulnerability to
                | them, it's no longer on the researcher. In fact, it's
                | probably bad ethics for the researcher not to publish:
                | just because there is no report does not mean the bug
                | is not being exploited or known by malicious actors.
                | The researcher's last recourse is to publish a
                | notification to let the OEM's customers know there is
                | an issue, since the OEM won't acknowledge it.
        
               | paulryanrogers wrote:
               | > The thing is, "releasing our findings to the public"
               | puts the vendor's customers at risk
               | 
               | If vendor denies there is a problem despite repeated
               | submissions of evidence then customers are already at
               | risk indefinitely.
        
           | detaro wrote:
           | So what is an appropriate point in time to release
           | information about an issue the vendor claims doesn't exist?
        
           | matheusmoreira wrote:
           | It is absolutely acceptable. Corporate negligence harms
           | everyone. If disclosing vulnerabilities is what it takes to
           | force them to do something about them, so be it.
        
         | BeefWellington wrote:
         | What's interesting is there are roles out there where you are
         | not permitted to participate in bug bounty programs. I've
         | worked a few positions where the company outright refused to
         | allow compensation and I wouldn't have been authorized to sign
         | an NDA on behalf of my company just to report issues to a third
         | party.
         | 
         | It created a lot of friction but I kind of understand the
         | policy.
        
       | tjpnz wrote:
       | >modzero's Sr Leadership
       | 
       | What planet did they get their MBA from?
        
       | tgv wrote:
       | That doesn't give me the impression of a company focussed on
       | security. That they marked the installer of the PoC as
       | "malicious" shows they do have a process, but don't take it
       | seriously.
        
         | jupp0r wrote:
          | It's the snake oil industry. They don't sell security; they
          | sell CISO get-out-of-jail cards in the form of client agents
          | that constantly remind you of their divine presence by being
          | at the top of the CPU-utilization-sorted process list.
        
           | TecoAndJix wrote:
           | There is a LOT of snake oil in the industry, but endpoint
            | protection IS useful. Not every person is a Hacker News
            | reading tech enthusiast. People download and do dumb shit.
            | That is not to say every device needs it, or needs it at a
            | maximal "check every process/file activity" level, though.
        
             | busterarm wrote:
             | Endpoint Protection may be useful but credibility is
             | everything in the industry.
             | 
             | CrowdStrike has done loads to damage their own credibility
             | to anyone paying attention, but because they've chosen to
             | be favorable to certain power players along political lines
             | there are folks out there that treat them like the be-all-
             | end-all of the industry.
             | 
             | As someone in-industry, hearing people parrot press
             | releases from CrowdStrike has me looking at them sideways.
             | 
             | Edit/Addendum: Just to lay bare my opinion of them...
             | CrowdStrike is a clown-ass company run by clown-ass people
             | and with a clown-ass product.
        
             | nikanj wrote:
             | 10% of Falcon is blocking dumb shit people do. 90% is
             | blocking things people are supposed to be doing, and have
             | been doing successfully so far.
             | 
             | Nothing starts your week better than "After the latest
              | definitions update, Falcon heuristics started quarantining
             | your core business tools as suspicious".
        
               | TecoAndJix wrote:
               | I feel your pain at a deep and spiritual level. I have
               | been in charge of at least half a dozen endpoint
               | protection products over the years (deployment,
               | configuration, management, etc.). Once a user experiences
               | what you just described they are (rightfully) suspicious
               | and sour towards endpoint protection.
               | 
                | Questions I would ask in your example:
                | 
                | 1) Was the core business tool excluded from the more
                | intrusive protection modules, or does the tool have a
                | significant risk surface?
                | 
                | 2) What was the threshold set for quarantining? Does it
                | make sense in this case?
                | 
                | 3) Is/should your device be part of a "Developer"
                | policy that is more permissive? Are all users of the
                | tool impacted?
                | 
                | 4) Does this happen frequently? If so, should
                | definitions be manually pushed in batches so everyone
                | is not nerfed at once?
                | 
                | 5) What is the process for the developer to report/fix
                | the false positive? Is the response time sufficient?
                | 
                | I'm probably forgetting a few. The point is, shit
                | happens (especially with technology). You respond, fix,
                | and hopefully learn. If shit happens a lot, it's either
                | because the tool owner doesn't give a shit or the
                | product is shit itself. The delicate balance of
                | security and business operations/innovation is all
                | about weighing and evaluating risk/benefit.
        
               | semi-extrinsic wrote:
                | Can I ask, since you're a person who has administered
                | endpoint protection products: how much genuinely
                | malicious stuff do they actually catch?
        
               | syntheticcorp wrote:
               | I work in offense and they can be a huge impediment.
               | Significant work goes into bypassing or staying
               | undetected from these products. While not all the
               | detection occurs at runtime, they report a lot of data
               | back from the endpoint so historical detection can
               | happen.
               | 
                | However, what I see is essentially their true positive
                | and false negative rates; I would be interested to know
                | what the false positive rate is.
        
               | zdragnar wrote:
               | Not OP, but I've worked for an endpoint protection
               | product company. Part of the onboarding was them loading
               | up a virtual machine with the endpoint installed, then
               | demonstrating several attacks (installing malicious
               | software, running scripts off the internet, etc) and
               | showing the logs of what the endpoint detected, and at
               | what point it shut down the malicious behavior.
               | 
               | The examples shown were behavior based, not hash based.
                | It didn't look up a file in a dictionary; it detected
                | privilege elevations and such.
               | 
               | No product is perfect, but if you have a need to be
               | protected (especially if you are at risk from adversaries
               | such as in banking, health care, government work, or
               | against corporate espionage) I'm quite confident in
               | saying that you're much better off with it than without.
               | 
               | The same company would also, at random times, attempt to
               | phish us or send us fake emails to get us to click on
                | links, to help educate us on the kinds of threats our
                | customers faced. I consider myself fairly savvy, and
                | even I fell for one of them.
               | 
               | I ended up leaving for a variety of reasons, but "losing
               | faith in the product" was not one of them.
        
               | Melatonic wrote:
               | I'm not the guy above but the better ones can be very
               | useful. And if they are behaving that badly and impeding
                | your work THAT much, then it is most likely that the
                | person in your org configuring it just sucks at their
                | job.
        
               | pja wrote:
               | Nobody else is running your binaries, so they don't match
               | the hashes of any of the trusted binaries in the
               | database. Obviously they're suspect & should be
               | quarantined immediately!
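                | 
                | A minimal sketch of that hash-reputation logic (the
                | "trusted" set here is made up for illustration):
                | 
                |     import hashlib
                | 
                |     # Hashes of "known good" binaries seen in the wild.
                |     TRUSTED = {hashlib.sha256(b"well-known tool").hexdigest()}
                | 
                |     def is_trusted(data: bytes) -> bool:
                |         return hashlib.sha256(data).hexdigest() in TRUSTED
                | 
                |     print(is_trusted(b"well-known tool"))   # True
                |     # A binary you just compiled is unique to you, so a
                |     # hash-reputation system has never seen its digest:
                |     print(is_trusted(b"your fresh build"))  # False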
        
               | boondaburrah wrote:
               | They fixed it with an update this month, but CrowdStrike
               | was hooking /every/ single call to NtCreateUserProcess on
               | my work machine last month, and you /know/ how electron-
               | based apps work. VSCode took so long to launch its sub
               | processes it would pop up a crash reporter. "Hello World"
               | compiled from C++ would take a minute to launch
               | sometimes. WSL straight up could not be started because
               | the TTY timed out waiting for it.
               | 
               | For some reason java.exe startup was A-OK though so I
               | started using JEdit again.
               | 
                | Aggravatingly, it would occasionally disappear my
                | builds and then flag me to IT. My dude, I am hired as a
                | developer of native Windows C++ applications; why the
                | hell is this trash on my would-be workstation-class
                | machine?
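                | 
                | The per-spawn overhead is easy to measure for yourself;
                | a rough sketch (absolute numbers vary wildly by machine
                | and by whatever is hooking process creation):
                | 
                |     import subprocess, sys, time
                | 
                |     # Time N trivial process launches; hooks on process
                |     # creation show up as a large per-spawn cost.
                |     N = 20
                |     start = time.perf_counter()
                |     for _ in range(N):
                |         subprocess.run([sys.executable, "-c", "pass"])
                |     elapsed = time.perf_counter() - start
                |     print(f"{elapsed / N * 1000:.1f} ms per spawn")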
        
               | nikanj wrote:
               | Because your organization's customers demanded your
                | employer get some security certification, and part of
                | that certification is foisting that BS on all users
        
               | [deleted]
        
               | jupp0r wrote:
               | java.exe was probably excluded.
        
               | wyldfire wrote:
               | I know what I'm calling my next exploit ;)
        
               | jupp0r wrote:
               | These are usually hash-based so you'll need to actually
               | write it in Java or something more modern running on the
               | JVM. Good thing is you'll only need to write it once and
               | it will run anywhere!
        
               | politician wrote:
               | Your organization and your IT department expect you to
               | work around these issues by doing development on your
               | personal machine, and then copying it to your work
               | machine while pretending like you never tunneled to your
               | personal machine from the office.
               | 
               | That's what it feels like with some of these policies.
        
             | outworlder wrote:
             | Endpoint protection is theoretically useful.
             | 
             | In practice, most implementations cause more harm than
             | good. Some of them even add vulnerabilities themselves.
        
           | Melatonic wrote:
           | I would agree for some of the old guard antivirus (looking at
           | you McCrappy) but with the better ones you are also paying
           | for a huge amount of analytics and reporting. If you are
            | dealing with a breach (even a small one), it can be really,
            | really nice (and reassuring, especially for people at
            | higher levels) to have easy-to-consume data on what the
            | chain of attack was and where it started.
        
           | galacticaactual wrote:
           | Download and run mimikatz on an endpoint protected by said
           | "snake oil" and one without. Note the difference.
        
       | [deleted]
        
       | tptacek wrote:
       | This seems extremely tame by vulnerability disclosure fuckup
       | standards.
        
         | bsamuels wrote:
          | Seriously. I don't think the researcher realizes how many
          | people try to bypass HackerOne because H1 would have flagged
          | their finding as invalid.
         | 
          | Using H1 isn't about bug bounties; it's about not having to
          | spend 1-2 of your team's full-time engineers triaging
          | security researcher reports.
        
           | 0x457 wrote:
            | We had some of the dumbest H1 "findings" at some companies
            | that I worked at:
           | 
           | - Service that is explicitly out of scope of program is
           | "leaking" default CloudFront headers.
           | 
            | - Android application can be decompiled (that's it, no
            | secret is there, just the fact that it's possible)
           | 
           | - "I can do something bad IF I had a way to load malicious
           | JavaScript" (no, CSRF protection was one and correctly
           | implemented) (there is also no way to upload your own
           | JavaScript)
           | 
           | - "I can do things if I open a console in a browser" (can't
           | do anything because CORS policy only allowed for read-only
           | endpoints)
           | 
           | - "You can make requests with CURL and not official client"
           | 
           | Every week, there was at least one variation of one of those
           | reports. "Hackers" also got very defensive about their
           | "findings" and acted like we don't want to pay them for some
           | "mega hack of the year 0day total domination of user device"
           | vulnerabilities.
           | 
            | Not once did anyone find even a minor vulnerability, just
            | wannabes trying to get quick cash. Before we had H1 we had
            | zero reports; with H1 we had moronic reports every other
            | day.
        
             | kstrauser wrote:
             | This has been my experience, too, with security reports in
             | general. We see things like:
             | 
             | - "An attacker could spoof an email from you to a user."
             | (POC video shows Yahoo webmail succeeding. We try the same
             | thing in Gmail, and it gets sent to the spam folder because
             | it fails SPF and DKIM.)
             | 
             | - "If I try logging in as a user with an invalid email too
             | many times, it locks them out of their account. That's a
             | denial of service." (Well, yeah, and that's a bummer, but
             | it beats allowing an attacker unlimited attempts.)
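              | 
              | (For the spoofing class of report above, the receiving
              | side's first check is roughly: look up the sender
              | domain's published SPF policy and see whether the
              | connecting server is authorized. A minimal sketch of the
              | lookup half, using the third-party dnspython package; the
              | domain is just an example:
              | 
              |     import dns.resolver  # pip install dnspython
              | 
              |     def spf_record(domain):
              |         # SPF policies are published as TXT records.
              |         for r in dns.resolver.resolve(domain, "TXT"):
              |             txt = b"".join(r.strings).decode()
              |             if txt.startswith("v=spf1"):
              |                 return txt
              |         return None
              | 
              |     # A policy ending in "-all" or "~all" tells receivers
              |     # to reject or spam-folder mail from unlisted hosts.
              |     print(spf_record("gmail.com"))
              | 
              | That, plus the DKIM signature check, is what sends the
              | spoofed message to spam.)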
             | 
              | I'll say, though, that H1 has been super helpful at
              | screening the _worst_ reports. Sometimes they'll
              | initially block reports like the above, but the
              | researcher will insist that this time it's for real. I
              | don't feel too bad closing _those_ reports as invalid.
             | 
             | In all, I'm a very happy H1 customer. They've been good to
             | work with.
        
           | thaeli wrote:
           | If H1 was willing to take and triage reports without
           | requiring acceptance of their terms and NDA, that would be
           | fine.
           | 
            | We also need to be very clear that the moment a company, or
            | its authorized representative, flags something as a wontfix
            | or "not a security issue", full and immediate disclosure is
            | fair game.
        
             | tptacek wrote:
             | I think that clarity already exists.
        
           | RHSeeger wrote:
           | Then they should provide a path that doesn't involve
           | arbitrary NDAs if you're willing to forego the reward.
        
             | tptacek wrote:
             | That path already exists: it's called "email
             | security@vendor, tell them what you found, ask when a patch
             | is expected, and then tell them they have 60 days".
        
       | cr3ative wrote:
       | Refusing to interact with an existing security process
       | (HackerOne) and demanding a personal contact instead for a minor
       | issue is certainly an interesting take.
        
         | denton-scratch wrote:
         | "Hey neighbour, you left your front-door open".
         | 
         | "Can you notify my lawyer by FAX, please? And can you get the
         | document notarized first? Kthxbai".
        
         | mdbug wrote:
         | You find "interesting" that someone just wants to report a
         | security vulnerability without having to accept any conditions
         | first?
         | 
          | Funny, I find it interesting that they want to pay a bug
          | bounty even though nobody asked for it. But I guess paying
          | hush money is just cheaper than having to seriously fix the
          | issue.
        
           | malaya_zemlya wrote:
           | >But I guess paying hush money is just cheaper than having to
           | seriously fix the issue.
           | 
           | They did fix the issue, though.
        
             | Anunayj wrote:
              | They just flagged the way the exploit was done as
              | "malicious", without fixing the root problem or informing
              | the reporter that they "fixed" it, instead claiming it
              | was never there. That is very unprofessional!
              | 
              | And if these guys were to go through the NDA route, the
              | company might choose not to fix it at all and tell these
              | researchers to be quiet about it. And you'd never know
              | there was such an exploit at all.
        
         | xbar wrote:
          | This is a case where an email from modzero to
         | security@crowdstrike.com should have been enough to get the SOC
         | to read and route the vuln to the product team and get a fix
         | implemented.
         | 
         | NDA? HackerOne? Personal Information with Identity and Credit
         | History Verification, Cookies and Disclosure Agreements, and
          | 3rd party terms? Why is it at all "interesting" that a
          | security researcher is not interested in giving all of that
          | up in order to tell CrowdStrike that their core product is
          | broken in a way that is completely inimical to its mission?
        
         | CoastalCoder wrote:
         | I don't get the impression that the researchers were simply
         | being lazy or attention-seeking.
         | 
         | They objected to having to sign an NDA, when there was no clear
         | incentive to legally bind themselves in that manner.
        
         | tptacek wrote:
         | There's nothing at all weird about refusing to work through
         | HackerOne. If you're not violating contracts (or the law) to
         | conduct the research in the first place, you're not required to
         | work through a bounty program at all; you can just disclose
         | directly. Part of the point of a bounty program is to limit and
         | control how disclosure happens; if your priority is the
         | disclosure and not the cash reward, you won't want anything to
         | do with H1.
        
         | rroot wrote:
         | It's interesting that they don't want to sign an NDA when
         | there's absolutely nothing in it for them. Utterly astonishing.
         | I'm hopping mad.
        
       | balentio wrote:
       | Remove lawyers from the composite picture. Is there any rational
        | reason for an NDA in that case? If the answer is no, then in one
       | way or another they are trying to limit liability by limiting the
       | researcher's ability to be paid for their discovery and then
       | communicating that to the wider world.
        
         | thaeli wrote:
         | This isn't about liability. It's about shifting the decision of
         | whether to disclose, and on what timetable, entirely back on
         | the vendor.
        
           | balentio wrote:
           | Great. Imagine I sell a security product and I make a tidy
           | sum telling you how secure that product is. Turns out,
           | there's a security hole in my product that costs your company
           | 100 million in lawsuits as customer data gets stolen via this
           | hole. Now, you'd like to sue me on my claim on the basis that
           | my security product was in fact, flawed.
           | 
           | How's that not about liability?
        
       | nelox wrote:
       | Nothing to see here. Move along.
        
       | mberning wrote:
        | Working as a developer inside a large enterprise grows more
        | intolerable by the day, made possible by tools such as
        | CrowdStrike Falcon. By the time your workstation is saddled with
       | endpoint security, DLP, zero trust networking, antivirus, etc. it
       | barely functions. And you can get in trouble for doing anything.
       | Installing tree from homebrew can get you flagged on some naughty
       | list where you have to justify why you needed tree of all things.
        
         | mrjin wrote:
          | Exactly. For a competent developer, such a tool is nothing
          | but a waste of time. The reason is simple: if they don't know
          | what they are doing or cannot be trusted, they shouldn't have
          | been hired in the first place.
         | 
         | But I do understand why those are in place:
         | 
         | 1. There are lots of those who have no idea what they are doing
         | in the organization. And/or
         | 
          | 2. Some higher-ups who have no idea what they are doing want
          | to show their value. Such a move typically happens after
          | someone with a Chief Security Officer title or similar gets
          | hired. Or they do know but simply don't care.
        
           | mberning wrote:
           | I think there is a lot of value in these tools for the
           | enterprise. Even if it is just CYA insurance. If somebody
           | steals data but you have a DLP tool you can just blame the
           | vendor. My biggest issue with the tools is they have an
           | insanely deleterious impact on performance and the harsh
           | scrutiny applied has a chilling effect on employees. It turns
           | people into drones that do not dare step outside the norm.
        
             | bigDinosaur wrote:
              | This is hands down one of the scariest and most
              | depressing comments I've ever read on HN.
        
               | mberning wrote:
               | Sadly it is becoming the norm. Even at small and medium
               | sized companies.
        
             | vladvasiliu wrote:
             | > If somebody steals data but you have a DLP tool you can
             | just blame the vendor.
             | 
             | This is just about the main thing that bugs me with this
             | kind of tool: sure you can blame the vendor, but the data
             | is _still_ stolen.
             | 
             | And companies are happy to have checked that box and won't
             | bother implementing actually useful policies.
        
         | LelouBil wrote:
         | I recently finished an internship in a large company.
         | 
         | I wanted to install netcat to troubleshoot networking issues
         | between windows and docker containers I was running.
         | 
         | Right when it was downloaded from scoop, it got deleted and I
         | got a scary automated email.
         | 
          | My manager called me immediately. In the end it was cleared
          | quickly, but I did learn to be very careful about what I try
          | to download.
         | 
         | At first I didn't understand why netcat was being detected by
         | the AV but then I remembered it can be used to set up a reverse
         | shell.
        
           | Anunayj wrote:
            | Gosh, netcat binaries on Windows get picked up by Windows
            | Defender as malicious; it's so infuriating.
        
           | Aissen wrote:
            | Any programmer who knows how to code (and you seem to have
            | had Docker installed, so I'm sure you do) should be able to
            | create their own reverse shell in anywhere from a few
            | minutes to a couple of hours. Hence blocking netcat in this
            | context (a developer workstation) makes no sense.
        
             | unmole wrote:
             | I once worked for a large telecom equipment vendor whose IT
             | policies banned the installation of _dangerous software_
             | like Wireshark.
        
             | mberning wrote:
             | Or so many other tools. If you can open a socket, you can
             | transfer data.
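              | 
              | A minimal sketch of the point, in nothing but the Python
              | standard library, so there is no named "tool" for a
              | blocklist to flag:
              | 
              |     import socket, threading
              | 
              |     # Receiving side: accept one connection and print
              |     # whatever arrives.
              |     srv = socket.create_server(("127.0.0.1", 9999))
              | 
              |     def listener():
              |         conn, _ = srv.accept()
              |         print(conn.recv(1024).decode())
              | 
              |     threading.Thread(target=listener).start()
              | 
              |     # Sending side: a "file transfer" is just bytes over
              |     # a socket.
              |     with socket.create_connection(("127.0.0.1", 9999)) as s:
              |         s.sendall(b"contents of any file you like")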
        
       | Exuma wrote:
       | Is it weird that I read this as ClowdStrike like a dozen times
       | before realizing it was Crowd?
        
         | 0xbadcafebee wrote:
         | I've heard them referred to as ClownStrike...
        
           | Exuma wrote:
           | Lmao! Even better. I envision a custard pie as their logo
        
       | xbar wrote:
       | Clownish of CrowdStrike. Take your vulnerability and address it.
        
       | ehhthing wrote:
       | This all seems a bit silly, and could easily be attributed to a
       | communication issue.
       | 
        | On CrowdStrike's end, it's much more likely that their systems
        | changed a few heuristics, so now they flag certain msiexec
        | invocations as malicious. Most anti-virus-type software is
        | highly nondeterministic in the way it operates, with tiny
        | changes in detection engines able to cause large changes in the
        | way some threats are detected.
       | 
        | Even modzero themselves admitted that the vulnerability is not
        | of great severity, so the motivation for the security triage
        | team to put more resources into validating a non-severe bug is
        | probably very low. They likely just tried to run the exploit
        | and didn't think much of it after it didn't work.
       | 
        | Also, if modzero is not participating in a bug bounty program,
        | then CrowdStrike has no obligation to provide them with free
        | trials or the like for verifying a vulnerability fix.
       | 
       | I'm no fan of CrowdStrike (in fact, one of the more memorable
       | moments for me at my previous job was my boss calling them
       | "ClownStrike"), but it seems as if this is just a bit of
       | overzealous entitlement from modzero as well as not enough
       | testing on CrowdStrike's end.
        
         | FreakLegion wrote:
         | _> Even modzero themselves admitted that the vulnerability is
         | not of great severity_
         | 
         | They're wrong. It's not at all uncommon for companies to give
         | employees admin, and privilege escalation tends to be easy on
         | Windows anyway.
         | 
         |  _> CrowdStrike has no obligation in providing them with free
         | trials or such in verifying a vulnerability fix_
         | 
         | Sure, and modzero has no obligation to responsibly disclose,
         | and now here we are. I'm sure they'll work it out if this gets
         | noticed.
         | 
         | My first direct experience with CrowdStrike was the
         | announcement of VENOM back in 2015 [1], which they coordinated
         | with a few friendlies like FireEye but left most of us in the
         | dark about (I was at Palo Alto Networks, not exactly a small
         | company). Looks like they still struggle with this stuff.
         | 
         | 1. https://web.archive.org/web/20150514062749/https://venom.cro
         | ..., possibly the origin of named and marketed vulnerabilities.
        
           | vladvasiliu wrote:
           | > privilege escalation tends to be easy on Windows anyway
           | 
           | Well, Crowdstrike is supposed to catch and prevent that...
        
           | tptacek wrote:
           | "Responsible" disclosure is an Orwellian term. The real term
           | is "coordinated disclosure", and, as you can see from the
           | timeline, there's coordination here.
        
             | FreakLegion wrote:
             | You're watering down the meaning of Orwellian just a tad
             | here but sure, "responsible" has a value judgment baked in
             | that "coordinated" is mercifully free of. On the other
             | hand, "coordinatedly disclosed" doesn't work as well in a
             | sentence.
        
               | tptacek wrote:
               | It's Orwellian because the term was deliberately chosen
               | in order to coerce researchers into accepting the
               | priorities of vendors (by implicitly rendering
               | non-"responsible" disclosure "irresponsible"). I think
               | it's actually one of the better modern examples of
               | Orwellian language in tech.
        
             | monocasa wrote:
             | "Coordinated disclosure" is the orwellian reimagining here.
              | The only reason the industry settled on the responsible
              | disclosure process is that it works regardless of how
              | much vendor coordination occurs, or even if the vendor is
              | technically responding to emails but really just stalling
              | indefinitely.
        
               | tptacek wrote:
               | No, I was there at the inception of this term, and it was
               | absolutely originally imagined as a way of controlling
               | researchers and giving vendors more power over
               | information about their products. It has pissed
               | researchers off for decades, as it implies that not
               | following the "responsible" process makes one per se
               | "irresponsible".
        
               | monocasa wrote:
               | > No, I was there at the inception of this term, and it
               | was absolutely originally imagined as a way of
               | controlling researchers and giving vendors more power
               | over information about their products.
               | 
               | I mean, that half is the carrot to get vendors to play
               | ball and actually fix their shitty code occasionally. It
               | lets unaffiliated white hat security researchers who are
               | just trying to get sec issues fixed actually get some
               | focus time from the various sauron-esque eyes that are
               | corporate attention by converting a 'bug report' into an
                | 'impending PR nightmare bomb with a known timer'. It's a
               | similar hack to complaining on twitter to deal with a
               | marketing department instead of calling in to a customer
               | service line that's just trying to get you to go away.
               | 
               | > It has pissed researchers off for decades, as it
               | implies that not following the "responsible" process
               | makes one per se "irresponsible".
               | 
               | The point of calling it responsible is to defend the
               | researchers who have massively less power in the
               | relationship against the vendors. Even now you'll see
               | whitehats lambasted for eventually disclosing after a
               | vendor dragged their ass. Beyond that, yeah you can't
               | please everyone.
        
               | akerl_ wrote:
               | What's an example of a researcher being lambasted for
               | disclosing a vulnerability to the public?
        
               | monocasa wrote:
                | The example I generally use is when Peter Bright went
                | on a months-long set of tirades, while a staff writer
                | for Ars Technica, lambasting Project Zero for
                | disclosing a Microsoft vulnerability after the
                | disclosure window was up. Hard to find the source now,
                | though, since his articles have been scrubbed ever
                | since he got caught trying to diddle some children. :\
                | In the comments, most of the devops/sysadmin community
                | of Ars Technica took his side
               | that you shouldn't disclose publicly at all, missing the
               | point that others might be actively exploiting the same
               | vulnerability. You routinely see the same sentiment even
               | on HN (thankfully in the minority now), missing the point
               | that while finding exploits is hard work and generally
               | takes very smart individuals, that's not to the point of
               | being able to assume that someone else who wears a
               | slightly darker hat didn't figure out the same bug.
        
               | tptacek wrote:
               | I don't much care what Peter Bright thinks or said a
               | decade ago. Peter Bright isn't a security researcher, or
               | on a vendor security team. He's just some guy (or, was
               | some guy) on Twitter. Project Zero, meanwhile, is the
               | global gold standard for coordinated vulnerability
               | disclosure.
               | 
               | Lots of people with better reputations than Bright have
               | publicly lobbied against disclosure of any sort. Bruce
               | Schneier is a great example of this; if you're an old
               | industry head, Marcus Ranum is another name here. The
               | point isn't that everybody agrees with me about
               | "Responsible Disclosure"; it's that everybody who matters
               | does.
        
               | monocasa wrote:
               | He was one of the most important tech journalists in the
               | world when he was saying this stuff; his fame came before
               | his twitter account. He was also functionally a
               | mouthpiece for Microsoft PR, available to run hit pieces
               | on situations that were otherwise embarrassing for
               | Microsoft.
               | 
                | And Project Zero puts an emphasis on disclosure, not
                | coordination. When the ticker runs out they nearly
                | always disclose, regardless of where coordination is
                | at. That's what Peter Bright was ultimately complaining
                | about; they disclosed like a week before one of the
                | relevant Patch Tuesdays.
                | 
                | Like I've said, the primary point isn't the
                | coordination. That's the carrot to get the vendors to
                | play ball on a sane schedule, because otherwise the
                | vendor holds all the cards.
        
               | tptacek wrote:
               | I simply don't care what Peter Bright says, and neither
               | does anybody else in the field. We're now arguing for the
               | sake of arguing. It's plainly correct that the term
               | "Responsible Disclosure" is disfavored, and disfavored
               | for the reasons I say it is. I didn't come up with any of
               | these things; I just work in vulnerability research (or
               | used to) and know what the stories are.
        
               | tptacek wrote:
               | As Google proved long ago, the only thing you have to do
               | to get vendors to fix their shitty code is to set a fixed
               | timeline for disclosure; just tell the vendor "please let
               | us know when this is patched, and we're disclosing
               | regardless after 60 days".
               | 
               | The point of calling it "responsible" has nothing to do
               | with defending researchers; it's literally the opposite.
               | The norms of "responsible" disclosure were absolutely not
               | the norms of vulnerability researchers of the time. The
               | term was invented back in the early aughts by @stake,
               | then the largest software security consultancy in the
               | world (my company, Matasano, was essentially a spin-off
               | of @stake; one of my cofounders was an @stake cofounder).
               | 
               | This is one of those things, like "Zero Trust
               | Networking", where people read the term, (reasonably)
               | believe they understand what the words mean, and then
               | axiomatically derive the concept. But, no, the concept
               | has its own history and its own meaning; the words have
               | little to do with it (except that here, it's especially
               | obvious what's problematic about the words themselves).
               | 
               | The industry has largely moved away from "Responsible"
               | disclosure to "Coordinated" disclosure, for all the
               | reasons I've given here. Even CERT, the most conservative
               | organization in software security, uses the new term now.
               | 
               |  _Later edit_
               | 
               | This originally read _The norms of "responsible"
               | disclosure were absolutely not the norms of "Responsible
               | Disclosure"_, which was a typo.
        
               | monocasa wrote:
               | The use of 'responsible' wrt vulnerability disclosure is
               | thought to go back at least to 2001's essay "It's Time to
               | End Information Anarchy", and I know that the adjective
               | 'responsible' was being thrown around wrt vulnerability
               | disclosure before that. @stake did not invent the term in
               | the early aughts.
               | https://web.archive.org/web/20011109045330if_/http://www.mic...
               | 
               | Yes, CERT has moved to using CVD, but I'd argue that's
               | because of their conservatism. They don't want to rock
               | the boat and tend towards vendor-friendly, neutral
               | language. That makes sense for their niche.
               | 
               | And just throwing it out there that the older term that's
               | actively being erased, because its implications are
               | unfriendly to entrenched interests, isn't the "Orwellian"
               | one.
        
               | tptacek wrote:
               | I'm sorry, but I think you're just kind of making things
               | up here to perpetuate an unproductive argument. By
               | "conservative", I meant that CERT is broadly anti-
               | disclosure in all forms, which I think is a claim that
               | pretty much anyone working in vulnerability research
               | would recognize.
               | 
               | Here's the 2002 I-D that Weld worked on with Steve
               | Christey standardizing the term. It's notable for being
               | roughly contemporaneous with @stake firing Dan Geer over,
               | as I recall, somehow alienating Microsoft, one of
               | @stake's larger clients.
               | 
               | https://cve.mitre.org/data/board/archives/2002-02/msg00026.h...
               | 
               | Your link, for what it's worth, doesn't use the term at
               | all. But even if it had, it wouldn't change anything.
               | 
               | Just to keep this on track: your claim was that
               | "Coordinated Disclosure" --- the near-universal standard
               | term for what used to be called "Responsible Disclosure"
               | --- is an "Orwellian re-imagining". Leaving aside the
               | fact that there's nothing intrinsically "Orwellian" about
               | renaming something, the fact remains: "Responsible
               | Disclosure" was researcher-hostile and patronizing, and
               | has been chucked out the window by almost everybody who
               | matters in vulnerability research.
               | 
               | We've managed to hash out a hot topic from, like, 2012 on
               | this thread, which is great, it's progress, we move
               | forward bit by bit here on HN, just like Joel Spolsky
               | once said; "fire and motion". But we can probably be done
               | now.
        
           | danieldk wrote:
           | _possibly the origin of named and marketed vulnerabilities._
           | 
           | Heartbleed is older (2014).
        
             | FreakLegion wrote:
             | Incredible! In my mental timeline VENOM came first, maybe
             | because we had to pull an all-nighter.
        
           | horsawlarway wrote:
           | I mean... as a user on the machine, if I have admin rights
           | there's very, very little they can meaningfully do to stop me
           | from removing their software.
           | 
            | Hooking a token into their uninstaller is hardly
            | sufficient... I have _SO_ much surface area to attack that I
            | genuinely don't think you can call this anything other than
            | trivial.
            | 
            | For an admin user, I'd take this token prompt more as a "Hey
            | - you're about to violate company policy" than as any
            | literal technical restriction.
            | 
            | I can steal the network, change the registry, simply delete
            | their binaries, update shared DLLs, or use any number of
            | other easy hacks to get them offline.
           | 
           | This _is_ trivial.
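            | 
            | To make the point concrete, here's a purely illustrative
            | sketch (mine, not from the article; the product code is
            | hypothetical and differs per install) of how thin the
            | technical barrier is once you have admin:
            | 
            |     import subprocess
            | 
            |     # Hypothetical MSI product code -- the real GUID can be
            |     # read from the Uninstall keys in the registry.
            |     PRODUCT = "{AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE}"
            | 
            |     # msiexec /x uninstalls an MSI package; /qn suppresses
            |     # the UI. The maintenance-token prompt is the one thing
            |     # standing between this call and a clean removal.
            |     subprocess.run(["msiexec", "/x", PRODUCT, "/qn"],
            |                    check=True)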
        
             | FreakLegion wrote:
             | CrowdStrike has kernel code meant to stop everything you
             | mention minus the network part (and being offline should
             | have minimal impact on the security of the system), but in
             | practice I'm sure you're right. It's easy enough to get
             | attacks past them that frankly I haven't bothered trying to
             | disable the agent.
        
         | sagonar wrote:
          | To me it looks like a very clear case of someone writing or
          | speaking words which they knew (or the company clearly should
          | have known) not to be true.
          | 
          | One could perhaps call this a "communication problem", but I'd
          | like to think most people would call it lying.
        
         | jessaustin wrote:
          | _...it's much more likely..._
          | 
          | This is very speculative. There's no reason to bend over
          | backwards to imagine a way in which "ClownStrike" (your boss
          | gets it!) _didn't_ flag a specific PoC without fixing the
          | underlying issue. If CS insists on such opacity, the best
          | assumption is actually the opposite.
        
       | thisdrunkdane wrote:
       | Anybody know their problems with the terms at HackerOne? I admit
        | I don't know HackerOne's terms exactly and I'm not that good at
       | reading legalese.
        
         | red_trumpet wrote:
         | A bit speculative, but the word "NDA" appears four times in
         | their post.
        
           | marcinzm wrote:
            | I believe HackerOne has some restrictions on disclosure: as
            | with an NDA, approval is needed.
        
           | thisdrunkdane wrote:
            | Yeah, I noticed that, but what specifically do they not like
            | about the NDA? afaik, HackerOne still makes vulnerability
            | disclosure possible (and automatic if it takes too long?)
        
             | rroot wrote:
             | I guess the specifics are the letters N, D and A and what
             | they stand for. And the fact that there's absolutely
             | nothing in it for them.
             | 
             | Would you even consider signing an NDA if I sent you one? I
             | surely hope not.
        
               | [deleted]
        
               | ricardobeat wrote:
               | That's not an answer to the parent's question. HackerOne
               | has a disclosure process despite the NDA, so what part of
               | the process is the issue? Or is it simply "hackerone
               | bad"?
        
               | ImPostingOnHN wrote:
               | if they have a disclosure process, what purpose is served
               | by the non-disclosure agreement?
               | 
               | what with non-disclosure being literally the opposite of
               | disclosure & everything
        
               | tptacek wrote:
               | They pay you money, you disclose exclusively on their
               | terms. That's the deal, and the purpose of the NDA. If
               | you don't like the NDA terms, you don't engage with the
               | bounty program, and you just publish on your own. There's
               | no reasonable way to make a whole big thing out of this.
        
               | thaeli wrote:
               | The only reason it's a "thing" is that the reporters in
               | this case were attempting to do a responsible,
               | coordinated disclosure. That's important for their own
               | brand - many clients would be reluctant to hire a
               | security consultant who just dropped 0days without a damn
               | good reason. So this is documentation and justification
               | for why they did a unilateral disclosure - the
                | expectation is that you "show your work" and make clear
                | that you tried to work with the vendor, that they
                | wouldn't work with you in good faith, and that you
                | therefore had no choice but to unilaterally disclose.
        
               | tptacek wrote:
               | Nobody is going to avoid hiring a security consultancy
               | that posts bugs with a coordinated timeline that notes
               | they didn't engage with the bounty program.
        
               | ImPostingOnHN wrote:
               | right, if you don't like the terms of the NDA, don't
               | agree to it
               | 
               | that is precisely the choice made by the team in the
               | article, because the NDA was bad, for the reasons you
               | described
               | 
               | so it sounds like everyone is OK with this, the authoring
               | team is just describing that issue, along with other
               | issues, with the bug disclosure process (like lying about
               | there being no vulnerability while simultaneously fixing
               | it)
        
               | jknoepfler wrote:
               | They have no interest in signing away their right to
               | disclose the vulnerability at the behest of a private,
               | for-profit entity, because they believe public disclosure
               | of security vulnerabilities is crucial to improving
               | security.
        
             | rodgerd wrote:
              | There have been claims that companies are abusing the
              | HackerOne NDA process to cover up security issues: refuse
              | to acknowledge a problem, but wield the NDA to prevent
              | disclosure of the supposedly non-existent issue.
        
             | vorpalhex wrote:
              | NDAs are expensive to review, usually overbroad, and
              | always badly written.
             | 
             | Just say no to NDAs.
        
         | [deleted]
        
       | arminiusreturns wrote:
       | The same Crowdstrike that was a key player in Russiagate? Colour
       | me shocked that they do things dumbly. Anyone still using them
       | after that fiasco and its impact on the US should be ashamed.
       | 
       | https://thegrayzone.com/2021/10/30/crowdstrike-one-of-russia...
       | 
       | https://thegrayzone.com/2020/05/11/bombshell-crowdstrike-adm...
        
         | pnemonic wrote:
         | Your sources are from thegrayzone? You should learn to consider
         | your sources...
        
           | arminiusreturns wrote:
           | The GrayZone has been impeccable in their reporting, with any
           | errors quickly being admitted and disclosed. Of course many
           | people disagree with them, and love to try character
           | assassination and other ad hominems, but I've found them
            | to be informative and to have integrity. Maybe you should
            | elaborate on your reasoning.
           | 
           | To elaborate further: "Leaked emails reveal British
           | journalist Paul Mason plotting with an intel contractor to
           | destroy The Grayzone through "relentless deplatforming" and a
           | "full nuclear legal" attack. The scheme is part of a wider
           | planned assault on the UK left."
           | 
            | https://thegrayzone.com/2022/06/07/paul-masons-covert-intell...
        
             | philipwhiuk wrote:
              | Paul Mason isn't a journalist. He _was_ a journalist, then
              | he left and swerved hard left into Momentum. He's
              | previously been a member of groups best described as
              | Marxist or at least radical left.
        
               | pessimizer wrote:
               | He's a spy.
               | 
                | edit: cleared by MI5 just like every other BBC
                | journalist in Britain, but more so, seeing as he was the
                | economics editor for BBC Newsnight, then for Channel 4
                | News.
                | 
                | https://www.cambridgeclarion.org/press_cuttings/mi5.bbc.staf...
                | 
                | https://www.cambridgeclarion.org/press_cuttings/mi5.bbc.page...
               | 
               | https://www.bbc.com/news/stories-43754737
               | 
               | edit: to be clear, when I say he's a spy, I mean that he
               | is being paid by British intelligence to report on the
               | operations of left wing organizations, sabotage them,
               | push them into doing extreme and unpopular things which
               | also hopefully constitute grounds for arresting targeted
               | individuals, and to say obnoxious things that piss
               | normies off as a representative of "the left" in
               | mainstream media.
               | 
               | edit: allegedly.
        
             | prvit wrote:
              | The Grayzone, home to impeccable reporting like
              | https://thegrayzone.com/2022/03/18/bombing-mariupol-theater-...
        
           | denton-scratch wrote:
            | Do you know who thegrayzone's founders and editors are?
            | Hint: they're not crazies, they're pro journos with classy
            | reputations.
        
             | jessaustin wrote:
             | Telling the wrong truth is considered "crazy" these days.
        
       | throwaway034271 wrote:
        
       | anothernewdude wrote:
       | Just sell it and move on with your lives.
        
       ___________________________________________________________________
       (page generated 2022-08-22 23:01 UTC)