[HN Gopher] U.S. Government Disclosed 39 Zero-Day Vulnerabilities in 2023, First-Ever Report
___________________________________________________________________
U.S. Government Disclosed 39 Zero-Day Vulnerabilities in 2023,
First-Ever Report
Author : jc_811
Score : 181 points
Date : 2025-02-06 14:35 UTC (8 hours ago)
(HTM) web link (www.zetter-zeroday.com)
(TXT) w3m dump (www.zetter-zeroday.com)
| HypnoDrone wrote:
| So there were 39 vulnerabilities that affected government systems.
| The rest didn't, so there was no need to disclose them.
| bangaladore wrote:
| Similar, but my thought is that they found out some other
| gov(s) know about them as well, and that they hurt others more
| than they hurt the US gov.
| maerF0x0 wrote:
| I'd say that's a very cynical take; there's a verification
| process and they disclosed 90% of them, a pretty generous gift
| to the world if you ask me. They do not have a moral mandate to
| use their resources to benefit all.
| afavour wrote:
| > What changed the calculus in 2023 isn't clear.
|
| Well, the calculus didn't change in 2023 if the report was only
| released a month or so ago. And in fact, in May 2024:
|
| DHS, CISA Announce Membership Changes to the Cyber Safety Review
| Board https://www.dhs.gov/archive/news/2024/05/06/dhs-cisa-
| announc...
|
| So some new people came in and decided that more public
| information was better.
|
| > On January 21, 2025, it was reported that the Trump
| administration fired all members of the CSRB.
|
| Ah, well, never mind then
| neuronexmachina wrote:
| Yep. From the article:
|
| > This lack of transparency could become a greater issue under
| the Trump administration, which has vowed to ramp up the
| government's cyber offensive operations, suggesting that the
| government demand for zero-day vulnerabilities may increase
| over the next four years. If this occurs, the government's
| previous statements that the VEP favors disclosure and defense
| over withholding and offense may no longer be true. ...
|
| > "The VEP and that number of 90 percent was one of the few
| places where the president and the White House could set the
| dial on how much they liked defense vs offense," says Jason
| Healey, senior research scholar at Columbia University's School
| of International and Public Affairs and former senior
| cybersecurity strategist for CISA. "[The Trump administration]
| could say we're disclosing too [many vulnerabilities]. If the
| default [in the past] was to disclose unless there is a reason
| to keep, I could easily imagine the default is going to be to
| keep unless there is a reason to disclose."
| dylan604 wrote:
| Were these the same people that declared the 2020 election the
| safest ever? I can't imagine why that would grate on Trump.
| nimbius wrote:
| I hope this signals a turning point and lessons learned from the
| historic practice of hoarding exploits in the hopes they can be
| weaponized.
|
| When you disclose vulnerabilities and exploits, you effectively
| take cannons off both sides of the metaphorical battlefield. It
| actively makes society safer.
| toomuchtodo wrote:
| Governments who want power will hoard the knowledge, which is
| power. Other governments will share. This is a perpetual tension:
| we collectively receive utility when good policy is active
| (rapid dissemination of vuln info), but we need third parties
| to seek these exploits out when government cannot be relied on.
| Very similar to the concept of journalism being the Fourth
| Estate imho.
|
| (vuln mgmt in finance is a component of my day gig)
| tptacek wrote:
| This is a little bit like talking about why they hoard the
| guns. The reason governments have caches of exploit chains is
| not hard to understand.
| toomuchtodo wrote:
| You're the expert, and certainly not wrong; I wrote my
| comment because I have had to explain this to folks in a
| professional capacity and thought it might be helpful.
| tptacek wrote:
| It's just a rhetoric thing. We're talking about the USG
| "hoarding" stuff, but, in a sense, the government has a
| conceptual monopoly on this kind of coercive capability.
| Anybody can find vulnerabilities and write exploits, but
| using them in anger and without mutual consent is an
| authority exclusively granted to law enforcement and
| intelligence agencies. This is just an extension of the
| "monopoly on violence".
|
| The longstanding objection to this is that secretly
| holding a gun doesn't intrinsically make everyone less
| safe, and there's a sense in which not disclosing a
| vulnerability does. That argument made more sense back in
| and before 2010; it doesn't make much sense now.
| DoctorOetker wrote:
| It is substantially different from hoarding guns: disclosing
| exploits rather than hoarding them takes those exploits away
| from adversaries too.
|
| If an important factor is the ratio of exploits A and B hold,
| then publishing the exploits they secretly have in common
| changes that ratio.
|
| The ratio is interesting because the potential exploitation
| rate is proportional to the number of zero-days held (once
| used, a "zero day" is revealed and remediated after a certain
| time span).
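|
| A toy calculation (all numbers invented) makes the point:
| suppose A holds 100 exploits, B holds 40, and 30 are known to
| both. Disclosing the shared 30 moves the ratio from 2.5:1 to
| 7:1 in A's favor.
|
|     # Hypothetical stockpile sizes, purely illustrative.
|     a_total, b_total, common = 100, 40, 30
|
|     before = a_total / b_total                        # 2.5
|     after = (a_total - common) / (b_total - common)   # 7.0
|
|     print(f"ratio before disclosing overlap: {before:.1f}")
|     print(f"ratio after disclosing overlap:  {after:.1f}")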
| thomastjeffery wrote:
| You can't hoard knowledge, just like you can't take it away.
| Xen9 wrote:
| Only predictive capabilities & AI trained on data can be
| more valuable than having a perfect profile of the person
| who turns out to be an enemy of your nation in whatever
| sense. Taking a person down once you know everyone they have
| ever talked to, been interested in, or thought about is
| trivial. This can only be achieved by mass surveillance &
| hoarding.
|
| Arguably the US as a nation state has the very best LLMs in
| the world, which is why I personally think they have been
| running weak AGI for a few years, e.g. for autonomous malware
| analysis, reverse engineering, and tailored malware
| generation & testing capability. Because they can actually
| store the personal data long term, without having to delete
| it, this may be a gigantic strategic advantage, the web being
| highly "polluted" after 2020-2021.
|
| From this I would guess the US has bet on AI research since
| the end of WWII, and especially within the last 30 years,
| noting the rather remarkable possibility that Surveillance
| Capitalism is actually part of the nation's security
| efforts. The warehouses of data they have built are
| warehouses of gold, or rather, gold mixed in sand, since of
| course lots of it is also garbage.
| axegon_ wrote:
| I doubt it. Historically, most government agencies around the
| world have had appalling security and each iteration is just as
| bad as the previous with a few half-assed patches on top to
| cover the known holes.
| lenerdenator wrote:
| I'd be surprised if the policy continues.
|
| Or if the people who worked at the agency are still there.
| burkaman wrote:
| They are not: https://techcrunch.com/2025/01/22/trump-
| administration-fires...
|
| Also, not a joke, this program contains the word "equity"
| ("the Director of National Intelligence is required to
| annually report data related to the Vulnerabilities Equities
| Process") so it will probably be frozen or cancelled.
| spacephysics wrote:
| Most likely these vulnerabilities were known by adversaries and
| they decided to report these to make it more difficult for
| those adversaries to attack.
|
| I'm sure the really juicy zero days they've discovered in-house
| are kept out of reports like these
| thewebguyd wrote:
| This is the most likely scenario. It's not like the government
| has decided it no longer needs to hang on to zero-days to use
| against adversaries.
|
| They've just determined these ones are either no longer
| useful to them, or adversaries have discovered and begun
| using them.
| tptacek wrote:
| It literally is the scenario, as really the only outcome of
| the VEP (for serious, "marketable" vulnerabilities) is
| "disclose once burned".
| kevin_thibedeau wrote:
| The US depends on exploits being available for the companies
| it uses to circumvent the 4th amendment.
| downrightmike wrote:
| Probably not; in Trump's first term he was all for allowing
| ransomware, and the only reason we started seeing a mitigation
| strategy was Biden. Since Trump is all in on crypto, and Russia
| is the main beneficiary of ransomware, I fully expect
| cybercrime to ramp up as the current admin is positioned to
| benefit directly.
| meowface wrote:
| I might be a contrarian, but I think it makes sense for the NSA
| to hoard 0-days. They should disclose only after they burn
| them.
| thomastjeffery wrote:
| You can't actually hoard them, though. They aren't objects,
| they are knowledge.
|
| A 0-day is present in every instance of the software it can
| exploit.
| bluefirebrand wrote:
| This is a meaningless distinction imo
|
| You hoard knowledge by writing it down somewhere and then
| hoarding the places it's written down. Whether that's
| books, microfilm, hard drives, what have you
| thomastjeffery wrote:
| You can't stop someone else from writing it down.
|
| When you hoard something, your possession of that thing
| effectively takes access to that thing away from everyone
| else.
|
| You can't keep access to a vulnerability away from
| anyone!
| rozab wrote:
| Great, fantastic, awesome plan.
|
| https://en.wikipedia.org/wiki/The_Shadow_Brokers
| honzaik wrote:
| OK, hoarding discovered zero-days might not be the best
| strategy, BUT if we actually create a backdoor and don't
| tell anyone about it, then this should be safer right?
| right? /s
|
| https://www.wired.com/2015/12/researchers-solve-the-
| juniper-...
|
| https://en.wikipedia.org/wiki/Dual_EC_DRBG
|
| https://en.wikipedia.org/wiki/Juniper_Networks#ScreenOS_Bac
| k...
| tptacek wrote:
| Yeah, because intelligence is famously a discipline where
| nothing ever goes wrong.
| meowface wrote:
| That's definitely the downside in the trade-off, yeah. If
| you're going to hoard, you'd better also protect, or you just
| get the worst of all worlds. Still, I am generally hopeful
| about our intelligence agencies' ability to prevent leaks,
| even if fuckups have occurred.
| tptacek wrote:
| That's not the real downside. If it were, you'd be seeing
| mini-"shadow brokers" leaks every month, because the
| practice we're talking about here is extremely
| widespread: the economics of being a zero-day broker
| hinge on being able to sell the same exploit chain many,
| many times to the same country, and on getting recurring
| revenue from each such sale.
|
| The real downsides here are probably economic and have to
| do with how this shifts incentives for everybody in the
| industry. But, at the same time, every big tech company
| with a desktop/mobile footprint has invested mightily in
| staff to counter LE/IC/foreign CNE, which is something
| that might not have happened otherwise, so it's all
| complicated.
|
| People write as if the disclosure of a bunch of IC zero
| days is like some kind of movie-plot "Broken Arrow"
| situation, but it's really mostly news for message
| boards. Organizations that need to be resilient against
| CNE are already in a state of hypervigilance about zero-
| days; adversaries absolutely have them, no matter what
| "NSA" does.
| burkaman wrote:
| Any 0-day found by an NSA employee can and will be found by
| someone else, and then sold or used.
| tptacek wrote:
| The VEP is literally based on that premise.
| burkaman wrote:
| Right, I think we agree?
| tptacek wrote:
| Well, I'm just saying: the preceding commenter believed
| themselves to be a contrarian for thinking it was OK for
| NSA to have these vulns, and it looked like you were
| rebutting them. But your rebuttal, if that's what it is,
| restates a premise NSA shares.
| burkaman wrote:
| The meaning I took from that comment is that the NSA
| should always keep any 0-day it finds indefinitely until
| it uses it. That's what I think "hoard" and "disclose
| only after they burn" mean. It's a mindset of keeping
| everything you find secret because you might want to use
| it someday.
|
| My understanding of VEP is that the default is supposed
| to be to disclose immediately unless an agency has a
| really good reason not to, presumably because they want
| to use it soon. Don't hoard, only keep things that you're
| actually going to use.
| tptacek wrote:
| For clarity: the word "hoard" to me signals the idea that
| NSA keeps dozens of duplicative exploit chains around.
| The public policy rationale for them doing that is to me
| clear, but my understanding is that the practical
| incentives for them to do that aren't clear at all.
|
| When I say "burn", I mean that something has happened to
| increase the likelihood that an exploit chain is
| detectable. That could be them being done with it, it
| could be independent discovery, it could be changes in
| runtime protections. It's not "we use it a couple times
| and then deliberately burn it", though.
|
| We should stop talking about "NSA", because this is
| basically a universal LE/IC practice (throughout Europe
| as well).
| tptacek wrote:
| You're only a contrarian on message boards. The economics of
| CNE SIGINT are so clear --- you'd be paying integer multiples
| just in health benefits for the extra staff you'd need if you
| replaced it --- that vulnerabilities could get 10x, maybe
| 100x more expensive and the only thing that would change
| would be how lucrative it was to be a competitive vuln
| developer.
|
| A lot of things can be understood better through the lens of
| "what reduces truck rolls".
| timewizard wrote:
| The NSAs charter should be to secure this country and not
| attack others.
| tptacek wrote:
| That is literally the opposite of why NSA exists.
| dadrian wrote:
| That organization exists, and it is called the FBI.
| tptacek wrote:
| I mean, NSA has a directorate for it too, but it's not
| the agency's chartered purpose.
| edm0nd wrote:
| Ain't no way.
|
| All major governments hoard 0-days or buy them to use for
| espionage. I don't see this as some kind of "turning point";
| it's more of a feel-good, easy PR win for the US gov while
| they are still using many 0-days to spy.
| thewebguyd wrote:
| Yeah, this is more like "these vulnerabilities are no longer
| useful to us" or "adversaries have discovered these and begun
| using them, so here you go."
| tptacek wrote:
| It is not that turning point. These are VEP vulnerabilities.
| Like every major government, the US will continue to do online
| SIGINT.
| timewizard wrote:
| The lesson is not in vulnerability management.
|
| The lesson is that our desktop software is garbage and the
| vendors are not properly held to account.
| JumpCrisscross wrote:
| > _When you disclose vulnerabilities and exploits, you
| effectively take cannons off both sides of the metaphorical
| battlefield. It actively makes society safer_
|
| If I know you always disclose, and I find something you haven't
| disclosed, I know I have an edge. That incentivises using it
| because I know you can't retaliate in kind.
|
| The hoarding of vulns is a stability-instability paradox.
| Retr0id wrote:
| How would you ever know that someone _always_ discloses, if
| you can 't know what they don't disclose?
| staticelf wrote:
| I think people give the US a lot of unnecessary shit. I don't
| think my government releases any zero-days, but I am sure they
| must have found some. Every government today probably uses
| zero-days, yet very few seem to release information about them.
| dylan604 wrote:
| It's not about being held to a lower standard, it's about being
| held to a higher standard.
| numbsafari wrote:
| NOBUS is a disaster. Knowingly leaving citizens unprotected is
| an absolute failure of government. Having a robust policy of
| identifying and resolving cybersecurity faults, and holding
| organizations accountable for patching and remediation, is
| necessary if we are going to survive a real cyber "war". We are
| absolutely unprepared.
| sneak wrote:
| This presupposes that the purpose of government is to protect
| citizens.
|
| The purpose of government is to take and maintain power and
| prevent any other organization from displacing them. It
| involves citizens only as a means to an end.
|
| It would be a failure of government to place citizen safety
| over continuity of government.
| ambicapter wrote:
| It can be both at the same time. A government won't be great
| at protecting its citizens if it has no power over bad
| actors, both inside and outside.
| acdha wrote:
| Your first sentence seems like a distraction because the same
| criticism of NOBUS holds either way. Even if the leaders do
| not care about the citizens except as a means to an end, an
| oppressive government especially needs to maintain security
| because it needs to project force to deter threats to its
| power. If they have a known vulnerability, they should be
| even more worried that internal dissidents or an external foe
| will find it because they are even more dependent on power in
| the absence of democratic legitimacy.
| pythonguython wrote:
| This is a classic security dilemma that is not easily
| resolvable. Suppose we just look at the US and China. Each side
| will discover some number of vulnerabilities. Some of those
| vulnerabilities will be discovered by both countries, some just
| by one party. If the US discloses every vulnerability, we're
| left with no offensive capability and our adversary will have
| all of the vulnerabilities not mutually discovered. Everyone
| disclosing and patching vulnerabilities sounds nice, but is an
| unrealistic scenario in a world with states that have competing
| strategic interests.
| tptacek wrote:
| The US VEP, which is like 10 years old now, is explicit about
| the fact that zero-day exploits are not fundamentally "NOBUS".
| That term describes things like Dual EC, or hardware embedded
| vulnerabilities; things that are actually Hard for an adversary
| to take advantage of. Chrome chains don't count.
| pentel-0_5 wrote:
| These are just the _disclosed_ ones. The _weaponized_ ones (as
| mentioned), found or bought and kept secret by the NSA etc.,
| such as from Zerodium (ex-VUPEN) and similar, obviously aren't
| counted. ;)
| tptacek wrote:
| It's a "tell" in these discussions when they center exclusively
| on NSA, since there are dozens of agencies in the USG that
| traffick in exploit chains.
| ggernov wrote:
| These are wins because if they're actually patched it takes
| offensive tools away from our adversaries.
| davemp wrote:
| While I don't think we should be hoarding vulns, the idea of the
| government having huge budgets to find and disclose software
| defects is a bit strange to me. Seems like another instance of
| socializing bad externalities.
| mattmaroon wrote:
| "What the government didn't reveal is how many zero days it
| discovered in 2023 that it kept to exploit rather than disclose.
| Whatever that number, it likely will increase under the Trump
| administration, which has vowed to ramp up government hacking
| operations."
|
| This is a bit of a prisoner's dilemma. The world would be better
| off if everyone disclosed every such exploit for obvious reasons.
| But if government A discloses everything and government B
| reserves them to exploit later, then government B has a strong
| advantage over government A.
|
| The only responses then are war, diplomacy, or we do it too and
| create yet another mutually assured destruction scenario.
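|
| A toy payoff matrix (utilities invented for illustration) shows
| why: whatever B does, A scores higher by hoarding, and
| symmetrically for B, so mutual hoarding is the equilibrium even
| though mutual disclosure is better for both.
|
|     # Hypothetical payoffs (A, B); higher is better.
|     payoffs = {
|         ("disclose", "disclose"): (3, 3),  # everyone patches
|         ("disclose", "hoard"):    (0, 5),  # B keeps an edge
|         ("hoard",    "disclose"): (5, 0),  # A keeps an edge
|         ("hoard",    "hoard"):    (1, 1),  # mutual insecurity
|     }
|     for b in ("disclose", "hoard"):
|         # A's payoff if A discloses vs. if A hoards, given B's move
|         print(b, payoffs[("disclose", b)][0], payoffs[("hoard", b)][0])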
|
| War is not going to happen because the cure would be worse than
| the disease. The major players are all nuclear powers. Diplomacy
| would be ideal if there were sufficient trust and buy-in, but it
| seems unlikely the U.S. and Russia could get there. And with
| nuclear treaties there's an easy verification method, since
| nuclear weapons are big and hard to build on the sly. It'd be
| hard to come up with a sufficient verification regime here.
|
| So we're left with mutually assured cyber destruction. I'd prefer
| we weren't, but I don't see the alternative.
| dadrian wrote:
| If Government A and Government B are not equally "good" for the
| world, then the world is _not_ better off if everyone
| disclosed, since the main users of CNE are LE/IC.
| mattmaroon wrote:
| I'm not sure what some of these initialisms are but the whole
| idea behind disclosing is to take tools away from the bad
| guys (whoever you think they are) because presumably they'll
| have found some of them too.
| tptacek wrote:
| CNE: the modern term of art for deploying exploits to
| accomplish real-world objectives ("computer network
| exploitation").
|
| LE: law enforcement, a major buyer of CNE tooling.
|
| IC: the intelligence community, the buyer of CNE tooling
| everyone thinks about first.
| josefritzishere wrote:
| I guess there won't be one for 2024.
| skirge wrote:
| Burning 0-days makes your enemies spend more time finding new
| ones; costs rise until they go bankrupt. Cold War 2.0. It's not
| enough to just run grep or a memcpy finder over software like
| 15-20 years ago.
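|
| A sketch of the kind of naive scanner meant here (illustrative
| only; it just flags raw memcpy calls in C sources for a human
| to review, which real vulnerability tooling long ago outgrew):
|
|     import re, sys, pathlib
|
|     pattern = re.compile(r"\bmemcpy\s*\(")
|     for path in pathlib.Path(sys.argv[1]).rglob("*.c"):
|         text = path.read_text(errors="ignore")
|         for n, line in enumerate(text.splitlines(), 1):
|             if pattern.search(line):
|                 print(f"{path}:{n}: {line.strip()}")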
| ikmckenz wrote:
| There is no such thing as a "Nobody But Us" vulnerability.
| Leaving holes in systems and praying enemies won't discover
| them, in the hope of exploiting them ourselves, is extremely
| foolish.
| tptacek wrote:
| CNE "zero-day" isn't "NOBUS", so you're arguing with a straw
| man.
| joshfraser wrote:
| I've seen the invite-only marketplaces where these exploits are
| sold. You can buy an exploit to compromise any piece of software
| or hardware that you can imagine. Many of them go for millions of
| dollars.
|
| There are known exploits to get root access to every phone or
| laptop in the world. But researchers won't disclose these to the
| manufacturers when they can make millions of dollars selling them
| to governments. Governments won't disclose them because they want
| to use them to spy on their citizens and foreign adversaries.
|
| The manufacturers prefer to fix these bugs, but aren't usually
| willing to pay as much as the nation states that are bidding;
| all their bounties do is drive up the price. Worse, intelligence
| agencies like the NSA often pressure or incentivize major tech
| companies to keep zero-days unpatched for exploitation.
|
| It's a really hard problem. There are a bunch of perverse
| incentives that are putting us all at risk.
| timewizard wrote:
| > It's a really hard problem.
|
| Classify them as weapons of mass destruction. That's what they
| are. That's how they should be managed in a legal framework and
| how you completely remove any incentives around their sale and
| use.
| joshfraser wrote:
| Yes. Except our government is the largest buyer.
| Symbiote wrote:
| The USA has 5,044 nuclear warheads, so that shouldn't be a
| problem.
| kingaillas wrote:
| How about some penalties for their creation? If NSA is
| discovering or buying, someone else is creating them (even if
| unintentionally).
|
| Otherwise corporations will be incentivized (even more than
| they are now) to pay minimal lip service to security - why
| bother investing beyond a token amount, enough to make PR
| claims when security inevitably fails - if there is
| effectively no penalty and secure programming eats into
| profits? Just shove all risk onto the legal system and
| government for investigation and clean up.
| tptacek wrote:
| That is never, ever going to happen, and they are nothing at
| all like NBC weapons.
| JumpCrisscross wrote:
| > _weapons of mass destruction. That's what they are_
|
| Seriously _HN_? Your Netflix password being compromised is
| equivalent to thermonuclear war?
| aczerepinski wrote:
| Think more along the lines of exploits that allow turning
| off a power grid, spinning a centrifuge too fast, or
| releasing a dam.
| Henchman21 wrote:
| Suddenly I felt like re-reading Ken Thompson's essay
| _Reflections on Trusting Trust_.
|
| We've created such a house of cards. I hope when it all comes
| crashing down that the species survives.
| davisr wrote:
| Instead of hoping, you can do a lot just by ditching your
| cell phone and using Debian stable.
| Henchman21 wrote:
| Ah yes, switching from an iPhone to Debian is sure to...
| _checks notes_ save the species from extinction.
|
| Apologies for the dismissive snark; perhaps you could
| provide me some examples of how this would help?
| tptacek wrote:
| The markets here are complicated and the terms on "million
| dollar" vulnerabilities are complicated and a lot of intuitive
| things, like the incentives for actors to "hoard"
| vulnerabilities, are complicated.
|
| We got Mark Dowd to record an episode with us to talk through a
| lot of this stuff (he had given a talk whose slides you can
| find floating around, long before) and I'd recommend it for
| people who are interested in how grey-market exploit chain
| acquisition actually works.
|
| https://securitycryptographywhatever.com/2024/06/24/mdowd/
| JumpCrisscross wrote:
| > _It's a really hard problem_
|
| Hard problems are usually collective-action problems. This
| isn't one. It's a tragedy of the commons [1], the commons being
| our digital security.
|
| The simplest solution is a public body that buys and releases
| exploits. For a variety of reasons, this is a bad idea.
|
| The less-simple but, in my opinion, better model is an
| insurance model. Think: FDIC. Large device and software makers
| have to buy a policy, whose rate is based on number of devices
| or users in America multiplied by a fixed risk premium. The
| body is tasked with (a) paying out damages to cybersecurity
| victims, up to a cap and (b) buying exploits in a cost-sharing
| model, where the company for whom the exploit is being bought
| pays a flat co-pay and the fund pays the rest. Importantly, the
| companies don't decide which exploits get bought--the fund
| does.
|
| Throw in a border-adjustment tax for foreign devices and
| software and call it a tariff for MAGA points.
|
| [1] https://en.wikipedia.org/wiki/Tragedy_of_the_commons
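|
| A back-of-envelope sketch of the proposed mechanics (every rate
| and number here is invented for illustration):
|
|     RISK_PREMIUM_PER_USER = 0.25  # flat annual premium, $/US user
|     EXPLOIT_COPAY = 100_000       # flat co-pay per exploit bought
|
|     def annual_premium(us_users: int) -> float:
|         """Mandatory policy cost for a device/software maker."""
|         return us_users * RISK_PREMIUM_PER_USER
|
|     def purchase_split(exploit_price: float) -> tuple[float, float]:
|         """(vendor co-pay, fund share) when the fund buys a bug."""
|         vendor = min(EXPLOIT_COPAY, exploit_price)
|         return vendor, exploit_price - vendor
|
|     print(annual_premium(150_000_000))  # e.g. a large phone vendor
|     print(purchase_split(2_000_000))    # e.g. a $2M exploit chain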
| fluoridation wrote:
| A tragedy of the commons occurs when multiple independent
| agents exploit a freely available but finite resource until
| it's completely depleted. Security isn't a resource that's
| consumed when a given action is performed, and you can never
| run out of security.
| JumpCrisscross wrote:
| > _Security isn't a resource that's consumed when a given
| action is performed, and you can never run out of security_
|
| Security is in general non-excludable (vendors typically
| patch for everyone, not just the discoverer) and non-rival
| (me using a patch doesn't prevent you from using the
| patch): that makes it a public good [1]. Whether it can be
| depleted is irrelevant. (One can "run out" of security
| inasmuch as a stack becomes practically useless.)
|
| [1] http://www.econport.org/content/handbook/commonpool/cpr
| table...
| fluoridation wrote:
| >Security is [...] a public good
|
| Yeah, sure. But that doesn't make it a resource. It's an
| abstract idea that we can have more or less of, not a raw
| physical quantity that we can utilize directly, like space
| or fuel. And yes, it is relevant that it can't be
| depleted, because that's what the term "tragedy of the
| commons" refers to.
| jrussino wrote:
| > it is relevant that it can't be depleted, because
| that's what the term "tragedy of the commons" refers to
|
| I think you're using an overly-narrow definition of
| "tragedy of the commons" here. Often there are gray areas
| that don't qualify as fully depleting a resource but
| rather incrementally degrading its quality, and we still
| treat these as tragedy of the commons problems.
|
| For example, we regulate dumping certain pollutants into
| our water supply; water pollution is a classic "tragedy
| of the commons" problem, and in theory you could frame it
| as a black-and-white problem of "eventually we'll run out
| of drinkable water", but in practice there's a spectrum
| of contamination levels and some decision to be made
| about how much contamination we're willing to put up
| with.
|
| It seems to me that framing "polluting the security
| environment" as a similar tragedy of the commons problem
| holds here, in the sense that any individual actor may
| stand to gain a lot from e.g. creating and/or hoarding
| exploits, but in doing so they incrementally degrade the
| quality of the over-all security ecosystem (in a way
| that, in isolation, is a net benefit to them), but
| everyone acting this way pushes the entire ecosystem
| toward some threshold at which that degradation becomes
| intolerable to all involved.
| impossiblefork wrote:
| I think what is actually the problem is the software and
| hardware manufacturers.
|
| Secure use of any device requires a correct specification.
| These should be available to device buyers and there should
| be legal requirements for them to be correct and complete.
|
| Furthermore, such specifications should be required for
| software as well: precisely what it does, with legal guarantees
| that it's correct.
|
| This has never been more feasible. Also, considering that we
| Europeans are basically at war with the Russians, it seems
| reasonable to secure our devices.
| Always42 wrote:
| Please no more mandated insurance programs.
| westoque wrote:
| Reminds me of the Anthropic Claude jailbreak challenge, which
| only pays around $10,000. If you drive the price up, I'm pretty
| sure you'll get some takers. Incentives are not aligned.
| Melatonic wrote:
| Makes me wonder if there are engineers on the inside of some of
| these manufacturers intentionally hiding 0-days so that they
| can then go and sell them (or engineers placed there by
| companies that design 0-days).
| tptacek wrote:
| People have been worrying about this for 15 years now, but
| there's not much evidence of it actually happening.
|
| One possible reason: knowing about a vulnerability is a
| relatively small amount of the work in providing customers
| with a working exploit chain, and an even smaller amount of
| the economically valuable labor. When you read about the
| prices "vulnerabilities" get on the grey market, you're
| really seeing an all-in price that includes value generated
| over time. Being an insider with source code access might get
| you a (diminishing, in 2025) edge on initial vulnerability
| discovery, but it's not helping you that much on actually
| building a reliable exploit, and it doesn't help you at all
| in maintaining that exploit.
| maerF0x0 wrote:
| The US often gets negative takes for doing what many other
| nations are also doing.
|
| For example, in 2018 Tencent (basically, China) withdrew from
| hacking competitions like Pwn2Own, taking with them the
| disclosures that came with participating.
| tptacek wrote:
| Essentially every other industrialized nation.
| fluoridation wrote:
| Whataboutism. What China does doesn't invalidate any criticism
| the US gets. Or are you saying that actually it's perfectly
| fine to do this?
| tptacek wrote:
| That would be true if it was just China, but when it's so
| many countries that it's essentially an international norm,
| the "whataboutism" charge loses some of its sting.
| fluoridation wrote:
| "But other countries do it too" is whataboutism, no matter
| how many other countries that is; it doesn't invalidate the
| criticism no matter the number. So I ask again: is the real
| counterargument that actually this is perfectly fine to do?
| tptacek wrote:
| Yes.
| daedrdev wrote:
| China hiding the exploits it finds has a large impact on US
| policy. Should the US reveal every zero-day it knows, then in a
| theoretical conflict China would have zero-days the US didn't
| know about, but the US would have none.
| fluoridation wrote:
| Then the counterargument to "the US shouldn't be hiding
| exploits it knows" isn't "but China does it too", it's
| "actually the US should be doing exactly that because it's
| in its best interest".
| Veserv wrote:
| Disclosing zero-days so the vendor can patch them and declare
| "mission accomplished" is such a waste.
|
| "Penetrate and Patch" is about as effective for software security
| as it is for bulletproof vests. If you randomly select 10
| bulletproof vests for testing, shoot each 10 times and get 10
| holes each, you do not patch those holes and call it good. What
| you learned from your verification process is that the process
| that led to that bulletproof vest is incapable of consistently
| delivering products that meet the requirements. Only development
| process changes that result in passing new verification tests
| give any confidence of adequacy.
|
| Absent actively (or likely actively) exploited vulnerabilities,
| the government should organize vulnerabilities by "difficulty",
| announce the presence of, but not disclose the precise nature
| of, those vulnerabilities, and demand process improvement until
| vulnerabilities of that "difficulty" are no longer present, as
| indicated by the fixing of all "known but undisclosed"
| vulnerabilities of that "difficulty". Only that provides
| initial supporting evidence that the process has improved
| enough to categorically prevent vulnerabilities of that
| "difficulty". Anything less is just papering over defective
| products on the government's dime.
| JumpCrisscross wrote:
| > _the government should organize vulnerabilities by
| "difficulty", announce the presence of, but not disclose the
| precise nature of, those vulnerabilities, and demand process
| improvement until vulnerabilities of that "difficulty" are no
| longer present, as indicated by the fixing of all "known but
| undisclosed" vulnerabilities of that "difficulty"_
|
| For this amount of bureaucracy, the government should just hire
| all coders and write all software.
| Veserv wrote:
| You appear to misunderstand what I am saying:
|
| 1) Government already has vulnerabilities.
|
| 2) Government sorts the vulnerabilities it already owns by
| "difficulty to discover".
|
| 3) Government selects the lowest "difficulty to discover"
| vulnerabilities it already owns.
|
| 4) Government announces that products with known "lowest
| difficulty to discover" vulnerabilities are vulnerable, but
| does not disclose them.
|
| 5) Government keeps announcing that those products continue to
| be the most "insecure" until all vulnerabilities it already
| owns at that level are fixed.
|
| 6) Repeat.
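|
| A rough sketch of that loop (purely illustrative; "government"
| here is just a pile of already-verified vulns plus the
| re-verification check it already runs each update cycle):
|
|     def announce(msg):  # stand-in for a public statement
|         print(msg)
|
|     def pressure_loop(owned, still_works):
|         """owned: dict mapping difficulty tier -> list of vulns
|         the government already holds; still_works: re-verifies a
|         vuln after each vendor update cycle."""
|         while any(owned.values()):
|             tier = min(d for d, vs in owned.items() if vs)
|             announce(f"products at difficulty {tier} remain vulnerable")
|             # Re-verify without disclosing details; only vendor
|             # process changes that kill every vuln in the tier
|             # let the loop move on to the next difficulty level.
|             owned[tier] = [v for v in owned[tier] if still_works(v)]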
| JumpCrisscross wrote:
| What you're suggesting requires creating a massive federal
| bureaucracy to continuously survey the product landscape.
| It then requires the private sector to duplicate that work.
| This is stupid.
| Veserv wrote:
| I have no idea how you came to that conclusion.
|
| The government has vulnerabilities in stock that they
| _already_ verify function as intended. They _already_
| regularly verify these vulnerabilities continue to
| function with each update cycle. They _already_ quantify
| these vulnerabilities by ease of discovery, impact, etc.,
| so they can prioritize utilization. They _already_
| determine which vulnerabilities to disclose.
|
| Assuming that the vulnerabilities they disclose are not
| entirely "actively under exploit", the only difference in
| my proposed policy is that they do not disclose the
| details to the vendors so the vendors can paper over them.
| Instead, they publicly announce the presence of
| vulnerabilities and then keep verifying _as they already
| do_ until the vulnerabilities no longer function.
|
| You seem to think I am arguing that the government should
| create a new organization to look for vulnerabilities in
| all software everywhere and then act as I stated.
| JumpCrisscross wrote:
| > _they publicly announce the presence of vulnerabilities
| and then keep verifying as they already do until the
| vulnerabilities no longer function_
|
| Yes. This is difficult. Particularly given you need to do
| it fairly and thus comprehensively.
| Veserv wrote:
| Then you are arguing with a strawman. I never proposed
| they do it across all products, only the products that
| they _already_ have vulnerabilities in that they _already_
| seek to disclose.
|
| You cannot change the parameters of my proposal to
| explicitly require a gigantic bureaucracy then argue that
| it is a poor idea because the gigantic bureaucracy you
| added is a problem. You could have argued that my
| proposal is untenable because it would be "unfair" and
| the only way to make it fair would be to do it
| comprehensively which would be too hard.
|
| To which I would state:
|
| 1) That means we as a society care more about being
| "fair" to companies with inadequate software security
| than demanding adequate software security. Could be the
| case, but then everything other than rolling over and
| accepting it is off the table.
|
| 2) The government already requires certification against
| the Common Criteria for many software products in use by
| the government. You could restrict this policy to just
| systems that are used and require certification before
| use. Thus being applied "fairly" to government
| procurement and incentivizing improvements in security
| for procured systems.
|
| 3) This should actually just be general policy for
| everybody, but only the government, currently, has enough
| leverage to really pull off publicly announcing a problem
| and the vendor not being able to just shove it under the
| rug.
|
| And, even if you disagree with those points, I am also
| making the point that the current policy of disclosing
| the vulnerability so the vendor can make a point-patch to
| resolve it instead of fixing their overall security
| process is a failed security policy at the societal
| level. We need mechanisms to encourage overall security
| process improvement. Currently, the only thing that does
| that is the exponentially increasing amount and severity
| of hacks, and it would be nice to get ahead of it instead
| of being purely reactionary.
| tptacek wrote:
| "Penetrate and patch" is a term of art Marcus J. Ranum tried to
| popularize 15-20 years ago, as part of an effort to vilify
| independent security research. Ranum was part of an older
| iteration of software security that was driven by vendor-
| sponsored cliques. The status quo ante of "penetrate and patch"
| that he was subtextually supporting is not something that most
| HN people would be comfortable with.
| Veserv wrote:
| "Penetrate and Patch" as a failed security process is
| distinct from vilifying independent security research, and it
| should be obvious from my post as I point out that
| penetration testing is a integral part of the testing and
| verification process. It tells you if your process and design
| have failed, but it is not a good development process itself.
| tptacek wrote:
| Again and tediously: the only reason anyone would use that
| sequence of words would be to invoke Ranum's (in)famous
| "Six Dumbest Ideas In Computer Security" post, one of which
| was, in effect, "the entire modern science of software
| security".
|
| I recommend, in the future, that if you want to pursue a
| security policy angle in discussions online with people,
| you avoid using that term.
| Veserv wrote:
| I _am_ invoking it. "Penetrate and Patch" is illustrative
| and catchy and the arguments presented, for that one in
| particular, are largely theoretically, empirically, and
| even predictively supported.
|
| In fact, all of the points except "Hacking is Cool" are
| largely well supported. The common thread is that all
| the other points are about "designing systems secure
| against common prevailing threat actors" (i.e. "defense")
| and only that point is about "development of adversarial
| capabilities" (i.e. offense), which it wrongfully
| undervalues and even associates with criminality,
| misunderstanding the value of verification processes.
|
| And besides, the entire modern science of software
| security is, objectively, terrible at "designing systems
| secure against common prevailing threat actors". What it
| is pretty good at is the "development of adversarial
| capabilitys" which has so far vastly outstripped the
| former and demonstrates quite clearly that prevailing
| "defense" is grossly inadequate by multiple orders of
| magnitude.
| tptacek wrote:
| If you say so, but I was a practitioner in 1997 and am a
| practitioner in 2025 and I think you'd be out of your
| mind to prefer the norms and praxis of circa-1997
| software security. You do you.
| Veserv wrote:
| The proposals were not the norms of the time. It is
| literally a rant against prevailing "bad" ideas of the
| time.
|
| "Default Permit", "Enumerating Badness", "Penetrate and
| Patch", "Educating Users" were the norms of the time and
| still, largely, are. And that is part of why "defense" is
| so grossly inadequate against common prevailing threat
| actors.
|
| I prefer the norms of high security software which
| include, but do not consist exclusively of, most of the
| stated ideas.
___________________________________________________________________
(page generated 2025-02-06 23:00 UTC)