[HN Gopher] Results of technical investigations for Storm-0558 key acquisition
___________________________________________________________________
Results of technical investigations for Storm-0558 key acquisition
Author : duringmath
Score : 81 points
Date : 2023-09-06 18:07 UTC (1 hour ago)
(HTM) web link (msrc.microsoft.com)
(TXT) w3m dump (msrc.microsoft.com)
| gordian-not wrote:
| Weird that they don't have logs saved for something that
| happened two years ago due to "retention policies".
|
| That's something I would fix.
| paxys wrote:
| It works this way by design. Most companies will retain logs
| for exactly as much time as legally required (and/or
| operationally necessary), then purge them so they don't show up
| in discovery for some lawsuit years down the line.
| eli wrote:
| It's also a GDPR requirement to minimize the collection of
| personal data and to purge it as soon as it is no longer
| needed.
| gordian-not wrote:
| They're security logs, though; presumably these carry less
| legal risk than chat messages or emails.
|
| Also, when you don't know how a Chinese threat group got
| into your network, that's a major issue which will cost more
| than any theoretical legal risk.
| ses1984 wrote:
| Is it practical to keep logs detailed enough to capture
| exfiltration like this from the corporate network?
| ratg13 wrote:
| If you take the time to do a decent setup.
|
| Text compresses extremely well and long-term archive space is
| generally inexpensive.
|
| The hard part is all the work of deciding what is important,
| what the processes will be, and implementing a proper archive
| system.
|
| Most people just don't dedicate the resources to things like
| this until the need is demonstrated.
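|
| As a rough back-of-envelope, a minimal Python sketch of the
| compression point (the log line and volume are made up for
| illustration, and repeating one line inflates the ratio; real
| mixed logs still commonly compress 10-20x):
|
|     import gzip, io
|
|     # Hypothetical sample: a million access-log-style lines.
|     line = (b'2021-04-12T03:15:42Z host-017 authsvc token '
|             b'issued user=12345 scope=mail.read\n')
|     raw = line * 1_000_000  # ~80 MB of raw text
|
|     buf = io.BytesIO()
|     with gzip.GzipFile(fileobj=buf, mode='wb') as f:
|         f.write(raw)
|
|     print(f'raw:        {len(raw) / 1e6:.1f} MB')
|     print(f'compressed: {buf.tell() / 1e6:.1f} MB')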
| tptacek wrote:
| There are multiple regulatory reasons why logs in general
| (outside of specific use cases) are hard to retain
| indefinitely. You can document a security use case that
| triggers indefinite retention for logs based on some selector,
| but then you run into the problem that they say happened here:
| your selector is inexact and misses stuff.
| time4tea wrote:
| What this means is that the keys are not stored in non-
| recoverable hardware; they are available to a regular server
| process, just some compiled code, running in an elevated-
| privilege environment. There is no mention that the systems
| that had access to this key were in anything other than the
| normal production environment, so we may extrapolate that any
| production machine could get access to it, and therefore
| anyone with access to that environment could potentially
| exfiltrate the key material.
| 1970-01-01 wrote:
| >Our investigation found that a consumer signing system crash in
| April of 2021 resulted in a snapshot of the crashed process
| ("crash dump"). The crash dumps, which redact sensitive
| information, should not include the signing key. In this case, a
| race condition allowed the key to be present in the crash dump
| (this issue has been corrected).
|
| Correction is good, but why can't they go one more step and allow
| everyone to scan their server minidumps for crash-landed keys?
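|
| A minimal sketch of what such a self-service scan could look
| like (this naive PEM-marker check is illustrative; whatever
| tooling Microsoft actually runs is presumably richer, and it
| still missed this key):
|
|     import re, sys
|
|     # Flag anything that looks like PEM-encoded private key
|     # material inside a crash dump.
|     PEM = re.compile(rb'-----BEGIN (?:RSA |EC )?PRIVATE KEY-----')
|
|     def scan(path):
|         with open(path, 'rb') as f:
|             data = f.read()
|         for m in PEM.finditer(data):
|             print(f'{path}: possible key at offset {m.start()}')
|
|     if __name__ == '__main__':
|         for p in sys.argv[1:]:
|             scan(p)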
| dgudkov wrote:
| A breach like that requires a very good understanding of
| Microsoft's internal infrastructure. It's safe to assume that
| the breach was a coordinated effort by a team of hackers. This
| is not a cheap effort, but the payback is enormous. Hyper-
| centralization leads to a situation where hackers concentrate
| their efforts on a few high-value targets, because once they
| are successful the catch is enormous. I'm pretty sure there
| are teams of (state-sponsored) hackers already doing deep
| research and analysis of the internal infrastructure of
| Google, Microsoft, Amazon, etc. The breach gives an idea of
| how well the hackers already understand it.
|
| I would argue it's time to decentralize inside a wider
| security perimeter.
| [deleted]
| splitstud wrote:
| [dead]
| _tk_ wrote:
| I am very curious why Microsoft is insisting that the key itself
| was ,,acquired" without having anything to show for it. The
| wording seems a little odd to me, the constant repetition even
| more so.
| baz00 wrote:
| So if we remove the careful wording: someone downloaded a
| minidump from production onto a dev workstation, and then it
| was probably left rotting in corporate OneDrive until that
| developer's account was compromised. Someone took the dump,
| found a key in it, and hit the jackpot.
| rdtsc wrote:
| Wonder if the actor caused the crash of the system in the first
| place?
|
| Or it was crashing so often they didn't have to.
|
| A race condition in scrubbing the crash dump sounds fishy.
| When the system is crashing, it's hard to make assumptions or
| have any guarantee that cleanup and scrubbing will happen.
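|
| A minimal Python sketch of that general failure mode (not
| Microsoft's specific race, which was in the dump redaction
| itself): in-process cleanup simply never runs on a hard crash.
|
|     import atexit, os
|
|     secret = bytearray(b'-----BEGIN PRIVATE KEY----- ...')
|
|     def scrub():
|         # Best-effort scrubbing, skipped entirely on a hard crash.
|         for i in range(len(secret)):
|             secret[i] = 0
|
|     atexit.register(scrub)
|
|     # SIGABRT (like a segfault in native code) bypasses atexit,
|     # so `secret` is still live in the memory image the OS
|     # writes out as the crash dump.
|     os.abort()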
| natch wrote:
| > Due to log retention policies, we don't have logs with specific
| evidence
|
| No "this issue has been corrected" for this one. Are we still
| budgeting storage like it's the 1990s for logs?
| cesarb wrote:
| > > Due to log retention policies, [...]
|
| > Are we still budgeting storage like it's the 1990s for logs?
|
| Retention policies are not necessarily about storage space;
| sometimes, they are there to avoid being required to provide
| that old data during lawsuits.
| ratg13 wrote:
| I feel like there is a lot missing from this writeup, but I can't
| put my finger on exactly what.
|
| Also, it feels strange that the government doesn't have its
| own signing key and just uses the same one as everyone else,
| which they didn't address and apparently do not intend to
| change.
| sidewndr46 wrote:
| If the government had its own key, you could trace anything
| they signed. Governments likely want code and other stuff they
| sign to appear as if another actor signed it.
| ratg13 wrote:
| The key belongs to Microsoft. Microsoft is the one signing
| the auth tokens, not the end users.
|
| I'm saying that Microsoft should have a separate private key
| to sign government auth tokens with.
| shadowgovt wrote:
| IIUC in general they do. One of the steps of this failure
| is that a key that had no business signing off on accessing
| government data was granted that scope by MS's cloud
| software because they changed the scope-checking API in
| such a way that their own developers didn't catch the
| change ("Developers in the mail system incorrectly assumed
| libraries performed complete validation and did not add the
| required issuer/scope validation").
|
| So instead of failing safe, lack of new code to address
| additional scope features "failed open" and granted access
| to keys that didn't actually have the right scope.
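|
| For concreteness, here is a minimal PyJWT sketch of the kind
| of explicit check the quoted line says was missing (the
| audience, issuer, and claim names are illustrative, not
| Microsoft's actual values):
|
|     import jwt  # PyJWT
|
|     def validate(token: str, signing_key: str) -> dict:
|         # Pin algorithm, audience, and issuer explicitly; the
|         # quoted failure was assuming the library did all of
|         # this on its own.
|         claims = jwt.decode(
|             token,
|             signing_key,
|             algorithms=['RS256'],
|             audience='api://enterprise-mail',
|             issuer='https://login.example.com/v2.0',
|         )
|         # Library checks end at the standard claims; scope
|         # validation stays on the caller.
|         if 'mail.read' not in claims.get('scp', '').split():
|             raise PermissionError('token lacks required scope')
|         return claims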
| mrguyorama wrote:
| How banal can a software mistake be before we aren't allowed to
| besmirch the name of the devs involved? Is forgetting a test case
| a shameable offense? What about ignoring authentication? Rolling
| your own?
|
| Turns out when you write APIs that access security-related
| things, you have to treat everything coming in as a threat,
| right? Shouldn't that be table stakes by now?
|
| We need a professional gatekeeping organization because the vast
| majority of us suck at our jobs and refuse to do anything about
| it.
| [deleted]
| phillipcarter wrote:
| The issue was caused by a race condition in extremely
| complicated software. Good luck setting up a gatekeeping
| organization that can track that level of detail (and
| understand every dimension of a possible fault like this).
| lcnPylGDnU4H9OF wrote:
| I wouldn't expect the gatekeeper to track these issues but
| rather to sign the credentials that developers have. Then the
| individual developers (ostensibly) have a base level of
| training that sets them up to be more likely to avoid these
| issues.
| belltaco wrote:
| People will always make mistakes. That's why it's better to
| focus on the processes that should have been built to catch or
| stop mistakes, especially mistakes by a single person.
| ano-ther wrote:
| I don't understand the reflex for shaming. Everyone makes
| mistakes, and we are usually better off understanding and
| learning from them.
|
| If the first instinct is to punish, people will not be helpful
| in identifying their own mistake.
|
| Also, this is why companies like Microsoft have processes and
| systems to avoid such mistakes. They obviously failed here,
| but those can be improved independently of the people
| involved.
|
| IIRC, airline safety investigations run in that way quite
| successfully.
| shadowgovt wrote:
| Mistakes and errors are inevitable. They are the side-effect
| of a functioning system.
|
| Where concern should occur is if one sees _repeated_ mistakes
| of the same preventable kind.
| The28thDuck wrote:
| Doesn't seem like an operational issue; it seems more like the
| design covered 99.9% of the ways these issues could arise.
|
| I will say, I'm not very satisfied with just "improved security
| tooling." My gut is telling me that there is a better solution
| out there for guarding against credential leakage, but I feel
| wrangling memory dumps to have "expected" data is a fool's
| errand.
| mh8h wrote:
| Why don't they use HSMs instead? The whole point of that
| hardware is to prevent the key material from leaking.
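|
| For reference, a minimal sketch of what HSM-backed signing
| looks like through PKCS#11 (python-pkcs11 here; the library
| path, token label, PIN, and key label are all made up):
|
|     import pkcs11
|     from pkcs11 import Mechanism, ObjectClass
|
|     # SoftHSM shown for illustration; a real deployment points
|     # at the vendor's PKCS#11 module.
|     lib = pkcs11.lib('/usr/lib/softhsm/libsofthsm2.so')
|     token = lib.get_token(token_label='signing')
|
|     with token.open(user_pin='1234') as session:
|         key = session.get_key(object_class=ObjectClass.PRIVATE_KEY,
|                               label='consumer-signing-key')
|         # A non-extractable key is used inside the hardware, so
|         # no process memory (and no crash dump) ever holds it.
|         sig = key.sign(b'header.payload',
|                        mechanism=Mechanism.SHA256_RSA_PKCS)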
| munificent wrote:
| I feel this every time one of these articles comes out, but it
| seems totally bizarre to me that we rely on private enterprises
| to deal with state-level attacks simply because they are
| digital and not physical.
|
| If a Chinese fighter jet shot down a FedEx plane flying over the
| Pacific, that would be considered an attack on US sovereignty and
| the government would respond appropriately. Certainly we wouldn't
| expect FedEx to have to own their own private fleet of fighter
| jets to protect their transport planes. No one would be like,
| "Well it's FedEx's fault for not having the right anti-aircraft
| defenses."
|
| But somehow, once it hits the digital domain we're just supposed
| to accept that Microsoft is required to defend themselves against
| China and Russia.
| tschwimmer wrote:
| The digital domain is fundamentally lower stakes and harder to
| protect than the physical one. It is good that we do not
| respond to cyber attacks like we do physical ones because we
| would have escalated to nuclear war over a decade ago. The
| scope and volume of cyberattacks is very high but my
| understanding is that the US has a correspondingly high volume
| of outbound attacks as well.
| belltaco wrote:
| Firstly, no infrastructure was attacked or destroyed, and no
| lives were lost, unlike in your example of a FedEx plane. Some
| US govt folks had their emails read.
|
| Secondly, the US does this all the time, even to friendly
| countries, so it's hard to justify harsher measures.
| munificent wrote:
| _> no infrastructure was attacked or destroyed_
|
| _Value_ was destroyed in both cases. Users who had their
| private data stolen have been harmed, the company's brand
| value is harmed, and they may lose users over this.
|
| _> or lives lost_
|
| Lives can be lost and real people can be harmed if their
| private information is stolen and used against them. There
| are dissidents and journalists in repressive countries whose
| safety depends on information security.
| miki123211 wrote:
| > If a Chinese fighter jet shot down a FedEx plane flying over
| the Pacific, that would be considered an attack on US
| sovereignty and the government would respond appropriately
|
| But if a bunch of Chinese people robbed a US bank, let's say
| the Federal Reserve, causing enormous financial damage but no
| loss of life, the response would be similar. Especially so if
| their link to the actual Chinese government was suspected but
| couldn't reliably be proven.
|
| Governments catch foreign agents somewhat regularly, and those
| captures don't lead to an all-out war.
| bell-cot wrote:
| Perhaps - but, whether or not people from $ForeignNation are
| involved, U.S. banks (or other corporations, or ordinary
| citizens) _generally_ do not need their own armed
| police/security forces to deal with armed robberies. Nor their
| own DAs, courts, etc.
|
| Vs. any "cyber" crime? All that nice stuff about
| "...establish Justice, insure domestic Tranquility, provide
| for the common defence, promote the general Welfare..." falls
| on the floor, and...YOYO.
| eli wrote:
| Isn't that the idea behind CISA?
| tptacek wrote:
| I don't think so? What makes you think it is?
| dboreham wrote:
| NSA
| ynniv wrote:
| And that's why you should keep your key material in an HSM, kids
| cobertos wrote:
| Some things were not plainly spelled out:
|
| * This was caught on July 11, 2023, and is suspected to have
| happened in April 2021. So they had this credential for 2+
| years, and it was 2 months from detection until disclosure.
|
| * How many tokens were forged, and how much did they access?
| I'm assuming it's bad if they didn't disclose.
|
| * No timeline from detection to fix, just "this issue has been
| corrected". Hope they implemented that quickly...
|
| * They've fixed 4 direct problems, but obviously there are
| systemic issues. What are they doing about those?
| bananapub wrote:
| As far as I can tell, the only non-bug mistake here was
| allowing coredumps to ever leave production. If this is your
| attacker, you are pretty fucked no matter how good you are.
| natas wrote:
| I would fire their entire security team on the spot.
| bagels wrote:
| Is it possible to build a security team in a way that you can
| guarantee to never have any vulnerability ever?
| shadowgovt wrote:
| Risky.
|
| Those are the only people on the planet you can trust to never
| make this mistake again.
| Eduard wrote:
| > The key material's presence in the crash dump was not detected
| by our systems (this issue has been corrected).
|
| Now hackers have it even easier to find valuable keys from
| otherwise opaque core dumps: Microsoft's corrected detection
| software will tell them as soon as it finds one.
| tptacek wrote:
| It feels like there are some missing dots and connections here: I
| see how a concurrency or memory safety bug can accidentally exfil
| a private key into a debugging artifact, easily, but presumably
| the attacker here had to know about the crash, and the layout of
| the crash dump, and also have been ready and waiting in
| Microsoft's corporate network? Those seem like big questions.
| "Assume breach" is a good network defense strategy, but you don't
| literally just accept the notion that you're breached.
| stilist wrote:
| Yeah:
|
| 'After April 2021, when the key was leaked to the corporate
| environment in the crash dump, the Storm-0558 actor was able to
| successfully compromise a Microsoft engineer's corporate
| account. This account had access to the debugging environment
| containing the crash dump which incorrectly contained the key.'
|
| So either the attacker was already in the network and happened
| to find the dump while doing some kind of scanning that wasn't
| detected, or they knew to go after this specific person's
| account.
| shadowgovt wrote:
| > but presumably the attacker here had to know about the crash,
| and the layout of the crash dump
|
| If I were an advanced persistent threat attacker working for
| China who had compromised Microsoft's internal network via
| employee credentials (and I'm not), the first thing I'd do is
| figure out where they keep the crash logs and quietly exfil
| them, alongside the debugging symbols.
|
| Often, these are not stored securely enough relative to their
| actual value. Having spent some time at a FAANG, _every single
| new hire,_ with the exception of those who have worked in
| finance or corporate regulation, assumes you can just glue
| crash data onto the bugtracker (that's what bugtrackers are
| for, tracking bugs, which includes reproducing them, right?).
| You have to detrain them of that and you have to have a vault
| for things like crashdumps that is so easy to use that people
| don't get lazy and start circumventing your protections because
| their job is to fix bugs and you've made their job harder.
|
| With a compromised engineer's account, we can assume the
| attacker at least has access to the bugtracker and probably the
| ability to acquire or generate debug symbols for a binary. All
| that's left then is to wait for one engineer to get sloppy and
| paste a crashdump as an attachment on a bug, then slurp it
| before someone notices and deletes it (assuming they do; even
| at my big scary "we really care about user privacy" corp,
| individual engineers were loath to make a bug harder to
| understand by stripping crash logs off of it unless someone in
| security came in and whipped them. Proper internal opsec can
| _really_ slow down development here).
| Eduard wrote:
| >... you have to have a vault for things like crashdumps that
| is so easy to use that people don't get lazy...
|
| Let's assume a crash dump can be megabytes to gigabytes in
| size.
|
| How could a vault handle this securely?
|
| The moment it is copied from the vault to the developer's
| computer, you introduce data remanence (undelete from the file
| system).
|
| Keeping such a coredump purely in RAM makes it accessible on a
| compromised developer machine (GNU Debugger), and if the
| developer machine crashes, _its coredump_ contains/wraps the
| sensitive coredump.
|
| A vault that doesn't allow direct/full coredump download, but
| allows queries (think "SQL queries against a vault REST API"),
| could still be queried for e.g. "select * from coredump where
| string like '%secret_key%'".
|
| So without more insight, a coredump vault sounds like security
| theater that makes legitimate use tremendously more difficult.
| formerly_proven wrote:
| Another article from Microsoft in this affair that barely (if
| at all) answers more questions than it raises.
| gnfargbl wrote:
| The article says that the employee compromise happened some
| time after the crash dump had been moved to the corporate
| network. It says that MS doesn't have evidence of exfil, but my
| reading is that they do have some evidence of the compromise.
|
| The article also says that Microsoft's credential scanning
| tools failed to find the key, and that issue has now been
| corrected. This makes me think that the key _was_ detectable by
| scanning.
|
| Overall, my reading of this is that the engineer moved the dump
| containing the key into their account at some point, and it
| just sat there for a time. At a later point, the attacker
| compromised the account and pulled all available files. They
| then scanned for keys (with better tooling than MS had; maybe
| it needed something more sophisticated than looking for BEGIN
| PRIVATE KEY), and hit the jackpot.
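|
| If so, a plausible trick is entropy scanning rather than
| marker matching; a minimal sketch (the window size and
| threshold are illustrative guesses):
|
|     import math
|     from collections import Counter
|
|     def shannon_entropy(window: bytes) -> float:
|         n = len(window)
|         counts = Counter(window)
|         return -sum(c / n * math.log2(c / n)
|                     for c in counts.values())
|
|     def flag_high_entropy(data, window=64, step=32, thresh=7.2):
|         # Raw key material is near-random (~8 bits/byte), so it
|         # stands out even without a 'BEGIN PRIVATE KEY' banner.
|         for off in range(0, max(len(data) - window, 0) + 1, step):
|             if shannon_entropy(data[off:off + window]) > thresh:
|                 yield off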
| mistrial9 wrote:
| It brings a lot of questions to the table about which employee
| knew what, and when. A real question: under a "zero trust"
| environment, how many motivated insiders have they accumulated
| through their IT employment and contracting?
___________________________________________________________________
(page generated 2023-09-06 20:00 UTC)