[HN Gopher] The full story of the RSA hack can finally be told
___________________________________________________________________
The full story of the RSA hack can finally be told
Author : whiteyford
Score : 164 points
Date : 2021-05-20 13:38 UTC (1 day ago)
(HTM) web link (www.wired.com)
(TXT) w3m dump (www.wired.com)
| neonate wrote:
| https://archive.is/tqdS9
| neilv wrote:
| > _Now, staring at the network logs on his screen, it looked to
| Leetham like these keys to RSA's global kingdom had already been
| stolen._
|
| That must've been a sickening feeling.
| tptacek wrote:
| Since when were NDAs routinely limited to 10 years?
| drdavid wrote:
| Some searching tells me 1 to 5 years is considered normal.
| ianhawes wrote:
| My thoughts exactly. I'm curious if these people _thought_
| their NDAs expired after 10 years.
| eganist wrote:
| > Since when were NDAs routinely limited to 10 years?
|
| I'm not seeing an indication that it was routine, just that all
| the people involved happened to have 10 year NDAs in place.
| Might've been RSA-specific, potentially as a consequence of the
| breach or just an artifact of RSA's own policies; it's not
| actually mentioned. I'm also only familiar with 5 year NDAs.
| tptacek wrote:
| I just went through all the Confidentiality agreements I
| could find in my mail spool and none of them had an explicit
| time limit. Is it normal for people to have 5-year NDAs, or
| even 10-year ones, for company secrets? How does that make sense?
| One of the main characters in this story had a tenure at RSA
| that exceeded 10 years.
| dadrian wrote:
| It's standard-ish to be able to request a 10-year NDA for
| anything you're not part of / on your way out, e.g. I know
| people who have them for severance / mutual non-disparagement
| packages.
| eganist wrote:
| I've got an equal mix of 5 year and unrestricted NDAs, no
| 10 year NDAs, oddly enough.
| idlewords wrote:
| Could it be a matter of state law?
| eganist wrote:
| If anyone else is wondering why there's so much human drama in
| what should otherwise have been a straightforward retrospective,
| it's (at least by my judgment) to increase the likelihood that
| the article gets optioned for a film or TV series, etc.
|
| This isn't a first for Andy Greenberg, either.
| https://www.imdb.com/name/nm5200697/
|
| (My comment isn't critical of his writing; it's merely an effort
| at explaining it.)
| munificent wrote:
| I mean, I also found the story much more engaging because it
| involved actual humans with human feelings and reactions.
| zinglersen wrote:
| It was a bit long-winded, but already by the first half I was
| thinking it would make a great movie.
|
| Now I wonder if there are any cool SecOps movies or tv shows
| out there(?)
| alternatetwo wrote:
| Mr Robot is somewhat realistic and very good, too.
| kakamiokatsu wrote:
| Sir, I believe you forgot to add /s to your message.
| ddlatham wrote:
| > In the hours that followed, RSA's executives debated how to go
| public. One person in legal suggested they didn't actually need
| to tell their customers, Sam Curry remembers. Coviello slammed a
| fist on the table: They would not only admit to the breach, he
| insisted, but get on the phone with every single customer to
| discuss how those companies could protect themselves. Joe Tucci,
| the CEO of parent company EMC, quickly suggested they bite the
| bullet and replace all 40 million-plus SecurID tokens. But RSA
| didn't have nearly that many tokens available--in fact, the
| breach would force it to shut down manufacturing. For weeks after
| the hack, the company would only be able to restart production in
| a diminished capacity.
|
| > As the recovery effort got under way, one executive suggested
| they call it Project Phoenix. Coviello immediately nixed the
| name. "Bullshit," he remembers saying. "We're not rising from the
| ashes. We're going to call this project Apollo 13. We're going to
| land the ship without injury."
|
| This is the sort of response that would increase my trust to
| choose this company in the future. This is not easy. Our human
| instincts naturally kick in to minimize our faults and protect
| ourselves. Choosing to put the customer first at risk of your own
| reputation is hard, but the right choice, even for your
| reputation in the end.
| YetAnotherNick wrote:
| I don't know the contracts with clients or anything, but "didn't
| actually need to tell their customers" seems totally illegal to
| me when RSA is the provider of one of the most fundamental
| layers of security for large companies. I'm not saying RSA
| didn't do the best they could to secure themselves after the
| hack, but this story is a PR piece.
| m463 wrote:
| There are plenty of examples supporting that. The earliest I
| remember was the Tylenol murders, when 31 million bottles of
| Tylenol were taken off the shelves:
|
| https://en.wikipedia.org/wiki/Tylenol_(brand)#1982_Chicago_T...
|
| In the end, their brand got stronger and more trustworthy.
|
| (and I believe we got sealed bottles)
| cafard wrote:
| Tamper-proof. Yes, that was the example that occurred to me.
| dctoedt wrote:
| > _Tamper-proof_
|
| You probably see "tamper-_resistant_" more often.
| beambot wrote:
| Looks like Tylenol is/was owned by J&J... the same company
| that knowingly sold talcum powder that caused cancer:
|
| https://www.npr.org/2020/05/19/859182015/johnson-johnson-
| sto...
| nix23 wrote:
| Just imagine you knowingly let parents treat their babies with
| asbestos... and what happened? Nothing.
|
| https://www.cancer.org/cancer/cancer-causes/talcum-powder-
| an...
| TheManInThePub wrote:
| > sold talcum powder that caused cancer
|
| Ummm
|
| Your link contained no evidence for this. A search of the
| NHS website also suggests no clear evidence [1]. Cancer
| Research (a respected UK charity) gives a layman's summary
| (albeit focusing on ovarian cancer), stating no clear
| evidence and pointing out that there are far more serious
| risks to worry about [2].
|
| [1] https://www.evidence.nhs.uk/search?om=[{%22ety%22:[%22I
| nform...
|
| [2] https://www.cancerresearchuk.org/about-cancer/causes-
| of-canc...
| NelsonMinar wrote:
| This story was big news when it happened and I'm grateful to Andy
| Greenberg for writing this retrospective. Pretty much nothing has
| changed about security at companies in the last ten years that
| would foil this kind of attack. I mean maybe folks are a little
| smarter about catching spearphishing Office docs, and Flash
| exploits are now a thing of the past, but those are replaced by
| contemporary equivalents. And I'm sure conveniences like the not-
| quite-airgapped crucial equipment still persist. We truly have no
| idea how to secure systems on the Internet.
|
| As for RSA, it came out a couple of years later that their
| products were compromised in various ways by the NSA. Then a
| couple of years later
| the NSA lost control of its own hacking tools with the infamous
| Shadow Brokers release. Not only is building secure systems hard,
| but the US government actively works to undermine its own
| companies' security.
|
| https://www.reuters.com/article/idUSBRE9BJ1C220131220?irpc=9...
| https://arstechnica.com/information-technology/2014/01/how-t...
| lbriner wrote:
| "single, well-protected server"
|
| It doesn't sound like it. Even our production servers, hosted by
| a "rackspace" company, have outgoing ports closed by default,
| and we earn a tiny amount compared to RSA.
|
| I know there will be reasons but honestly, the server should have
| been air-gapped or something. I can't imagine they need changing
| very often so why not copy it across the gap on a USB stick when
| you need it and leave it non-networked otherwise?
|
| Of course, I know nothing about this organisation, it just sounds
| weird that a system that was so crucial was so vulnerable.
| mattpavelle wrote:
| I'd bet that they had most ports blocked and the attackers used
| multi-stage tunnels. Details like that probably just don't make
| it into a Wired article.
| jaywalk wrote:
| The article explains all of this.
| haecceity wrote:
| Should have just turned off their computers
| znpy wrote:
| non-paywalled link?
| mywacaday wrote:
| Open link in duckduckgo
| [deleted]
| gvb wrote:
| Disable javascript
| Jaymoon85 wrote:
| https://outline.com/uE6PA4
| BombNullIsland wrote:
| This story would be many times more interesting with more
| technical details and less "human drama".
| eganist wrote:
| The story was almost certainly written with media optioning in
| mind, hence the human drama. It's common with longform
| journalism, and it's common for the author as well.
| https://www.imdb.com/name/nm5200697/
| sgt101 wrote:
| Are the seeds large primes?
| timdierks wrote:
| No, it's a symmetric system, they're just random values.
| tyingq wrote:
| _" Moments later, his computer's command line came back with a
| response: "File not found." He examined the Rackspace server's
| contents again. It was empty. Leetham's heart fell through the
| floor: The hackers had pulled the seed database off the server
| seconds before he was able to delete it."_
|
| I get the compulsion to delete it, but deleting it wouldn't have
| provided any real comfort. You would have no idea if that was the
| only copy. So, delete it just in case it is, but it doesn't
| change what you would have to do afterwards...the master keys
| have to be assumed leaked.
| PUSH_AX wrote:
| I mean you're never not going to delete it when given the
| option. As tiny as the chances are, it could mean your
| downstream consumers end up safe. Although yes, it should not
| stop you going into full damage control mode.
| xyzzy123 wrote:
| It's moderately likely the employee in fact deleted that copy
| and the "disappeared seconds before it happened" is a fig
| leaf or polite fiction.
|
| Have to assume the data were compromised anyway, as you point
| out.
| wolverine876 wrote:
| It might make customers less safe: If you delete the file,
| you reveal to the attackers what you know. The attackers may
| move more quickly to exploit the seeds, and it may disrupt
| your investigation, on which your customers depend: The
| attackers may abandon that path and follow another one that
| you are unaware of.
| ianhawes wrote:
| Good on the author for noting that while common now, the practice
| of dumping passwords in memory (a la Mimikatz) was not common
| until some time after this attack.
| nyokodo wrote:
| > Multiple executives insisted that they did find hidden
| listening devices--though some were so old that their batteries
| were dead. It was never clear if those bugs had any relation to
| the breach.
|
| Well that's not exactly comforting. Who else might have had the
| keys to the kingdom?
| wussboy wrote:
| Man do I ever hate the word "stunning" in headlines.
| dang wrote:
| I've clubbed it out of the title above, and added it to HN's
| debaiting software, so in the future it will get automatically
| dropped. (Most of the time.)
| btbuilder wrote:
| I would think it would be better to have a provisioning design
| that did not require that the company retain the seed data for
| every fob they had sold.
| briffle wrote:
| Or at least auto-delete them after 30 days, in case a customer
| didn't get theirs, and needed it resent. Retention policies
| limit the blast radius when there is a problem.
| ianhawes wrote:
| And a wise business decision, since you would also benefit from
| selling the client replacement fobs in the event that they lost
| their seeds.
| benlivengood wrote:
| Unlike U2F and similar specs, there was no direct communication
| between the SecurID tokens and any other device, so the channel
| (a human typing a short code) carried less entropy than is
| necessary to validate public-key signatures. That necessitated
| having a shared secret between the token and auth server.
| throw0101a wrote:
| > _That necessitated having a shared secret between the token
| and auth server._
|
| Yes, but why did there have to be a central server with the
| shared secret _for every token on the planet_?
|
| The way the SecurIDs were designed, there was no way to plug
| into them, so there was no way to program them. So when you
| bought a batch you entered each serial number into your RSA
| auth server, which phoned home, and got the seed/secret.
|
| Huge single point of failure.
|
| TOTP (and HOTP before it) has a shared secret between the
| auth server and the token (software), but if Company X is
| hacked they don't get the secrets to Company Y:
|
| * https://en.wikipedia.org/wiki/Time-based_One-Time_Password
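To make the comparison concrete, here is a minimal TOTP sketch per RFC 6238 (HMAC-SHA1 with RFC 4226 dynamic truncation); the hard-coded secret below is the RFC's published test key, and the function and parameter names are my own:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    RFC 4226 dynamic truncation down to a short decimal code."""
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Each customer provisions its own random secrets, so compromising
# one organization's auth server reveals nothing about another's.
print(totp(b"12345678901234567890", at=59))  # RFC test vector -> "287082"
```

The per-token secret is exactly the kind of data RSA warehoused centrally for every token ever shipped; with TOTP, each organization holds only its own.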
| fmajid wrote:
| Both HOTP and TOTP are vulnerable to phishing, unlike U2F.
| throw0101a wrote:
| "Perfect is the enemy of good."
|
| I'll take whatever improvements I can get in security.
| mrandish wrote:
| > but why did there have to be a central server with the
| shared secret for every token on the planet?
|
| Yeah, this struck me as a huge flaw. The breached system
| was used to create CDs full of IDs for customer deployment.
| For convenience the manufacturing system was almost but not
| fully air gapped. They retained the ID data in case the
| customer needed a copy in the future. However, keeping all
| of the IDs ever made on one system seems crazy.
|
| If they had just deleted the data after backing it up to
| discrete offline media every week...
| mcfedr wrote:
| It's great that the system that was printing CDs somehow had to
| be internet-connected; it's not like they were emailing these
| keys.
| benlivengood wrote:
| > If they had just deleted the data after backing it up
| to discrete offline media every week...
|
| Data loss probably scared them more than risk of breach.
|
| The real failure, after all, was not having the system
| actually airgapped. Aside from electromagnetic leakage
| through the power system there isn't much difference
| between spinning disks and tapes if they're not connected
| to anything else.
| wolverine876 wrote:
| I would lose trust if I found out that they retained copies of
| my private cryptographic data. Isn't that shocking in a company
| as sophisticated as RSA?
| chinathrow wrote:
| > RSA executives told me that the part of their network
| responsible for manufacturing the SecurID hardware tokens was
| protected by an "air gap"--a total disconnection of computers
| from any machine that touches the internet. But in fact, Leetham
| says, one server on RSA's internet-connected network was linked,
| through a firewall that allowed no other connections, to the seed
| warehouse on the manufacturing side.
|
| That's not really an air gap, is it?
| dredmorbius wrote:
| That configuration is more typically referred to as a bastion
| server (or bastion host, per Wikipedia).
|
| Access between network segments, or to a protected host, is
| through a single specifically-hardened host. Through-traffic
| (NATing or bridging) is typically disabled or at least not
| provided by default, though in practice it's challenging to
| entirely prevent tunnelling.
|
| But no, it is _not_ an air-gapped system. Likely a journalistic
| compromise as "bastion host" is a less familiar term to the
| public.
|
| https://en.wikipedia.org/wiki/Bastion_host
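For contrast, the linked-but-firewalled setup the article describes might look like this hypothetical netfilter policy; the addresses, port, and tooling are illustrative, not anything from RSA's actual configuration:

```shell
# Only one internet-facing host (10.0.5.20, hypothetical) may reach
# the seed warehouse (10.9.9.10, hypothetical); nothing else forwards
# through this firewall.
iptables -P FORWARD DROP                  # deny all through-traffic by default
iptables -A FORWARD -s 10.0.5.20 -d 10.9.9.10 -p tcp --dport 22 \
         -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.9.9.10 -d 10.0.5.20 -p tcp --sport 22 \
         -m conntrack --ctstate ESTABLISHED -j ACCEPT
```

An actual air gap, by contrast, has no rule to write: there is no link at all.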
| dredmorbius wrote:
| ... and from the article, the description of "airgapped"
| apparently came from RSA management. That may have been
| _their_ understanding. Todd Leetham pretty clearly understood
| otherwise.
| PeterWhittaker wrote:
| As others have noted, no, it is not. But that doesn't boggle my
| mind...
|
| What boggles my mind is that the seed machine and the
| intervening network and the firewall did not appear to have
| "scream loudly then shutdown when this threshold is exceeded"
| mitigations in place.
|
| They were wise enough to have a single connection from the seed
| host to the seed requester. They were wise enough to limit the
| requester to one request every 15 minutes.
|
| They only discovered that threshold was being exceeded when
| they logged in to that machine.
|
| The firewall itself should have had detection and response
| capabilities to notice when calls were being made faster than
| that, and it should have had a third, dedicated warning
| connection to alert humans to the fact. The seed host should
| have had detection and response capabilities.
|
| And, given the value of the asset, it would have been entirely
| reasonable to have a transparent bit of network gear doing the
| same, like a custom switch invisible to the request host.
|
| Since the article didn't mention any of these things, and since
| it said that the high request rate was detected only by humans
| on the box, I'm going to assume they didn't have these, for
| reasons mysterious.
|
| EDIT: Come to think of it, since that machine was being used to
| burn CDs, there should also have been strict limits with
| appropriate detection mitigations on what that machine could do
| outbound.
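A tripwire of the sort described above takes only a few lines; the class name, threshold, and response below are hypothetical, not anything RSA is known to have run:

```python
import time

# Policy from the comment above: at most one seed request per 15 minutes.
MIN_INTERVAL = 15 * 60


class SeedRequestMonitor:
    """Trip (and, in real life, alert humans and cut the link) when
    requests arrive faster than the policy allows."""

    def __init__(self, min_interval=MIN_INTERVAL):
        self.min_interval = min_interval
        self.last_request = None
        self.tripped = False

    def record(self, now=None):
        """Record a seed request; return False once the tripwire fires."""
        now = time.time() if now is None else now
        if self.last_request is not None and now - self.last_request < self.min_interval:
            self.tripped = True
        self.last_request = now
        return not self.tripped
```

The point is that the policy already existed (one request per 15 minutes); only the automated detection and response were missing.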
| amackera wrote:
| Seems more like an attempted air gap :facepalm:
| jaywalk wrote:
| "A firewall that only allows connections from one Internet-
| connected server" is quite literally not an air gap. It's one
| of those things that you look back on and wonder how in the
| world that decision was made.
| sterlind wrote:
| seems totally unjustifiable to me. if they truly needed to
| keep backups of their customers' seeds, then send them
| through a data diode to an actually-airgapped tape deck.
| there's no reason to keep seeds on the machines where they
| were generated.
|
| ...also, since they detected the breach before the attackers
| got to the "seed warehouse," why did they try to tail them in
| real time? just pull power to the whole DC.
| jaywalk wrote:
| The only thing I can think of with the "tailing in real
| time" is that there was some journalistic license taken to
| spice up the story. Otherwise, you don't even need to pull
| power to the whole DC. Just cut off all Internet access.
| jarym wrote:
| They must have meant the executives had an air gap between
| their ears.
| criddell wrote:
| I don't have an ad-blocker on this machine and I couldn't finish
| reading the page. The ads are ridiculously obnoxious.
|
| It wasn't that long ago when magazines like Wired cared a great
| deal about the page. What a mess.
| dang wrote:
| " _Please don 't complain about website formatting, back-button
| breakage, and similar annoyances. They're too common to be
| interesting. Exception: when the author is present. Then
| friendly feedback might be helpful._"
|
| https://news.ycombinator.com/newsguidelines.html
|
| (This guideline isn't there because such complaints are wrong
| or inaccurate; just the opposite.)
| i_like_kotlin wrote:
| They still do, if you pay.
| criddell wrote:
| That's not true. I just subscribed and the obnoxious ads and
| reflowing text are still there.
|
| Edit: I had to sign out and then back in after paying. Now
| the only ads I see are in-house ones (they really want you to
| sign up for newsletters).
| i_like_kotlin wrote:
| I see no ads as a subscriber, and the subscription terms
| explicitly state ad-free, unlimited browsing for 1 year for
| $10; subscription benefits are listed here:
| https://subscribe.wired.com/subscribe/wired/121743?source=AM...
| criddell wrote:
| I signed out and then signed back in and now they are
| gone. Thanks.
| bigbillheck wrote:
| > It wasn't that long ago when magazines like Wired cared a
| great deal about the page
|
| Do you remember the super-early Wired print editions, because
| those colors made it barely readable.
| criddell wrote:
| Every one of those choices was deliberate though. There was
| an aesthetic and style they were after. You can't say the
| same thing about flashing ads and text shuffling around as
| you scroll.
| munificent wrote:
| I think the aesthetic they are going for these days is
| "profitable".
| pwg wrote:
| Install uBlock origin and block the javascript running on the
| page.
|
| You still get to read the article, but no "ridiculously
| obnoxious" ads appear anywhere.
___________________________________________________________________
(page generated 2021-05-21 23:03 UTC)