[HN Gopher] Book Review: "This Is How They Tell Me the World Ends"
___________________________________________________________________
Book Review: "This Is How They Tell Me the World Ends"
Author : wglb
Score : 49 points
Date : 2021-02-25 18:52 UTC (4 hours ago)
(HTM) web link (addxorrol.blogspot.com)
(TXT) w3m dump (addxorrol.blogspot.com)
| goatinaboat wrote:
| I've started it, and a few pages in I largely agree with this
| review: it's very America-centric. You have to simultaneously
| believe that the NSA are the best in the world, and that
| foreign hackers, despite being nowhere near as smart, somehow
| managed to steal all their hacking tools; lucky for them that
| they did, too, because otherwise they would have no chance
| against the NSA, who are the BEST! USA! USA!
|
| If it doesn't get better than that pretty quickly I'm unlikely to
| finish it.
| ForHackernews wrote:
| I don't know if the NSA are the best in the world, but they're
| probably among the best-funded in the world.
|
| Stuxnet is close to a work of art, and the Equation Group's
| related tooling goes even further: the degree of effort
| involved in hiding self-reinstalling malware inside hard-drive
| firmware is staggering.[0] Anyone who's ever tried to write
| Linux drivers for undocumented hardware can appreciate how
| insane it is that they managed to do it for a dozen different
| hard drive manufacturers.
|
| [0] https://www.theverge.com/2015/2/16/8048243/nsa-hard-drive-
| fi...
| moyix wrote:
| Keep in mind that a motivated academic research team was able
| to write a hard drive firmware implant on their own as well:
| https://www.impeachdonaldtrump.net/Implementation.and.Implic.
| ..
|
| But yes, making it work for a bunch of manufacturers would be
| a lot of additional work (though easier if you can obtain the
| data sheets for the embedded microcontroller, which is
| probably doable if you're the NSA).
|
| Side note: don't blame me for the URL, it was just the first
| place I found that had a PDF of the paper.
| secfirstmd wrote:
| Let's not forget: Stuxnet is also 15-ish years old at this
| point.
| ConnorLeet wrote:
| Wasn't the NSA tool leak due to an ex-NSA contractor
| intentionally leaking it?
| goatinaboat wrote:
| _Wasn't the NSA tool leak due to an ex-NSA contractor
| intentionally leaking it?_
|
| No one knows who the Shadow Brokers are, unless that's
| covered later in the book.
| IggleSniggle wrote:
| No one knows! I thought this was a compelling theory that I
| haven't seen mentioned elsewhere:
|
| https://threadreaderapp.com/thread/1224880979258441729.html
| tptacek wrote:
| I listened to a discussion with Perlroth on a podcast where she
| said things that implied strongly that exploits, in general,
| writ large, were leaked NSA tradecraft. I don't have even 1/5th
| the skin in this game Halvar does, and I found it offensive
| enough to yell out loud in my car, just at the implication.
|
| In reality I think it's kind of an embarrassment how much CNE
| technology goes the other way, from industry and research _to_
| the IC. Not for moral reasons (though: I would have ethical
| problems selling bugs to attackers of any sort and am thankful
| I don't produce the kinds of bugs that have this market) so
| much as "it's not that hard to do this work and we should be
| getting more for our tax dollars".
| Animats wrote:
| _" This Is How They Tell Me the World Ends" tackles an important
| question: What causes the vulnerability of our modern world to
| "cyberattacks"?_
|
| What causes that vulnerability? Lack of liability for software
| vendors. If Microsoft had to pay the costs of vulnerabilities,
| we'd have far more secure systems.
|
| There's one area of the industry where companies are held
| financially responsible for their mistakes - gambling systems. A
| few percent of revenue goes to paying for mistakes. For GTech,
| before they were acquired by a non-US company, you could see the
| numbers in their annual report. It's not a killer.
| joe_the_user wrote:
| You need a way to generally enforce an appropriate effort to
| security. This seems extremely important. I don't see
| liability, after-the-fact-punishment, only with software
| vendors, being the way to achieve this.
|
| Numerous threads on HN lately have discussed how "connect
| everything to everything" approaches wind up inherently
| insecure, and how a wide range of organizations have no
| incentive to stop them. Water systems have no need to connect
| to the Internet, even indirectly, and neither do automobile
| remote-start systems, but the organizations managing these
| things have no incentive to take this stuff seriously.
|
| This is what's called an "externality", and a difficult one.
| Externalities are generally best dealt with by direct
| regulation, but security differs from other externalities
| because defining best practices isn't straightforward. I
| could imagine a "security institute", focused only on defense
| and independent enough to be trustworthy to most
| institutions. But I'm not all that optimistic such a thing
| could be created.
| google234123 wrote:
| If MSFT had to pay the cost of vulnerabilities then Windows
| would cost a lot more. Also, who would pay for Linux
| vulnerabilities? Would anyone contribute to open source if they
| were liable for any damage caused by their contribution?
| tptacek wrote:
| Since nobody knows how to reliably ship secure commercial
| software, liability will mostly have the effect of making it
| difficult to start new software businesses. Liability would
| make more sense to me if we could converge on a common
| understanding of a secure development process, but working on
| that has taken up most of my career and I don't think it's even
| on the horizon. The industry is still debating memory safety.
| devonkim wrote:
| We have things like cybersecurity insurance that's required
| to be carried for vendors when working with the government,
| but much greater investment in both private and public
| sectors into secure coding and operational practices (namely
| by making commonly insecure things much more secure) would be
| helpful to at least some trends away from the current
| dominant mentality of "ship it first, ship it fast"
| dominating the software business.
|
| I don't think it makes much sense for a random one-off script
| written by some lone developer starting a company to be
| subjected to all the alphabet soup of regulation, but I don't
| think letting everyone get away with security breaches
| forever is a good idea nor is just throwing up our hands and
| going "oh well, we're going to keep getting break-ins" for
| another 50 years.
| aidenn0 wrote:
| > Since nobody knows how to reliably ship secure commercial
| software, liability will mostly have the effect of making it
| difficult to start new software businesses.
|
| Do you literally mean "nobody" here? As in if I wanted to
| hire you to put a team together to reliably ship secure
| commercial software, you couldn't do it either?
|
| I think we have, as an industry, tacitly agreed that it's
| better to ship cheap, insecure software than it is to ship
| expensive, secure software. A lot of the innovation in
| software has occurred specifically because it is so
| inexpensive.
|
| It seems clear to me that, so far, this tradeoff has been a
| net benefit, but at some point we may want to trade the
| inexpensiveness and innovation for some security,
| particularly as we start to put software into more things
| that can kill and maim us.
| ampdepolymerase wrote:
| Galois, if you are listening, there is a billion dollar
| opportunity for building a provably secure SAP/Salesforce.
| aidenn0 wrote:
| It could quite possibly cost over a billion dollars to make
| a provably secure SAP/Salesforce, and it's not a one-time
| cost since SAP at least is usually customized.
| Veserv wrote:
| That is only a problem if you create mandatory fixed-cost
| liability requirements. You can solve the problem by allowing
| companies to opt-in to liability requirements for some
| benefit. As an example, we could require that any company
| that wants to advertise security must do so in the form of a
| number specifying how much they will pay their customers in
| the event of a breach. So, they would not be able to
| advertise something like "We have a secure cloud database".
| They would instead be required to say "We have a cloud
| database with $10M security" and, in the event of a breach,
| be forced to pay out $10M to their customers.
|
| So, if a company is small and unable to afford any security,
| they can just set the number at $0 which means it is the
| customer's problem. However, if the company is big and people
| have security expectations, they would need to specify a
| reasonable number that would alleviate those concerns. As
| long as the number is clearly communicated and you are not
| allowed to fraudulently or misleadingly advertise a different
| number, then customers would be able to make an informed
| decision with respect to the security level and liability
| they are accepting from their vendors.
|
| Obviously there are some complexities with respect to
| properly communicating this information. For instance, for a
| consumer-facing company you would probably want a per-
| consumer number instead of an aggregate number. As an
| example, if Apple were to claim $300M for the iPhone, that
| might seem like a large number to an average person, but that
| would only amount to ~$1.50 per iPhone sold in a year. You
| also need to prevent misleading advertising that might
| attempt to divert from the quantitative liability they are
| actually accepting. However, the base scheme of mandatory
| labelling requirements along with opt-in liability should
| allow for a solution that is hard to bypass while being
| flexible enough to support both small and large businesses.
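The per-consumer arithmetic in the iPhone example above can be sketched as follows (the $300M payout is the comment's hypothetical, and the ~200M iPhones/year figure is an assumption implied by its ~$1.50-per-unit result):

```python
# Convert an aggregate advertised security payout into a
# per-customer figure, so buyers can compare vendors' opt-in
# liability labels on the same scale. All numbers here are
# illustrative assumptions from the comment, not real data.

def per_customer_liability(advertised_payout: float,
                           units_sold: float) -> float:
    """Per-unit liability implied by an aggregate payout."""
    return advertised_payout / units_sold

# Hypothetical Apple example: $300M across ~200M iPhones/year
per_unit = per_customer_liability(300e6, 200e6)
print(f"${per_unit:.2f} per iPhone")  # prints "$1.50 per iPhone"
```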
| ImprobableTruth wrote:
| What's the issue with requiring a formal spec and being
| liable for deviations from it? Though obviously this would
| have to be restricted to certain areas.
| mcguire wrote:
| " _Liability would make more sense to me if we could converge
| on a common understanding of a secure development process,
| but working on that has taken up most of my career and I don
| 't think it's even on the horizon._"
|
| That seems unlikely to happen as long as it is in commercial
| software businesses' best financial interests that it not
| happen.
| jackpirate wrote:
| Does anyone here have a recommendation on an actually good book
| about cybersecurity policy? My sense is that all existing books
| are trash.
| 1MachineElf wrote:
| This might be a cliche answer, but the (ISC)2 CISSP CBK
| Reference. Among cybersecurity certifications, the CISSP is
| notable for its focus on cybersecurity policy, laws, and
| implementation frameworks.
| SCHiM wrote:
| I like Network Attacks and Exploitation: A Framework by Matthew
| Monte. It makes a clear and comprehensible case about the
| various asymmetries in cybersecurity and gives a detailed
| overview of failed past strategies, and strategies currently
| being tried.
| WrtCdEvrydy wrote:
| I kinda like the Bruce Schneier books but they're mostly
| examples of security failures.
| jkonline wrote:
| > policy (Government, Politics & Diplomacy) a plan of action
| adopted or pursued by an individual, government, party,
| business, etc [1]
|
| Are you genuinely interested in a book on policy? (If not, I
| might have some recommendations, depending on what you're
| interested in).
|
| If so, you might need to lower your expectations
| considerably. I honestly can't imagine a book about policy
| that isn't trash, but maybe that's just me.
|
| [1]: freedictionary.com
| jmuguy wrote:
| Not exactly what you asked for but I always recommend The
| Cuckoo's Egg by Cliff Stoll whenever hacking comes up. And he
| deals with the early days of computer networks when there
| basically was no cybersecurity, policy or otherwise.
| nielsbot wrote:
| It's sort of driving me crazy that "Add" isn't capitalized like
| the other ops: "ADD"
| edflsafoiewq wrote:
| HN's title mangler strikes again?
___________________________________________________________________
(page generated 2021-02-25 23:01 UTC)