[HN Gopher] Arm Introduces Its Confidential Compute Architecture
___________________________________________________________________
Arm Introduces Its Confidential Compute Architecture
Author : Symmetry
Score : 111 points
Date : 2021-06-23 14:10 UTC (8 hours ago)
(HTM) web link (fuse.wikichip.org)
(TXT) w3m dump (fuse.wikichip.org)
| Symmetry wrote:
| SemiAccurate has a more detailed explanation.
|
| https://www.semiaccurate.com/2021/06/23/arm-talks-security-c...
| Scene_Cast2 wrote:
| I see that there's a lot of mindshare on privacy and "computing
| trust" these days. There are several approaches I've noticed.
| One is to never send data to untrusted places (e.g. a server);
| another is to use a trust chain & hardware enclave (e.g. via
| SGX or TrustZone); and a third is to encrypt the data (e2e
| encryption for transport only, or homomorphic encryption (slow)
| / distributed encryption for doing operations on the data).
| Currently, verifiable OSS on the server isn't in vogue (in
| practice or in the literature), but it could be another
| approach.
| formerly_proven wrote:
| > I see that there's a lot of mindshare on privacy and
| "computing trust" these days.
|
| This is the opposite of respecting privacy (unless you mean,
| e.g., the privacy of DRM code); all of this stuff is "trusted"
| as in TCG (Trusted Computing Group).
| Scene_Cast2 wrote:
| Yep. Privacy is a heavily overloaded term; I meant it as a
| more industry-style "data access management". Some comments
| in the other thread do talk about non-DRM uses.
| bawolff wrote:
| https://signal.org/blog/private-contact-discovery/ seems like
| an application of SGX where net privacy is increased.
| dane-pgp wrote:
| Or, that looks like a single point of failure that would
| make a good target for the government to issue a National
| Security Letter to Intel for.
| mindslight wrote:
| It's a societal dead end, so there's no technical excitement.
| The same technology that would let you verify that a cloud
| server is running code you specify would also be used by
| websites to demand that your personal computing environment
| conform to their whims.
|
| Imagine websites that required proprietary Google Chrome on
| Win/OSX on bare metal - no VM, no extensions, no third party
| clients, no automation, etc. Imagine mandatory ads, unsaveable
| images, client-side form verification, completely unaccountable
| code, etc.
|
| _Protocols_ are the proper manner of allowing unrelated
| parties to transact at arm's length, and technologies such as
| remote attestation would completely destroy that.
| contingencies wrote:
| Is this going to be perverted as a way to grow DRM on the device
| and have it owned by the vendor instead of the user?
| blendergeek wrote:
| No.
|
| In order for this to be "perverted as a way to grow DRM", it
| would need some other main function.
|
| The main purpose of this is to allow vendors, rather than
| users, to control personal computers.
|
| It will be great for finally allowing truly unskippable ads,
| ads that track your eyeballs to make sure you are looking, text
| that cannot be copied, etc.
| contingencies wrote:
| Sounds like the way things are going, open systems are going
| to be effectively illegal soon. Tencent/Apple/Google (TAG)
| will arrange that if you're not on an approved 'secure'
| device you are 'suspect', and slowly find ways to integrate
| this into governance "for social benefit". At that point, if
| you're not 100% on the system (read: surveilled 24x7x365 -
| voice assistant anyone? - with a credit card/bank account,
| state-ID and consumer profile associated) you'll be
| penalized: your views will be impossible to share (it's
| almost that way already), and you'll be incrementally
| excluded or dissuaded from travel and transport, financial
| services, educational opportunities, events, job markets,
| etc.
|
| To resist this situation a device manufacturer could emerge
| with a privacy-first experience and solutions to proxy just
| the right amount of legitimacy to the outside systems (PSTN,
| payment, etc.) with an international legal structure
| optimised for privacy, akin to what wealthy people do
| already. A sort of anti walled-garden shared infrastructure.
| Technically I could see an offering providing mobile # to
| VOIP forwarding, data-only phone with mesh networking > wifi
| connectivity preference, MAC shuffling, darknet/VPN mixing,
| payment proxy, global snailmail proxy network, curated (well
| tested for pop sites) lightweight browser with reduced attack
| surface and a mobile network only as a last-resort
| connectivity paradigm (with dynamic software SIM and network
| operator proxying). Open source of course. Issue being,
| infrastructure would be expensive to offer and if it were
| ever popular there'd be political pushback. I guess privacy
| will increasingly be the exclusive domain of the rich.
|
| We've already lost.
| wmf wrote:
| Where is the memory encryption done? The L2? The DSU?
| kijiki wrote:
| Intel and AMD do it in the memory controller. The caches, by
| virtue of being entirely on-die, are assumed to be
| inaccessible.
|
| It is unclear what AMD intends to do here for their
| TSV-stacked mega-cache thing. Perhaps they'll declare the TSVs
| not practically snoopable, similar to how on-die metal-layer
| wires are treated now...
| wmf wrote:
| My concern is less about the die boundary and more about
| vendor boundaries. In an Arm-based SoC, with potentially a
| bunch of different IP blocks all sharing the coherent fabric,
| do you want to bet that all of that IP will correctly
| implement CCA? Probably not.
| kijiki wrote:
| A legit concern. Basically everything that connects
| directly to the cache controllers has to get CCA right.
| Coherent GPUs are the big, complicated ones.
|
| Intel had an SGX bug where the GPU could access the last 16
| bytes of every 64-byte cache line. If you need the GPU
| enabled, you have no choice but to design your enclave to
| only use the first 48 bytes of each cache line. Fortunately
| if you don't need the GPU, whether or not the GPU is
| enabled (among other things) is measured and included in
| the attestation quote, so clients can simply choose to
| refuse to talk to servers with the GPU enabled...
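|
| A minimal layout sketch of that workaround (illustrative
| Python; LINE, USABLE, and the helper functions are made-up
| names, not anything from the SGX SDK):
|
|     LINE = 64     # cache line size in bytes
|     USABLE = 48   # bytes per line assumed out of the GPU's reach
|
|     def physical_offset(logical: int) -> int:
|         """Map a logical byte index into a layout that leaves
|         the last (LINE - USABLE) bytes of each line unused."""
|         line, byte = divmod(logical, USABLE)
|         return line * LINE + byte
|
|     def padded_size(n: int) -> int:
|         """Smallest buffer that holds n logical bytes."""
|         return physical_offset(n - 1) + 1 if n else 0
|
|     # Logical bytes 0..47 fill line 0; byte 48 starts line 1.
|     assert physical_offset(47) == 47
|     assert physical_offset(48) == 64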
| api wrote:
| Seems like this and similar ideas are a "poor man's fully
| homomorphic CPU." The idea is that the CPU has some kind of
| embedded key that you can use to send data into a trust enclave,
| perform secret compute on it, and export that data in encrypted
| form back out such that nobody should be able to see what
| happened... assuming you trust the vendor and your adversary does
| not have extreme NSA-level hardware forensic capabilities and
| physical access to the hardware.
|
| Honestly it's not bad until or unless we get actually fast FHE
| chips. I think we're a ways away from that.
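|
| A conceptual sketch of that flow (Python, with ordinary
| symmetric crypto standing in for the CPU's embedded key;
| nothing here is a real SGX/CCA API):
|
|     from cryptography.fernet import Fernet
|
|     # This key would be derived from the CPU's embedded
|     # secret and live only inside the enclave.
|     enclave_key = Fernet(Fernet.generate_key())
|
|     def enclave_compute(blob: bytes) -> bytes:
|         """Runs inside the enclave: decrypt, compute,
|         re-encrypt."""
|         data = enclave_key.decrypt(blob)
|         result = data.upper()    # the "secret compute" step
|         return enclave_key.encrypt(result)
|
|     # In reality the client would encrypt to a key obtained
|     # via remote attestation; the untrusted host only ever
|     # handles ciphertexts.
|     blob_in = enclave_key.encrypt(b"sensitive input")
|     blob_out = enclave_compute(blob_in)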
| ajb wrote:
| Although it requires more trust, this also has more
| functionality than FHE, in that it allows you to deliberately
| release selected information out of the enclave. For example,
| you can check a signature. With FHE, the boolean indicating
| that the signature was valid can only be revealed to someone
| holding the key of the FHE scheme. FHE is not a fully generic
| way of using third-party hardware in a trustworthy way.
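|
| A toy illustration of that difference (Python with the
| 'cryptography' package; the enclave framing is hypothetical):
|
|     from cryptography.exceptions import InvalidSignature
|     from cryptography.hazmat.primitives.asymmetric import ed25519
|
|     signer = ed25519.Ed25519PrivateKey.generate()
|     msg = b"hello"
|     sig = signer.sign(msg)
|
|     def enclave_check(message: bytes, signature: bytes) -> bool:
|         """Runs inside the enclave; it can choose to release
|         just this one bit in the clear. Under FHE the result
|         would itself be a ciphertext that only the FHE key
|         holder could open."""
|         try:
|             signer.public_key().verify(signature, message)
|             return True
|         except InvalidSignature:
|             return False
|
|     assert enclave_check(msg, sig) is True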
| formerly_proven wrote:
| > ARMv9 will power the next 300 billion ARM-based devices
|
| Stuff like this makes me nauseous. This is not a good thing. Stop
| it.
| AnimalMuppet wrote:
| Why is it not good?
| zaybqp wrote:
| Maybe it is the unsustainable environmental impacts of never-
| ending new gadgets?
|
| https://www.hpe.com/us/en/insights/articles/top-6-environmen...
| qayxc wrote:
| It means roughly 38 _new_ devices per living human being
| (300 billion / ~7.9 billion people).
|
| That indeed can't be good.
| staticassertion wrote:
| I really like the idea of secure enclaves. I'd like to use them.
| The problem I have is that it's unclear to me:
|
| a) How the hell am I supposed to do so? It seems fairly arcane
| and in need of some higher-level abstractions. fortanix[0]
| seems like it could be good here?
|
| b) What the implications are. What's it like to maintain an
| enclave? What do I lose in terms of portability, debug-ability,
| etc?
|
| It reminds me of GPUs to some extent - it's this thing in my
| computer that's super cool, and every time I think about using
| it, it looks like a pain in the ass.
| gendal wrote:
| My team has built Conclave, which might be interesting:
| https://docs.conclave.net. The idea is to 1) make it possible
| to write enclaves in high-level languages (we've started with
| JVM languages), and 2) make the remote attestation process as
| seamless as possible.
|
| The first part is what most people fixate on when they first
| look at Conclave. But an equally important thing is actually
| the second part - remote attestation.
|
| The thing a lot of people seem to miss is that for most non-
| mobile-phone use-cases, running code inside an enclave is only
| really valuable if there is a _user_ somewhere who needs to
| interact with it and who needs to be able to reason about what
| will happen to their information when they send it to the
| enclave.
|
| So it's not enough to write an enclave; you also have to "wire"
| it to the users, who will typically be different
| people/organisations from the organisation that is hosting the
| enclave. And there needs to be an intuitive way for them to
| encode their preferences - e.g. "I will only connect to an
| enclave that is running this specific code (that I have
| audited)" or "I will only connect to enclaves that have been
| signed by three of the following five firms whom I trust to
| have verified the enclave's behaviour"... that sort of thing.
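|
| A hypothetical sketch of such a constraint check (this is not
| Conclave's actual API; the report fields and names below are
| invented for illustration):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class AttestationReport:
|         code_measurement: str  # hash of the enclave code
|         auditor_sigs: set      # auditors vouching for it
|
|     AUDITED = "sha256:..."    # the code I audited myself
|     TRUSTED = {"firmA", "firmB", "firmC", "firmD", "firmE"}
|
|     def acceptable(report: AttestationReport) -> bool:
|         if report.code_measurement == AUDITED:
|             return True       # exactly the code I trust
|         # ...or three of five trusted firms vouched for it:
|         return len(report.auditor_sigs & TRUSTED) >= 3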
| staticassertion wrote:
| Is a user necessary? I feel like one thing I'd use an enclave
| for is as a signing oracle for service-to-service
| communications.
|
| Like I have service A and service B. A is going to talk to B,
| and has some secret that identifies it (maybe a private key
| for mTLS). I'd like for A to be able to talk to B without
| having access to that secret - so it would pass a message
| into the enclave, get a signed message out of it, and then
| proceed as normal.
|
| Would that not be reasonable? Or I guess maybe I'd want to
| attest that the signing service is what I expect?
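|
| Something like this, conceptually (a Python sketch - the class
| just stands in for code inside the enclave, and a real
| deployment would also attest it):
|
|     from cryptography.hazmat.primitives.asymmetric import ed25519
|
|     class SigningOracle:
|         """Holds the identity key; service A never sees it."""
|         def __init__(self) -> None:
|             self._key = ed25519.Ed25519PrivateKey.generate()
|
|         def sign(self, message: bytes) -> bytes:
|             # The key never leaves; only signatures do.
|             return self._key.sign(message)
|
|     oracle = SigningOracle()   # runs inside the enclave
|     sig = oracle.sign(b"A -> B: request 42")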
| jacobr1 wrote:
| > Or I guess maybe I'd want to attest that the signing
| service is what I expect?
|
| Exactly. If you have a threat model where you want to limit
| access to your secrets to a specific code path, you need to
| attest that only specific, signed code that can access the
| secrets is running within the enclave. You might only need
| this to satisfy your own curiosity, but in practice it is
| probably something you need to prove to your internal security
| team, a third-party auditor, or even directly to a customer.
| amicin wrote:
| > How the hell I'm supposed to do so?
|
| You might want to check out Parsec (also from Arm).
|
| https://developer.arm.com/solutions/infrastructure/developer...
| pram wrote:
| I use the T2 SE on my Mac to generate/store ssh keys. The
| private key never leaves the enclave. That's a pretty neat,
| functional example.
| ticviking wrote:
| Do you know of a good tutorial or documentation on how to do
| that?
| pram wrote:
| I use this:
|
| https://github.com/maxgoedjen/secretive
| staticassertion wrote:
| Awesome, thank you.
| cma wrote:
| > The private key never leaves the enclave.
|
| ..
|
| > While Secretive uses the Secure Enclave for key
| storage, it still relies on Keychain APIs to access them.
| Keychain restricts reads of keys to the app (and
| specifically, the bundle ID) that created them.
|
| Doesn't that mean the keys leave the enclave?
| pram wrote:
| The Keychain API is how you interface with the enclave's
| key functions. It's just saying there's a separate set of
| permissions for using the generated key.
|
| https://developer.apple.com/documentation/security/certifica...
|
| "When you store a private key in the Secure Enclave, you
| never actually handle the key, making it difficult for
| the key to become compromised. Instead, you instruct the
| Secure Enclave to create the key, securely store it, and
| perform operations with it. You receive only the output
| of these operations, such as encrypted data or a
| cryptographic signature verification outcome."
| astrange wrote:
| I think they meant to say "use" not "read".
| Scene_Cast2 wrote:
| I'm not sure if the push for enclaves on phones is really meant
| for _you_ per se. IIRC one of the more talked-about use cases
| is for DRM'd content (e.g. software that you can't crack,
| banking apps with more resistance to malware, etc).
| drivebycomment wrote:
| While it is often useful to understand what the motivating
| goal was for the design of a new system like secure enclaves
| and what the practitioners primary concerns are, in the end
| it doesn't actually matter what the designers intender.
| Rather, what matters is what it actually does and provides.
| As long as the new functionality can be used for other useful
| purposes that are still within the design goals and
| parameters, it would be useful. Tool doesn't care what the
| creator intended - it's whether the tool is useful for
| whatever purpose it can be used for.
|
| Thus I find most analysis and comments like this, based only
| on motivation and incentives, very lacking.
|
| Add to that, to be more specific to this particular topic,
| secure enclaves are designed not only for DRM but for many
| other critical applications (that are actually far more
| important than DRM and are/were the key motivating use cases)
| - it, or the general concept, is the basis of the security
| guarantee for iPhone's fingerprint or face ID, or the
| confidentiality of the key materials in various end-to-end
| encryption, which allows things like the-phone-as-a-security-
| key.
| Scene_Cast2 wrote:
| The intention is quite important for estimating which
| workflows would be easy and what a response could be, given
| an atypical use case. (E.g. if this has a backdoor, when
| would it be used?)
| staticassertion wrote:
| True, but I'm assuming that this is all moving towards
| enclaves being more standard and cross platform.
| count wrote:
| Given the audience of this site... who do you think writes
| those apps?
| dmulligan wrote:
| There are a lot of projects working in this area to make
| enclaves easier to manage and deploy, e.g. Veracruz [0], which
| I work on.
|
| [0]: https://github.com/veracruz-project/veracruz
___________________________________________________________________
(page generated 2021-06-23 23:01 UTC)