[HN Gopher] Security research on Private Cloud Compute
___________________________________________________________________
Security research on Private Cloud Compute
Author : todsacerdoti
Score : 134 points
Date : 2024-10-24 17:36 UTC (5 hours ago)
(HTM) web link (security.apple.com)
(TXT) w3m dump (security.apple.com)
| dewey wrote:
| Looks like they are really writing everything in Swift on the
| server side.
|
| Repo: https://github.com/apple/security-pcc
| tessela wrote:
| I hope this helps people consider Swift 6 as a viable option
| for server-side development: it offers many of Rust's modern
| safety features, with memory management through ARC that is
| simpler than Rust's ownership system and more predictable than
| Go's garbage collector.
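|
| For example, Swift 6's strict concurrency turns data races into
| compile-time errors. A minimal sketch (not from the PCC repo,
| just an illustration):
|
|     // An actor serializes access to its state: no locks, no
|     // data races, enforced by the compiler.
|     actor Counter {
|         private var value = 0
|         func increment() -> Int {
|             value += 1
|             return value
|         }
|     }
|
|     // Top-level code in main.swift; 100 concurrent tasks.
|     let counter = Counter()
|     await withTaskGroup(of: Void.self) { group in
|         for _ in 0..<100 {
|             group.addTask { _ = await counter.increment() }
|         }
|     }
|     print(await counter.increment())  // prints 101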
| danielhep wrote:
| Is using something other than Xcode viable? I'd love to do more
| with Swift but I hate that IDE.
| selectodude wrote:
| VS Code?
|
| https://marketplace.visualstudio.com/items?itemName=sswg.swi.
| ..
| dewey wrote:
| Most editors will do, Xcode is mostly needed for iOS / macOS
| development if you want to submit to the App Store or work
| with a lot of Apple frameworks.
| iforgotmysocks wrote:
| There is a Swift LSP. See -
| https://github.com/swiftlang/sourcekit-lsp
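|
| It works out of the box with SwiftPM projects, so a minimal
| Package.swift (package name is just a placeholder) is all most
| editors need:
|
|     // swift-tools-version:5.9
|     import PackageDescription
|
|     let package = Package(
|         name: "HelloServer",  // placeholder name
|         targets: [
|             .executableTarget(name: "HelloServer")
|         ]
|     )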
| kfreds wrote:
| Wow! This is great!
|
| I hope you'll consider adding witness cosignatures on your
| transparency log though. :)
| mmastrac wrote:
| I feel like this is all smoke and mirrors to redirect attention
| from the likelihood of intentional silicon backdoors that are
| effectively undetectable. Without open silicon, there's no way
| to detect that
| -- say -- when registers r0-rN are set to values [A, ..., N] and
| a jump to address 0xCONSTANT occurs, additional access is granted
| to a monitor process.
|
| Of course, this limits the potential attackers to 1) exactly one
| government (or an N-eyes alliance) or 2) one company, but
| there's really no way that you can trust remote hardware.
|
| This _does_ increase the trust that the VMs are safe from other
| attackers, but I guess this depends on your threat model.
| yalogin wrote:
| This is an interesting idea. However, what does open hardware
| mean? How can you prove that the design or architecture that
| was "opened" is actually what was built? What does the
| attestation even mean in this scenario?
| dented42 wrote:
| This is my thought exactly. I really love the idea of open
| hardware, but I don't see how it would protect against covert
| surveillance. What's stopping a company/government/etc. from
| adding surveillance to an open design? How would you
| determine that the hardware being used is identical to the
| open hardware design? You still ultimately have to trust that
| the organisations involved in
| manufacturing/assembling/installing/operating the hardware in
| question haven't done something nefarious. And that brings us
| back to square one.
| mdhb wrote:
| This website in particular tends to get very upset and is
| all too happy to point out irrelevant counter-examples every
| time I point this out, but the ground truth of the matter is
| that you aren't going to find yourself on a US intel
| targeting list by accident, and unless you are doing
| something incredibly stupid you can use Apple / Google cloud
| services without a second thought.
| kfreds wrote:
| > How would you determine that the hardware being used is
| identical to the open hardware design?
|
| FPGAs can help with this. They allow you to inspect the
| HDL, synthesize it and configure it onto the FPGA chip
| yourself. The FPGA chip is still proprietary, but by using
| an FPGA you are making certain supply chain attacks harder.
| warkdarrior wrote:
| How do you know the proprietary part of the FPGA chip
| performs as expected and does not covertly gather data
| from the configured gates?
| kfreds wrote:
| > How do you know the proprietary part of the FPGA chip
| performs as expected and does not covertly gather data
| from the configured gates?
|
| We don't, but using an FPGA can make supply chain attacks
| harder.
|
| Let's assume you have a chip design for a microcontroller
| and you do a tapeout, i.e. you have chips made. An
| attacker in your supply chain might alter your chip
| design before it reaches the fab, sit at the fab itself,
| or swap out the chips after you've placed them on your
| PCB.
|
| If you use an FPGA, your customer could stress test the
| chip by configuring a variety of designs onto the FPGA.
| These designs should stress test timing, compute and
| memory at the very least. This requires the attacker's
| chip to perform at least as well as the FPGA you're
| using, while still having the same footprint. An attacker
| might stack the real FPGA die on top of the attacker's
| die, but such an attack is much easier to detect than a
| few malicious gates on a die. As for covertly gathering
| or manipulating data, on an FPGA you can choose where to
| place your cores. That makes it harder for the attacker
| to predict where on the FPGA substrate they should place
| probes, or which gates to attack in order to attack your
| TRNG, or your master key memory. Those are just some
| examples.
|
| If you're curious about this type of technology or line
| of thinking you can check out the website of one of my
| companies: tillitis.se
| kfreds wrote:
| > what does open hardware mean?
|
| Great question. Most hardware projects I've seen that market
| themselves as open source hardware provide the schematic and
| PCB design, but still use ICs that are proprietary. One of my
| companies, Tillitis, uses an FPGA as the main IC, and we
| provide the hardware design configured on the FPGA. Still,
| the FPGA itself is proprietary.
|
| Another aspect to consider is whether you can audit and
| modify the design artefacts with open source tooling. If the
| schematics and PCB design are stored in a proprietary format
| I'd say that's slightly less open source hardware than if the
| format were KiCad EDA, which is open source. Similarly, in
| order to configure the HDL onto the FPGA, do you need to use
| 50 GB of proprietary Xilinx tooling, or can you use open
| tools for synthesis, place-and-route, and configuration? That
| also affects the level of openness, in my opinion.
|
| We can ask similar questions of open source software. People
| who run a Linux distribution typically don't compile packages
| themselves. If those packages are not reproducible from
| source, in what sense is the binary open source? It seems we
| consider it to be open source software because someone we
| trust claimed it was built from open source code.
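|
| Checking reproducibility is conceptually simple: rebuild from
| source and compare digests. A minimal Swift sketch (file paths
| are hypothetical):
|
|     import CryptoKit
|     import Foundation
|
|     // A build is reproducible if an independent rebuild hashes
|     // to the same digest as the shipped binary.
|     func isReproducible(rebuilt: String, shipped: String) throws -> Bool {
|         let a = try Data(contentsOf: URL(fileURLWithPath: rebuilt))
|         let b = try Data(contentsOf: URL(fileURLWithPath: shipped))
|         return SHA256.hash(data: a) == SHA256.hash(data: b)
|     }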
| threeseed wrote:
| And what attestation do you have that the FPGA isn't
| compromised?
|
| We can play this game all the way down.
| kfreds wrote:
| You're right. It is very hard, if not impossible, to get
| absolute guarantees. Having said that, FPGAs can make
| supply chain attacks harder. See my other comments in
| this thread.
| SheinhardtWigCo wrote:
| Yeah, but, considering the sheer complexity of modern CPUs and
| SoCs, this is still the case even if you have the silicon in
| front of you. That ship sailed some time ago.
| kfreds wrote:
| > I feel like this is all smoke and mirrors to redirect from
| the likelihood intentional silicon backdoors that are
| effectively undetectable.
|
| The technologies Apple PCC is using have real benefits and are
| most certainly not "all smoke and mirrors". Reproducible
| builds, remote attestation and transparency logging are
| individually useful, and the combination of them even more so.
|
| As for the likelihood of Apple launching Apple PCC to redirect
| attention from backdoors in their silicon, that seems extremely
| unlikely. We can debate how unlikely, but there are many far
| more likely explanations. One is that Apple PCC is simply good
| business. It'll likely reduce security costs for Apple, and
| strengthen the perception that Apple respects users' privacy.
|
| > when registers r0-rN are set to values [A, ..., N] and a jump
| to address 0xCONSTANT occurs
|
| I would recommend something more deniable, or at the very least
| something that can't easily be replayed. Put a challenge-
| response in there, or attack the TRNG. It is trivial to make a
| stream of bytes appear random while actually being
| deterministic. Such an attack would be more deniable, while
| also allowing a passive network attacker to read all user data.
| No need to get code execution on the machines.
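|
| A sketch of what I mean, in Swift: a "TRNG" whose output is
| SHA-256 in counter mode over a secret seed. The stream passes
| statistical tests, yet anyone holding the seed can reproduce
| every byte:
|
|     import CryptoKit
|     import Foundation
|
|     // Looks random to everyone except whoever knows `seed`.
|     func backdooredRandom(seed: Data, block: UInt64) -> Data {
|         var input = seed
|         withUnsafeBytes(of: block.bigEndian) {
|             input.append(contentsOf: $0)
|         }
|         return Data(SHA256.hash(data: input))  // 32 "random" bytes
|     }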
| formerly_proven wrote:
| Apple forgot to disable some cache debugging registers a
| while back, which in effect was similar to what the GP
| described, although exploitation required root privileges and
| would allow circumventing their in-kernel protections;
| protections most other systems do not have. (And the
| attackers still didn't manage to achieve persistence, despite
| having beyond-root privileges.)
| kfreds wrote:
| > Apple forgot to disable some cache debugging registers a
| while back, which in effect was similar to what the GP
| described
|
| Thank you for bringing that up. Yes, it is an excellent
| example that proves the existence of silicon
| vulnerabilities that allow privilege escalation. Who knows
| whether it was left there intentionally or not, and if so
| by whom.
|
| I was primarily arguing that (1) the technologies of Apple
| PCC are useful and (2) it is _very_ unlikely that Apple PCC
| is a ploy by Apple to direct attention away from backdoors
| in the silicon.
| password4321 wrote:
| 20231227 https://news.ycombinator.com/item?id=38783112
| Operation Triangulation: What you get when attack iPhones
| of researchers
|
| 20231229 https://news.ycombinator.com/item?id=38801275
| Kaspersky discloses iPhone hardware feature vital in
| Operation Triangulation
| greenthrow wrote:
| If this is your position then you might as well stop using any
| computing devices of any kind. Which includes any kind of smart
| devices. Since you obviously aren't doing that, then you're
| trying to hold Apple to a standard you won't even follow
| yourself.
|
| On top of which, your comment is a complete non-sequitur to the
| topic at hand. You could reply with this take to literally any
| security/privacy related thread.
| ryandv wrote:
| There's been some limited research in this space; see for
| instance xoreaxeaxeax's sandsifter tool which has found
| millions of undocumented processor instructions [0].
|
| [0] https://www.youtube.com/watch?v=ajccZ7LdvoQ
| brokenmachine wrote:
| Relevant:
|
| 37C3 - Operation Triangulation: What You Get When Attack
| iPhones of Researchers
| https://www.youtube.com/watch?v=1f6YyH62jFE
|
| Absolutely insane attack. Really opens your eyes to what
| nation-state attackers are capable of.
| kmeisthax wrote:
| The economics of silicon manufacturing and Apple's own security
| goals (including the security of their business model) restrict
| the kinds of backdoors you can embed in their servers at that
| level.
|
| Let's assume Apple has been compromised in some way and
| releases new chips with a backdoor. It's expensive to insert
| extra logic into just one particular spin of a chip; that
| involves extra tooling costs that would show up as noticeable
| line items and surface in discovery were Apple to be sued
| over their false claims. So it needs to be on all the chips,
| not just a
| specific "defeat PCC" spin of their silicon. So they'd be
| shipping iPads and iPhones with hardware backdoors.
|
| What happens when those backdoors inevitably leak? Well, now
| you have a trivial jailbreak vector that Apple can't patch.
| Apple's security model could be roughly boiled down to "our DRM
| is your security"; while they also have lots of actual
| security, they pride themselves on the fact that they have an
| economic incentive to lock the system down to keep both bad
| actors and competing app stores out. So if this backdoor was
| inserted without the knowledge of Apple management, there are
| going to be heads rolling. And if it was, then they're going to
| be sued up the ass once people realize the implications of such
| a thing, because Tim Cook went up on stage and promised
| everyone they were building servers that would refuse to let
| them read your Siri queries.
| threeseed wrote:
| You can't be serious here.
|
| The level of conspiracy needed to keep something like this a
| secret would be unprecedented.
|
| And if Apple was able to do that, why wouldn't they just
| backdoor iOS/OSX instead of baking it into the hardware?
| ngneer wrote:
| How is this different than a bug bounty?
| davidczech wrote:
| Similar, but a lot of documentation is provided, source code
| for cross-reference, and a VM-based research environment
| instead of applying for a physical security research device.
| alemanek wrote:
| Well, they are providing a dedicated environment from which to
| attack their infrastructure. But they also have a section
| called "Apple Security Bounty for Private Cloud Compute" in
| the linked article, so this is a bug bounty plus additional
| goodies to help you test their security.
| floam wrote:
| There is a bug bounty too, but the ability to run the same
| infrastructure, OS, and models locally is big.
| kfreds wrote:
| I've been working on technology like this for the past six years.
|
| The benefits of transparent systems are likely considerable. The
| combination of reproducible builds, remote attestation and
| transparency logging allows trivial detection of a range of
| supply chain attacks. It can allow users to retroactively audit
| the source code of remote running systems. Yes, there are attacks
| that the threat model doesn't protect against. That doesn't mean
| it isn't immensely useful.
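|
| The gist of the combination, as a Swift sketch (the names are
| hypothetical stand-ins, not Apple's actual API):
|
|     import CryptoKit
|     import Foundation
|
|     // Trust a release only if (1) our reproducible build hashes
|     // to the measurement the remote machine attested, and (2)
|     // that measurement appears in a public transparency log, so
|     // everyone can see the same release we do.
|     func trustRelease(localBuild: Data,
|                       attestedMeasurement: SHA256.Digest,
|                       logIncludes: (SHA256.Digest) -> Bool) -> Bool {
|         let expected = SHA256.hash(data: localBuild)
|         return expected == attestedMeasurement && logIncludes(expected)
|     }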
| aabhay wrote:
| A lot of people seem to be focusing on how this program isn't
| sufficient as a guarantee, but those people are missing the
| point.
|
| The real value of this system is that Apple is making legally
| enforceable claims about their system. Shareholders can, and do,
| sue companies that make inaccurate claims about their
| infrastructure.
|
| I'm 100% sure that Apple's massive legal team would never let
| this kind of program exist if _they_ weren't also confident in
| these claims. And a legal team at Apple certainly has both
| internal and external obligations to verify these claims.
|
| America's legal system is in my opinion what allows the US to
| dominate economically, creating virtuous cycles like this.
| gigel82 wrote:
| No amount of remote attestation and "transparency logs" and other
| bombastic statements like this would make up for the fact that
| they are fully in control of the servers and the software. There
| is absolutely no way for a customer to verify their claims that
| the data is not saved or transferred elsewhere.
|
| So unless they offer a way for us to run the "cloud services" on
| our own hardware where we can strictly monitor and firewall all
| network activity, they are almost guaranteed to be misusing that
| data, especially given Apple's proven track record of giving in
| to governments' demands for data access (see China).
| kfreds wrote:
| > No amount of remote attestation and "transparency logs" and
| other bombastic statements like this would make up for the fact
| that they are fully in control of the servers and the software.
| There is absolutely no way for a customer to verify their
| claims that the data is not saved or transferred elsewhere.
|
| You are right. Apple is fully in control of the servers and the
| software, and there is no way for a customer to verify Apple's
| claims. Nevertheless, system transparency is a useful concept.
| It can effectively reduce the number of things you have to
| blindly trust to a short and explicit list, and it forces the
| operator, in this case Apple, to explicitly lie rather than
| quietly misbehave. As others have pointed out, that is quite a
| business risk.
|
| As for transparency logs, they are an amazing technology which
| I can highly recommend you take a look at in case you don't
| know what they are or how they work. Check out transparency.dev
| or the project I'm involved in, sigsum.org.
|
| > they are almost guaranteed to be misusing that data
|
| That is very unlikely because of the liability, as others have
| pointed out. They are making claims which the Apple PCC
| architecture helps make falsifiable.
___________________________________________________________________
(page generated 2024-10-24 23:00 UTC)