[HN Gopher] A cryptographically secure bootloader for RISC-V in ...
       ___________________________________________________________________
        
       A cryptographically secure bootloader for RISC-V in Rust
        
       Author : fork-bomber
       Score  : 121 points
       Date   : 2024-08-05 14:18 UTC (8 hours ago)
        
 (HTM) web link (www.codethink.co.uk)
 (TXT) w3m dump (www.codethink.co.uk)
        
       | greenavocado wrote:
       | Congratulations on the development. Part of me is concerned that
       | this will be used to push devices that cannot be unlocked and
       | tinkered with by end users, reducing their technological freedom.
        
         | mouse_ wrote:
         | That's what it's for, make no mistake.
         | 
         | Those who sacrifice liberty for safety will receive and deserve
         | neither.
        
           | gjsman-1000 wrote:
            | Reminder that that quote is so out of context that its
            | actual intended meaning is wildly different.
           | 
           | https://techcrunch.com/2014/02/14/how-the-world-butchered-
           | be...
           | 
           | https://news.ycombinator.com/item?id=5268899
           | 
           | https://www.npr.org/2015/03/02/390245038/ben-franklins-
           | famou...
        
             | mouse_ wrote:
              | I am sorry if I have caused any misunderstanding; the point
              | I was trying to make is that ownership, nebulous a concept
              | as it is, must be protected: whoever has physical access to
              | a device ought to be able to do whatever they want with it.
              | To imply that bootloader locking is not primarily used to
              | restrict what the owner of a device can do with it is
              | disingenuous at best. I fundamentally disagree with
              | bootloader locks in general, on the grounds that the harm
              | corporations have done with them has historically far
              | outweighed the real-life security threats they actually
              | protect users against. I understand this may be a
              | controversial viewpoint, but personally I feel (opinion)
              | that the erosion of ownership is one of the most important
              | issues we face today.
        
         | RsmFz wrote:
         | That'll happen with permissive licenses
        
         | oconnor663 wrote:
         | I'm sure that's part of the story, but there's tons of boring
         | old company/government equipment out there in the world that
         | wants secure boot too.
        
         | Aurornis wrote:
         | The purpose of this bootloader is to avoid executing malicious
         | code sent over the internet, such as by a MITM attack.
         | 
         | The author explains that it does not attempt to defend against
         | hardware attacks or attempts to replace the bootloader:
         | 
         | > SentinelBoot's threat model focuses on thin client devices
         | which do not store their own OS and over-the-air updates (e.g.
         | how phones are updated): both of these cases involve executable
         | code being sent over a network, usually the internet. We ignore
         | the risk of direct hardware modification, as an attacker can
         | just swap out the bootloader (making any potential defence
         | implemented by SentinelBoot in vain).
        
         | almatabata wrote:
         | That shift has already started in various areas.
         | 
          | You see it with phones, but also with cars, where OEMs require
          | Secure Boot-enabled devices whenever possible. This ranges from
          | the central unit down to all the small ECUs.
          | 
          | You can see it pushed as the default mode for desktops too, but
          | at least there, for now, you usually have a way to disable it.
          | 
          | For embedded devices, though, there's usually no way to disable
          | it at all. I think a compromise would be to provide a physical
          | method to disable it, where disabling it clears all the DRM
          | keys and other secrets. That way most users are still protected
          | against malicious updates, and tinkerers can do what they want
          | once the company stops supporting the device.
        
         | talldayo wrote:
         | This was going to happen regardless. I believe Nvidia's RISC-V
         | coprocessor ships with hardware fuses that serve more-or-less
         | the same purpose.
         | 
         | If anything, it just makes me glad that RISC-V also has specs
         | for UEFI-like interfaces.
        
           | bri3d wrote:
           | Many (most) devices with secure boot have hardware fuses, but
           | the software that reads them is usually broken. Rust and an
           | eye towards sound cryptographic primitives (especially
           | against side channels) will definitely go a distance towards
           | protecting against this, although logic bugs are also quite
           | common.
           | 
           | This bootloader doesn't actually seem to be a real secure /
           | trusted boot implementation anyway, just a thing that
           | verifies updates.
        
             | throwawaymaths wrote:
              | What is the story with Rust and cryptographic side
              | channels? I imagine the layers of abstraction (e.g. an
              | iterator may have arbitrary big-O cost) would make it
              | harder to spot those?
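              | 
              | For concreteness, a hypothetical sketch of the classic
              | pitfall (not from the article): a naive tag comparison
              | leaks timing through its early exit, while the subtle
              | crate's ct_eq looks at every byte regardless.
              | 
              |     // Naive: returns as soon as a byte differs, so an
              |     // attacker can learn how many leading bytes matched.
              |     fn eq_naive(a: &[u8], b: &[u8]) -> bool {
              |         a.len() == b.len()
              |             && a.iter().zip(b).all(|(x, y)| x == y)
              |     }
              | 
              |     // Constant-time over the contents (subtle crate).
              |     fn eq_ct(a: &[u8], b: &[u8]) -> bool {
              |         use subtle::ConstantTimeEq;
              |         a.ct_eq(b).into()
              |     }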
        
       | fefe23 wrote:
       | "cryptographically secure bootloader" is a meaningless phrase.
       | 
       | They mean a boot loader that validates cryptographic public key
       | signatures of the loaded component. That would be a secure
       | cryptographic bootloader. AFTER they have proven that it is, in
       | fact, secure.
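        | 
        | In code terms, the check they mean is roughly this (a hedged
        | sketch using the ed25519-dalek crate; key provisioning and the
        | project's actual scheme are out of scope here):
        | 
        |     use ed25519_dalek::{Signature, Verifier, VerifyingKey};
        | 
        |     fn image_is_trusted(pubkey: &[u8; 32], image: &[u8],
        |                         sig: &[u8; 64]) -> bool {
        |         let Ok(key) = VerifyingKey::from_bytes(pubkey) else {
        |             return false;
        |         };
        |         // Boot proceeds only if the signature over the whole
        |         // image verifies against the baked-in public key.
        |         key.verify(image, &Signature::from_bytes(sig)).is_ok()
        |     }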
       | 
       | You can't just write some code and then say it must be secure
       | because Rust was involved.
        
         | creshal wrote:
         | How does it know what keys to trust? TPM?
        
         | RsmFz wrote:
         | > You can't just write some code and then say it must be secure
         | because Rust was involved
         | 
         | Did they say that?
        
           | fefe23 wrote:
           | Yes. They call it "secure" and have zero arguments to back up
           | that claim except Rust's memory safety guarantees.
           | 
            | Which, by the way, do not apply, since the SHA-256 code is
            | marked unsafe.
        
             | AlotOfReading wrote:
             | Unsafe blocks do _not_ imply equivalence with C. They imply
             | that _if_ there are memory safety issues, the issue
             | originates in one of the unsafe blocks. Usually there are
              | few enough lines of code in unsafe blocks, doing small
              | enough tasks, that you can feasibly rule out issues by
              | thinking hard enough.
             | 
             | Contrast that with C, where every line may be a source of
             | safety issues. It's a meaningful difference.
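              | 
              | To illustrate (a hypothetical snippet, not from this
              | codebase): the entire unsafe surface can be one commented
              | block whose precondition is established right next to it.
              | 
              |     fn read_word(bytes: &[u8]) -> u32 {
              |         assert!(bytes.len() >= 4);
              |         // SAFETY: the assert above guarantees that at
              |         // least 4 bytes are readable.
              |         unsafe {
              |             core::ptr::read_unaligned(bytes.as_ptr().cast())
              |         }
              |     }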
        
               | uecker wrote:
                | Well, not every construct in C can have safety issues.
                | Saying that every line in C may be the source of memory
                | safety issues is about as accurate as saying that every
                | line of Rust may be, because any line could make use of
                | unsafe.
                | 
                | There is another issue: unsafe code in Rust can violate
                | invariants that other, safe Rust code relies on, making
                | that code unsound too. So it needs more care to write
                | than regular C.
                | 
                | But I agree that it is still a huge net benefit with
                | respect to memory safety; let's just not exaggerate.
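                | 
                | A toy example of that second issue (hypothetical):
                | the unsafe block below is "contained", yet it makes
                | the perfectly safe-looking line after it undefined.
                | 
                |     let mut v: Vec<u8> = Vec::with_capacity(16);
                |     // UNSOUND: claims 16 initialized elements that
                |     // were never written.
                |     unsafe { v.set_len(16); }
                |     let x = v[0]; // safe syntax, reads uninit memory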
        
               | itishappy wrote:
               | Those unsafe lines in C could be anywhere in your
               | program. In Rust they cannot exist outside of unsafe
               | blocks. This is not a trivial distinction! For all
               | intents and purposes, each and every line of C must be
               | treated as potentially unsafe.
        
               | adgjlsfhk1 wrote:
               | The really big difference is the searchability and
               | frequency of possibly unsafe operations. If you want to
               | audit all possible unsafe lines of code in a Rust
               | project, you can grep for "unsafe" and find all of them
               | (and in most projects there will be very few if any). In
               | C, on the other hand, you need to look at literally every
               | indexing operation, every pointer dereference, every use
               | of a variable (to make sure it isn't potentially used
               | after free or before initialization), every cast, and
               | probably some extras that I've forgotten. As such, rather
               | than having a low double digit number of cases to look
               | at, you have to look at the vast majority of lines of
               | code.
        
               | uecker wrote:
                | While true, my point is that you can write C in a way
                | that makes many functions obviously free of UB, so that
                | you only need to carefully vet the pointer arithmetic in
                | some low-level functions.
                | 
                | So I agree with the point in principle, I just do not
                | like the "spin" of "every line of C is a time bomb nobody
                | can understand" while in Rust you just have to look at a
                | few lines of "unsafe" and all is good.
        
               | AlotOfReading wrote:
                | It's not my experience that C can be obviously free of
                | UB, and I'm curious how you approach that. I'm not aware
                | of any methods or tools that claim to achieve it, and
                | there's a long history of "correct" programs written by
                | experts being discovered to contain subtle UB as
                | automated analysis improves. Here's one example, from
                | Runtime Verification:
               | https://runtimeverification.com/blog/mare-than-14-of-sv-
               | comp...
        
               | adgjlsfhk1 wrote:
                | The key point is that no matter how you write your C
                | code, anyone else who wants to verify the absence of
                | memory safety problems still needs to read every single
                | line to determine which ones do the low-level unsafe
                | bits.
        
         | wyldfire wrote:
         | > You can't just write some code and then say it must be secure
         | because Rust was involved.
         | 
          | I have a feeling that the qualifier is there in the headline
          | to distinguish from the potential security improvements that
          | come from replacing a C bootloader implementation with a
          | feature-parity Rust one.
        
         | Aurornis wrote:
         | > You can't just write some code and then say it must be secure
         | because Rust was involved.
         | 
         | The article doesn't claim that at all.
         | 
         | The cryptographically secure part comes from doing
         | cryptographic verification of the code before running it.
         | 
         | The article talks about using Rust to improve memory safety.
        
       | Aurornis wrote:
       | This is a very specific type of bootloader for devices that get
       | their code over the internet:
       | 
       | > SentinelBoot's threat model focuses on thin client devices
       | which do not store their own OS and over-the-air updates (e.g.
       | how phones are updated): both of these cases involve executable
       | code being sent over a network, usually the internet. We ignore
       | the risk of direct hardware modification, as an attacker can just
       | swap out the bootloader (making any potential defence implemented
       | by SentinelBoot in vain).
       | 
       | The author readily acknowledges that it does not defend against
       | hardware modification. The other comments here trying to vilify
       | this project don't understand what it is supposed to do.
        
       | zamalek wrote:
       | The problem with Rust in the boot process is that it's going to
       | become much harder to find vulnerabilities for roots/jailbreaks.
       | Still, this is great work!
        
         | kelnos wrote:
         | Could you elaborate on this? I'm not sure what you mean; are
         | you saying that there will still be vulnerabilities that are of
         | similar difficulty to exploit as would be found in a C
         | bootloader, but will be harder to find by security researchers?
         | Or are you just saying that there will be fewer
         | vulnerabilities, but the ones that do exist will be more
         | "obscure" than would be the case if it were written in C,
         | because Rust eliminates some of the more "obvious" vectors?
         | 
         | Either way, do you consider this a bad thing?
        
           | yjftsjthsd-h wrote:
           | It's neither of those. The trade-off is that features like
           | this are often used against users, preventing them from
           | actually controlling their own machines. Under those
           | circumstances, bugs in the "security" of the machine are a
           | mixed bag: malware can exploit them to break out, but users
           | can exploit them to get full control over the machine that
           | they own. This has happened with Android phones, for
           | instance, allowing people to root phones and/or replace the
           | ROM with a community version that gets security patches not
           | available in the stock ROM, which is probably a net security
           | improvement even with the bootloader left vulnerable.
           | 
           | So it's really hard to call it a good thing or a bad thing;
           | it's a trade.
        
         | quohort wrote:
          | Yes, ironically increased transparency and more secure systems
          | will lead to less freedom for the user, because trusted
          | computing most often secures the interests of manufacturers
          | against users (what RMS referred to as "Treacherous
          | Computing").
         | 
         | I think that we have been able to thwart "treachery" in the
         | meantime by exploiting side-channels in trusted computing
         | implementations. Ultimately it may be necessary to amend the
         | constitution to prevent manufacturers from distributing locked-
         | down hardware for the good of free society (competition,
         | democracy, etc.) at large. Otherwise, computer giants will have
         | ultimate control over the distribution of information (given
         | that the economics of manufacturing are driven by economies of
         | scale).
        
       | ReleaseCandidat wrote:
       | I don't get the "1/10 size of U-Boot" argument. As it can only
       | boot 3 RISC-V64 boards via TFTP, it also has less than 1/10 of
       | the features and supported hardware of U-Boot.
       | https://github.com/u-boot/u-boot
        
         | Arnavion wrote:
         | Supported hardware doesn't matter because they're comparing the
         | compiled binary size, not source code size. The u-boot binary
         | you'd compile would also only have the stuff relevant to that
         | particular hardware compiled-in.
         | 
         | If you don't need the other features of u-boot that this
         | doesn't have, it makes sense to count the lower binary size and
         | runtime memory usage as an advantage.
         | 
         | That said, they compared it to "an example U-boot binary",
         | which sounds like they probably didn't tweak the bajillion
         | config options u-boot has to produce one with an equivalent
         | feature set to theirs, which would've been a fairer comparison
         | for sure.
        
       | Vogtinator wrote:
       | Measured boot > trust chain through signature verification:
       | 
        | With measured boot, components in the boot chain tell some
        | trusted component (e.g. a TPM, possibly in FW) about all of
        | their inputs, and only if the hashes at the end match does
        | $something become accessible (in most cases a secret key for
        | data decryption). It's essentially a hash chain; see the sketch
        | below.
       | 
       | 1. More flexibility (with TPM e.g. you can "seal" a secret
       | against different parts independently)
       | 
       | 2. No need for PKI, which gets very complex once revocations are
       | involved (have fun looking at the "Secure Boot" DBX lists and the
       | shim SBAT mechanism)
       | 
        | 3. More freedom: The system still boots if the measurements
        | don't match, you just don't get access to secrets. You're free
        | to seal your own secrets against your new measurements, and
        | whoever did the last sealing has no access anymore. (Unlike on
        | PCs, where the Microsoft trust anchor is in most cases not
        | removable.)
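        | 
        | The extend operation itself is just a hash chain (a minimal
        | sketch with the sha2 crate, illustrative only):
        | 
        |     use sha2::{Digest, Sha256};
        | 
        |     // TPM-style PCR extend: the register can only ever be
        |     // advanced, never set directly, so the final value
        |     // commits to every measurement fed in along the way.
        |     fn extend(pcr: [u8; 32], measurement: &[u8; 32]) -> [u8; 32] {
        |         let mut h = Sha256::new();
        |         h.update(pcr);
        |         h.update(measurement);
        |         h.finalize().into()
        |     }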
        
         | quohort wrote:
         | 1. This is interesting. So in a measured boot scenario, you
         | wouldn't be able to boot the main OS, but it would give you
         | access to sort of a minimal initramfs environment for
         | debugging? It's a good idea for personal computers, like a
         | tamper-proofing approach.
         | 
         | I assume the TPM in this case would only have a partial
         | decryption key? I think something similar could be accomplished
         | with SSS, no?
         | 
          | 2. As for this, I can say I've never used DBX with UEFI Secure
          | Boot. Instead of revoking keys, I just remake the entire PKI
          | from the top. The PKI is only there to support independent use
          | by the OS vendor/OEM, hence the separation of PK/KEK/db.
         | 
         | 3. Counterpoint: over-reliance on TPMs and such. Whereas the
         | ordinary trust chain only requires signature verification at
         | the start of boot (presumably on-chip), measured boot requires
         | more complex trusted computing hardware (presumably off-chip).
         | 
          | Personally, I find that systems that are overly reliant on
          | complex trusted computing hardware tend to be lacking in other
          | areas. For example, iPhones or Google Pixel devices encourage
          | the user to use a low-entropy password like a 4-digit PIN.
          | These systems often try to accommodate "analog" passkeys like
          | biometrics (Face ID, fingerprints) by using trusted computing.
          | Of course, if the trusted computing systems are breached
          | (https://www.404media.co/leaked-docs-show-what-phones-
          | cellebr...), then security is very weak.
         | 
         | I suppose the advantage of the measured-boot method is that it
         | is optional. So you can still boot whatever OS you want, just
         | without some TC features.
        
           | Vogtinator wrote:
           | > 1. This is interesting. So in a measured boot scenario, you
           | wouldn't be able to boot the main OS, but it would give you
           | access to sort of a minimal initramfs environment for
           | debugging? It's a good idea for personal computers, like a
           | tamper-proofing approach.
           | 
           | Depends on how it's set up. Currently most setups that use
           | measured boot (systemd-pcrlock, partially BitLocker) ask for
           | a recovery key if unsealing fails due to measurement
           | mismatches and offer other options.
           | 
           | > I assume the TPM in this case would only have a partial
           | decryption key?
           | 
            | That's also possible, but so far I haven't seen that. The
            | sealed secret is sent to the TPM, which then uses its hidden
            | internal seed to derive the master key for volume decryption
            | and sends it back. (In the case of BitLocker with TPM < 2
            | that could trivially be sniffed on the LPC bus...)
           | 
           | > I think something similar could be accomplished with SSS,
           | no?
           | 
           | If you mean Shamir's secret sharing, possibly. Question is
           | what to do with the shares.
           | 
            | 2. Yeah, for your local machine this is a working approach,
            | if you make sure that really only your own key works.
            | Another argument against PKI is that with measurements, the
            | trusted authority can't retroactively sign a backdoored
            | executable to gain access to devices, as the measurements
            | are independent of any authority and ideally device
            | specific.
            | 
            | 3. Signature verification isn't just needed at the start of
            | boot; it's ideally needed from the start of boot until user
            | authentication, since that's the part that can be tampered
            | with. I'd argue that the software side of measured boot is
            | simpler, while the hardware side may be more complex.
           | 
           | > For example, iphones or google-pixel devices encourage the
           | user to use a low-entropy password like a 4-digit PIN.
           | 
            | Using TPM+PIN is actually not that bad: only if the
            | measurements match is it possible to unlock with a PIN at
            | all, and the TPM uses a counter in nonvolatile memory to
            | prevent brute-force attacks. It's not unfathomable that some
            | manufacturer screws that up, but it's IMO stronger than
            | relying on multiple parties (CPU, BIOS, OEMs, OS) to develop
            | an actually secure trust chain.
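            | 
            | Roughly this logic, as a hypothetical sketch (not the real
            | TPM 2.0 API):
            | 
            |     const MAX_TRIES: u32 = 8; // illustrative lockout limit
            | 
            |     fn unseal(pcrs_match: bool, pin_ok: bool,
            |               failures: &mut u32,
            |               sealed_key: [u8; 32]) -> Option<[u8; 32]> {
            |         if !pcrs_match || *failures >= MAX_TRIES {
            |             return None; // wrong state or locked out
            |         }
            |         if !pin_ok {
            |             *failures += 1; // counter lives in NV storage
            |             return None;
            |         }
            |         *failures = 0;
            |         Some(sealed_key) // release the secret
            |     }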
        
           | thewanderer1983 wrote:
           | >1. This is interesting. So in a measured boot scenario, you
           | wouldn't be able to boot the main OS, but it would give you
           | access to sort of a minimal initramfs environment for
           | debugging? It's a good idea for personal computers, like a
           | tamper-proofing approach.
           | 
            | If you would like to play around with measured boot and the
            | similar functionality of TCG DICE, Tillitis puts it on a USB
            | stick. It's open, and they have a good team behind it.
            | 
            | https://tillitis.se/
        
         | Arnavion wrote:
         | That said, it does require more care when you do OS updates or
         | UEFI updates to remember to update the TPM sealed secret with
         | the new measurements. Windows and Linux both have the former
         | automated so it should generally be fine.
         | 
         | UEFI updates can also be a problem if they wipe the TPM as part
         | of the update and thus destroy the sealed secret entirely (as
         | my PC mobo does).
        
           | Vogtinator wrote:
           | > That said, it does require more care when you do OS updates
           | or UEFI updates to remember to update the TPM sealed secret
           | with the new measurements. Windows and Linux both have the
           | former automated so it should generally be fine.
           | 
            | Yep, this can also be a pain with regard to firmware bugs
            | (broken TCG event log, anyone?). In the worst case you need
            | to enter the recovery key, or, if you know in advance,
            | temporarily exclude some component from measurement while
            | supervising the next boot. If something goes wrong with the
            | trust chain instead, like a key getting revoked but the
            | bootloader not updating correctly, you end up with an
            | unbootable device and can't even go back easily.
           | 
           | > UEFI updates can also be a problem if they wipe the TPM as
           | part of the update and thus destroy the sealed secret
           | entirely (as my PC mobo does).
           | 
            | Ouch, that's bad design. The firmware is measured into the
            | TPM on boot, so there's no reason to do that...
        
             | Arnavion wrote:
              | Yeah, every time I update the UEFI it pops up a warning
              | that the TPM will be cleared and that I'd better have
              | disabled Windows BitLocker before doing this. The warning
              | also goes away within a fraction of a second because the
              | PC reboots, which is not nearly enough time to read it; I
              | only know what it says because I've updated the UEFI
              | enough times to be able to piece it together.
             | 
              | It might just be a warning to cover their asses; i.e. it
              | doesn't actually clear the TPM, but they don't want to be
              | responsible for your un-unlockable drive in case it does.
              | I don't actually use the TPM for measured boot or anything
              | else, so I haven't checked.
             | 
              | In any case, UEFI updates are relatively common right now
              | (once every couple of months or so) because it's a
              | relatively new mobo (AM5), and because AMD is about to
              | release new CPUs that require corresponding AGESA etc.
              | updates. It'll probably become less frequent in a few
              | years.
        
         | evanjrowley wrote:
         | It appears Apple Silicon uses a combination of measured boot
         | and trusted boot concepts:
         | https://support.apple.com/guide/security/boot-process-secac7...
        
       | IshKebab wrote:
        | That is an impressive final year project, nice work!
       | 
       | Vector crypto is very cutting edge too. I guess there isn't any
       | hardware that has it yet...
        
         | ReleaseCandidat wrote:
          | SiFive's P670 cores do:
          | https://www.sifive.com/cores/performance-p650-670
          | If and when they'll be available, I don't know, but Sophgo
          | licensed them:
          | https://www.cnx-software.com/2023/10/21/sophgo-sg2380-16-cor...
          | A preorder is here:
          | https://arace.tech/products/pre-order-milk-v-oasis-16-core-r...
        
       | zokier wrote:
        | tbh I feel bad for the kid, his thesis supervisor should have
        | helped him more here to scope and direct the work in some
        | sensible way. now it is a bit of a mess :(
        | 
        | like, just doing a review and comparison of existing boot
        | verification mechanisms would already have been a good scope
        | for a thesis. instead they are barely even mentioned as a side
        | note, which puts this in an awkward position.
        | 
        | or if crypto was the focus, then putting more work into
        | designing and implementing the crypto scheme would have been
        | relevant. now they got so tangled up in the nitty-gritty boot
        | details that the crypto also ended up as a questionable side
        | note.
        | 
        | or if rust was the focus, then just implementing a clean
        | pure-rust bootloader could already have been enough for the
        | thesis, avoiding the stumbling over misguided crypto bits.
        | 
        | or many other ways this could have been more successful.
        | overall it now feels like the author bit off far more than
        | they could chew. also they should have imho spent less time
        | coding and more time editing the actual thesis. the text is
        | all over the place.
        
       ___________________________________________________________________
       (page generated 2024-08-05 23:00 UTC)