[HN Gopher] Producing a trustworthy x86-based Linux appliance
       ___________________________________________________________________
        
       Producing a trustworthy x86-based Linux appliance
        
       Author : todsacerdoti
       Score  : 222 points
       Date   : 2021-06-02 05:06 UTC (17 hours ago)
        
 (HTM) web link (mjg59.dreamwidth.org)
 (TXT) w3m dump (mjg59.dreamwidth.org)
        
       | marcus_holmes wrote:
       | It's interesting that there's no attempt to solve the actual
       | problem here - telling the difference between the owner of the
       | device (who should be able to do what they like to their stuff)
       | and an attacker (who must be prevented from making changes to the
       | device).
       | 
       | Both are presumed to have extended physical access to the device,
       | any requisite knowledge, tools, etc.
       | 
       | The normal solution to this is to have a password that only the
       | owner knows. I'm assuming that that hasn't been used in this case
       | because the intention here is actually to lock the owner out and
       | only allow the manufacturer access to change the device. Is that
       | the case?
        
         | mjg59 wrote:
         | Where do you put the password? If you're just protecting the
         | firmware setup, then I can just rewrite the secure boot key
         | database without using the firmware setup. If you're using it
         | for disk encryption, I just buy one myself, dump the disk image
         | after it's decrypted, modify it so it'll boot whatever password
         | you type in, intercept your device while it's being shipped or
         | spend some time alone in a room with it, and drop my backdoored
         | image on there.
         | 
         | Please don't get me wrong - I would love a solution to this
         | problem that wasn't isomorphic to a solution that can be used
         | to lock out users. But I've spent the last 8 years or so of my
         | life trying to do so and haven't succeeded yet, because the
         | hardware we have available to us simply doesn't seem to give us
         | that option.
        
           | juhanima wrote:
           | Is it really that easy just to overwrite the secure boot key
            | database? I recently set up secure boot on a Lenovo laptop
           | according to these instructions
           | https://nwildner.com/posts/2020-07-04-secure-your-boot-
           | proce...
           | 
            | ...and deploying the keys involves putting the UEFI into
            | Setup Mode, which is protected by the firmware setup
            | password.
           | 
           | Granted, I didn't verify where the keys are stored and in
           | which format once deployed. But it would be pretty
           | disappointing if they were just copied to another place
           | without any encryption or signing authorized by the password.
        
             | mjg59 wrote:
             | The UEFI variable store is typically just a region of the
             | flash chip that stores the firmware. It's not impossible
             | that some systems perform some sort of integrity
             | validation, but I haven't seen that in the wild.
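              | 
              | You can see how thin that is from a running Linux
              | system, where the variables are exposed as plain files
              | under efivarfs. A minimal sketch (Python, assuming the
              | standard EFI_GLOBAL_VARIABLE GUID; needs an EFI-booted
              | machine, usually as root):
              | 
              |     from pathlib import Path
              | 
              |     EFIVARS = Path("/sys/firmware/efi/efivars")
              |     GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
              | 
              |     def read_var(name: str) -> bytes:
              |         p = EFIVARS / (name + "-" + GUID)
              |         return p.read_bytes()[4:]  # skip attr bytes
              | 
              |     # Both variables hold a single 0/1 byte.
              |     print("SecureBoot:", read_var("SecureBoot")[0])
              |     print("SetupMode:", read_var("SetupMode")[0])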
        
               | Harvesterify wrote:
                | Isn't Intel BIOS Guard supposed to protect against
                | this very attack?
        
           | marcus_holmes wrote:
           | Well, you're comparing hashes to ones online, can't you put
           | the password hash online? (sorry, I am very ignorant of the
           | situation here and am asking stupid simple questions)
           | 
           | I mean, surely the same problem exists for any data stored on
           | the device (keys, hashes, whatever)? If there's a way of
           | storing a key chain securely on the device so it can't be
           | modified by an attacker, can't a password be stored there
           | instead?
           | 
           | > ... the hardware we have available to us simply doesn't
           | seem to give us that option.
           | 
           | Is that because the manufacturers don't give the option, or
           | because technically there isn't a way of giving the option?
        
             | mjg59 wrote:
             | > Well, you're comparing hashes to ones online, can't you
             | put the password hash online? (sorry, I am very ignorant of
             | the situation here and am asking stupid simple questions)
             | 
             | If the system is compromised then it can just report the
             | expected password hash. You can't trust a compromised
             | machine to tell the truth.
             | 
             | > I mean, surely the same problem exists for any data
             | stored on the device (keys, hashes, whatever)? If there's a
             | way of storing a key chain securely on the device so it
             | can't be modified by an attacker, can't a password be
             | stored there instead?
             | 
             | Various bits of TPM functionality can be tied to requiring
             | a password, but it's hard to actually turn that into
             | something that's more freedom preserving while still
             | preserving the verifiability. What you want is the ability
             | to re-seal a secret to new values only as long as you have
             | the password available, and I don't /think/ you can
             | construct a system that does that with the features the
             | spec gives you.
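              | 
              | As a toy model of the seal/unseal primitive (Python,
              | illustration only; the real thing is TPM hardware
              | enforcing the policy, not a dict):
              | 
              |     import hmac, os
              | 
              |     STORE = {}  # handle -> (secret, pcrs, auth)
              | 
              |     def seal(secret, pcrs, password):
              |         handle = os.urandom(4).hex()
              |         STORE[handle] = (secret, pcrs, password)
              |         return handle
              | 
              |     def unseal(handle, pcrs_now, password):
              |         secret, pcrs, auth = STORE[handle]
              |         if not hmac.compare_digest(pcrs, pcrs_now):
              |             raise PermissionError("PCR mismatch")
              |         if auth != password:
              |             raise PermissionError("bad auth")
              |         return secret
              | 
              |     # The gap: nothing re-binds the secret to *new*
              |     # PCR values gated only on the password. You must
              |     # unseal first, so the old PCRs must match.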
             | 
             | > Is that because the manufacturers don't give the option,
             | or because technically there isn't a way of giving the
             | option?
             | 
             | Unclear. There's little commercial incentive for people to
             | come up with elegant solutions to this, and I can't
             | immediately think of a model that would be significantly
             | better than the status quo.
        
               | hnlmorg wrote:
               | > _If the system is compromised then it can just report
                | the expected password hash. You can't trust a
               | compromised machine to tell the truth._
               | 
                | I think the GP's point is that you assume the system
                | can't tell the truth, so you do the validation server-
                | side rather than client-side. Sure, the system could
                | send a different password hash, but as long as you
                | don't publish the correct hashes it doesn't matter what
                | the client sends: the validation happens server-side,
                | so the client never learns which hashes are valid.
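                | 
                | A minimal sketch of that server-side check (Python;
                | names hypothetical):
                | 
                |     import hashlib, hmac, os
                | 
                |     def make_record(password: str):
                |         salt = os.urandom(16)
                |         h = hashlib.pbkdf2_hmac("sha256",
                |             password.encode(), salt, 200_000)
                |         return salt, h
                | 
                |     def verify(record, attempt: str) -> bool:
                |         salt, h = record
                |         c = hashlib.pbkdf2_hmac("sha256",
                |             attempt.encode(), salt, 200_000)
                |         return hmac.compare_digest(h, c)
                | 
                | Though per the objection upthread, this proves the
                | user knows the password, not that the machine they
                | typed it into is untampered.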
        
               | marcus_holmes wrote:
               | Thanks for the clear answers and info :) It's an
               | interesting subject!
        
         | moring wrote:
         | This distinction is only useful if additional conditions are
         | satisfied:
         | 
         | The user must not be asked to enter that password repeatedly to
         | do everyday tasks, otherwise it is easy to trick them into
         | entering the password into a malicious input field.
         | 
          | More importantly, the user must not be expected to routinely
         | waive all protection for required tasks. For example, the user
         | is often expected when installing software on a computer to
         | give administrator privileges to an installer script which
         | comes from an untrusted source. The user is expected to "make
         | sure the source is trusted". This does nothing but put the
         | blame on the user, who cannot do that in a meaningful way at
         | all, yet is expected by other factors to install and use the
         | software.
         | 
         | The user must not be forced by other factors to enter the
         | password and take measures which are not just unsafe but are
         | actually known to attack the system, exfiltrate data or
         | similar.
        
           | marcus_holmes wrote:
           | I agree totally.
           | 
           | But we haven't come up with any better methods of doing this.
           | Some tasks require making changes to the machine that could
           | be evil, so we have to ask the user for permission to make
           | those changes (to stop evil people making those changes with
           | no permission). The more parts of the device we protect, the
           | more we have to ask for permission, and the more routine the
           | granting of that permission is and the less effective the
           | whole permission-granting mechanism is.
           | 
            | Coming up with a decent, workable solution to this would be
           | great. As you say, in an ideal world the onus would not be on
           | the user to verify that the software they're installing is
           | not malicious (with no way of effectively doing that).
           | 
           | hmm, sounds like a problem that could be lucrative to
           | solve...
        
         | candiodari wrote:
         | > I'm assuming that that hasn't been used in this case because
         | the intention here is actually to lock the owner out and only
         | allow the manufacturer access to change the device. Is that the
         | case?
         | 
          | No, the intention is to lock out functionality: either
          | program code that can't be decrypted unless secure boot
          | actually booted securely, or access to remote networks. That
          | second one is where the controversy comes in, because it
          | means that if the data is owned by Netflix (or
          | FB/Youtube/HBO/...) and the network is owned by Netflix, you
          | cannot change anything on the device in your house and still
          | watch Netflix.
          | 
          | Because of this locking out of functionality, it is referred
          | to as "rendering your device useless". The device can of
          | course still do everything it could do, at the owner's
          | request, just not with Netflix's data.
        
       | 1vuio0pswjnm7 wrote:
       | "Unless you've got enough RAM that you can put your entire
       | workload in the initramfs, you're going to want a filesystem as
       | well, and you're going to want to verify that that filesystem
       | hasn't been tampered with."
       | 
       | I have enough RAM. More than enough.
       | 
       | About 10+ years ago I was easily running the entire OS in 500MB
       | of RAM on an underpowered computer, boot from USB stick, tmpfs
       | (or mfs), no swap. Today, the amounts of RAM are even greater on
        | a comparably-priced equivalent, at least double, more likely
       | triple or quadruple. I was not doing this to avoid some "anti-
       | tampering" scheme, I just wanted speed and cleanliness, and not
       | having to worry about HDD failures.
       | 
        | However, I was using NetBSD, not Linux. One thing I have learned
        | about Linux: it is not nearly as nice in the way it handles
       | memory exhaustion, or, in the case of this RAM disk setup, what
       | we might call "running out of disk space". IME, NetBSD
        | anticipates and handles this situation better by default. I
       | could routinely run out of space, delete the overgrown file at
       | issue and continue working. I noticed an interesting comment
       | recently from a Linux developer on resource usage under NetBSD:
       | "Lastly, redbean does the best job possible reporting on resource
       | usage when the logger is in debug mode noting that NetBSD is the
       | best at this."
       | 
       | I like this RAM disk setup because it forces me to decide what
       | data I truly want to save long-term. Any data I want to preserve
       | I move to removable storage. The system is clean on every reboot.
       | In a sense, code is separated from data. The OS and user data are
       | not stored together.
       | 
        | Anyway, putting the entire OS in initramfs makes sense to me.
        | An initramfs, or at least tmpfs, is a filesystem, so the
        | "you're going to want a filesystem" comment seems strange.
        | Also I think the
       | reason(s) one might want a more optimised, lower overhead HDD-
       | based filesystem could be something other than "not enough RAM".
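        | 
        | For reference, the tamper-check the article pairs with an on-
        | disk filesystem (dm-verity) boils down to a hash tree over the
        | image. A much-simplified one-level sketch (Python; real dm-
        | verity uses a multi-level tree, verified block by block as
        | blocks are read, with the root hash signed or measured):
        | 
        |     import hashlib
        | 
        |     BLOCK = 4096
        | 
        |     def block_hashes(image: bytes):
        |         return [hashlib.sha256(image[i:i + BLOCK]).digest()
        |                 for i in range(0, len(image), BLOCK)]
        | 
        |     def root_hash(image: bytes) -> bytes:
        |         h = hashlib.sha256(b"".join(block_hashes(image)))
        |         return h.digest()
        | 
        |     def verify_block(image, idx, hashes, root):
        |         # the hash list is authenticated by the root; the
        |         # root comes from the signed/measured boot chain
        |         assert hashlib.sha256(b"".join(hashes)).digest() == root
        |         blk = image[idx * BLOCK:(idx + 1) * BLOCK]
        |         return hashlib.sha256(blk).digest() == hashes[idx]
        | 
        |     img = bytes(2 * BLOCK)  # stand-in: two zero blocks
        |     assert verify_block(img, 1, block_hashes(img),
        |                         root_hash(img))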
        
       | mlang23 wrote:
       | My generation was taught: "Once you have physical access, the
       | game is over." I believe this is still true. Pretending otherwise
       | feels like snake oil.
        
         | advisedwang wrote:
         | This idea came about in the days when we kept disks
         | unencrypted. Ripping the disk and editing /etc/shadow or
         | pulling data was trivial. Physical access was the _only_
         | requirement to do this.
         | 
         | Disk encryption then became practical, and defeating it
         | requires a running, vulnerable machine (for cold boot) or
         | tampering+user interaction (for evil maid).
         | 
         | Secure Boot makes those even harder - you will have to
         | compromise the TPM itself.
         | 
         | All of this is to say it is still possible to attack a machine
          | with physical access, but you now have to engage in further
          | security breaks. It's not really "game over" anymore, as there
          | are further defenses to defeat.
        
           | mlang23 wrote:
           | Disk encryption is only really practical when you never need
           | to reboot. Found that out the hard way, like most others did
           | at some point :-)
        
             | generalizations wrote:
             | Yup. Can't remote manage a system with encrypted boot
             | disks.
        
               | advisedwang wrote:
               | Sure you can, e.g.
               | 
               | https://docs.microsoft.com/en-
               | us/windows/security/informatio...
               | 
               | https://access.redhat.com/documentation/en-
               | us/red_hat_enterp...
               | 
               | https://wiki.archlinux.org/title/Dm-
               | crypt/Specialties#Remote...
               | 
               | etc
        
         | userbinator wrote:
         | It's merely a rationalisation for the companies to maintain
         | control and lock users out of what they should rightly own.
        
         | qayxc wrote:
         | This only applies to parties with significant resources.
         | 
          | In the same way, locking your front door, being home, or
          | parking in a guarded lot will be effective in deterring
          | opportunist thieves and even most regular thieves.
         | 
         | Absolute security doesn't exist but that doesn't mean security
         | measures are futile in general.
        
       | c0l0 wrote:
       | While I find this post and the ideas presented very interesting
       | on the technical level, work in that direction ("remote
       | attestation", making devices "tamper-proof") tends to give me a
       | dystopian vibe - foreshadowing a world where there's no hardware
       | left you can hack, build and flash your own firmware onto:
        | Complete _tivoization_, to re-use lingo from when the GPLv3 was
        | drafted. That would really neuter all the benefits Free
        | Software provides.
       | 
       | What good is having all the source code in the world if I can
       | never put my (or anyone else's) modifications to it into effect?
        
         | taneq wrote:
         | Why can't we have both? General-purpose computers for whatever-
            | we-want, and single-purpose devices from a trusted source which
         | we can be more confident are untampered with?
        
           | dmos62 wrote:
           | I've not thought this through extensively, but couldn't you
           | just flash signed firmwares onto those "single-purpose
           | trusted devices"? If they were open to flashing that is.
           | 
           | What more to want from a security perspective? In-device
           | protection from flashing? Sounds similar to security through
           | obscurity. I'd prefer easy ways to check what a device is
           | flashed with. Something like a checksum calculator device.
           | Not sure if that's a reasonable idea.
        
             | mjg59 wrote:
             | There's a bunch of cases where you want these devices to be
             | resilient against attackers who have physical access to the
             | system. That means it has to be impossible to simply re-
             | flash them given physical access, unless you can detect
             | that this has happened. That's what the trusted boot side
             | of this is - it gives you an indication that that
             | reflashing has occurred.
             | 
             | Out of band firmware validation is a real thing (Google's
             | Titan sits in between the firmware and the CPU and records
             | what goes over the bus, and can attest to that later), but
             | that's basically just moving who owns the root of trust,
             | and if you don't trust your CPU vendor to properly record
             | what firmware it executes you should ask whether you trust
             | your CPU vendor to execute the instructions you give it.
             | Pretty much every option we currently have is just in a
             | slightly different part of the trade off space.
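                | 
                | The "indication that reflashing has occurred" comes
                | from measurement chaining. A minimal sketch of the
                | PCR-extend idea (Python; the blobs are stand-ins):
                | 
                |     import hashlib
                | 
                |     def extend(pcr: bytes, blob: bytes) -> bytes:
                |         m = hashlib.sha256(blob).digest()
                |         return hashlib.sha256(pcr + m).digest()
                | 
                |     fw, ldr, krnl = b"fw", b"ldr", b"krnl"
                | 
                |     pcr = b"\x00" * 32  # reset value at power-on
                |     for stage in (fw, ldr, krnl):
                |         pcr = extend(pcr, stage)
                | 
                |     # Changing any byte of any stage changes the
                |     # final value, and an extend can't be undone,
                |     # so the TPM can later attest to what ran.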
        
               | michaelt wrote:
                | _> There's a bunch of cases where you want these devices
               | to be resilient against attackers who have physical
               | access to the system._
               | 
               | I looked into TPM stuff a few years ago, and it all
               | seemed pretty useless to me.
               | 
               | First of all, the entire key-protection house of cards
                | relies on the assumption that if you've booted the right OS,
               | the keys can safely be unsealed. But the TPM does nothing
               | to protect from security issues beyond that point, which
               | is the vast majority of security issues.
               | 
               | Second of all, if you're worried about someone snatching
               | your laptop or phone, full disk encryption where you type
               | the password at boot gets you 99% of the protection with
               | much less complexity. And the much lower complexity means
               | many fewer places for security bugs to be accidentally
               | introduced.
               | 
               | Third, if you're worried about evil maid attacks where
               | someone dismantles your laptop and messes with its
               | internals without you knowing then gives it back to you,
               | then the TPM isn't sufficient protection anyway. They can
               | simply put in a hardware keylogger, or get direct memory
               | access, in which case it's game over anyway.
               | 
               | And fourth, the TPM doesn't have a dedicated hardware
               | button (making it a shitty replacement for a U2F key) and
               | doesn't have an independent clock (making it a shitty
               | replacement for TOTP on your phone) so it's not even a
               | good replacement for other security hardware.
               | 
               | About the only use I can see for this stuff is if you're
               | some huge multinational company, and you think even the
               | authorised users of your computers can't be trusted.
        
               | detaro wrote:
               | Note the term "appliance" in the submission title, and
               | "single-purpose trusted devices" in the comment chain you
               | replied to. General end-user desktop devices like your
               | laptop indeed aren't that high on the list of use cases,
               | at least not without further development in other areas.
               | (although I think you are somewhat skipping over the part
               | of being able to protect secrets stored in the TPM
               | against being requested by an unverified system)
        
               | zxzax wrote:
               | Just some small notes/nitpicks...
               | 
               | >the TPM does nothing to protect from security issues
               | beyond that point, which is the vast majority of security
               | issues.
               | 
               | I hear this type of thing often but it's the wrong
               | mindset to take when dealing with this stuff. Security
               | holes in one part of the stack are not an excuse to avoid
               | fixing security holes in other parts -- if you do that,
               | you now have multiple security bugs that are going
               | unfixed.
               | 
               | >And the much lower complexity means many fewer places
               | for security bugs to be accidentally introduced.
               | 
                | This doesn't seem to make sense: avoiding securing
                | the boot process does not mean the boot process is any
                | less complicated or somehow has fewer parts that can be
                | compromised. TFA is just describing how to secure the
                | parts that are already there.
               | 
               | >They can simply put in a hardware keylogger, or get
               | direct memory access, in which case it's game over
               | anyway.
               | 
               | I'm not sure how this is related, building a tamper-proof
               | case seems to be outside of the scope of this. This seems
               | to cover only the software parts.
        
               | michaelt wrote:
               | _> avoiding securing the boot process does not mean the
               | boot process is any less complicated_
               | 
               | Of course it does: Not only does secure boot add an extra
               | point of failure, it's a point of failure that's
               | specifically designed to be highly sensitive, and to fail
               | locked, and that hardly anyone in the kernel development
               | community is testing with.
               | 
                |  _> I'm not sure how this is related_
               | 
               | From a computer owner's point of view, the TPM's secure
               | boot functionality exists only to protect against
               | attackers with physical access to the device. After all,
               | if a malicious attacker making a remote attack has the
               | ability to replace your bootloader or reflash your BIOS,
               | they've already got everything.
               | 
               | In other words, secure boot is there to protect against
               | an evil maid [1] removing your hard drive, replacing your
               | bootloader with one that logs your full disk encryption
               | password, then subsequently stealing your laptop and
               | password at the same time. Or something of that ilk.
               | 
               | However, the TPM is insufficient to protect against such
               | attacks.
               | 
               | As such, secure boot fails to provide the one thing it
               | claims to provide.
               | 
                | A serious system - like the Xbox's security system - (a)
                | has the functionality on the CPU die, and (b) has the
                | hardware for full-speed RAM, bus, and disk crypto, all
                | with keys that are inaccessible to the OS.
               | 
               | [1] https://en.wikipedia.org/wiki/Evil_maid_attack
        
               | zxzax wrote:
                | I don't understand what you mean by "extra point of
                | failure".
               | There is a boot process, you can't get rid of it because
               | then you can't boot the machine. So you either secure it
               | or you don't. I get the concern that the hardware
               | implementation could contain bugs, and that's a real
               | concern, but your system is not going to be less secure
               | by having this -- at worst, it seems it can only be as
               | insecure as it would be without it.
               | 
               | >However, the TPM is insufficient to protect against such
               | attacks. As such, secure boot fails to provide the one
               | thing it claims to provide.
               | 
               | I don't think anyone is saying TPM or secure boot alone
               | is going to prevent against such attacks. It needs to be
               | combined with some other physical protection measures,
               | e.g. a tamper-proof case of some kind.
        
               | michaelt wrote:
               | _> It needs to be combined with some other physical
               | protection measures, e.g. a tamper-proof case of some
               | kind._
               | 
                | Xboxes and iPhones don't need tamper-proof cases.
        
               | salawat wrote:
               | Likely what they are referring to is how UEFI has greatly
               | complicated the nature of a computer's boot process by
               | essentially inserting a firmware runtime into what was a
                | simpler-to-understand chain of POST -> hand off to the
                | program at a fixed address.
               | 
               | I had issues wrapping my head around this as well with
               | regards to things like Network Boot etc, where I could
               | not for the life of me understand or justify a boot
               | process having a runtime capable of doing all this extra
               | cryptographic/network nonsense when all I bloody wanted
               | was my OS, up, now.
               | 
               | Not to get nostalgic, but that magical era for a user
                | around Windows XP with a <5 second boot was just that:
                | magic.
               | 
               | I know all the oldtimers will come out of the woodwork
               | with horror stories of competing, sloppily specified BIOS
               | implementations, the pain of malware hiding in CMOS, the
               | threat of rootkits, etc... And the admins will chime in
               | with "How do you expect me to power on the thousands of
               | servers in my datacenter without network access during
               | boot"?
               | 
                | Those are valid and real situations, and in isolation
                | I can stomach them. I cannot, however, stomach a boot
                | process
               | whereby a non-owner arranges things in a way where it is
               | guaranteed that they get the final word in deciding how
               | hardware you paid for is used, which requires the
               | composition of those services.
        
           | josefx wrote:
            | Copyright owners will push for the latter, maybe even take a
            | page out of Google's Play Store licensing and outright
            | prohibit hardware manufacturers from even dreaming about
           | producing a device that doesn't enforce their draconian
           | requirements. So hardware manufacturers get the hard choice
           | of either going for the general market that expects things to
           | work or going for a tiny market that is happy with some of
           | the most popular services not working by design.
        
             | amelius wrote:
             | You can still do both. Just allow the user to blow a fuse,
             | and from then on the locks are removed.
        
             | indigochill wrote:
             | > So hardware manufacturers get the hard choice of either
             | going for the general market that expects things to work or
             | going for a tiny market that is happy with some of the most
             | popular services not working by design.
             | 
             | Software is already here, particularly in social media
             | (mastodon/SSB vs Facebook). That hardware eventually gets
             | there seems to me an inevitability (arguably we're already
             | at least partially there, as evidenced by the fact
             | Purism/Pine64/etc exist).
             | 
             | I still don't see it as a problem, though, because an
             | individual can have different technical interfaces
             | (devices, OSes, etc) for different purposes.
             | 
             | Generally, I put my personal stuff on systems I
             | understand/control.
             | 
             | For some things, like watching TV, I'm okay with going to
             | Netflix because that transaction is expected to be
             | transitory. If Netflix disappears or declares themselves a
             | new world order tomorrow, I can simply unsub and no harm
             | done.
             | 
             | Where things get problematic is when so much of someone's
             | life is wrapped up in a mono-corporate cocoon (e.g. Amazon
             | shipping things to your house and running your servers, or
             | Google serving you search results + mail + maps).
        
               | AshamedCaptain wrote:
               | > For some things, like watching TV, I'm okay with going
               | to Netflix because that transaction is expected to be
               | transitory. If Netflix disappears or declares themselves
               | a new world order tomorrow, I can simply unsub and no
               | harm done.
               | 
               | So much for your $1000 TV that had Netflix and only
               | Netflix builtin, and will refuse to boot when the
                | cryptographic check fails because you changed the string
               | that points to http://www.netflix.com to
               | http://www.notnetflix.com
        
               | HideousKojima wrote:
               | Or when Netflix refuses to continue supporting their app
               | on your device and so you are forced to upgrade, despite
               | your device still being fully capable of running video
               | streaming (like the Wii, and probably soon the PS3 and
               | Xbox 360)
        
               | josefx wrote:
               | > I can simply unsub and no harm done.
               | 
               | But your TV manufacturer still wants to provide Netflix
               | to other users and Netflix decided to require all their
               | devices to run its trusted code if they want to provide
               | Netflix to anyone, whether you in particular want it or
               | not. So your choice is to trash your existing TV and
               | track down a manufacturer that doesn't have any support
               | for Netflix, Hulu, Youtube, Amazon Prime, etc. at all to
               | buy a new TV that doesn't ignore your choice. With TVs
                | you might be lucky, since there is a large market for
                | dumb displays that avoid any TV-related functionality
                | anyway. Of course, there might be restrictions in the
                | license
               | between Netflix and the TV manufacturer to close that
               | loophole too, maybe limiting sales of dumb displays to
               | specific types of users.
        
           | pessimizer wrote:
           | People will not willingly buy a locked-down device over an
           | open device, all other things being equal. So general purpose
           | devices will not be made available, so that locked-down
           | devices will sell.
           | 
           | edit: the only people who think that being locked-down is a
           | feature are rationalizing technologists who indirectly profit
           | from that arrangement. It's not even more secure. The methods
           | used to control locked-down devices (namely constant network
           | connections and cloud storage/control) are the most
           | vulnerable attack surfaces we have, and the source of
           | virtually all contemporary security disasters.
        
             | freedomben wrote:
             | I sorely wish you were right, but the success of companies
              | like Apple seems to indicate otherwise. I won't buy a
             | locked-down device, but for every person like me there are
             | thousands who don't care.
        
             | lupire wrote:
             | This is a perfect example of survivor bias. OSes are secure
              | now, so attacks that succeed attack the security system.
        
         | an_opabinia wrote:
         | > Complete tivoization
         | 
         | Ironically TiVo is long gone from living rooms.
         | 
         | The iPhone is locked down, consumers buy 5.5 Android phones for
         | every 1 iPhone. But _rich_ users buy iPhones, and they also
         | _buy_ software, so...
        
         | nextlevelwizard wrote:
         | Being able to verify that your program runs on authenticated
         | firmware does not mean you can't modify the firmware or the
         | software running or replace it with something else.
         | 
         | It just means that you can be sure no one else has tampered
         | with your device.
         | 
         | To me it seems very silly to not follow this line of thought
          | just because someone in the future might use it to lock out
          | hackers. This is like leaving bugs unfixed because someone
          | might have a use for them.
        
           | nebulous1 wrote:
           | The user being able to verify things isn't the issue, the
           | issue is somebody else being able to verify things, perhaps
           | even requiring verification. This can even be extended (and
           | already has been) to where binaries can be supplied in
           | encrypted form and run on a processor that never reveals the
           | unencrypted code to its user.
        
         | stevenhuang wrote:
          | While the use case of secure boot is often anti-consumer/end-
          | user, there are many applications where it makes sense to have
          | attestation (YubiKey-type embedded projects, etc.).
         | 
         | Without free software implementations of secure boot et al. all
         | this would just happen behind closed doors. At least with this
         | the field progresses and you'll have the tools to secure your
          | own applications when the right project comes along.
         | 
         | > What good is having all the source code in the world if I can
         | never put my (or anyone else's) modifications to it into
         | effect?
         | 
         | Well, it'll be more difficult to get pwned, for one.
        
         | mjg59 wrote:
         | You're not wrong, but unfortunately those of us on the side of
         | free software aren't the ones driving most technology
         | decisions. This technology already exists, and if people want
         | to use it to lock us out of our own hardware, they can already
         | use it to do so. Right now we're partially saved by remote
         | attestation on x86 systems just being too complicated (and,
         | basically, too privacy violating) to deployed in a way that
         | could be used against users, but this is basically what things
         | like Safetynet on Android are doing right now.
         | 
         | When the proprietary software industry is already using
         | technology, I don't think we benefit by refusing to touch it
         | ourselves. We can use this tech to lock down devices that are
         | better off locked down, and where we're not violating user
         | freedom in the process. We can use this to make it harder for
         | activists to have their machines seized and compromised in a
         | way they can't detect. Refusing to do the good things isn't
         | going to slow down the spread of the bad things.
        
           | userbinator wrote:
           | I am completely against any form of this technology, just
           | like DRM, because it breaks the concept of physical
           | ownership.
           | 
           |  _We can use this to make it harder for activists to have
            | their machines seized and compromised in a way they can't
           | detect._
           | 
            | This argument is often made and I hate it because it
           | advocates destroying the freedom of many just for the needs
           | of a _tiny_ minority --- and if a nation-state is going after
            | you, it's pretty much game over unless you can create your
           | own hardware.
           | 
            |  _Refusing to do the good things isn't going to slow down
           | the spread of the bad things._
           | 
           | Maybe to you it's a good thing, but to many of us, that is
           | the equivalent of giving Big Tech the noose and saying "don't
           | put it around my neck!" (The saddest part is how many will
           | happily work in these noose-factories, either oblivious to or
           | convinced that what they're doing is "good".)
        
             | zxzax wrote:
             | >it breaks the concept of physical ownership.
             | 
              | Am I missing something? This seems to be incorrect: this is
              | explicitly a case where you, the hardware owner, control
              | the signing keys. It's nothing like DRM, which is a case
              | where an outside person controls the keys.
        
               | josephcsible wrote:
               | The problem is now that this exists and is easy to set
               | up, it's easy for the manufacturer to make a device where
               | they're in control of the keys forever instead of the
               | eventual owner gaining control.
        
               | zxzax wrote:
               | So don't buy that device? Those devices already exist,
               | that doesn't prevent you from buying other devices where
               | you control the keys. If manufacturing an unlocked device
               | becomes unprofitable and stops happening everywhere, then
               | we can talk about what to do, but I don't think the
               | existence of secure boot on Linux is going to make much
               | of a difference either way.
        
               | salawat wrote:
               | >If manufacturing an unlocked device becomes unprofitable
               | and stops happening everywhere, then we can talk about
               | what to do, but I don't think the existence of secure
               | boot on Linux is going to make much of a difference
               | either way.
               | 
                | You mean... the last decade or so? Pretty much mobile,
                | period, sans the Librem 5 and I think maybe one other.
                | Anything with an ARM chip that'll run Windows must be
                | secure-booted and signed by Microsoft.
               | 
               | Or how about Nvidia(mostly)/AMD(to a lesser degree) video
               | cards, where the entertainment industry increasingly
               | relies on cryptographic attestation to constrain what
               | people can do with hardware they bought? There is no
               | "fully unlocked" buying option, and trying to divest
               | yourself of Nvidia is impossible while being able to use
               | your card to the fullest.
               | 
               | Or John Deere with their crippled hardware as a service
               | model?
               | 
                | I'm all for charging for convenience. That's a value
                | add. I'm not cool with intentional crippling and
                | extortionate practices whereby the manufacturer maintains
                | ultimate control after first sale, either legally or
                | practically, through privileged access to signing keys.
        
               | zxzax wrote:
               | So.... don't buy those devices? I have a pinephone
               | myself, I don't really use GPGPU, and all the farmers I
               | know buy tractors from other companies.
        
             | bitwize wrote:
             | Sorry, but the future of computing is secure attestation of
             | everything the CPU runs -- from the boot firmware to end-
             | user applications. In the open source world we have two
             | options -- we can either get on board with this or we can
             | fight it and lose.
        
           | caspper69 wrote:
           | Yes, and you only need this at the root hypervisor level,
           | once peripherals can be abstracted in a new way (maybe DMA at
           | a different privilege level, certain hardware features would
           | be required).
           | 
           | I am not super mad if I have to run my custom kernel in a VM.
           | It substantially reduces the surface area exposed.
        
           | Teknoman117 wrote:
           | I personally don't dislike the concepts of trusted computing.
           | As much as I love to tinker with things, the last thing I
           | want is some data appliance being remotely exploitable.
           | 
           | I think all the devices that provide more security by being
           | heavily locked down should basically have a tinker switch. If
           | you really want to write your own firmware for your phone or
            | your dishwasher, flip it to tinker mode, which locks you
            | (maybe permanently) out of the software it shipped with and
            | lets you flash whatever onto it. The manufacturer gets to waive
           | all responsibility for your safety (digital, physical, etc.)
           | from that point onward.
           | 
           | Bonus points if it just blows away the keys to the onboard
           | software so you can use the security mechanisms for your own
           | code.
        
           | floatboth wrote:
           | Heh, Safetynet is consistently fooled by Magisk's hide-root
           | option, so is it really doing that?
        
             | no_time wrote:
             | Yeah, on devices without hardware attestation. Which is now
             | the new normal on all phones sold. When the software route
              | inevitably gets disabled and you can no longer fool Google
              | into believing you don't have hardware attestation, you are
              | done for good.
        
           | lmm wrote:
           | Development work is expensive. A cheap turnkey way of
           | building a locked-down, remote-attesting distribution is
           | going to make the bad things cheaper and more common. I'm
           | sure proprietary developers would get there eventually, but
           | this is one class of software where I think publishing a
           | stable, bug-free implementation that anyone can use does more
           | harm than good.
        
             | viraptor wrote:
             | It's been already done many times. You can lock down your
             | own machine from scratch in a couple of days with no prior
             | knowledge. There's really nothing to hide and all elements
             | of it are useful in their own right.
        
             | zxzax wrote:
             | How? You can use this to secure your own devices from
             | tampering. Lots of (cheap) devices are already locked down
             | like this, would it really help to deprive yourself of the
             | capability to secure your own devices too?
        
               | lupire wrote:
               | Many people want contradictory things and loudly ignore
               | the contradictions.
        
         | t0mas88 wrote:
         | I think the technology itself is great to have in the open
         | source sphere. There are many valid reasons to want to have a
         | system that is both open source AND cryptographically proven to
         | run exactly the software you think it runs.
         | 
          | For example, voting machines should be done in this way. Open
         | source software such that outsiders are able to verify + a
         | secure boot process such that anyone can verify that the
         | machine is really running the code it is supposed to run.
         | 
         | Of course we should all still be very careful of what we accept
         | in terms of control of our hardware. And I agree with you that
         | things are not moving in the right direction there, with locked
         | ecosystems everywhere.
        
           | stefan_ wrote:
           | But nothing here is cryptographically proven. Remote
           | attestation ala Intel SGX is an opaque black box that comes
           | down to trusting Intel.
           | 
           | I think most people would prefer no voting machine software
            | at all, seeing how most people cannot "verify that the
           | machine is really running the code it is supposed to run" but
           | can indeed verify a paper ballot.
           | 
           | And of course signing a huge code bundle is the farthest
           | possible thing from "run exactly the software you think it
           | runs". Console manufacturers keep learning that. You really
           | wanted to run that WebKit version that turned out to instead
           | be a generic code execution widget? Think again.
        
             | mjg59 wrote:
             | TPM-based remote attestation doesn't involve SGX at all. If
             | Boot Guard is enabled then you're trusting the ACM to
              | perform its measurements correctly, but that's not an overly
              | large amount of code and it can be disassembled and verified.
        
             | mwcampbell wrote:
             | > I think most people would prefer no voting machine
             | software at all
             | 
             | The majority of people, with normal sight and no mobility
             | impairment, may be fine with paper ballots. But for some of
             | us, an accessible voting machine is more than a
             | convenience, as it enables us to independently (and
             | therefore privately) cast our votes.
        
               | nebulous1 wrote:
               | A machine could fill out the paper ballot in these cases.
        
               | throwaway3699 wrote:
               | Keep it as a secondary option then? Or, at worst, have
               | every state or county independently write the software
               | for their machines so they won't all be compromised. A
               | scalable way of breaking an election is dangerous.
               | 
                | Even a mobile app to guide blind users on the ballot
               | would be more secure.
        
             | t0mas88 wrote:
             | Sure, you can prefer paper ballots, but that's just one
             | example.
             | 
             | The reason I bring it up is that one of the benefits of
             | open source that is often mentioned is the ability to
             | verify that it does what you think it's doing. Doesn't
             | matter whether it's a voting machine, a self driving system
             | or an ATM or whatever. It's still good for open source to
             | have the capability to do this kind of proving in cases
             | where you want it.
        
             | zxzax wrote:
             | If you don't trust the CPU vendor, a solution there would
             | be to buy multiple CPUs from multiple vendors, run the same
             | thing on all of them, and compare the results. You would
             | still want them all to have the equivalent of SGX.
        
           | _Algernon_ wrote:
           | I have yet to see a digital voting system that a grandma with
           | 0 digital literacy can dream of trusting. That's my standard
           | for digital voting.
           | 
           | Basically excludes any black box machines, block chain,
           | cryptography and any existing computers.
        
         | [deleted]
        
         | ReptileMan wrote:
         | The question with secure boot is who has the keys. As long as
         | that's the end user it is awesome.
        
           | Ieghaehia9 wrote:
           | Which is why every trusted computing platform with free
           | software on it should have an owner override. (The total
           | count of such platforms is, as far as I know, unfortunately
           | zero.)
        
             | mjg59 wrote:
             | Pretty much every x86 system with a TPM and UEFI Secure
             | Boot can have the secure boot keys swapped out by the
             | owner.
        
               | ok123456 wrote:
               | I recently went to write a kernel module on my Ubuntu
               | system, only to discover the boot loader now defaulted to
               | "secure boot" nonsense and I couldn't insmod a non-signed
               | module.
               | 
               | I tried to simply disable "secure boot" in the BIOS
               | settings and then the boot loader just did absolutely
               | nothing. Hot fucking garbage.
               | 
               | Apparently, if you have "secure boot" available during
               | the install it will use "secure boot" without any way to
                | opt out.
        
               | tzs wrote:
               | Did you try disabling it in shim-signed instead of the
               | BIOS (method 2 on this page [1])? I'd expect that to be
               | more consistent and/or reliable since BIOS quality can
               | vary a lot from vendor to vendor.
               | 
               | You might also try signing the kernel module yourself
               | (the manual method at the bottom of that page)?
               | 
               | [1] https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
        
               | ok123456 wrote:
               | I don't want to do any of that. I just want to insmod a
               | module like I've been doing since 1995.
        
         | NovemberWhiskey wrote:
         | I know this is going to be an unpopular opinion, but if a
         | service provider wants to support only specific clients which
         | have a rigorously-managed supply chain of their
         | hardware/software, then that's up to them.
        
           | c01n wrote:
            | If I buy a computer, I own that computer; there should be
            | no strings attached.
        
             | NovemberWhiskey wrote:
             | Yes; sure.
             | 
             | But you don't own (for example) Netflix. So Netflix can
             | exclude you if you use your computer in certain ways,
             | right?
             | 
             | i.e. if you refuse to use a secure boot infrastructure and
             | do remote attestation of the trust chain to Netflix, they
             | can refuse to provide you service - obviously based on the
                | assumption that this was all made clear to you when you
                | signed
             | up.
        
               | salawat wrote:
                | The ol' "they're a private business, they can do what
               | they want" defense.
               | 
               | I try not to whip that out because it's all fun and cool
                | til they do something you don't like.
        
       | exxo_ wrote:
       | If one can measure the whole boot process and verify the
        | attestation "remotely", why would they need secure boot on
        | top of that?
        
         | viraptor wrote:
          | You need secure boot to be able to ensure that the boot
          | process is the one you set up. Otherwise the attacker can
          | observe it once and replace it with their own version doing
          | whatever they want and saying "yup, here's your magic number,
          | I totally generated it in a legit way, not read it from a
          | saved store".
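          | 
          | Concretely: the verifier hands the device a fresh nonce, so
          | replaying yesterday's "magic number" fails. A sketch of the
          | exchange (Python; an HMAC key stands in for the TPM's
          | attestation key):
          | 
          |     import hashlib, hmac, os
          | 
          |     AK = os.urandom(32)  # stand-in attestation key
          | 
          |     def quote(nonce: bytes, pcrs: bytes) -> bytes:
          |         return hmac.new(AK, nonce + pcrs,
          |                         hashlib.sha256).digest()
          | 
          |     def check(nonce, pcrs, q) -> bool:
          |         return hmac.compare_digest(q, quote(nonce, pcrs))
          | 
          | But the quote is only as honest as whatever produced the
          | measurements, which is why the chain has to be rooted in
          | secure/measured boot.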
        
       | devit wrote:
       | You should not do that since there is no reason to disallow the
       | user from doing what they want.
       | 
       | But if you really want it, writing a custom boot ROM and OS is
       | probably the only way you can have an actually secure system (you
       | might need a custom CPU as well).
       | 
        | Given the lack of skill and discipline of most programmers, the
        | whole TPM/secure-boot/UEFI/GRUB/Linux/dm-verity stack is likely
        | full of holes, so just assuming that it works as you'd expect
        | will probably end in disappointment.
        
         | GordonS wrote:
         | > You should not do that since there is no reason to disallow
         | the user from doing what they want.
         | 
         | For desktop computing ("personal computing", if you like),
         | anyone here will agree.
         | 
         | But the article is specifically talking about securing
         | appliances, and generally when talking about appliances, you're
         | talking about rackable machines sold to the enterprise. There,
         | they don't care a jot about being able to muck around with the
         | machine - the whole point of an appliance is that you plug it
          | in and go; it more or less manages itself.
         | 
         | And for many of these customers, and of course any operating in
         | a high-security environment (e.g. defence), this level of
         | security is a feature, not a hindrance.
        
       | Layke1123 wrote:
       | This is what should be posted at the top of Hacker News. I hate
       | corporate software engineers sold on the capitalism train.
        
       | [deleted]
        
       | no_time wrote:
        | Trusted computing, and TPMs by extension, are treachery on a
        | chip without a user override. And the fact that even the most
        | tech-savvy of us don't care (looking at security researchers
        | with Macs) makes me super pessimistic about the future of
        | computing. Can't wait for the time when I won't be allowed to
        | access a website because I have unsigned drivers running...
        
         | derefr wrote:
         | You're conflating trusted computing with there being an OS-
         | manufacturer monopoly over TCB boot keys.
         | 
         | Trusted computing is great if you're an IT admin (or even an
         | "IT admin of one") and you order devices with an open/unsealed
         | boot-key store from your hardware vendor. You can install your
         | own boot keys on your fleet of devices, seal the boot key
         | store, modify OS images/updates to your heart's content, and
         | then sign those modified images with those same keys. Now only
          | the images _you've_ created will run on those computers.
         | People won't even be able to install the original, _unmodified_
          | OS on those machines; they'll now only ever run the version
         | you've signed.
         | 
         | This isn't just about employees not being able to futz with the
          | MDM. Even the _OS vendor_ won't be able to push updates to
         | your managed devices without your consent. You'll truly have
         | taken control over what code the devices will (or more
          | importantly, _won't_) run.
         | 
         | In any situation where the person intended to have physical
         | access to the device is not the same as the owner-operator of
         | the device, this kind of thing is essential. Obviously, public-
         | use computers in libraries et al. But also, ATMs. Kiosks.
         | Digital workplace control panels and dashboards. On all of
         | those, nobody's around to monitor the hardware to ensure that
         | someone doesn't just open the back and swap the hard drive out
         | for their own. With TCB, swapping the hard drive out just makes
         | the device not boot.
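         | 
         | The mechanics boil down to detached signatures. Real secure
         | boot uses Authenticode signatures over PE binaries, but as a
         | rough Python sketch of the trust decision (illustrative only,
         | not firmware code; file and variable names made up):
         | 
         |     from cryptography.exceptions import InvalidSignature
         |     from cryptography.hazmat.primitives.asymmetric import (
         |         ed25519,
         |     )
         | 
         |     # IT admin, offline: generate a platform keypair and
         |     # sign the OS image; the public half is what gets
         |     # enrolled in the (then sealed) boot-key store.
         |     platform_key = ed25519.Ed25519PrivateKey.generate()
         |     image = open("os-image.bin", "rb").read()
         |     signature = platform_key.sign(image)
         | 
         |     # Firmware, at boot: verify against the enrolled key
         |     # and refuse anything that doesn't check out.
         |     enrolled = platform_key.public_key()
         |     try:
         |         enrolled.verify(signature, image)
         |     except InvalidSignature:
         |         raise SystemExit("refusing to boot unsigned image")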
        
         | tachyonbeam wrote:
         | > Can't wait for the time when I won't be allowed to access a
         | website because I have unsigned drivers running...
         | 
         | If that happens, it will create a market opportunity for
         | websites without DRM or such checks. If you fuck with the
         | ergonomics, you necessarily always create a market opportunity
         | for competitors IMO. That being said, I also would rather use
         | open computing platforms where I can easily install whatever
         | OS, drivers, hardware or userland software I please.
        
           | no_time wrote:
           | Will it though? From what I see anecdotally, people will just
           | accept it as the new normal sooner or later. Just like when
           | Android rolled out a feature that enables apps to prevent you
           | screenshotting them. At first it was annoying but now nobody
           | cares.
        
             | nebulous1 wrote:
             | I tend to agree. This ended up longer than expected, sorry.
             | 
             | There's the theory of how incentives should work in free
             | markets, and then there's the practice of exactly how savvy
             | consumers can really be, and whether non-consumer interests
             | can organize themselves in a way that easily overpowers the
             | consumers.
             | 
             | I've thought about this recently regarding hardware DRM
             | in Android phones. Google has Widevine, which has
             | different levels of support; Netflix, for example, will
             | only send high-definition streams if your device supports
             | L1 Widevine, which means the stream will only be
             | decrypted in "secure" sections of hardware that the user
             | cannot access. This is intended to stop user access to
             | the unencrypted media.
             | 
             | This hardware is widely available in Android devices
             | already, so why would Netflix* do otherwise? And if you
             | want to stream HD from Netflix then you'll get a device
             | that supports it because Netflix require it. However, how
             | did our devices end up with this technology to begin with?
             | If consumers acted in their own best interest, why would
             | they pay to have this extra technology in their devices
             | that protects somebody else's interest? If this technology
             | _wasn't_ on our devices already, do we think that Netflix
             | wouldn't be offering HD streams anyway? Basically, if
             | consumers could organize as effectively as corporate
             | interest, would this technology have made it to our devices
             | at all?
             | 
             | It's possible that it would have. Perhaps overall people
             | would deem it worthwhile to acknowledge and protect
             | corporate rights holders so that they can continue to
             | produce the media they want to consume and stop people
             | consuming it for free. Personally, I would not have
             | accepted this bargain and I would have left it up to the
             | media companies to manage their own risks and rewards, and
             | I strongly suspect that they would have found profitable
             | ways of doing so that would include non-DRM HD streaming. I
             | think it's tough to say what an educated consumer would
             | think on average because so few consumers think about these
             | things and those that do may have a strong bias that led
             | them to research it in the first place.
             | 
             | * I'm saying Netflix here because it's easier, but in
             | reality I'm sure a lot of the content they licence will
             | require DRM so it's not entirely up to them.
        
       | xvector wrote:
       | Relevant, HEADS firmware: https://github.com/osresearch/heads
       | 
       | Definitely worth reading the Wiki: https://osresearch.net/
       | 
       | Can be run on a variety of laptops, including a ThinkPad X230.
       | Ships by default on Librem laptops. Uses the second-to-last
       | approach described by the article (TOTP-based).
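       | 
       | For the unfamiliar: in the TOTP approach, the firmware unseals
       | a shared secret from the TPM (only possible when the measured
       | boot state matches) and displays a standard six-digit code you
       | compare against your phone. The code itself is just RFC 6238;
       | a minimal sketch, with a made-up secret:
       | 
       |     import base64, hashlib, hmac, struct, time
       | 
       |     def totp(secret_b32, step=30, digits=6):
       |         """Standard RFC 6238 TOTP over HMAC-SHA1."""
       |         key = base64.b32decode(secret_b32)
       |         counter = struct.pack(">Q", int(time.time()) // step)
       |         mac = hmac.new(key, counter, hashlib.sha1).digest()
       |         offset = mac[-1] & 0x0F
       |         word = struct.unpack(">I", mac[offset:offset + 4])[0]
       |         code = (word & 0x7FFFFFFF) % 10**digits
       |         return str(code).zfill(digits)
       | 
       |     # Firmware and phone share the secret; matching codes mean
       |     # the firmware still holds it, so the boot state is intact.
       |     print(totp("JBSWY3DPEHPK3PXP"))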
        
       | swiley wrote:
       | No one wants this. (the people who understand it don't want it
       | and the people who don't care don't care.)
        
       | [deleted]
        
       | salawat wrote:
       | Trusted computing is almost never about making the device
       | trustworthy to the owner-operator; generally it's quite the
       | opposite. It just gets marketed that way in the hope that folks
       | don't ask too many questions.
        
       | R0b0t1 wrote:
       | You can't. The post is still somewhat valuable, but should really
       | not use "trustworthy."
        
         | sarnowski wrote:
         | Dealing in absolute terms does not help security. It depends
         | entirely on your threat model.
         | 
         | If you consider the NSA or similar agencies part of your
         | threat model, then you are in a world of pain anyway, and an
         | entry-level guide of a blog post is certainly not
         | appropriate.
         | 
         | For everyone else, this already adds quite a big defensive
         | layer to your arsenal, even if it is not unhackable in
         | absolute terms.
        
           | R0b0t1 wrote:
           | I know. Thus I said:
           | 
           | >The post is still somewhat valuable, but should really not
           | use "trustworthy."
           | 
           | The posted article is dealing in absolutes, not me.
        
         | eklitzke wrote:
         | Why can't you? The article explains how.
        
           | usr1106 wrote:
           | Depends on your level of paranoia and the age of the CPU.
           | 
           | The ME has had many security vulnerabilities, with probably
           | more to come. For an appliance some old CPU might be good
           | enough, but it does not get security updates. Some claim
           | the ME might contain an NSA backdoor. That the ME can do
           | networking certainly doesn't inspire confidence. The US
           | government can order CPUs without the ME, but nobody else
           | can. That does not raise confidence either.
        
             | ex_amazon_sde wrote:
             | Please don't call it "paranoia": a whole lot of
             | vulnerabilities have been found in CPUs together with
             | plenty of undocumented functions that look just like
             | backdoors.
             | 
             | On top of that, it is well known that governments research
             | or buy 0-day hardware and software vulnerabilities and keep
             | them secret to be used as weapons.
             | 
             | ME is just a fraction of the attack surface. When I read
             | the title of the article I thought "trustworthy" was about
             | mitigating hardware vulnerabilities.
             | 
             | At this stage it's practically impossible. :(
        
             | eru wrote:
             | What's ME? The article doesn't seem to mention this.
        
               | adrianN wrote:
               | https://en.wikipedia.org/wiki/Intel_Management_Engine,
               | AMD has something similar.
        
               | SilverRed wrote:
               | Every Intel CPU since 2008 contains a coprocessor
               | which runs at a higher privilege level than the main
               | CPU, and therefore the OS. Its primary function is
               | DRM for video, plus theorized backdoor access for
               | governments.
        
               | Google234 wrote:
               | It should be noted that AMD has an equivalent
               | management engine.
        
               | usr1106 wrote:
               | But ARM hasn't. Or have they added something to their
               | server range of designs?
        
               | mjg59 wrote:
               | The short version is "It's complicated". Most ARM cores
               | have a feature called TrustZone. Effectively, there's a
               | set of system resources that are allocated to TrustZone
               | and not accessible from the normal world. Various events
               | can trigger the CPU to transition into executing code in
               | this "Secure world", at which point the core stops
               | running stuff from the normal world and instead starts
               | running an entirely separate set of things. This can be
               | used to do things like "hardware" key generation, DRM
               | management, device state attestation and so on. Whether a
               | specific platform makes use of TrustZone is largely up to
               | the platform designers, but there's plenty of room to
               | hide backdoors there if you were so inclined.
        
               | usr1106 wrote:
               | Hmm, I have never seen TrustZone as comparable to the
               | ME.
               | 
               | TrustZone is a secure execution environment, mostly
               | isolated from normal CPU operation. Wasn't it the case
               | that it cannot even access main memory?
               | 
               | The ME really is more privileged than the CPU?
               | 
               | I have not heard about TrustZone doing networking, but
               | the ME can supposedly even do WLAN while the CPU is not
               | running.
               | 
               | Disclaimer: I am not a hands-on expert at that level,
               | more like an armchair pilot...
        
               | als0 wrote:
               | TrustZone is a CPU mode, hence it is not fully isolated
               | from normal CPU operation. The CPU chooses to enter it
               | and the current CPU state gets saved/restored. It
               | contains the highest exception level, so it is able to
               | access all memory. It does not usually have networking
               | because that would invite complexity, but there is
               | nothing to stop a vendor from putting a full network
               | stack in there and assigning a network peripheral.
               | Typically, it would rely on the main OS to send and
               | receive packets.
        
           | hulitu wrote:
           | It's right there in the article: "In the general purpose
           | Linux world, we use an intermediate bootloader called Shim
           | to bridge from the Microsoft signing authority to a
           | distribution one."
           | 
           | So you need to trust Microsoft for the first keys :)
        
             | midasuni wrote:
             | " Dan would eventually find out about the free kernels,
             | even entire free operating systems, that had existed around
             | the turn of the century. But not only were they illegal,
             | like debuggers--you could not install one if you had one,
             | without knowing your computer's root password. And neither
             | the FBI nor Microsoft Support would tell you that."
        
               | MaxBarraclough wrote:
               | Source: _The Right to Read_, a short story by RMS, 1997.
               | 
               | https://www.gnu.org/philosophy/right-to-read.en.html
        
             | mjg59 wrote:
             | We do that for convenience, so you can boot Linux without
             | having to hunt through firmware menus to reconfigure them.
             | But every machine /should/ let the user enroll their own
             | keys[1], so users are free to shift to a different signing
             | authority.
             | 
             | [1] Every machine I've ever had access to has. If anyone
             | has an x86 machine with a Windows 8 or later sticker that
             | implements secure boot but doesn't let you modify the
             | secure boot key database, I have a standing offer that I'll
             | buy one myself and do what I can to rectify this. I just
             | need a model number and some willingness on your part to
             | chip in if it turns out you were wrong.
        
               | AshamedCaptain wrote:
               | Most Surface Pro x86 devices do not let you enroll user
               | keys through the firmware. In fact the original Surface
               | Pro doesn't even have the UEFI MS key, so it can't even
               | boot Shim. Subsequent Surface devices do allow you to
               | enroll the MS UEFI key through a firmware update
               | (requires Windows), and starting from the Surface Pro 3
               | iirc the MS UEFI key is built in (but there's still no
               | option to enroll your own keys through the firmware).
               | 
               | However, they all do have the option to disable Secure
               | Boot entirely (and you get a permanent red boot screen
               | for the privilege).
        
               | Foxboron wrote:
               | I have been trying to improve the usability of secure
               | boot key management on Linux for the past year by writing
               | some libraries from scratch and sbctl. I have even
               | started writing full integration testing with
               | tianocore/ovmf!
               | 
               | https://github.com/Foxboron/sbctl
               | 
               | It should hopefully end up being an improvement on
               | efitools and sbsigntools. Tried posting about this on HN
               | but somehow it's a topic with little to no interest,
               | strange world!
        
           | tinus_hn wrote:
           | Console and phone manufacturers have chased this dream for
           | decades and each and every one has been hacked to run
           | arbitrary code and applications that are supposed to 'only
           | run on trusted hardware'.
           | 
           | You can make it difficult but defeating an attacker who can
           | touch the hardware is for all intents and purposes
           | impossible.
        
             | mjg59 wrote:
             | Where are the hacks that let you run arbitrary code on an
             | Xbox One running current firmware?
        
               | tinus_hn wrote:
               | Do you think they will never exist?
               | 
               |  _edit_ I found that Microsoft did the smart thing like
               | Sony did with the original PS3 and allowed people to run
               | their own code (but not XBox games) on their consoles,
               | removing a large incentive for people to hack the
               | console.
               | 
               | That doesn't automatically make the security watertight
               | though.
        
               | mjg59 wrote:
               | "Never" is a strong word, but given that they're already
               | previous generation devices and haven't been properly
               | compromised yet, it wouldn't surprise me.
        
       | strstr wrote:
       | The frustrating flaw in these setups is disk integrity. It's
       | pretty consistently {speed, integrity, mutability}, choose two.
       | dm-crypt, dm-integrity, and dm-verity cover all the pairs, but
       | none of them completely solves the problem. If you have a
       | fairly static configuration, I imagine you can set up a blend
       | of, say, dm-verity (for binaries/static files) and dm-integrity
       | (for mutable files) and get something workable.
       | 
       | Caveat: I seem to recall dm-integrity being somewhat flawed and
       | vulnerable to rollback.
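       | 
       | For anyone who hasn't dug into it: dm-verity's integrity model
       | is a Merkle tree of block hashes, where only the root hash has
       | to come from somewhere trusted (e.g. the signed kernel command
       | line). A toy sketch of the idea (real dm-verity stores hash
       | blocks on disk and verifies lazily on read):
       | 
       |     import hashlib
       | 
       |     def merkle_root(blocks):
       |         """Root of a SHA-256 Merkle tree over data blocks."""
       |         level = [hashlib.sha256(b).digest() for b in blocks]
       |         while len(level) > 1:
       |             if len(level) % 2:           # duplicate the last
       |                 level.append(level[-1])  # node on odd levels
       |             level = [
       |                 hashlib.sha256(level[i] + level[i+1]).digest()
       |                 for i in range(0, len(level), 2)
       |             ]
       |         return level[0]
       | 
       |     blocks = [b"block0", b"block1", b"block2"]
       |     trusted_root = merkle_root(blocks)  # from signed cmdline
       |     blocks[1] = b"tampered"
       |     assert merkle_root(blocks) != trusted_root
       | 
       | Which is also why mutability is the hard part: any write
       | changes the root, and re-signing the root online means keeping
       | a key on the box.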
        
         | mkj wrote:
         | You could possibly use ZFS with sha256 checksums for that
         | purpose? You would have to somehow sign the Merkle root each
         | time you write it; not sure how easy that would be. Perhaps
         | write it to another partition and hope it's atomic enough? Or
         | ZFS encryption would probably do it already if you don't need
         | the system in cleartext.
         | 
         | https://blogs.oracle.com/bonwick/zfs-end-to-end-data-integri...
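         | 
         | The "atomic enough" part is probably the easier half: on
         | POSIX a rename is atomic, so the signed root could be
         | published with the usual write-temp-fsync-rename dance. A
         | sketch with key handling waved away (the path is made up;
         | root is the Merkle root bytes, key anything with a sign()
         | method, e.g. an Ed25519 private key):
         | 
         |     import os
         | 
         |     def publish_root(root, key, path="/meta/merkle.root"):
         |         blob = root + key.sign(root)  # root + detached sig
         |         tmp = path + ".tmp"
         |         with open(tmp, "wb") as f:
         |             f.write(blob)
         |             f.flush()
         |             os.fsync(f.fileno())  # durable before rename
         |         os.replace(tmp, path)     # atomic swap on POSIX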
        
           | t0mas88 wrote:
           | The tricky part with modifications is described in the
           | article: you would have to have the signing key available
           | on the system, which usually means it could be extracted
           | from that system, and then it loses all protection.
        
           | strstr wrote:
           | You then have write amplification from the Merkle tree.
           | Ignoring performance, something like this should be
           | possible, though. For atomicity, there's going to be some
           | clever journaling-based solution.
        
       | [deleted]
        
       | 3np wrote:
       | The next step to this would be to put the actual workloads in a
       | Trusted Execution Environment (Intel SGX) to add another layer of
       | integrity.
        
       | cy6erlion wrote:
       | > Let's say you're building some form of appliance on top of
       | general purpose x86 hardware. You want to be able to verify the
       | software it's running hasn't been tampered with. What's the best
       | approach with existing technology?
       | 
       | Why can we not use something like Guix by declaratively setting
       | up a system [0] and for extra safety have it run in a container
       | [1]?
       | 
       | [0]
       | https://framagit.org/tyreunom/guix/-/blob/99f47b53f755f0a6cb...
       | 
       | [1] https://guix.gnu.org/en/blog/2017/running-system-services-
       | in...
        
       ___________________________________________________________________
       (page generated 2021-06-02 23:02 UTC)