[HN Gopher] Tell HN: It looks like even air gapped Bitcoin hardw...
       ___________________________________________________________________
        
       Tell HN: It looks like even air gapped Bitcoin hardware wallets can
       phone home
        
        We had a great discussion here on HN a few days ago about
        whether it is possible to use Bitcoin in a trustless way, so
        that you control your Bitcoin yourself and don't have to trust
        any privileged party not to take it from you:
        https://news.ycombinator.com/item?id=32115693  Interestingly,
        there was a _lot_ of speculation and misinformation, so even on
        Hacker News this topic is still only vaguely understood. But
        some very good information also came to light.  The biggest
        bomb dropped in the thread received little attention: the fact
        that signing a transaction is not deterministic. When a
        hardware wallet is asked to sign a transaction, it can
        internally do that multiple times and then choose from multiple
        valid signatures. This means it can encode data into the
        signature. For example, it could choose between two signatures
        with certain properties (say one results in an even checksum of
        the bits of the signature and one in an odd checksum) and
        thereby signal one bit to the creator of the wallet.  Every
        time it signals a bit of your seed phrase home, the security of
        your coins is cut in half.  Here is an article about the fact
        that elliptic curve signatures are not deterministic:
        https://medium.com/@simonwarta/signature-determinism-for-
        blockchain-developers-dbd84865a93e  The way I understand it,
        the wallet can choose from a large number of possible
        signatures and thereby signal many bits to its creator. In
        every transaction.  I think a discussion about this should be
        started. The way I understand it, this makes it completely
        impossible to use Bitcoin in a trustless way. Even with an air
        gapped hardware wallet, you are always at the mercy of the
        wallet manufacturer and the delivery chain that gets the wallet
        to you. If it gets swapped out on the way to you, you are at
        the mercy of whoever swapped it out.
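The grinding attack sketched above can be made concrete. The following is a toy, hedged illustration and not anything from the thread: a pure-Python secp256k1 ECDSA signer (function names like `sign_leaking_bit` are mine) that retries random nonces until the parity of the public signature value r equals the secret bit it wants to leak.

```python
# Toy demo (not production crypto): a malicious ECDSA signer grinds the
# nonce k until the low bit of the public signature value r equals the
# secret bit it wants to leak. Curve: secp256k1 (curve parameter a = 0).
import hashlib, secrets

p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                     # P + (-P) = point at infinity
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def point_mul(k, P):
    R = None                            # double-and-add scalar mult
    while k:
        if k & 1: R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(priv, msg, k):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    r = point_mul(k, G)[0] % n
    s = pow(k, -1, n) * (z + r * priv) % n
    return r, s

def sign_leaking_bit(priv, msg, secret_bit):
    # Grind fresh random nonces until r's parity encodes the bit.
    while True:
        k = secrets.randbelow(n - 1) + 1
        r, s = sign(priv, msg, k)
        if r != 0 and s != 0 and r & 1 == secret_bit:
            return r, s

priv = secrets.randbelow(n - 1) + 1
for bit in (0, 1):
    r, _ = sign_leaking_bit(priv, b"tx", bit)
    assert r & 1 == bit   # anyone watching the chain reads the bit off r
```

Every signature produced this way is perfectly valid; the watcher needs nothing but the public ledger, which is exactly the concern raised above.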
        
       Author : JonathanBeuys
       Score  : 47 points
       Date   : 2022-07-21 16:36 UTC (6 hours ago)
        
       | mattdesl wrote:
       | If you are willing to go to any lengths to remove the need to
       | trust a manufacturer, you can't even start with a dice roll as
       | the dice could be weighted against you. But if for the sake of
        | example you trust the dice and its uniform randomness to
       | construct the private key, you should be able to construct a new
       | signature for each new transaction using the dice as a source of
       | uniform randomness, and write your own code on an always-offline
       | computer. You can then generate the signature data needed for the
       | transaction, write it on paper, and carry it to your online
       | computer, where you plug it in and send it to the network.
       | 
       | At no point in this process, short of physical access to the
       | offline computer, is the ECDSA nonce 'k' known publicly, so I am
       | not sure what you mean by it cutting your security in half with
       | each transaction. If there are 256 bits in a nonce you would need
       | to generate _a lot_ of signatures for this to be a concern, and
       | if you want to mitigate against this you could cycle through new
       | private keys after every Nth signature.
       | 
        | A much more likely attack has to do with how you generated the
       | random value k.
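That last risk is the classic ECDSA nonce-reuse failure. A minimal sketch of the well-known algebra (toy numbers of my choosing; r is simply taken as given, so no curve arithmetic is needed):

```python
# If the same nonce k signs two messages, anyone can recover the
# private key from the two public signatures with modular arithmetic.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

priv, k, r = 0xC0FFEE, 0x1234567, 0x89ABCDEF   # toy values, r as given
z1, z2 = 111, 222                              # two message hashes
s1 = pow(k, -1, n) * (z1 + r * priv) % n       # ECDSA s-equation
s2 = pow(k, -1, n) * (z2 + r * priv) % n

k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n    # k = (z1-z2)/(s1-s2)
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n  # d = (s1*k - z1)/r
assert (k_rec, d_rec) == (k, priv)
```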
        
         | JonathanBeuys wrote:
         | Writing your own code is not what my post is about.
         | 
         | It is about the fact that even air gapped hardware wallets can
         | phone home.
        
           | actually_a_dog wrote:
           | You've missed the point entirely. Your post is claiming one
           | cannot use BTC in a "trustless" way. The GP is saying "You
           | have to either choose to trust a third party at _some_ point,
            | or go to these extreme lengths that _still_ don't ensure
            | your security unless your opsec is 100% perfect, every time."
           | 
           | OTOH, your post also doesn't _prove_ that the mass-
           | manufactured hardware in question _is_ actually malicious. At
            | best, you've shown a way that it _could_ be. Show me an
            | _actually_ malicious hardware wallet that behaves as you've
           | described, and you'll have made your point. Until then, all
           | you have is speculation and improbabilities.
           | 
           | In other words, charitably interpreted, you've shown that the
           | hardware equivalent of the C compiler in "Reflections on
           | Trusting Trust" _could_ exist, just as the paper itself
           | showed that such a C compiler _could_ exist[0]. That is all.
           | There is no evidence either one exists at all in the wild.
           | 
           | ---
           | 
           | [0]: Which, I'll admit, is a pretty cool thought exercise,
           | but has precisely _zero_ real world impact.
        
           | mattdesl wrote:
           | What do you mean "phone home"?
           | 
           | Posting a signature on a public ledger does not give
           | information about nonce k which I think is what you are
           | referring to. Each time a new transaction is signed, the k
           | value will be a new random big integer.
           | 
           | If your wallet is able to leak bits in this way it would
           | imply the value k is not chosen uniformly randomly. This is
           | my understanding of ECDSA at least.
        
             | Genbox wrote:
             | "phone home" is not really the right way to describe it.
             | The attack proposed is that a hardware wallet (being a
             | black box) can give the hardware wallet developers
             | information about the private key.
             | 
             | Note that I have very little understanding of blockchain
             | crypto, so I am unable to confirm/deny the information OP
             | gave. However, the way I understand the attack is:
             | 
             | Hardware wallet generates a private key. It keeps this key
             | in internal storage. When a transaction is made, the wallet
             | makes a signature. According to OP, there is a variable
             | here (I'm guessing either multiple private keys, or the
             | ability to choose a signature algorithm, or even embedding
             | a timestamp in the transaction) which the hardware wallet
             | can use to "leak" information.
             | 
              | Let's say the wallet decides to embed a timestamp.
              | Whenever a bit in the private key is 0, the timestamp
              | is even, and when a bit is 1, the timestamp is odd.
              | 
              | After 256 transactions (one bit per transaction),
              | presumably the whole 256-bit private key is stored in
              | the blockchain as even/odd timestamps.
             | 
             | This is of course a very slow way of leaking the private
             | key, but does illustrate the problem of having unverified
             | devices be responsible for crypto results.
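The hypothetical timestamp channel described above is trivially simple to simulate (function names here are mine, purely for illustration):

```python
# A malicious device leaks one secret bit per transaction by choosing
# an even or odd timestamp; a ledger observer decodes the parities.
def leak_timestamp(base_time, bit):
    # pick the nearest timestamp whose parity equals the secret bit
    return base_time if base_time % 2 == bit else base_time + 1

def recover_bits(timestamps):
    return [t % 2 for t in timestamps]

key_bits = [1, 0, 1, 1, 0, 0, 1, 0]
times = [leak_timestamp(1_658_000_000 + i * 600, b)
         for i, b in enumerate(key_bits)]
assert recover_bits(times) == key_bits
```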
        
               | mattdesl wrote:
               | But this comes down to "trusting a compromised device is
               | bad." The device could steal your funds from the moment
               | you send your first transaction (it could only generate a
               | set of known private keys).
               | 
               | Assuming the mode of key signing is not compromised and
               | is producing robust uniform randomness (whether it's a
               | hardware wallet, airgapped device or your own hand-rolled
               | code) it shouldn't leak anything per transaction that
               | would lead to your private key being more easily
               | discoverable.
        
       | wmf wrote:
       | Good wallets are open source and have anti-tamper and anti-
       | interdiction features so the chance of this happening should be
       | pretty low.
        
         | JonathanBeuys wrote:
         | That sounds like you trust the manufacturer of the hardware
         | wallet.
         | 
          | If so: The question was whether Bitcoin can be used in a
          | trustless way.
         | 
         | If not: How would you check that the hardware wallet in your
         | hand runs the open source code you trust?
        
           | kreetx wrote:
           | How about if you use your computer to generate the key and
           | sign transactions? Sure, the private key is stealable now,
            | but at least you know what code you're running...
        
       | Genbox wrote:
       | This is the good ol' Trusted System issue. I've roamed in this
       | area before, both in terms of creating a chain of trust in crypto
        | systems and also in a more philosophical context for voting
       | systems. I'll decompose the issue into more abstract questions:
       | 
       | > Can we ever trust any system?
       | 
       | Yes, to an extent. There is no such thing as a system that can be
       | trusted completely, but we don't need it to be in 99% of cases.
       | One might say "you can trust crypto primitive XYZ. If you use it,
       | it would take 1 billion years to break". That might be true, but
       | side-channel attacks, leaks, statistical biases and whatnot will
       | always be an issue.
       | 
       | To get as close as possible to trust in a system, it needs to be
       | formally verified with proofs. That's the best we can do
       | program/algorithm wise, but even if we trust the program, it
       | cannot trust the system it resides on.
       | 
       | > How can we achieve trust then?
       | 
       | You know how bitcoin is based on a distributed consensus
       | algorithm? It protects the whole system from collapsing due to a
        | bad actor in the system. Even if thousands of people decide to
       | cheat, it won't have any considerable effect.
       | 
       | Let's say you buy a hardware wallet from a reputable vendor - if
       | they decide to cheat, you will be at their mercy. To combat this,
        | you need a way to verify that whatever it does, it does
        | correctly, and without side effects.
       | 
       | This is again something that needs to be formally verified. Any
       | deviation from the spec will stand out like a sore thumb. To
       | achieve this, we need to introduce a verifier.
       | 
        | The verifier's job is to check whether the hardware wallet
        | did its job, but without being in possession of the private
        | key.
       | There are lots of ways to do this, but a hot topic today is zero
       | knowledge proofs, where the wallet would need to stand up to
       | scrutiny.
       | 
       | The verifier would also need to check the results on the
       | blockchain. Not just that the result generated is correct, but
       | also that it is without side-effects.
       | 
       | > But then we have to trust the verifier!
       | 
       | Yep, and each time we introduce a verifier for the verifier, we
       | will have made the system more trusted. Let's say we have N
        | verifiers, whose best interest is that your wallet did the right
       | thing.
       | 
       | In a transaction, it is not only in _your_ best interest that the
       | transaction is correct (and without side-effects), but also the
       | other party. We can extend this system to be a small group of
       | people in _any_ transaction. If a small group of verifiers all
       | agree with a certain level of consensus, then we can trust the
       | system beyond a reasonable doubt.
       | 
        | This might sound familiar to those who work with blockchains -
       | and you would be right. It is eerily similar to how it works
       | today. However, the blockchain covers only the cryptographic
       | guarantees. The system needs to be extended to cover formal
       | verification of the system as well.
       | 
       | > Example
       | 
        | Formal verification is an academic exercise for most, so
       | I'll give a small example for those of you who are unfamiliar
       | with it.
       | 
       | Let's say person A and B make a transaction. Both have super
        | secure hardware wallets, and the crypto used is
        | state-of-the-art. It should be secure, right?
       | 
       | We can review the code of the system, but it is hard to identify
       | mistakes. Who knows, maybe there will be a new area of
       | vulnerabilities in a few years, and we never saw it coming.
       | 
       | Within the area of "correctness", we first need to make a formal
       | specification. We create some testable properties about the
        | system that need to hold true, no matter the transaction or who
       | is involved (these are called invariants).
       | 
       | So person A transfers 1 bitcoin to person B, they do so by
       | signing a nonce with a private key. Person A checks the nonce and
       | ensures it is indeed random (test 1). The signature is sent to
       | person B, which then tests if the signature is no different than
       | random data (test 2).
       | 
        | How tests 1 and 2 are performed is incredibly important and
        | very difficult to do, but not impossible.
       | 
       | If test 1 or 2 happens to be non-random, then we can just reject
       | the transaction. We don't know if it was non-random by chance or
       | on purpose, but since it does not live up to our criteria, we
       | will reject it.
       | 
       | This means Person A will check what they got from person B and
       | vice versa. However, why not have a bunch of random people
       | participating in the block chain do the same checks?
       | 
        | If 0.1% of all people in the blockchain check the transaction
       | between person A and B, and they all have a say if the
       | transaction gets rejected, then we can trust the system beyond a
       | reasonable doubt.
       | 
       | And no, this system is not perfect. We don't need it to be. We
        | need it to be good enough that it becomes incredibly hard to
        | cheat. Note that I've omitted a lot of details for brevity.
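The N-verifier idea above can be modeled in a few lines (the 2/3 supermajority threshold is an arbitrary assumption of mine, not from the comment):

```python
# Toy consensus gate: accept a transaction only when a supermajority
# of independent verifiers agree it passed the correctness checks.
def accept(votes, threshold=2 / 3):
    return sum(votes) / len(votes) >= threshold

assert accept([True, True, True, False])       # 3/4 agree -> accepted
assert not accept([True, False, False, True])  # 2/4 agree -> rejected
```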
        
       | [deleted]
        
       | philomath_mn wrote:
       | Very interesting! The "attacker" wouldn't know which signatures
       | came from their hardware, but I suppose they could easily scan
       | all transactions to find them.
       | 
       | The only fix I can think of would be to evaluate the hardware
       | signatures using statistical tests to try to pick up any bias.
       | This would be a burden on the user, but at least feasible.
        
         | lacker wrote:
         | The malicious wallet could encrypt the data it's reporting
         | before splitting it into bits to report it. Then there won't be
         | any pattern to show up on statistical tests.
        
         | JonathanBeuys wrote:
         | Yes, the attacker would watch every transaction on the
         | blockchain for their bits. Not hard to do, since there are just
         | a few transactions per second.
         | 
         | Interesting idea with the bias checking. Not sure if it is
         | possible. If it is, it would probably need very clever software
         | to do that. One that bombards the hardware wallet with a big
         | number of seeds and transactions and checks if it can find
         | indications of the seeds having an impact on the signatures.
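The bias check proposed here could start as crudely as a binomial z-test over a stream of signature-parity bits (a sketch under my own assumptions; as lacker notes above, an encrypted leak would still look random and pass exactly this kind of test):

```python
# Flag a bit stream whose ones-count deviates too far from the
# Binomial(n, 0.5) expectation. |z| > 3 is a common rough threshold.
import math

def parity_bias_z(bits):
    ones = sum(bits)
    nbits = len(bits)
    return (ones - nbits / 2) / math.sqrt(nbits / 4)

fair = [i % 2 for i in range(10_000)]   # perfectly balanced stream
skew = [1] * 6000 + [0] * 4000          # 60% ones: heavily biased
assert abs(parity_bias_z(fair)) < 3
assert abs(parity_bias_z(skew)) > 3
```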
        
       | csdvrx wrote:
       | > Interestingly, there was a lot of speculation and
       | misinformation. So even on Hacker News, this topic is still only
       | vaguely understood.
       | 
       | Indeed - and thanks a lot for the link, that was a super
        | interesting read!
       | 
       | > The way I understand it, the wallet can chose from a large
       | number of possible signatures and thereby signal many bits to its
       | creator. In every transaction.
       | 
       | Isn't it possible to make that deterministic by adding some rank-
       | ordering heuristic? (ex: always prefer the smallest numerical
       | signature, or with the most consecutive numbers etc)
       | 
       | Then if 2 wallets from 2 different providers disagree, you would
       | know there's a problem!
       | 
        | In a way, it would be like reproducible software builds:
       | controlling the randomness, except it would be done ex-post
       | (ranking the possible choices and selecting one) instead of ex-
       | ante (setting the clock etc).
       | 
       | If that's impractical, a simpler way may be to require the wallet
       | to make say 100 possible signatures, but then randomize which one
       | is used in another independent step.
       | 
       | Also, from the article:
       | 
       | >> "deterministic signing" means that at least one deterministic
       | way to generate signatures exist. It does not imply that a signer
       | can only generate one valid signature. Due to the nature of the
       | signing algorithms, an observer cannot detect if a standard
       | algorithm or a customization was used.
       | 
       | The core problem seems to be that the hardware device obfuscates
       | the algorithm, which should be less of a problem with software
       | you can compile.
       | 
       | > Even with an air gapped hardware wallet, you are always at the
       | mercy of the wallet manufacturer and the delivery chain that gets
       | the wallet to you.
       | 
       | That's because of the above: you need to control for different
       | things (ex: good source of randomness, correct implementation of
       | the algorithm etc)
        
         | jakelazaroff wrote:
         | _> The core problem seems to be that the hardware device
         | obfuscates the algorithm, which should be less of a problem
         | with software you can compile._
         | 
         | Can you trust your compiler, though? What if it changes the
         | algorithm when it compiles your source code?
         | 
         | (See also "Reflections on Trusting Trust" by Ken Thompson:
         | https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html)
        
       | matthewdgreen wrote:
        | If your wallet was compromised during shipment, it doesn't
        | need to exfiltrate the seed through a covert channel (which
        | is what this is called), unless you're somehow importing
        | fresh private keys. It can just generate all of the keys (and
        | seeds) using randomness that's already known to the attacker.
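That point fits in one hypothetical snippet: if the device's RNG is seeded with something the attacker already knows (the seed string below is invented), every "random" private key is reproducible.

```python
# A backdoored key generator: deriving keys from attacker-known
# randomness lets the attacker regenerate every private key offline.
import random

def gen_key(seed):
    rng = random.Random(seed)       # deterministic, seed-controlled PRNG
    return rng.getrandbits(256)

device_key = gen_key("backdoored-seed-2022")
attacker_key = gen_key("backdoored-seed-2022")
assert device_key == attacker_key   # no covert channel needed
```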
        
         | JonathanBeuys wrote:
         | To prevent that, the discussion started with an approach where
          | you create your seed phrase with dice. Look at step 1:
         | 
         | https://news.ycombinator.com/item?id=32115693
         | 
         | The question was if it is at all _possible_ to use Bitcoin in a
         | trustless way.
         | 
         | Several hard and maybe impossible to overcome challenges have
          | come up in the thread. The fact that elliptic curve signatures
         | are not deterministic seems to be the most fundamental.
        
           | matthewdgreen wrote:
           | ECDSA signatures can be made deterministic by deriving the
           | nonce deterministically, e.g., by hashing the secret key
           | together with a challenge provided by the user.
           | 
           | The problem now is that you need a way for the wallet to
           | prove to the user that this has been done, without leaking
           | the nonce or key. There are a bunch of ways to do this, but
           | the most basic idea is to generate a zero-knowledge proof
           | (probably a zkSNARK) that shows correctness of the signature
           | w.r.t. the public key. The user would not put this zkSNARK
           | onto the network -- it might contain a covert channel of its
           | own! -- they would just check it locally and then dispose of
           | that part. (Of course if the wallet was stolen by a malicious
           | user the wallet might be able to exfiltrate the secret key to
           | this thief through the zkSNARK portion.)
           | 
           | I'm assuming that all other transaction information is chosen
           | by the user, so there's no other latitude for the wallet to
           | cheat.
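A simplified sketch of the deterministic derivation described here, loosely in the spirit of RFC 6979 (the real RFC uses HMAC-DRBG rather than a bare hash; this toy version and its names are my own):

```python
# Deterministic nonce: hash the secret key together with the message,
# so the same inputs always yield the same k and a verifier who knows
# the rule can detect a wallet that deviates from it.
import hashlib

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deterministic_nonce(priv: int, msg: bytes) -> int:
    h = hashlib.sha256(priv.to_bytes(32, "big") +
                       hashlib.sha256(msg).digest()).digest()
    return int.from_bytes(h, "big") % n or 1   # keep k in [1, n-1]

assert deterministic_nonce(12345, b"tx") == deterministic_nonce(12345, b"tx")
assert deterministic_nonce(12345, b"tx") != deterministic_nonce(12345, b"tx2")
```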
        
       | EddySchauHai wrote:
       | That's a neat thought experiment. You'd need the wallet holder to
       | sign a lot of transactions for it to work but maybe that'd be
       | enough of a reduction of crypto integrity for an attack to be
       | successful - especially if the end game is a Coinbase cold wallet
       | or something.
        
         | JonathanBeuys wrote:
         | How many transactions are needed depends on how many bits can
         | be sent home per transaction.
         | 
            | A Bitcoin seed phrase encodes 128 bits. 32 bits can be
            | easily brute forced, which leaves us with 96 bits. If you
            | can send out 10 per transaction, that is only 10
            | transactions.
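The arithmetic above, plus a cost observation of my own (not a figure from the thread): grinding b chosen bits into one signature takes about 2**b signing attempts on average, which connects to the delays EddySchauHai mentions below.

```python
# Ceiling division gives the transaction count; the 2**b attempt count
# is an expected-value estimate for nonce grinding, an assumption here.
bits_to_leak = 96         # 128-bit seed minus 32 brute-forceable bits
bits_per_tx = 10
txs_needed = -(-bits_to_leak // bits_per_tx)   # ceil(96 / 10)
attempts_per_tx = 2 ** bits_per_tx             # expected signatures/tx
assert txs_needed == 10
assert attempts_per_tx == 1024
```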
        
           | EddySchauHai wrote:
            | Although I assume you'd wonder why your cold wallet is
            | taking its time to generate a hash or whatever with the
            | 10 bits it needs to modify. I don't know what that time
            | would look like, but you'd start to question massively
            | arbitrary delays like 10 seconds one time and 30 minutes
            | the next.
        
       ___________________________________________________________________
       (page generated 2022-07-21 23:02 UTC)