[HN Gopher] Rosenpass - formally verified post-quantum WireGuard
___________________________________________________________________
Rosenpass - formally verified post-quantum WireGuard
Author : lima
Score : 221 points
Date : 2023-02-28 14:40 UTC (8 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| wstuartcl wrote:
| I have not been following this closely, but I thought almost
| all of the quantum-safe algorithms proposed so far had been
| found lacking against traditional attacks very soon after they
| were held up as standard contenders. Has this changed?
| d-z-m wrote:
| One of the KEMs they've elected to use (McEliece) has been
| around since the '70s and has arguably been studied more than
| the others. If you're not quite sure about lattices, I've
| heard it called the "conservative choice" for a PQ KEM.
| jrexilius wrote:
| I believe three of the four contenders for round 4 of the NIST
| competition still appear to be secure:
|
| https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
| sevenoftwelve wrote:
| Rosenpass author here;
|
| Nope, that is not correct. NIST has selected Kyber as one of
| the algorithms to standardize, and we are using it.
|
| As other commenters mentioned (very good info there, thank you
| all!) the other algorithm we use - Classic McEliece - is one of
| the oldest algorithms and has been well studied. There is no
| known efficient attack against it.
| wolf550e wrote:
| Have you seen https://isd.mceliece.org/1347.html ? DJB agrees
| with you.
| Tepix wrote:
| Does formal verification protect against buffer overflows? (it's
| a serious question)
| sevenoftwelve wrote:
| Author here: No, symbolic verification does not protect from
| buffer overflows. Writing the implementation in Rust does
|
| We are investigating ways to do more formal verification of
| the implementation itself.
| acaloiar wrote:
| > No, symbolic verification does not protect from buffer
| overflows. Writing the implementation in Rust does
|
| I don't believe writing the implementation in Rust does that:
| https://blog.rust-lang.org/2018/09/21/Security-advisory-
| for-...
| emperorcezar wrote:
| One would think that this would be fixed in the last five
| years?
| acaloiar wrote:
| Certainly. What I don't believe is certain is that only
| one such vulnerability has ever existed and none exist in
| Rust today.
|
| It's not pedantic to differentiate between mitigating a
| thing and preventing a thing.
| hhda wrote:
| You can add `#![forbid(unsafe_code)]` to your codebase to
| avoid any unsafe Rust, which should prevent buffer
| overflows. Obviously it may make writing a codebase
| somewhat harder.
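|
| A minimal sketch of what that looks like at the crate root
| (hypothetical toy crate, purely for illustration):
|
|     // src/main.rs -- the crate-level attribute turns any
|     // `unsafe` block anywhere in this crate into a hard
|     // compile error.
|     #![forbid(unsafe_code)]
|
|     fn main() {
|         let buf = [0u8; 4];
|         // Safe indexing is bounds-checked: an out-of-range
|         // index panics instead of touching memory past the
|         // buffer.
|         println!("last byte: {}", buf[3]);
|     }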
| usefulcat wrote:
| Will that restriction also be applied transitively to all
| dependencies?
| majewsky wrote:
| No. That kind of restriction cannot realistically be
| applied to any project above toy scale. The stdlib uses
| unsafe code to implement a large number of memory
| management primitives, because the language is (by
| design!) not complex enough to express every necessary
| feature in just safe code. Rust's intention is merely to
| limit the amount of unsafe code as much as possible.
| hhda wrote:
| For that, I believe you need to use cargo-geiger[0] and
| audit the results.
|
| [0] - https://github.com/rust-secure-code/cargo-geiger
| rurban wrote:
| The number of reported and unfixed memory bugs in Rust has
| gone up roughly 10x, not down, over the last 5 years.
| Jweb_Guru wrote:
| If you believe you can find a memory unsafety
| vulnerability in this project's Rust code based on the
| existence of those bugs, feel free to do so.
| [deleted]
| fallat wrote:
| > Writing the implementation in Rust does
|
| _sigh_ , not true.
|
| https://tgrez.github.io/posts/2022-06-19-buffer-overflow-
| in-... https://shnatsel.medium.com/how-rusts-standard-
| library-was-v... "This is a buffer overflow bug in the
| standard library's implementation of a double-ended queue."
| "Rust will panic if you attempt to write out of bounds."
|
| Writing the implementation in Rust will increase memory
| safety, but only if the implementation adheres strictly to
| safe Rust - which means avoiding ANY packages that use unsafe
| features. The fact that Rust can pull in any package
| containing an unsafe {} block means you're not guaranteed to
| be safe.
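|
| To make the safe/unsafe distinction concrete, a tiny
| illustration (not taken from any real project):
|
|     fn main() {
|         let v = vec![1u8, 2, 3];
|
|         // Safe Rust: indexing is bounds-checked, so an
|         // out-of-range access like `v[10]` panics instead of
|         // reading adjacent memory.
|
|         // Unsafe Rust: the same access with the check removed.
|         // If a dependency does this with a wrong index, you
|         // get a genuine out-of-bounds read, no panic.
|         let x = unsafe { *v.get_unchecked(2) };
|         println!("{x}");
|     }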
|
| The equivalent could be said about writing the implementation
| in JavaScript, Python, etc... (that they protect against
| buffer overflows)
| sevenoftwelve wrote:
| Granted, writing things in Rust doesn't exclude the
| possibility of a buffer overflow entirely. It does help
| make it much less likely.
| fallat wrote:
| Yes, I mentioned it :)
| witrak wrote:
| So any claim such as your previous one is of rather little
| value.
|
| >It does help make it much less likely.
|
| Yeah... to about the same extent as the infamous example
| program whose proof of formal correctness was published in a
| book - until the program was tested by a student some months
| later and found to be broken.
| ekiwi wrote:
| They verified the protocol, not the actual implementation:
| https://github.com/rosenpass/rosenpass#security-analysis
|
| This is still a pretty neat result! End-to-end proofs from high
| level protocol to low level implementation are mostly still a
| research topic.
| touisteur wrote:
| Although it would be a great exercise in a hybrid SPARK+Coq
| proof. If (and that's a big if) you can specify your algorithm
| in SPARK, then (I think) you can use the SPARK
| automated/guided prover; when it can't discharge a proof, fall
| back on some predefined lemmas; and barring that, go down to
| the interactive Coq environment (or Isabelle, I've seen it
| done once) and discharge the verification conditions there.
|
| Not sure anyone has published such a multi-layer spec and
| proof effort /with crypto code in the mix/.
| jraph wrote:
| Related: Coq - https://coq.inria.fr/
|
| And CompCert, a formally verified C compiler written in Coq:
| https://compcert.org/
|
| (even then, there are parts which are not formally verified,
| mostly at the interfaces with the outside world)
| sevenoftwelve wrote:
| Author of Rosenpass here;
|
| Coq is fairly generic; it has a long history and made it
| possible to write some really cool proofs, such as a proof of
| the four color theorem, but writing crypto proofs is really
| hard using Coq.
|
| For symbolic verification Tamarin and ProVerif are the
| tools of choice; I used ProVerif.
|
| For proofs of security for protocols, EasyCrypt and
| CryptoVerif can be used. CryptoVerif, ProVerif and Coq were
| developed at the same institute, by the way: Inria Paris.
| hinkley wrote:
| The code signing project I worked on was formally verified. The
| whole time they were verifying it I found horrible bugs that
| needed fixing. One even made it past soft launch. So great job
| boys but this doesn't help me.
| jraph wrote:
| Was it a bug in the spec? Or in a part that was not formally
| verified?
| hinkley wrote:
| There are a couple of presentations out there about how
| challenging it is to use XML-Sig correctly. The XML part
| introduces a whole raft of ambiguities. Chief amongst them is
| that document.getElementByID() and element.getElementByID()
| can return different answers. Compression file formats have
| similar ambiguities you need to work out to avoid
| doppelgangers, and then also avoid overwriting system files
| via canonical path bugs.
|
| Essentially the answer depends on who you ask. For my part
| I would say both.
| l-lousy wrote:
| From the read me:
|
| As with any application a small risk of critical security
| issues (such as buffer overflows, remote code execution)
| exists; the Rosenpass application is written in the Rust
| programming language which is much less prone to such issues.
|
| I think their formal analysis is only security/crypto related,
| at least for the time being.
| wucke13 wrote:
| [dead]
| rurban wrote:
| My verifier (CBMC) does; the ones they used do not.
| Frama-C/Eva would also catch buffer overflows.
| jraph wrote:
| (my answer is general and not specific to this submission)
|
| The question is too broad to be answered: there are many
| different formal verification techniques (both static
| techniques and dynamic ones that happen at runtime), and you
| could be formally verifying only specific properties of the
| system.
|
| Now, if your formal verification technique forces you to
| check that each index you use is within bounds (for instance,
| by making you write loop invariants for each loop - though
| that alone is not sufficient, because you can also index
| buffers outside a loop or with something unrelated to the
| loop invariant), then yes, you have proved that you will not
| overflow buffers.
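|
| As a rough illustration, here is the kind of obligation such a
| technique makes explicit (plain Rust, with the invariant
| written as an assertion rather than a real verification-tool
| annotation):
|
|     // Sums the first `n` bytes of `buf`.
|     fn sum_prefix(buf: &[u8], n: usize) -> u64 {
|         // Precondition a verifier would make you prove at
|         // every call site: n <= buf.len().
|         assert!(n <= buf.len());
|         let mut total = 0u64;
|         let mut i = 0;
|         while i < n {
|             // Loop invariant: i < n <= buf.len(), so buf[i]
|             // is always in bounds.
|             total += buf[i] as u64;
|             i += 1;
|         }
|         total
|     }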
|
| But those pesky implementations are always imperfect and
| never totally proved correct; what's more, they run on pesky
| hardware, which can have flaws of its own and is usually not
| perfectly verified either, so...
|
| And then you have model checking, which is also a formal
| verification technique. You can prove that you won't overflow
| buffers... in the model (which is a spec). That proves that
| your _spec_ is sound and that you _can_ implement it without
| flaws, but it does not actually check that your implementation
| is correct, of course. Unless your model checking tool can also
| build the implementation and this feature is proved correct.
|
| edit: it seems my model checking paragraph is more relevant
| than I expected; this submission is actually about model
| checking, if it checks the _protocol_ (and not the
| implementation).
| ape4 wrote:
| I suppose it does if it's implemented in a language that is
| formally proven not to allow buffer overflows.
| thadt wrote:
| Maybe not in general, but there _is_ really interesting work
| happening in the area of verified protocols generating safe C
| code (memory safe, overflow safe, timing safe, etc). In
| particular Project Everest has a set of verified primitives
| underlying higher level protocols like TLS and more recently
| all Noise variants (https://eprint.iacr.org/2022/607).
|
| As WireGuard is based on a Noise construction, it seems
| reasonable to hope that once formally verified PQ primitives
| are in place, a fully verified protocol implementation could be
| generated?
| bflesch wrote:
| I'm too stupid to understand the crypto technicalities. Is
| this really a good solution? Or embrace, extend, extinguish
| targeted at WireGuard?
|
| The paper abstract mentions a "cookie"-like concept, and from
| websec I know that cookies are not always the optimal solution
| and historically cookie implementations had a lot of attack
| surface.
|
| EDIT: Seems to come from the German Max Planck Institute,
| which is funded by the German government.
| gzer0 wrote:
| The article refers to a "cookie" as a "biscuit"
|
| _" Lacking a reliable way to detect retransmission, we remove
| the replay protection mechanism and store the responder state
| in an encrypted cookie called "the biscuit" instead. Since the
| responder does not store any session-dependent state until the
| initiator is interactively authenticated, there is no state to
| disrupt in an attack."_
|
| Both WG and PQWG are vulnerable to state disruption attacks;
| they rely on a timestamp to protect against replay of the first
| protocol message. An attacker who can tamper with the local
| time of the protocol initiator can inhibit future handshakes,
| rendering the initiator's static keypair practically useless.
|
| The use of the insecure NTP protocol is the reason for the
| "cookie" / "Biscuit" mechanism.
| Fnoord wrote:
| AFAIK an NTP client doesn't accept a value that differs
| greatly from the current time. At least, not without user
| interaction. Does that render this attack less likely?
| sevenoftwelve wrote:
| Rosenpass author here.
|
| It does, yes. But it is a mitigation, not a real fix.
|
| An attacker could still just speed up time. Although not
| being able to produce a KillPacket for the year three
| thousand is a good thing :)
| wucke13 wrote:
| [dead]
| Fnoord wrote:
| > EDIT: Seems to come from German Max-Planck Institute which is
| funded by German government.
|
| On the Github repo it says:
|
| "Supported by
|
| Funded through NLNet with financial support for the European
| Commission's NGI Assure program."
|
| On the website it says:
|
| "Funded through NLnet with financial support from the European
| Commission's NGI Assure program."
| bflesch wrote:
| On the paper it says MPI
| touisteur wrote:
| European program funding a University team, no?
| sevenoftwelve wrote:
| Rosenpass author here; I myself am independent, hence the
| funding from NLNet. Some project participants are
| freelancers; two of my co-authors are employed at research
| institutes. One of my co-authors is employed at MPI-SP.
|
| The cookie thing is a defense against WireGuard
| CVE-2021-46873; the attack is, in my view, not bad enough to
| get rid of the WireGuard protocol. WG is still the standard
| for pre-quantum VPN implementations. Rosenpass also needs to
| use post-quantum crypto primitives that require a lot of CPU
| and memory resources.
|
| Rosenpass and WireGuard work together; Rosenpass runs in
| userspace and hands keys to WireGuard, so we do not plan to
| replace it any time soon.
|
| It would be possible to apply the biscuit mechanism to
| classical WireGuard; unfortunately that would cause a protocol
| incompatibility. I am not sure if they are going to take that
| path.
| rogers18445 wrote:
| Mullvad VPN has a PQC key exchange feature; they are even
| using two algorithms in parallel: McEliece and Kyber.
|
| https://github.com/mullvad/mullvadvpn-app/tree/main/talpid-t...
| inChargeOfIT wrote:
| Starred! And thanks to the author for taking the time to answer
| so many questions here!
| madspindel wrote:
| Why not just use PresharedKey in Wireguard?
| foobiekr wrote:
| Why not just use preshared keys in all VPNs, like IPsec?
|
| Because key exchange and key rotation are a huge problem.
| yjftsjthsd-h wrote:
| Is it any harder than exchanging and rotating asymmetric
| keys?
| CodesInChaos wrote:
| That doesn't solve the key management problem, it just defines
| it as out-of-scope, since you still need to exchange that
| preshared key outside Wireguard.
| sevenoftwelve wrote:
| Author here;
|
| We are :) Rosenpass is a fancy way of generating a PSK for
| WireGuard.
| d-z-m wrote:
| You still can. The problem arises when you don't actually wanna
| pre-share the key, and you still want post-quantum forward
| secrecy. Then you need a PQ KEM like McEliece or Kyber to run a
| PQ-secure key establishment.
| miloignis wrote:
| That's what they're doing, generating a key using post-quantum
| crypto and using it as the PSK - from TFA: "The tool
| establishes a symmetric key and provides it to WireGuard. Since
| it supplies WireGuard with key through the PSK feature using
| Rosenpass+WireGuard is cryptographically no less secure than
| using WireGuard on its own ("hybrid security"). Rosenpass
| refreshes the symmetric key every two minutes."
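|
| Mechanically, the glue can be as simple as periodically handing
| the fresh key to the kernel via wg(8). A rough sketch (not the
| actual Rosenpass integration; interface name, peer key and file
| path are made up):
|
|     use std::process::Command;
|     use std::{thread, time::Duration};
|
|     fn main() {
|         // Hypothetical identifiers, for illustration only.
|         let peer = "PEER_PUBLIC_KEY_BASE64=";
|         let psk_file = "/var/run/rosenpass/peer0.psk";
|
|         loop {
|             // Assume the post-quantum handshake has just
|             // written a fresh 32-byte symmetric key to
|             // `psk_file`; install it as the WireGuard PSK.
|             let status = Command::new("wg")
|                 .args(["set", "wg0", "peer", peer,
|                        "preshared-key", psk_file])
|                 .status()
|                 .expect("failed to run wg");
|             assert!(status.success());
|
|             // Per the README quoted above, the key is
|             // refreshed roughly every two minutes.
|             thread::sleep(Duration::from_secs(120));
|         }
|     }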
| Fnoord wrote:
| You can do that _on top of_ a public/private keypair, too.
| gzer0 wrote:
| Using a PSK alone doesn't make WireGuard quantum-safe. The
| security of the key exchange mechanism in WireGuard, which
| relies on the Diffie-Hellman protocol, is still vulnerable to
| quantum attacks.
|
| If an attacker were to obtain the PSK and use a quantum
| computer to break the Diffie-Hellman key exchange, they would
| be able to decrypt the VPN traffic.
|
| This is currently the thought process and the main reason why
| PQWG (post-quantum WireGuard) is actively being researched
| [1].
|
| [1] https://ieeexplore.ieee.org/document/9519445/
| d-z-m wrote:
| > Using a PSK alone doesn't make WireGuard quantum-safe.
|
| Not sure what you're trying to say here. If you share the PSK
| out-of-band, securely, then WireGuard _is_ quantum resistant
| (I wouldn't say quantum-safe, because I'm not that
| optimistic).
|
| > If an attacker were to obtain the PSK
|
| Indeed, if the attacker _obtains the PSK_ then obviously the
| PSK isn't going to help you.
| ThePhysicist wrote:
| Wireguard explicitly mentions that mixing in a PSK provides
| post-quantum security [1].
|
| 1: https://www.wireguard.com/protocol/
| majewsky wrote:
| Please be careful in your quoting. The page you linked says
| "post-quantum _resistance_", not "post-quantum _security_"
| (which would be a much stronger claim).
| [deleted]
| yjftsjthsd-h wrote:
| > If an attacker were to obtain the PSK
|
| I believe it is traditional, in most threat models, to assume
| that the attacker doesn't have your private keys.
| _joel wrote:
| Is this available in Tailscale?
| _joel wrote:
| Interesting that people downvote me without bothering to
| answer. I'll take that as a no, but it's a valid question,
| despite the downvotes.
| chaxor wrote:
| How does this compare to the work done by the Netherlands
| group (Hulsing/Zimmermann et al.) [1] and the Kudelski group
| from Switzerland (Raynal/Genet/Romailler) [2]? It's nice to
| see someone making this more available. I had thought about
| trying to push the pq-wg implementation from the Kudelski
| group [3] to wg, or trying it out, but I never had the time.
| The Rust implementation seems like an improvement
| implementation-wise, but I don't know about the underlying
| proofs.
|
| [1] https://eprint.iacr.org/2020/379.pdf [2]
| https://csrc.nist.gov/CSRC/media/Presentations/pq-wireguard-...
| [3] https://github.com/kudelskisecurity/pq-wireguard
| sevenoftwelve wrote:
| We are based on the work of Hulsing, Ning, Zimmermann, Weber
| and Schwabe and in contact with the group. I havn't heard about
| kuedlki; the slides seem to refer to a reimplementation of the
| 2020 paper and the go implementation has been stale for two
| years. They also seem to use a tweaked krystals implementation
| which I would not trust. The link to the blog post they refer
| to dis dead.
|
| The Rosenpass protocol builds on the 2020 paper but also adds
| security against state disruption attacks (CVE-2021-46873).
| The implementation is actively maintained and written in
| Rust, not in Go. We use Classic McEliece and Kyber from the
| OQS library.
| yellow_lead wrote:
| > We are working on a cryptographic proof of security, but we
| already provide a symbolic analysis using proverif as part of the
| software package
|
| How can you prove this? There is still no mathematical proof
| that, e.g., discrete log is NP-complete.
|
| edit - I see it's a WIP but even the definition of secure seems
| difficult
| dcow wrote:
| Wouldn't the proof be: assuming discrete log is NP hard, then
| these specific properties hold?
| sevenoftwelve wrote:
| Rosenpass author here;
|
| There is some confusion about terminology here, I think.
| Mathematical proofs, including cryptography proofs, use
| models that simplify reality; i.e. the real, practical system
| might still be susceptible to attacks despite a proof of
| security.
|
| For crypto primitives (Classic McEliece, Curve25519, Ed25519,
| RSA, etc.) the current standard for proofs is showing that
| breaking them is as hard as some well-studied mathematical
| problem. This is done by showing that an attack on the
| primitive leads to an attack on the underlying mathematical
| problem. The proof for Diffie-Hellman shows that attacking DH
| yields an efficient solution to the discrete log problem.
| I.e. the proof is a _reduction_ to the underlying problem.
|
| No primitive is perfectly secure (at the very least a
| brute-force attack - i.e. guessing every possible key - is
| always possible); there is some probability that the
| adversary can guess the right key. We call this probability
| the _adversary's advantage_. One task in cryptanalysis is to
| find better attacks against primitives, i.e. attacks with a
| higher advantage; if an attack with a polynomial average
| runtime is found, the primitive is broken. Finding a faster
| but still non-polynomial attack is also an interesting
| result.
|
| The standard for protocols is proving that the protocol is
| secure assuming the primitives are secure; since multiple
| primitives are used, you basically get a formula bounding the
| advantage for breaking the entire protocol. The proof is a
| _reduction_ to a set of primitives.
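|
| Schematically (not the exact bound from our work-in-progress
| proof), such a reduction gives you something of the form
|
|     \mathrm{Adv}_{\mathrm{protocol}}(\mathcal{A}) \le
|         \sum_i c_i \cdot \mathrm{Adv}_{P_i}(\mathcal{B}_i)
|         + \varepsilon
|
| where the P_i are the primitives (KEMs, AEAD, hash), the B_i
| are adversaries against those primitives that the reduction
| constructs from A, and epsilon collects negligible terms.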
|
| We did not build a proof to that gold standard, although we
| are working on it. We built a proof in the _symbolic model_ -
| known as a _symbolic analysis_. This uses the _perfect
| cryptography assumption_; i.e. we assume that the advantage
| against each primitive is zero. Google "Dolev-Yao model".
|
| This makes the proof much easier; a tool such as ProVerif can
| basically find a proof automatically using logic programming
| methods (Horn clauses).
|
| The definitions of security are fairly well understood;
| unfortunately there is a lot to go into, so I can't expand on
| that here. Looking up "IND-CPA" and "IND-CCA" might be a good
| start; these are the security games/notions for asymmetric
| encryption, and from there you can move on to the models for
| key exchange algorithms. Reading the [Noise protocol
| spec](https://noiseprotocol.org/) is also a good start.
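|
| For reference, the IND-CPA advantage of an adversary A against
| an encryption scheme E is usually written roughly as
|
|     \mathrm{Adv}^{\mathrm{ind\text{-}cpa}}_{E}(\mathcal{A})
|         = \left| \Pr[b' = b] - \tfrac{1}{2} \right|
|
| where A submits two messages, receives an encryption of the one
| selected by a hidden bit b, and outputs a guess b'.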
| tetha wrote:
| A professor at university had an interesting illustration of
| the attacker's advantage.
|
| First off, there is the regime where an attack is straight-up
| impossible. If you need to invest ~10k operations for each
| atom in the observable universe to break a system with more
| than 50% probability, well, that won't get broken until
| breakthroughs in the related mathematics happen. Even if you
| were lucky enough to guess a key once, you will never be
| lucky twice.
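|
| Back-of-the-envelope, with the usual ~10^80 atoms in the
| observable universe:
|
|     10^{80} \times 10^{4} = 10^{84} \approx 2^{279}
|
| i.e. roughly the cost of exhausting a ~280-bit keyspace.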
|
| Then you enter the area of throwing money at it. You can
| conquer quite a few powers of two of search space if you
| throw a distributed system worth billions of dollars at it,
| plus a couple million more in post-docs shaving fractions off
| of that exponent. Here you are usually still safe, since it
| will be hard even with all that hardware, manpower and math
| research.
|
| But once the cost is exponential with a small exponent, or
| even polynomial, it's just an implementation and optimization
| issue on the way to real-time decryption.
|
| However, even if the math is hard, the implementation might
| not be. And that's why a formally proven implementation of a
| very hard algorithm is exciting. If the implementation is
| provably as hard as discrete logarithms, and you get broken,
| a silly amount of modern crypto gets broken all at once.
|
| Or we might learn something about formal verification and
| your method and tooling. Which is also good progress.
___________________________________________________________________
(page generated 2023-02-28 23:00 UTC)