[HN Gopher] Safe curves for Elliptic Curve Cryptography [pdf]
       ___________________________________________________________________
        
       Safe curves for Elliptic Curve Cryptography [pdf]
        
       Author : sebgan
       Score  : 81 points
       Date   : 2024-08-12 01:48 UTC (2 days ago)
        
 (HTM) web link (eprint.iacr.org)
 (TXT) w3m dump (eprint.iacr.org)
        
       | zdw wrote:
       | Seems to be overloaded, see also the earlier site on this topic
       | from the authors: https://safecurves.cr.yp.to
        
        | fguerraz wrote:
        | It raises the question: what's wrong with RSA??? I've always
        | heard it's because implementation is hard and ECC magically
        | solves most of these problems, but clearly that's not the
        | case...
        
          | adrian_b wrote:
          | Even if many older RSA implementations had serious security
          | bugs, that is not the reason why it is no longer preferred.
          | 
          | For security equivalent to ECC or AES at their typical
          | parameters (i.e. 256- to 512-bit elliptic curves, or 128- to
          | 256-bit AES keys), RSA must use very long keys, longer even
          | than any RSA keys used in practice (which are normally no
          | longer than 4096 bits). That makes the RSA operations slow
          | and adds a large overhead to any communication protocol that
          | must send and receive such long keys.
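          | 
          | For concreteness, the commonly cited NIST SP 800-57
          | equivalences (approximate, and worth checking against the
          | current revision) are:
          | 
          |   Symmetric | ECC key  | RSA modulus
          |   ----------+----------+------------
          |   128 bits  | 256 bits | 3072 bits
          |   192 bits  | 384 bits | 7680 bits
          |   256 bits  | 512 bits | 15360 bits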
        
           | upofadown wrote:
           | >RSA must use very long keys, ...
           | 
           | But probably no longer than 2048 bits:
           | 
           | https://articles.59.ca/doku.php?id=em:20482030
        
              | adgjlsfhk1 wrote:
              | IMO this article dramatically understates the likelihood
              | that the NSA has unpublished factoring algorithms. It
              | also shows the CPU performance plateau, but GPUs have
              | mostly kept following exponential performance scaling.
        
               | upofadown wrote:
               | The bottleneck is with a process that can't be
               | parallelized very well. So GPUs are not useful here.
               | 
               | The NSA might have a computer from a crashed alien
               | spacecraft as well but we have to work with what we know.
               | Of course that alien computer is well known to be
               | helpless against RSA and effective against everything
               | else... :)
        
                | adgjlsfhk1 wrote:
                | That bottleneck is specific to the GNFS. There are
                | about a dozen quite different factoring algorithms
                | known to the public (e.g. trial division, Pollard rho,
                | ECM, the quadratic sieve, GNFS). I don't think any
                | mathematician believes that the GNFS is the optimal
                | factoring algorithm out there. The GNFS has runtime
                | L(1/3, cbrt(64/9)). If there were a known algorithm
                | of, say, L(1/3, 1.5), which would be a pretty minor
                | improvement, equivalent to making the GNFS as good as
                | the SNFS, or a bigger breakthrough of, say, L(1/4,
                | 2.5), then 4096 bit keys could easily be necessary.
                | For reference, that amount of improvement is about the
                | same as the difference between the quadratic sieve and
                | the GNFS.
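                | 
                | A rough way to see what those parameters mean, in
                | Python (a sketch only: L-notation hides constants and
                | o(1) terms, so treat the results as orders of
                | magnitude):
                | 
                |   from math import log
                |   
                |   # L_n(alpha, c) ~ exp(c * (ln n)^alpha
                |   #                     * (ln ln n)^(1 - alpha));
                |   # return log2 of that cost for a bits-bit modulus n.
                |   def log2_cost(alpha, c, bits):
                |       ln_n = bits * log(2)
                |       return (c * ln_n**alpha
                |               * log(ln_n)**(1 - alpha) / log(2))
                |   
                |   # columns: public GNFS, hypothetical minor gain,
                |   # hypothetical bigger breakthrough
                |   for bits in (1024, 2048, 4096):
                |       print(bits,
                |             round(log2_cost(1/3, (64/9)**(1/3), bits)),
                |             round(log2_cost(1/3, 1.5, bits)),
                |             round(log2_cost(1/4, 2.5, bits)))
                | 
                | The public-GNFS column gives RSA-2048 roughly 110-120
                | "bits" of security, matching the usual estimates; the
                | hypothetical columns show how fast that margin erodes.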
        
                | upofadown wrote:
                | Why yes, if there were an algorithm known to be good
                | enough to make 4096 bit keys necessary, then 4096 bit
                | keys would be necessary. But there isn't, and there has
                | been little progress for a long time now.
        
                | adgjlsfhk1 wrote:
                | > But there isn't and there has been little progress for
                | a long time now.
                | 
                | Publicly. The algorithm side and the hardware side are
                | really different, because we can be pretty sure that
                | the NSA doesn't have a giant supercomputer that is
                | either dramatically more efficient than commercially
                | available computers or that uses more than, say, 1% of
                | US electricity consumption. On the algorithm side, we
                | know that the NSA employs a decent number of number
                | theory people, and given the way that math research
                | progresses, it's pretty easy to imagine that one of
                | those people has found a slight algorithmic improvement
                | over the public state of the art. CFRAC was invented in
                | the 1970s, the QS in the 1980s, and the NFS in the
                | 1990s, with some improvements in the 2000s. If we go
                | another 50 years or so with no progress, it may start
                | to be reasonable to believe that there isn't a secret
                | algorithm that's better, but with 5 improvements in the
                | last 50 years, it seems pretty likely that there could
                | be 1 more that has been found but not published.
        
         | ramchip wrote:
         | "Seriously, stop using RSA"
         | 
         | https://news.ycombinator.com/item?id=30879442
        
            | bvrmn wrote:
            | It's hilarious tinfoil-hat level shit here:
            | https://blog.trailofbits.com/2019/07/08/fuck-rsa/#comment-91...
        
              | ivlad wrote:
              | He is not wrong on the main claim, which is "All ECC
              | (X25519-P521) will be broken by private sector quantum
              | chips before RSA-2048 due to the physical limitations of
              | stabilizing qubits."
              | 
              | Shor's algorithm is polynomial; however, the number of
              | qubits required is O(log(n)^2), where n is the length of
              | the key. Because ECC keys are much shorter (e.g., 256 to
              | 521 bits) than RSA keys (2048 to 4096 bits), a smaller
              | quantum computer would be needed to attack ECC.
        
                | genewitch wrote:
                | This is the first time in nearly 30 years I've ever
                | heard the claim that you only need 18-20 qubits for
                | something like a 256-bit key.
                | 
                | I've only ever seen claims of 1:1 key bits to qubits.
                | Aren't there existing claimed quantum computers with 20
                | qubits? Isn't Canada's machine an order of magnitude
                | more qubits?
        
                | red_admiral wrote:
                | I think it matters not only how many qubits you have
                | but also how they are "connected". As far as I know,
                | it's a far easier problem to put N qubits in a chain
                | with O(N) connections than it is to have N qubits with
                | O(N^2) connections forming a complete graph. And I
                | believe the second example is what you need to run
                | Shor's algorithm.
        
                | ivlad wrote:
                | I am sorry, I should've read it twice before posting. I
                | meant that a QC to attack longer keys must be
                | proportionally bigger; it's the number of operations
                | that is logarithmic. Silly me.
        
                | red_admiral wrote:
                | RSA needs 4k (or 16k) keys because, with index
                | calculus, these sizes reduce to a subproblem where you
                | effectively need only 2^128 (or 2^256) time rather than
                | the 2^(4k) of brute force.
                | 
                | I think, but I may be misremembering, that you could
                | apply Shor's algorithm to speed up index calculus by
                | going for the subproblem directly? In that case RSA
                | would fall too.
                | 
                | I note that the NSA recommendations are "move to post-
                | quantum crypto", not "go back to RSA", so I infer that
                | they think RSA-4096 might still be vulnerable.
        
                | ivlad wrote:
                | I am not NSA, but I think their idea is something along
                | the lines of "once Shor's algorithm is practical for
                | key sizes of hundreds of bits, it's only a matter of a
                | few years of engineering effort to make it feasible for
                | keys thousands of bits long". In other words, larger
                | keys do not provide a sufficient safety margin.
        
               | tptacek wrote:
               | This is deeply silly.
        
               | ivlad wrote:
               | ... because?
               | 
               | Edit: yes, I read it and it is.
        
               | less_less wrote:
               | I'm not sure this is true anymore. There are recent
               | improvements in quantum factoring algorithms, which
               | significantly reduce the number of qubits required. See
               | eg https://eprint.iacr.org/2024/222
        
           | upofadown wrote:
           | My comments on that article:
           | 
           | https://articles.59.ca/doku.php?id=pgpfan:rsabad
        
          | red_admiral wrote:
          | According to the 2020 NIST recommendations, if I'm reading
          | correctly, to get the equivalent of 256 bits of symmetric
          | security, ECC needs 512-bit keys (the factor of 2 comes from
          | generic square-root attacks like Pollard rho and seems
          | unavoidable for any discrete-log system), but for both RSA
          | and finite-field crypto, you need 15k-bit keys to get the
          | same security level.
          | 
          | This is because the multiplicative group modulo a prime (or
          | a pair of primes in RSA) is vulnerable to "index calculus",
          | a faster-than-brute-force way of attacking things.
         | 
         | As the paper says, the main point of ECC is being impervious to
         | index calculus by design, based on an argument by Victor Miller
         | in 1986 about the structure of "point heights" on elliptic
         | curves.
         | 
          | RSA implementations have also led to vulnerabilities in the
          | past, and one of the big claims of djb (as the paper's first
          | author is known in the crypto scene) is that Curve25519 and
          | friends are designed specifically to select, among many
          | secure choices, one that is particularly easy to implement
          | without falling into any of the usual traps.
        
          | ogogmad wrote:
          | Tl;dr reason: RSA requires larger key sizes, which makes it
          | slower. ECC's underlying problem resists known attacks
          | better, so the keys can be made smaller, which makes it
          | faster.
        
          | tptacek wrote:
          | Everyone else is talking about key lengths here, but the
          | better reason to avoid RSA is that it's full of footguns
          | that ECC hides from you. ECC constructions don't generally
          | expose a direct encrypt-to-the-public-key operation, which
          | is almost never what you want (unless you're doing RSA-KEM,
          | which nobody does) but frequently what you get. The de facto
          | standard RSA padding --- by the way, another parameter ECC
          | doesn't demand you fix for your system --- is PKCS1v15,
          | which admits a padding oracle attack that TLS has spent 2
          | decades going through contortions to avoid. Even the process
          | of randomly generating keys --- for Curve25519 this is
          | practically as simple as "fill a buffer with random bytes"
          | --- has been fraught with problems.
         | 
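          | A minimal sketch of that keygen claim (Python; the clamping
          | is the standard RFC 7748 procedure, and the library call at
          | the end uses the "cryptography" package):
          | 
          |   import os
          |   
          |   # "fill a buffer with random bytes", then clamp (RFC 7748)
          |   sk = bytearray(os.urandom(32))
          |   sk[0] &= 248    # clear low 3 bits (multiple of cofactor 8)
          |   sk[31] &= 127   # clear the top bit
          |   sk[31] |= 64    # set bit 254
          |   
          |   # or, with a library:
          |   from cryptography.hazmat.primitives.asymmetric import x25519
          |   priv = x25519.X25519PrivateKey.generate()
          | 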
         | The reason not to use RSA is that you're not going to get it
         | right, because it's hard to get right. In cryptography, that
         | makes it intrinsically bad on the merits.
        
            | less_less wrote:
            | I mostly agree with this, but you could still have advice
            | like "use only xxx flavor of RSA-KEM for key establishment
            | and yyy flavor of RSA-FDH-SHAKE for sigs" and things would
            | be fine. Those still aren't exactly _easy_ to implement,
            | but they aren't really any harder than ECC. But they do
            | have bigger keys and are slower, especially for keygen.
        
             | dadrian wrote:
             | They are harder to implement than ECC.
        
               | less_less wrote:
               | Really? My experience is that RSA in general is harder
               | because it has so many knobs, but if you just lock it to
               | the easiest-to-implement-securely choices then it's not
               | so bad.
               | 
               | However, for DJB's point there is a major advantage for
               | X25519/X448 specifically: it's hard to get a huge speedup
               | by implementing them insecurely, and the same is not true
               | of RSA.
        
          | marcosdumay wrote:
          | It's so full of foot-guns that the implementation you are
          | using almost certainly missed a few;
          | 
          | It has been partially broken so many times that it's expected
          | it will be partially broken again (that's how foot-guns are
          | born);
          | 
          | It is slow and expensive, all the more so because every time
          | it's broken people have to increase their key size, making it
          | slower and more expensive;
          | 
          | It's very flexible, so people keep using it wrong again and
          | again. Arguably that could be a benefit, but it's hard enough
          | to make people use it right; making them get an innovative
          | usage right is almost impossible.
        
        | mike_hearn wrote:
        | The naming is unfortunate. Some people (Ubuntu...) have
        | concluded that because some curves are branded "safe", all the
        | other curves must be "unsafe" and should be aggressively phased
        | out as if they were MD5. They're changing apt to refuse to
        | download from repositories signed using NIST curves, for
        | instance.
       | 
        | This doesn't make a whole lot of sense unless you think the NSA
        | has an unknown backdoor that nobody was able to find. But that
        | isn't their stated justification. Instead they cite djb's
        | website. It's apparently not clear enough that the "SafeCurves"
        | are safe in the sense of being easier for cryptographers to
        | implement without bugs, not in the sense of being
        | cryptographically stronger: two correct implementations of a
        | "safe" and an "unsafe" curve are equally strong.
        | 
        | Therefore if people want to migrate to the "safe" curves over
        | time, that's fine, but it's more like migrating code from C++
        | to Rust. It doesn't automatically imply the prior codebase was
        | buggy, and if you do know of a specific bug then you need to
        | fix it anyway, so such migrations are always about reducing the
        | risk of future problems through better engineering practices.
        | That doesn't justify creating a lot of disruption across a
        | distributed ecosystem.
        
          | red_admiral wrote:
          | People migrating to djb's curves, or at least allowing them
          | as a first-class citizen, include:
          | 
          |   - SSH (which includes git over SSH - GitHub suggests djb's
          |     Curve25519 as the default:
          |     https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent)
          |   - TLS (recommended in 1.3)
          |   - NIST (allows Curve25519, but it isn't the default choice)
          |   - various cryptocurrency crap
          | 
          | The people not on djb's curves yet are PGP/GPG/OpenPGP
          | (available as an "advanced" option but not by default, for
          | backwards compatibility) and, as a consequence, Debian's
          | package signing (which mostly uses GPG with RSA, afaik). So
          | Ubuntu is in good company, even if it makes their job of
          | working with "upstream" harder. [EDIT: apparently changed now
          | - GPG has joined the ranks of djb-by-default]
          | 
          | It's only like migrating from C to Rust for the person
          | implementing the crypto package and signature verifier. The
          | average package maintainer just has to generate a new key and
          | pass a new command-line flag to their signing command.
        
            | normaler wrote:
            | GPG has used ed25519 keys by default since 2.3:
            | 
            | https://lists.gnupg.org/pipermail/gnupg-announce/2021q2/0004...
        
           | mike_hearn wrote:
           | Yes, but SSH isn't e.g. forbidding you from connecting to
           | servers that use RSA or secp256r1, whereas they want to make
           | apt refuse to access repositories that aren't using djb's
           | curves.
        
              | red_admiral wrote:
              | SSH by default isn't, but admins can set up their own
              | list of allowed ciphers in their config file. GitHub, for
              | example, has banned ssh-dss - I think they still support
              | ECDSA, but they only mention Curve25519 and RSA (with
              | caveats) on their SSH keys page.
        
                | throw0101a wrote:
                | > _SSH by default isn't_
                | 
                | Yes, and _that's_ fine. But if SSH _mandated_ a certain
                | thing and disallowed even admins from changing it, that
                | would be the equivalent problem.
                | 
                | It's Ubuntu _preventing_ the use of _anything but_
                | "SafeCurves" that's the problem.
                | 
                | If Ubuntu/Canonical want to use them--fine. (Maybe.+)
                | But _don't disable_ the functionality for admins.
                | 
                | + Some regulated industries need to use certain
                | Approved Algorithms, which may or may not include your
                | favourite ones. Further, there may be all sorts of
                | other (workflow) tooling that may not support your
                | favourite ones either, and forcing your favourites on
                | other people (especially _taking away_ other options)
                | is a jerk move.
        
               | red_admiral wrote:
               | So, I'm mostly with you here, but I'd say disallowing
               | e.g. MD5 (horribly broken) or anything involving Dual_EC
               | (horribly suspicious) by force would be reasonable.
               | ECDSA/secp* curves, that's more controversial.
               | 
               | The TLS 1.2 -> 1.3 upgrade also disallowed a lot of
               | previously used things, and this was generally considered
               | to be a great improvement (though of course TLS endpoints
               | can be backwards-compatible).
        
          | Y_Y wrote:
          | > unless you think the NSA have an unknown backdoor that
          | nobody was able to find
          | 
          | Your post makes sense to me, but I don't think it's too crazy
          | to allow for the possibility that e.g. the NSA has backdoored
          | some of the "recommended" crypto constants in current use.
        
           | red_admiral wrote:
           | Snowden said that they were doing this, I think?
        
             | jiggawatts wrote:
             | The NSA _definitely_ put back doors into cryptographic
             | algorithms by forcing their variant into the NIST
             | standards. E.g.: https://en.wikipedia.org/wiki/Dual_EC_DRBG
        
                | tialaramex wrote:
                | Dual EC DRBG is a bad design and so you shouldn't use
                | it. It is _reasonable_ to believe it's backdoored, but
                | only the same way it would be _reasonable_ to believe
                | David Cameron stuck his dick in a dead pig. Some people
                | say he did, he says he didn't, there's no conclusive
                | proof available - and it's worth noting the "source" is
                | a man who has a grudge against Cameron.
                | 
                | My favourite also-reasonable explanation for the
                | mysterious Dual EC DRBG constants is that some senior
                | person picked their favourite numbers (birthday of a
                | niece, phone number of a friend, whatever) not
                | realising that these should be Nothing Up My Sleeve
                | Numbers. Later, when a subordinate said "For
                | documentation we need to explain these numbers", it was
                | too late to change them, and so the agency can't do
                | better than insist they were chosen at random.
                | 
                | If this were crucial technology we should do the work
                | to re-make it with Nothing Up My Sleeve Numbers, but
                | it's garbage; indeed that's one reason for the
                | suspicion: this is a very complicated machine, well
                | suited to hiding a backdoor, so why do this at all?
        
                | red_admiral wrote:
                | As far as I'm aware, the other problem with Dual EC is
                | that there are two kinds of components you can use to
                | build a DRBG: "chaotic" (think hash functions, or
                | components thereof) and "algebraic" (highly structured,
                | but guaranteed to have certain properties such as a
                | minimum period size).
                | 
                | Cryptographic common sense is that if you use an
                | "algebraic" generator, you feed the output through a
                | "chaotic" one at the end. This can't possibly harm
                | security (as long as the output transformation doesn't
                | depend on secret state), as there's a reduction in
                | which the adversary just does the transformation
                | themselves. This is even more important if the
                | algebraic transformation is efficiently invertible in
                | principle, for example if someone has extra secret
                | knowledge (such as the dlog of the base point in use).
                | 
                | If they'd used Dual-EC followed by SHA1 or something,
                | that would have not only been better according to folk
                | wisdom and demonstrably no worse for security (while
                | costing very little compared to the EC operations), but
                | it would also have shut down a lot of conjectured
                | attacks that one could do with twiddled constants.
                | 
                | Yet for some reason, Dual-EC decided to go with an
                | algebraic approach without a "chaotic" output
                | transformation, which is either extreme incompetence or
                | strong evidence that someone was up to no good.
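                | 
                | A toy sketch of that folk-wisdom fix in Python (all
                | parameters here are made up for illustration; this is
                | not a real DRBG):
                | 
                |   import hashlib, secrets
                |   
                |   # Blum-Blum-Shub-style "algebraic" step (toy sizes!):
                |   # squaring mod n = p*q is invertible by whoever knows
                |   # the factors, i.e. it has a built-in trapdoor.
                |   p, q = 2**89 - 1, 2**107 - 1   # toy Blum primes
                |   n = p * q
                |   state = secrets.randbelow(n)
                |   
                |   def next_block():
                |       global state
                |       state = pow(state, 2, n)   # "algebraic" step
                |       # "chaotic" output transform: the adversary could
                |       # apply SHA-256 themselves, so this can't hurt,
                |       # but it hides the raw state's algebraic structure
                |       return hashlib.sha256(
                |           state.to_bytes(25, "big")).digest()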
        
               | tptacek wrote:
               | This is really overthinking it. Dual EC is an asymmetric
               | RNG; it "encrypts" its state to a public key, with an
               | undisclosed private key. To believe it wasn't backdoored
               | (as some people, me included, did --- long story!) you
               | have to take on faith that nobody in the world actually
               | has that private key. We're now fairly sure someone does.
               | That's the whole story.
        
               | red_admiral wrote:
               | You're probably right, I'm just saying that if you want
               | to design a RNG that doesn't look extremely suspicious,
               | you do it in such as way that there is no way to have a
               | private key to start with. For example with a hash
               | function somewhere in the loop, since these are
               | (presumably) one-way.
        
                | less_less wrote:
                | > Dual EC DRBG is a bad design and so you shouldn't use
                | it. It is reasonable to believe it's backdoored, but
                | only the same way it would be reasonable to believe
                | David Cameron stuck his dick in a dead pig. Some people
                | say he did, he says he didn't, there's no conclusive
                | proof available - and it's worth noting the "source" is
                | a man who has a grudge against Cameron.
               | 
               | Hard disagree. Dual_EC_DRBG more-or-less encrypts the
               | next state of the random number generator to a certain
               | elliptic curve public key, cuts off a few bits, and
               | outputs the truncated ciphertext as a "pseudorandom
               | number". Given this framing, the backdoor is obvious:
               | guess the truncated bits, decrypt the ciphertext, and
               | check whether the decrypted next state is consistent with
               | later data that depend on that DRBG.
               | 
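                | To make that "guess, decrypt, check" loop concrete,
                | here's a toy model in Python on a made-up tiny curve
                | (every parameter and name below is invented for
                | illustration; real Dual_EC uses P-256 and truncates 16
                | bits):
                | 
                |   from math import gcd
                |   import secrets
                |   
                |   O = None                 # point at infinity
                |   p, a, b = 233, 1, 7      # toy curve y^2 = x^3 + x + 7
                |   
                |   def add(P1, P2):
                |       if P1 is O: return P2
                |       if P2 is O: return P1
                |       (x1, y1), (x2, y2) = P1, P2
                |       if x1 == x2 and (y1 + y2) % p == 0: return O
                |       if P1 == P2: m = (3*x1*x1 + a) * pow(2*y1, -1, p) % p
                |       else: m = (y2 - y1) * pow(x2 - x1, -1, p) % p
                |       x3 = (m*m - x1 - x2) % p
                |       return (x3, (m*(x1 - x3) - y1) % p)
                |   
                |   def mul(k, P1):
                |       R = O
                |       while k:
                |           if k & 1: R = add(R, P1)
                |           P1, k = add(P1, P1), k >> 1
                |       return R
                |   
                |   # base point and its order, by brute force (toy sizes)
                |   G = next((x, y) for x in range(p) for y in range(p)
                |            if (y*y - x*x*x - a*x - b) % p == 0)
                |   nG, R = 1, G
                |   while R is not O: R, nG = add(R, G), nG + 1
                |   
                |   # designer's trapdoor: pick d, publish P and Q = (1/d)*P
                |   d = next(k for k in range(2, nG) if gcd(k, nG) == 1)
                |   P, Q = G, mul(pow(d, -1, nG), G)       # so P = d*Q
                |   
                |   def drbg_step(s):
                |       out_pt, next_pt = mul(s % nG, Q), mul(s % nG, P)
                |       assert out_pt is not O and next_pt is not O
                |       return next_pt[0], out_pt[0] % 32  # drop top bits
                |   
                |   while True:   # toy group can hit degenerate states
                |       try:
                |           s0 = secrets.randbelow(nG - 1) + 1
                |           s1, out1 = drbg_step(s0)
                |           s2, out2 = drbg_step(s1)
                |           break
                |       except AssertionError: pass
                |   
                |   # attacker sees out1, out2 and knows d: guess the
                |   # truncated bits, lift each guess to a point A ~ s0*Q,
                |   # jump to s0*P = d*A, keep guesses predicting out2
                |   found = set()
                |   for x in range(out1, p, 32):
                |       rhs = (x*x*x + a*x + b) % p
                |       ys = [y for y in range(p) if y*y % p == rhs]
                |       if not ys: continue      # x is not on the curve
                |       A = mul(d, (x, ys[0]))   # sign of y is irrelevant
                |       if A is O: continue
                |       try:
                |           if drbg_step(A[0])[1] == out2: found.add(A[0])
                |       except AssertionError: pass
                |   print("true state", s1, "recovered:", s1 in found)
                | 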
               | This is a pretty fucking weird design unless you're
               | intending a backdoor. It's orders of magnitude slower and
               | more complex than just symmetric-key encryption or
               | hashing, which are traditionally used for DRBGs, and
               | elliptic curve ciphertexts aren't uniformly pseudorandom
               | so it's slightly biased as a DRBG as well. The claimed
               | justification is that it's secure based on properties of
               | elliptic curves, but even aside from the backdoor this is
               | false (since it's biased), and anyway you'd be going for
               | provable security here but there isn't any proof (even
               | aside from the bias AND the backdoor... except maybe in
               | the generic group model, which is too strong to be
               | relevant here). Also, the backdoor potentials of
               | Dual_EC_DRBG were discussed both before and after
               | standardization, but it was standardized and used anyway.
               | 
                | It is _conceivable_ that NSA chose this design through
                | a firm but misguided belief that it has superior
                | security properties, and that they don't have the
                | private key corresponding to that public key, and that
                | they paid RSA Security $10 million to make it the
                | default DRBG in BSAFE because they were just that proud
                | of their generator, and they didn't notice the backdoor
                | before publication, and they didn't consider the need
                | for nothing-up-my-sleeve numbers because they weren't
                | paying any attention. But IMHO it's much more
                | _reasonable_ to believe that Dual_EC_DRBG's backdoor is
                | intentional, and that NSA does know the private key
                | corresponding to their published recommended public
                | key.
               | 
                | Also note that even if NSA doesn't have the backdoor
                | key, a malicious actor can substitute their own
                | backdoor key and everything just keeps working. With
                | this change of 64 bytes of code, fully permitted by the
                | spec, that actor (and nobody else) now controls the
                | backdoor. This happened to Juniper Networks.
               | 
               | --
               | 
               | None of this is apparently the case for the NIST elliptic
               | curves, though. While it is possible that NSA knows a
               | secret weakness in them (or, indeed, in all elliptic
               | curves!), even 25 years later there is no publicly known
               | class of weak curves that could include the NIST curves.
               | Furthermore, the spec does include the values that hash
               | to the curves' coefficients: while this doesn't guarantee
               | that those values were really generated randomly, it
               | significantly constrains how a hypothetical backdoor
               | could work, and what the families of vulnerable curves
               | could look like. If there were a backdoor, it would also
               | in some sense be a less powerful backdoor than in
               | Dual_EC_DRBG: the Dual_EC backdoor is only exploitable to
               | someone who has the private key, but given the
               | constraints on the NIST elliptic curves, it is likely
               | that anyone who figured out the math of a hypothetical
               | backdoor could exploit it.
               | 
               | IMHO it is still _reasonable_ to believe that the NIST
               | curves are backdoored. But IMHO they are very likely not
               | backdoored, and I think my view also matches the
               | consensus of the crypto community.
        
                | red_admiral wrote:
                | Bernstein's argument is that, even if the NIST curves
                | are not backdoored mathematically, they're designed in
                | a way that strongly coerces implementors towards routes
                | that leave side channels open, such as non-constant-
                | time implementations or sloppy handling of invalid data
                | such as points that do not lie on the curve or that lie
                | in a small-order subgroup (all of which have been seen
                | in practice).
        
               | less_less wrote:
               | Bernstein has multiple arguments. Section 7 is about ways
               | that the NIST curves could hypothetically be backdoored,
               | and about designing curves in a "rigid" way such that
               | they are less likely to be backdoored ... at least
               | assuming that small parameters, or parameters near powers
               | of 2, aren't more likely to be weak.
               | 
               | But yeah, he also argues that Montgomery / Edwards curves
               | are easier to implement securely. IMHO he's right, but he
               | exaggerates the difference. Montgomery and Edwards curves
               | still have plenty of footguns, and some of these are not
               | present in prime-order short Weierstrass curves like the
               | NIST curves. In particular, the fact that Montgomery and
               | Edwards curves must have a cofactor of at least 4 leads
               | to problems: side-channel attacks like "May the 4th be
               | with you", EdDSA signature validity not actually being
               | defined, needing additional complexity (Ristretto/Decaf)
               | to avoid protocol changes, etc.
        
           | api wrote:
           | There's a strong argument against back doors of this type in
           | ECC.
           | 
            | To backdoor the NIST curves, the NSA would have to know a
            | secret attack against ECC and to have brute-force searched
            | seed hashes until the generated curves were vulnerable. It
            | would have to be a secret attack, since nobody else has
            | found it.
            | 
            | Thing is... if this is true, it means there is a secret
            | attack against some elliptic curves.
            | 
            | Using 1990s technology they couldn't have brute-forced all
            | that many curves, meaning some non-trivial percentage of
            | curves must be vulnerable.
           | 
           | How do we know curve25519 isn't vulnerable to this secret
           | attack? We don't.
           | 
            | The ultimate conclusion is that if the NIST curves are
            | backdoored using secret math, we shouldn't use ECC at all,
            | at least unless NSA discloses the math. But they couldn't
            | do that without blowing up the Internet, since so much uses
            | the NIST curves. It would be an argument to phase out all
            | ECC.
        
             | schoen wrote:
             | NSA did make a mysterious announcement a few years ago that
             | people should not use ECC and should go back to older
             | public-key methods. Of course, due to their fundamental
             | conflict of interest and reluctance to share their
             | rationale, very few organizations that didn't have to
             | follow that guidance apparently did so.
        
               | api wrote:
               | If they won't share the rationale then we are back to
               | speculating.
               | 
               | They are now pushing PQ crypto because they think,
               | probably reasonably, that we are one or two breakthroughs
               | from a scalable quantum computer.
        
               | cheschire wrote:
               | "Post quantum", for anyone else confused by this
               | seemingly obscure acronym.
        
               | zahllos wrote:
               | This is a bit misinformed.
               | 
               | NSA required the use of "Suite B Cryptography" for
               | commercial vendors of government systems, which in its
               | latest revision meant ECC. However, vendors were (and
               | are) slow to adopt ECC from the previously used RSA. If
               | you want public evidence of how slow such transitions can
               | be, check any other commercial crypto like certificate
               | authorities and see which trust chains are entirely
               | elliptic curve. Mostly people are stuck on RSA, even
               | though elliptic curves broadly offer better speed and
               | smaller keys/signatures for the same or better levels of
               | security. There's also still plenty of deployed DES and
               | SHA-1, even though the former is inadvisable and the
               | latter inexcusable. In fact, from what I read in the
               | response to the NSA proposing to drop SHA-3 in favour of
               | SHA-2 in the new PQ standards, vendors were a bit
               | frustrated at the change because of the short timescale
               | involved in the migration. I interpret the demanding
               | schedule for adoption of PQC as a deliberate choice by
               | NSA - a somewhat passive aggressive response to vendors
               | to tell them to get their acts together, based on their
               | experience of trying to roll out ECC.
               | 
               | What NSA said is "if you haven't migrated to elliptic
               | curve cryptography, you should now wait for post quantum
                | and then start on that". You can read that message here:
                | https://web.archive.org/web/20151123081120/https://www.nsa.g...
                | and here is the exact quote:
               | 
               | > For those partners and vendors that have not yet made
               | the transition to Suite B elliptic curve algorithms, we
               | recommend not making a significant expenditure to do so
               | at this point but instead to prepare for the upcoming
               | quantum resistant algorithm transition.
               | 
               | I don't think there's much mystery here - basically it
               | amounts to a bunch of procurement rules and guidelines.
        
               | schoen wrote:
               | I wasn't aware of this interpretation. Thanks.
        
             | mike_hearn wrote:
             | The argument for a back door goes like this:
             | 
             | 1. The NSA is one of the world's only institutions that has
             | any use for the otherwise irrelevant specialism of
             | designing asymmetrically backdoored cryptography
             | algorithms. They also have (or had) lots of maths PhDs
             | writing internal papers. It's reasonable to assume they
             | know things about kleptography that others don't, as it's
             | not a well funded sub-field of cryptography outside the
              | signals intelligence world. So if there were an attack
              | they'd discovered, it wouldn't be surprising if others
              | hadn't found it.
             | 
             | 2. A good protection against kleptographic attacks is to
             | use "nothing up my sleeve numbers", where the constants
             | that go into an algorithm are derived from some well known
             | source that isn't suspicious.
             | 
              | 3. The NIST curves' designers knew about this sort of
              | risk and attempted to use NUMS numbers. The constants are
              | derived from the output of SHA1, which at the time was
              | considered secure.
             | 
              | 4. But the input to SHA1 is a giant random-looking
              | number, not something obvious and above suspicion. Thus
              | the setup fails to provide the assurance it was supposed
              | to create, because the NSA could have searched for a weak
              | curve (if there was such a thing to begin with) - see the
              | sketch of such a search loop below.
             | 
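              | A sketch of that concern in Python (everything here is
              | hypothetical: is_weak() stands in for the secret attack
              | whose existence is the whole question, and the derivation
              | is a stand-in for the real ANSI X9.62 seed-to-curve
              | procedure):
              | 
              |   import hashlib, os
              |   
              |   def coefficient_from_seed(seed: bytes) -> int:
              |       # stand-in for the real seed-to-curve derivation
              |       return int.from_bytes(
              |           hashlib.sha1(seed).digest(), "big")
              |   
              |   def is_weak(b: int) -> bool:
              |       raise NotImplementedError(
              |           "the hypothetical secret attack")
              |   
              |   while True:
              |       seed = os.urandom(20)
              |       if is_weak(coefficient_from_seed(seed)):
              |           break   # publish as "verifiably random"
              | 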
             | The argument that curve25519 wouldn't be susceptible is
             | simply that curve25519 uses NUMS numbers properly, and so
             | there's no wiggle room where djb or anyone else could have
             | done a secret scan to find curves with certain properties.
             | 
              | As may be clear, how strong you think the above argument
              | or problem is will depend entirely on your intuition
              | about how well explored kleptography is by non-secret
              | research. Unfortunately, as people generally don't
              | publish negative results, that's very hard to judge.
        
                | tptacek wrote:
                | From memory, so I'm probably wrong, but: I thought the
                | bona fides for Curve25519's design were that it
                | demonstrated clear engineering reasons for all its
                | choices, and Bernstein's issue with other curves was
                | that NUMS constants (like pi, e, etc) were manipulable
                | in the sense that you could take permutations of them
                | --- the (silly) BADA55 paper.
        
         | api wrote:
         | Infosec suffers from a huge cargo cult and mindless sticky meme
         | problem because so few people actually understand it.
         | 
         | There are still admins who block all ICMP because of the "ping
         | of death," a Windows bug from either the late 1990s or 2000s.
         | They don't know this though. They just heard that ICMP is
         | "dangerous."
         | 
         | People also don't use IPv6 because they think NAT is a security
         | feature.
         | 
         | I guess it's similar to baseless health fears and happens
         | whenever people don't really understand a domain. You get a
         | proliferation of _lore_ that is just repeated.
        
            | upofadown wrote:
            | In the case of either infosec or health, the idea at work
            | is that you can ensure that something is good simply by
            | identifying and removing all that is bad... and everyone
            | feels free to determine for themselves what is bad, with no
            | deep understanding of the systems involved.
        
            | leeter wrote:
            | > People also don't use IPv6 because they think NAT is a
            | security feature.
            | 
            | Literally had a sales engineer parrot this at me a while
            | back. I had to point out that the service they were
            | offering was on the open internet. It only got worse from
            | there. Le sigh...
        
           | esdf wrote:
           | Windows had an ICMP CVE last year and also just released a
           | patch for an IPv6 CVE. OpenSSH on Linux had a CVE recently
           | too. Security in depth is reasonable and not baseless.
        
          | marcosdumay wrote:
          | No, a lot of the work is also about it being way harder to
          | add a backdoor to a "safe" curve than to other ones.
          | 
          | And yeah, the NSA has done that before [1]. Why do you
          | immediately assume they didn't do it now? Even more so
          | because it's basically impossible to discover whether the
          | curve is compromised or not just by looking at it.
          | 
          | [1] Granted, it was on an algorithm that was way further from
          | those safety criteria than the main ECC ones. But if that's
          | the reasoning, the claim becomes "I want my curve _this_
          | safe, no further", which isn't a good argument.
        
         | glitchc wrote:
         | It's always a problem when scientific guidance becomes
         | dogmatic.
        
           | wahern wrote:
           | Bernstein is savvy enough that I suspect he anticipated how
           | the term "safe" would be interpreted by different groups.
           | That is, he expected and perhaps even intended the subsequent
           | cargo culting.
        
        | madduci wrote:
        | I would like to know whether the authors of this paper have
        | released some kind of benchmark/validation script/tool
        | implementing the maths and checks behind it.
        
         | less_less wrote:
         | There's a bunch of validation software at
         | https://safecurves.cr.yp.to/verify.html
        
       | nemoniac wrote:
       | "We could prove our innocence by disclosing the details, if only
       | we could remember them", Jerry Solinas, NSA, page 18.
        
         | throw0101a wrote:
         | Apparent backstory:
         | 
          | > _Some of the backstory here (it's the funniest fucking
          | backstory ever): it's lately been circulating --- though I
          | think this may have been somewhat common knowledge among
          | practitioners, though definitely not to me --- that the
          | "random" seeds for the NIST P-curves, generated in the 1990s
          | by Jerry Solinas at NSA, were simply SHA1 hashes of some
          | variation of the string "Give Jerry a raise"._
          | 
          | > _At the time, the "pass a string through SHA1" thing was
          | meant to increase confidence in the curve seeds; the idea was
          | that SHA1 would destroy any possible structure in the seed,
          | so NSA couldn't have selected a deliberately weak seed. Of
          | course, NIST/NSA then set about destroying its reputation in
          | the 2000s, and this explanation wasn't nearly enough to quell
          | conspiracy theories._
          | 
          | > _But when Jerry Solinas went back to reconstruct the seeds,
          | so NIST could demonstrate that the seeds really were benign,
          | he found that he'd forgotten the string he used!_
         | 
         | * https://news.ycombinator.com/item?id=37784499#unv_37784770
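          | 
          | If the story is accurate, checking any one candidate phrase
          | is trivial; the hard part is that nobody remembers the exact
          | variation. A sketch in Python (the P-256 seed below is
          | copied from FIPS 186-4; verify it before relying on it):
          | 
          |   import hashlib
          |   
          |   P256_SEED = "c49d360886e704936a6678e1139d26b7819f7e90"
          |   for phrase in [b"Give Jerry a raise",
          |                  b"Give Jerry a raise!"]:
          |       if hashlib.sha1(phrase).hexdigest() == P256_SEED:
          |           print("found it:", phrase)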
        
       ___________________________________________________________________
       (page generated 2024-08-14 23:01 UTC)