[HN Gopher] Cryptanalysis of GPRS Encryption Algorithms GEA-1 su...
___________________________________________________________________
Cryptanalysis of GPRS Encryption Algorithms GEA-1 suggests
intentional weakness
Author : anonymfus
Score : 352 points
Date : 2021-06-16 15:39 UTC (7 hours ago)
(HTM) web link (eprint.iacr.org)
(TXT) w3m dump (eprint.iacr.org)
| corty wrote:
| The really interesting question is the proprietary crapto in 4G
| and 5G. How long until that backdoor is proven?
| gentleman11 wrote:
| Pardon my ignorance, what are GPRS and GEA-1, and what is their
| significance?
| ronsor wrote:
| GPRS is the general packet radio service, used for (very slow)
| data and SMS transfer in combination with the old GSM protocol.
| gentleman11 wrote:
| So, it encrypts SMS messages and is still in use?
| giantrobot wrote:
| It doesn't encrypt SMS. It's over the air encryption for
| packet data (not calls or SMS) like WEP or WPA on WiFi. On
| GSM networks SMS was transmitted in leftover signaling
| space on voice circuits.
| secondcoming wrote:
| RACH: https://en.m.wikipedia.org/wiki/Random-
| access_channel
| meepmorp wrote:
| GPRS is the mobile data standard for GSM mobile phones. It's
| from the 2G era, and is old and slow. GEA-1 is an encryption
| algorithm used with GPRS.
| gentleman11 wrote:
| So it's no longer in use?
| meepmorp wrote:
| No, GPRS is still around; the 2 and 3G networks aren't
| supposed to go away until late 2022 in the US, and then
| there's the rest of the world to consider.
| giantrobot wrote:
| In the US AT&T shut down their 2G service at the end of
| 2019. Only T-Mobile has GSM/GPRS service active and that
| is shutting down at the end of this year. It's 3G
| services that will be shutting down through the end of
| 2022/3.
| meepmorp wrote:
| Yeah, like I said, currently operating 2 & 3G networks
| are scheduled to go away by late 2022, and GPRS is still
| in service.
| cestith wrote:
| We need to consider illegitimate carrier devices, too.
| Will the Stingray type devices stop supporting it? If the
| phones still fall back to it, it's still a threat.
| andyjohnson0 wrote:
| I'd be surprised if there are many phones that still use
| it, but I'd be equally unsurprised if there were a lot
| of embedded devices (utility substations, gas/electricity
| meters, security systems, traffic light controllers, etc.)
| that still use GPRS modems to phone home to monitoring
| systems. It's simple, lightweight, and supports PPP (for
| devices that use serial comms) and X.25 as well as IP.
| tinus_hn wrote:
| I presume phones just use what the network tells them to
| use.
| simias wrote:
| Even then, I'd expect (perhaps naively) that the vast
| majority of smartphone data transmissions these days are
| done on top of HTTPS, so breaking the first layer would
| only get you so far.
| Fordec wrote:
| Actually, because of legacy support issues and quality of
| service requirements, a lot of Emergency services are still
| using it instead of newer tech. Also, a lot of low data
| consumption IoT devices use older, but also cheaper,
| chipsets that use this for their telemetry backbone.
| weinzierl wrote:
| Excerpt from the abstract:
|
| _" This paper presents the first publicly available
| cryptanalytic attacks on the GEA-1 and GEA-2 algorithms."_
|
| [..]
|
| _" This unusual pattern indicates that the weakness is
| intentionally hidden to limit the security level to 40 bit by
| design."_
|
| So in other words: GPRS was intentionally backdoored.
| baybal2 wrote:
| As were its other ciphers.
|
| Even in their best-case effort, 64-bit keys were within the
| realm of supercomputer-level brute-forcing by the late nineties,
| once a few more cipher-quality degradations and key leaks were
| known.
| est31 wrote:
| > So in other words: GPRS was intentionally backdoored.
|
| Note that this high level insight isn't really a contribution
| of the paper, given that the authors of the algorithm basically
| admitted this themselves. Excerpt from the paper:
| It was explicitly mentioned as a design requirement that "the
| algorithm should be generally exportable taking into
| account current export restrictions" and that "the
| strength should be optimized taking into account the above
| requirement" [15, p. 10]. The report further contains
| a section on the evaluation of the design. In
| particular, it is mentioned that the evaluation team came to
| the conclusion that, "in general the algorithm will be
| exportable under the current national export
| restrictions on cryptography applied in European countries" and
| that "within this operational context, the algorithm provides
| an adequate level of security against eavesdropping of
| GSM GPRS services"
|
| Basically, export regulations from that era implied that you
| had to make your algorithm weak intentionally. The main
| contribution of the paper was to give a public cryptanalysis
| and point out the specific location of the flaw. I think it's
| highly interesting. Another excerpt: Further,
| we experimentally show that for randomly chosen LFSRs, it is
| very unlikely that the above weakness occurs.
| Concretely, in a million tries we never even got close
| to such a weak instance. Figure 2 shows the distribution of the
| entropy loss when changing the feedback polynomials of
| registers A and C to random primitive polynomials. This
| implies that the weakness in GEA-1 is unlikely to
| occur by chance, indicating that the security level of 40 bits
| is due to export regulations.
|
| Also, even though operators don't deploy GEA-1 any more,
| according to the paper many semi-recent phones still support
| GEA-1 (e.g. iPhone 8, Samsung Galaxy S9, OnePlus 6T, ...). The
| paper describes a possible downgrade attack that allows the
| session key to be obtained which can be used to extract _prior_
| communication that happened under more secure encryption
| algorithms (section 5.3). This is big. The paper authors
| managed to get a test added to the conformance test suite that
| checks that GEA-1 is _not_ supported, so hopefully future
| phones will drop the GEA-1 support.
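The entropy-loss experiment quoted above has a simple linear-algebra core that can be sketched in a few lines of Python (a toy stand-in: the matrices below are random, not GEA-1's actual LFSR initialization or feedback polynomials). Over GF(2) the initialization of registers from the key is linear, so the number of reachable joint register states is 2^rank of the initialization matrix; a crafted low-rank matrix caps the entropy while a random one almost never does:

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of integer row bitmasks."""
    rows = list(rows)
    rank = 0
    for i in range(len(rows)):
        pivot = rows[i]
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                # lowest set bit = pivot column
        for j in range(i + 1, len(rows)):
            if rows[j] & low:
                rows[j] ^= pivot            # eliminate that column below
    return rank

rng = random.Random(2021)
N = 64  # toy "key" length in bits

# Random 64x64 initialization: rank is almost always at or near 64,
# so the joint register state keeps almost all of the key's entropy.
random_matrix = [rng.getrandbits(N) for _ in range(N)]

# Crafted matrix whose rows all lie in a 40-dimensional subspace: the
# joint state can take at most 2^40 values, mirroring the GEA-1 flaw.
basis = [rng.getrandbits(N) for _ in range(40)]
crafted = []
for _ in range(N):
    row = 0
    for b in basis:
        if rng.getrandbits(1):
            row ^= b
    crafted.append(row)

print(gf2_rank(random_matrix), gf2_rank(crafted))
```

In a million random draws the paper's authors never saw anything close to such an entropy loss, which is the statistical heart of the "deliberate" conclusion.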
| kodablah wrote:
| > Basically, export regulations from that era implied that
| you had to make your algorithm weak intentionally
|
| I think the question then becomes, is the regulation still
| satisfied if the specifics of the intentional
| limitation/weakness/exploit are undocumented? It's likely
| moot these days, but curious nonetheless.
| SSLy wrote:
| >operators don't deploy GEA-1 any more
|
| in 1st world maybe
| zokier wrote:
| Worldwide. Quoth the paper:
|
| > According to a study by Tomcsányi et al. [11], that
| analyzes the use of the ciphering algorithm in GPRS of 100
| operators _worldwide_ , most operators prioritize the use
| of GEA-3(58) followed by the non-encrypted mode GEA-0(38).
| Only a few operators rely on GEA-2(4), while _no operator
| uses GEA-1(0)_.
| est31 wrote:
| Yeah I omitted maybe too much in the summary. The paper
| mentions that 2G is used in some places as a fallback, so
| operators still support it if the phone doesn't support
| newer standards.
|
| In fact in Germany, the country of one of the paper's
| authors, precisely this is happening: 3G is being turned
| off while 2G is used as fallback for devices that don't
| support LTE. Apparently there are some industrial use cases
| of 2G that still rely on it. In Switzerland, they are
| instead turning off 2G and keeping 3G as the fallback.
|
| IDK how the situation is in third world countries, but note
| that India at least is in the top ten when it comes to LTE
| coverage. https://www.speedtest.net/insights/blog/india-4g-
| availabilit...
| slim wrote:
| Also 2G is less battery hungry. We use it for our
| professional applications on Android
| jandrese wrote:
| Your list of 2017-2018 phones suggests that manufacturers
| have already dumped GEA-1 on current hardware.
|
| I kind of suspect that the weakness of GEA-1 is one of those
| industry secrets that everybody knows but nobody talks about.
| matthewdgreen wrote:
| >Note that this high level insight isn't really a
| contribution of the paper
|
| The problem with this statement is that nobody outside of the
| design staff understood _how_ the algorithm was weak, or
| (AFAIK) precisely what the criteria for "weak" actually
| were. Moreover -- after the export standards were relaxed and
| GEA-2 had shipped, nobody came forward and said "remove this
| now totally obsolete algorithm from your standards because we
| weakened it in this way and it only has 40-bit security"
| which is why it is present in phones as recent as the iPhone
| 8 (2017) and potentially may be vulnerable to downgrade
| attacks.
|
| There are some stupid ways to weaken a cipher that would make
| it obvious that something was weak in the design, e.g., just
| truncating the key to 40 bits (as IBM did with DES from
| 64->56 bits, by reducing the key size and adding parity
| bits.) The designers didn't do this. They instead chose a
| means of doing this that could only be detected using a
| fairly sophisticated constraint solver which (may not have
| been) so easily available at the time. So I don't entirely
| agree with this assessment.
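For comparison, the DES weakening mentioned above is completely overt: the low bit of each key byte is an odd-parity check, so a nominal 64-bit key carries only 56 secret bits. A small sketch of that convention:

```python
def set_des_parity(key8):
    """Apply the DES key convention: the low bit of each of the 8 key
    bytes is an odd-parity bit, so only 7 bits per byte (56 total) are key."""
    out = bytearray()
    for b in key8:
        ones = bin(b >> 1).count("1")              # popcount of the 7 key bits
        out.append((b & 0xFE) | ((ones + 1) & 1))  # low bit makes byte parity odd
    return bytes(out)

key = set_des_parity(bytes(range(8)))
assert all(bin(b).count("1") % 2 == 1 for b in key)  # every byte: odd parity
```

Anyone reading the spec can count those 8 wasted bits; GEA-1's rank loss, by contrast, is invisible without solving for the hidden subspace structure.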
| est31 wrote:
| > nobody outside of the design staff understood how the
| algorithm was weak
|
| And, as I mention, pointing that out _was_ a contribution
| of the paper.
|
| > or (AFAIK) precisely what the criteria for "weak"
| actually were
|
| I think this 40 bit limit is well documented for other
| encryption algorithms. I couldn't find it in any (old) US
| regulation text though after a cursory search.
| The "U.S. edition" supported full size (typically 1024-bit
| or larger) RSA public keys in combination with full size
| symmetric keys (secret keys) (128-bit RC4 or 3DES in SSL
| 3.0 and TLS 1.0). The "International Edition" had its
| effective key lengths reduced to 512 bits and 40 bits
| respectively (RSA_EXPORT with 40-bit RC2 or RC4 in SSL 3.0
| and TLS 1.0)
|
| https://en.wikipedia.org/wiki/Export_of_cryptography_from_t
| h...
| matthewdgreen wrote:
| > And, as I mention, pointing that out was a contribution
| of the paper.
|
| Maybe I didn't make it clear. The open question prior to
| this paper was not "precisely how did the algorithm
| implement a specific level of security", the question
| was: what is that specific level of security? This was
| totally unknown and not specified by the designers.
|
| Notice that the specification doesn't define the desired
| security, in the same way that it defines, say, the key
| size. It just handwaves towards 'should be exportable'. I
| can't find a copy of the requirements document anymore,
| but the quote given in the spec doesn't specify anything
| more than that statement.
|
| >I think this 40 bit limit is well documented for other
| encryption algorithms. I couldn't find it in any (old) US
| regulation text though after a cursory search.
|
| In the United States (note: GEA-1 was not a US standard)
| some expedited licenses were granted to systems that used
| effective _40 bit keys_. In practice (for symmetric
| ciphers) this usually meant RC2 and RC4 with explicitly
| truncated keys. GEA-1 does not have a 40-bit key size --
| a point I made in the previous post. It has a 64-bit key
| size. Nowhere does anyone say "the requirement for this
| design is effective 40-bit security": they don't say
| anything at all. It could have had 24 bit security, 40
| bit security, 56 bit security or even 64-bit security.
|
| ETA: Moreover, there is more to this result than _40-bit
| effective keysize_. A critical aspect of this result is
| that the attackers are able to recover keys using 65 bits
| of known keystream. The uncompromised GPRS algorithms
| require several hundred (or over 1000) bits. Note that
| these known plaintext requirements are somewhat
| orthogonal to keysize: capturing 65 bits of known
| keystream is possible in protocols like GPRS/IP due to
| the existence of structured, predictable packet headers
| -- as the authors point out. Capturing >1000 bits may not
| be feasible at all. That's really significant and
| interesting, and not the result one would expect if the
| design goal was simply "effective 40-bit key size". One
| has to wonder if "ability to perform passive decryption
| using small amounts of known plaintext" is also included
| in the (missing) security design requirements document. I
| bet it isn't.
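The known-keystream requirement is worth making concrete: against any XOR-based stream cipher, a predictable header hands a passive attacker that stretch of keystream for free, which is why 65 bits is such a low bar for GPRS/IP traffic. A minimal sketch (the keystream here is a random stand-in, not GEA-1 output, and the header bytes are hypothetical):

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in GPRS frame: a predictable protocol header plus secret payload.
header = bytes.fromhex("45000054000040004001")  # 10 bytes = 80 known bits
payload = b"secret user data"
keystream = os.urandom(len(header) + len(payload))  # random stand-in keystream

ciphertext = xor(header + payload, keystream)

# A passive attacker who predicts the header recovers that keystream prefix:
leaked = xor(ciphertext[:len(header)], header)
assert leaked == keystream[:len(header)]
```

Those 80 known bits already exceed the 65 the attack needs; the uncompromised algorithms' requirement of several hundred to over 1000 bits is far harder to meet from headers alone.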
| efitz wrote:
| Why the heck don't consumers have a seat at the table while all
| the 5G technology is being developed? I want open protocols and
| publicly documented cryptosystems based on published protocols.
| Instead we are just enabling the surveillance state.
| admax88q wrote:
| Because telecoms have this government supported oligopoly.
|
| It's always been fascinating to me how we have this parallel
| infrastructure between the open internet and the locked down
| telecoms. The free for all that is the internet has evolved
| much more robust protocols, but the telecoms continue to
| operate in their own parallel problem space, solving a lot of
| the same problems.
|
| They also fight tooth and nail to prevent being dumb pipes.
| acdha wrote:
| I think you're mixing up a number of valid concerns which have
| different threat models: for example, the mass surveillance
| risks tend to involve things like carriers doing network
| traffic analysis or location monitoring, and things like open
| protocols or crypto don't really change the situation there
| since it doesn't depend on breaking encrypted traffic (unlike
| in the pre-TLS era when traffic wasn't encrypted).
|
| Similarly, you can develop and document a crypto system but
| unless you're publicly funding a lot of adversarial research
| that doesn't prevent something like Dual_EC_DRBG being
| submitted in bad faith. I haven't seen any indication that the
| NIST team thought they were developing open standards -- it's
| not like they sent a pull request from nsa/add-secret-backdoorz
| -- and the level of effort needed to uncover these things can
| be substantial, requiring highly-specialized reviewers. That
| also hits all of the usual concerns about whether the
| cryptographic algorithm is secure but used in an unsafe manner,
| which can be even harder to detect.
|
| The biggest win has simply been that these issues get a lot
| more public discussion and review now than they used to, and
| the industry has collectively agreed not to trust the
| infrastructure. Switching to things like TLS has done more to
| protect privacy than all of the lower level standards work and
| that's nice because e.g. a Signal or FaceTime user is still
| protected even if their traffic never touches a cellular
| network.
| firebaze wrote:
| "Consumers" as an opaque crowd wouldn't be able to judge
| anything this involved. The discussion partners we're talking
| about are mostly politicians, who are primarily educated in the
| field of convincing despite having no background at all,
| referring to higher authority et al. Our typical, maybe even
| scientifically educated consumer wouldn't be able in the least
| of realistically forming an opposing opinion, and even if
| she/he did, it would be a very hard sell against battle-
| hardened politicians.
|
| This field is too complex even for quite a few if not most IT-
| centric professions.
|
| From my point of view, we have to put the blame on us. We
| should treat anyone supporting invasive technologies (in the
| sense of subverting privacy and basic human rights) as an
| outcast.
|
| The impression I get (maybe not primarily from HN) is the
| opposite. We wouldn't take such a job ourselves (the pay...),
| yet we still appear to honor their efforts. Or we _do_ take the
| job precisely because the pay is at the opposite end of the
| spectrum.
| oneplane wrote:
| Because of money and/or a mix of politics and imagined power.
|
| In addition, the systems are vast and complex, too big for one
| person to capture and understand. This means you get into the
| area of design teams, business teams, politics and geo stuff by
| default. Even re-implementing the specification (or most of it)
| in FOSS is extremely hard, and that's with all the information
| being publicly available. Designing it is an order of magnitude
| harder.
|
| Besides the systems in isolation, we also have to deal with
| various governments, businesses, and legacy and migration paths
| in both cases.
|
| Ironically, because of all of this and the huge number of people
| involved, consumers _are_ involved in this. It's not like the
| GSM, 3GPP etc. don't use their own stuff.
| BTCOG wrote:
| Because it's purposely this way.
| IncRnd wrote:
| The reason consumers don't have a seat at the table is that the
| technology has nothing to do with consumers, other than
| harvesting consumers' money and data.
| tialaramex wrote:
| _Consumers_ don't have anything useful to bring to this table.
|
| Historically the realisation that you need outside
| _Cryptographers_ (not consumers) if you actually want to do
| anything novel with cryptography+ was slow to arrive.
|
| Even on the Internet, for PGP and SSL there was no real outside
| cryptographic design input. In SSL's case a few academics
| looked at the design of SSLv1, broke it and that's why SSLv2 is
| the first version shipped. Only TLS 1.2 finally had a step
| where they asked actual cryptographers "Is this secure?" and that
| step was _after_ the design work was finished. TLS 1.3 (just a
| few years ago) is the first iteration where they have
| cryptographers looking at the problem from the outset, and the
| working group rejected things that cryptographers said couldn't
| be made to fly.
|
| And TLS 1.3 also reflects something that was effectively
| impossible last century, rather than just a bad mindset. Today
| we have reasonably good automated proof systems. An expert
| mathematician can tell a computer "Given A, B and C are true,
| is D true?" and have it tell them either that D is necessarily
| true or that in fact it can't be true because -something- and
| that helps avoid some goofs. So TLS 1.3 has been proven (in a
| specific and limited model). You just could not do that with
| the state of the art in say 1995 even if you'd known you wanted
| to.
|
| Now, we need to get that same understanding into unwieldy SDOs
| like ISO, and also into pseudo SDOs like EMVco (the
| organisation that makes "Chip and pin" and "Contactless
| payment" work) none of which are really getting the
| cryptographers in first so far.
|
| + "But what I want to do isn't novel". Cool, use one of the
| existing secure systems. If you can't, _no matter why_ then you
| 're wrong you did want to do something novel, start at the top.
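The "Given A, B and C are true, is D true?" step described above can be mimicked for small propositional claims by exhaustive model checking, a toy stand-in for the symbolic provers (such as Tamarin) used in the TLS 1.3 analyses:

```python
from itertools import product

def entails(premises, conclusion, names):
    """Check semantic entailment by enumerating all truth assignments:
    the conclusion must hold in every model satisfying all premises."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # counterexample found
    return True

# Toy claim: from (A -> B), (B -> C) and A, conclude C.
premises = [
    lambda e: (not e["A"]) or e["B"],   # A implies B
    lambda e: (not e["B"]) or e["C"],   # B implies C
    lambda e: e["A"],
]
print(entails(premises, lambda e: e["C"], ["A", "B", "C"]))  # prints True
```

Real protocol provers work symbolically rather than by enumeration, but the contract is the same: either the property holds in every reachable state, or you get a counterexample.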
| tptacek wrote:
| I don't think that's true about SSL/TLS. SSLv2, the Netscape
| protocol, was a shitshow, but SSL3, its successor and the
| basis for TLS, has Paul Kocher's name on the RFC. The mixed
| MD5/SHA1 construction is apparently Kocher's doing.
| ezekg wrote:
| > Instead we are just enabling the surveillance state.
|
| It's a sad state of affairs. I honestly believe people actually
| want this, or have at least been conned into wanting it.
|
| Giving up liberty (in this case, privacy) under the guise of
| safety is all the rage these days.
| mindcrime wrote:
| _I want open protocols and publicly documented cryptosystems
| based on published protocols. Instead we are just enabling the
| surveillance state._
|
| I think you just answered your own question.
| Pokepokalypse wrote:
| We don't tend to vote in our own best interest, whether
| politically or economically.
| gruez wrote:
| How does "open protocols and publicly documented cryptosystems"
| help when the carriers are mandated by law to have backdoors so
| they can fulfill "lawful intercept" requests? You're better off
| treating it as untrusted and using your own encryption on top
| (eg. Signal).
| corty wrote:
| It doesn't help against the local authorities. But it will
| help against criminals and foreign authorities. E.g. most of
| the world's capitals are packed with IMSI-catchers and passive
| eavesdropping devices operated from embassies. This spying on
| foreign soil would be impossible if mobile phones were any
| good with regard to security.
|
| And signal isn't really very helpful in this scenario,
| because it doesn't properly protect against MitM attacks.
| xxpor wrote:
| >And signal isn't really very helpful in this scenario,
| because it doesn't properly protect against MitM attacks.
|
| I suppose it depends on where exactly the Middle here is,
| but for basic MitM of the physical network, if nothing else
| shouldn't the TLS connection to Signal's servers be
| sufficient?
| m4x wrote:
| How does signal fail to protect against MITM attacks? Given
| that it's end-to-end encrypted, wouldn't an attacker have
| to force a change of keys to MITM you? In which case you
| should be notified by signal that the keys were recently
| changed.
| corty wrote:
| Signal only implements a very weak form of trust-on-
| first-use for keys. So there is no authentication and no
| security for a first contact. Subsequent communication
| can be protected by meeting in person and comparing keys,
| a step hardly anybody knows about. Signal doesn't ever tell
| you about this necessity and doesn't have any option to e.g.
| pin the key after manual verification or even just set a
| "verified contact" reminder.
|
| Being warned about a changed key is only sensible at all
| if the one before it was verified. Otherwise, how do
| you know everything wasn't MitMed in the first place?
| Also, most users ignore the warning if the next message
| is "sorry, new phone, Signal doesn't do key backups".
| Everyone will understand and go along with that, either
| because they don't know about the danger there, or because
| they know Signal really doesn't do shit to provide
| authentication continuity through proper backups.
|
| Signal is only suitable for casual communication. Against
| adversaries that do more than just passive dragnet
| surveillance, Signal is either useless or even dangerous
| to recommend. It is intentionally designed for this
| one attack of passive dragnet surveillance, nothing else.
| Please don't endanger people by recommending unsuitable
| software.
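The trust-on-first-use model described above reduces to a small amount of state: pin the first key seen per contact and flag any later change, where the flag is only strong evidence of a MitM if the pinned key was verified out-of-band. An illustrative sketch (hypothetical logic, not Signal's actual safety-number implementation):

```python
class TofuStore:
    """Trust-on-first-use: silently pin the first key seen per contact,
    and flag any later change. The flag is only strong MitM evidence if
    the previously pinned key was verified in person."""

    def __init__(self):
        self.pinned = {}       # contact -> first public key observed
        self.verified = set()  # contacts whose pinned key was checked out-of-band

    def observe(self, contact, key):
        if contact not in self.pinned:
            self.pinned[contact] = key          # nothing to compare against yet
            return "accepted-unverified"
        if self.pinned[contact] != key:
            return ("KEY-CHANGED-verified" if contact in self.verified
                    else "key-changed-unverified")
        return "ok"

    def mark_verified(self, contact):
        """Record that keys were compared in person (e.g. safety numbers)."""
        self.verified.add(contact)
```

Note the asymmetry the comment describes: the "key-changed-unverified" case is exactly the warning most users wave through, because a first contact was never authenticated to begin with.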
| hsbauauvhabzb wrote:
| Are there any reasonable case studies of individuals or
| groups being targeted by MitM attacks on Signal?
| mytailorisrich wrote:
| Encryption in cellular systems is to protect over-the-air
| signals. It's irrelevant when it comes to 99% of legal
| interception because for that law enforcement simply asks the
| network operator to share the plaintext traffic from within
| the network.
|
| If you want _no-one_ to be able to eavesdrop then yes, you
| have to have your own encryption on top. These days a lot of
| data already goes through TLS, but for instance standard voice
| calls are obviously transparent to operators.
| theptip wrote:
| Implementing a legally-mandated wiretap requirement like
| CALEA doesn't require you to break your protocol (i.e. the
| transport layer). It is implemented at the application layer,
| on the server. You can still have cryptographically secure
| communication between client and server while complying with
| wiretap laws.
|
| If you're concerned about your government intercepting your
| communications with a warrant, there's not really anything
| you can do except move to an E2E encrypted app like Signal.
| But if you're OK with only being monitored if a judge signs a
| warrant, then the GP's suggestion helps.
|
| These protocol backdoors are more dangerous than application-
| level wiretaps because anyone can find and use them; they
| might be private at first, but once they are discovered
| there's usually no way to fix them without moving to a new
| protocol (version).
|
| Protocol breaks seem to me to be more in the category of
| "added by the NSA through subterfuge or coercion in order to
| enable illegal warrantless surveillance", which I find much
| more concerning than publicly-known processes with (at least
| in theory) established due process like CALEA wiretaps.
|
| > You're better off treating it as untrusted and using your
| own encryption on top (eg. Signal).
|
| But yes, this is a sensible approach to the world-as-it-
| currently-is.
| pope_meat wrote:
| Especially a secret judge, in a secret court.
|
| I always consent to that.
| throw0101a wrote:
| > _Why the heck don't consumers have a seat at the table while
| all the 5G technology is being developed?_
|
| A lot of these standards are generally created by industry
| consortia, and participation in standards setting is limited to
| companies who are members.
|
| This isn't the IETF where any rando can join a mailing list and
| chime in on a proposed standard/draft.
|
| IEEE (Ethernet) is somewhere in the middle: you have to be an
| individual member (though may have a corporate affiliation),
| and make certain time/attendance commitments, but otherwise you
| can vote on drafts (and many mailing lists seem to have open
| archives):
|
| * https://www.ieee802.org/3/rules/member.html
|
| * https://www.ieee802.org/3/
| this_user wrote:
| All of the GSM algorithms were weak, and deliberately so. It is
| well known that the French wanted it that way, while the German
| side wanted a bit more safety on account of the ongoing Cold War.
| So compromises like the infamous A5 algorithm were created whose
| security was mainly based on the algorithm itself being kept a
| secret - the cardinal sin of cryptography.
| slownews45 wrote:
| Too many coincidences for this to be by chance.
|
| IBM had Commercial Data Masking Facility which did key shortening
| to turn a 56 bit cipher into a 40 bit one.
|
| Now we've got this weird interaction which similarly reduces key
| length.
|
| Seems pretty obviously intentional?
| fragbait65 wrote:
| I agree, it is probably intentional, but I think it remains to
| be proven that it was malicious?
|
| Maybe 40 bits was seen as sufficient at the time, but are there
| any engineering reasons to actually shorten the key
| intentionally? Does it improve the transfer rate in any way?
|
| I can't think of any, but I'm no expert, so maybe somebody else
| can chime in?
| raphlinus wrote:
| Depending on your definition of "malicious," I think it
| clears that bar. The problem is not making a good-faith
| argument that 40 bits was sufficient (which was done to some
| extent for export-approved 40-bit SSL), but that it misleads
| people into believing that it's 64-bit encryption while it
| only has 40 bits of strength against anyone who is in on the
| backdoor.
|
| And as far as the other half of your question, no, there's no
| possible benefit (other than to the backdoor owners) from a
| smaller keyspace, as it goes through the motions of encrypting
| with the larger one.
| fragbait65 wrote:
| Thanks for the explanation!
| [deleted]
| slownews45 wrote:
| At the time 40 bits was not considered a backdoor; it was
| considered a weakness that would allow folks like the NSA (with
| big budgets and intercept capabilities) to wiretap the way
| they had with other communication technologies.
|
| In some situations, rather than designing new ciphers, they
| would just weaken the key-generation side. The IBM effort there
| was public to allow for easier export, but the same approach
| could be used to hide a weakness, which in other settings may
| have been beneficial. It's possible, however, that the folks
| involved understood what was going on to a degree but saw it as
| necessary to avoid export/import restrictions.
|
| More recently I think places like China ask that the holders
| of key material be located in country and make the full keys
| available or accessible to their security services. Not sure
| how AWS squares that circle unless they outsource their data
| centers in China to a third party that can then expose the
| keys for China to use.
| gnfargbl wrote:
| > Not sure how AWS squares that circle unless they
| outsource their data centers in China to a third party that
| can then expose the keys for China to use.
|
| If you check something like the Public Suffix List [1], you
| will notice that Amazon maintains separate com.cn domains
| for several of its services in China. Amazon doesn't appear
| to do that for any other country. It follows that AWS in
| China might well be isolated from AWS elsewhere.
|
| [1] https://publicsuffix.org/list/public_suffix_list.dat
| corty wrote:
| 40bits was "export level" encryption, i.e. stuff you could
| safely export to foreign enemy countries. Because western
| secret services were certain they could decrypt that if
| necessary. So this is malicious without a doubt.
| cestith wrote:
| Indeed. The question is where the malice lies. If I make a
| product for US use and export use, am I malicious by
| telling export customers that I'll only support a weaker
| key by US law? Or is the malice in the US requiring that?
| Can we expect companies, especially international
| conglomerates, to give up on potential markets in order to
| protest a law?
| corty wrote:
| Far simpler: An entity (here: the US) is malicious
| towards its enemies, makes malicious laws and commits
| malicious deeds to harm them. All the others are just
| proxies for that maliciousness (provided they don't have
| their own separate agenda there).
| yarg wrote:
| I don't necessarily think that you're wrong - but it's a moot
| point since the consequences are the same.
|
| For example, I don't think James Comey was acting with
| malicious intent calling for universal crypto back-doors; I
| do however think that he was dangerously naive and deeply
| wrong-headed.
|
| No back-door will ever remain unbreached and by baking
| vulnerabilities into the specification you're paving the way
| for malicious manufacturers to exfiltrate your network's
| communications as they see fit.
|
| There's a reason that 5G rollouts have national security
| implications and they could've been largely avoided (metadata
| aside).
| usrusr wrote:
| Post-Snowden this hardly qualifies as a smoking-gun surprise, but
| it's nice to see a concrete example identified and examined. It's
| like near-field archaeology: the difference between reading
| accounts of the use of Greek fire in some ancient battle vs
| finding identifiable remains of the manufacturing process.
| nimish wrote:
| It should not be surprising that a system that is subject to
| lawful-intercept requirements has weak encryption, especially
| one designed before the export ban was lifted.
| ganzuul wrote:
| It was supposed to take 6 hours using Finland's fastest
| supercomputer at the time. So said my professor.
| mytailorisrich wrote:
| This is really to intercept over the air (so not necessarily
| for fully 'legal' intercept but also intelligence services).
|
| If law enforcement needs to wiretap a specific phone/SIM they
| only need to request it from the operator. Over-the-air
| encryption is irrelevant.
|
| Nowadays operators can duplicate all packets and send copies to
| law enforcement/security services in real time so that they can
| monitor 100% of a given phone's traffic from their offices.
| cs2733 wrote:
| Exactly, this is for third-party access. Or at least I've
| assumed for a long time that all communications using standard
| tools have always been open to scrutiny.
|
| It's nothing new either - for as long as there has been a postal
| service, your mail could be opened by order of this or that
| government official. Not even the Pony Express was immune to
| that ;-)
|
| If you want greater secrecy do like the paranoid intel
| community has been doing since at least the cold war and
| exchange messages on seemingly disconnected channels - say 2
| different newspaper columns, wordpress blogs or even websites
| made to look like robot-generated SEO spam, using code that
| was previously agreed in person, with everyone keeping
| hardcopies if necessary.
| Merrill wrote:
| At the time when digital cellular was first designed, the main
| objective was to design sufficiently strong authentication to
| stem the rampant theft of service then occurring on the analog
| cellular systems. In the US this was estimated at roughly $2
| billion/annum.
|
| Encrypting traffic for privacy purposes was less important. Prior
| analog cellular telephony systems were unencrypted, as were
| analog and digital wireline services. Thus, the privacy
| cryptography was intended to be strong enough to make it a little
| more inconvenient/expensive to eavesdrop on digital cellular than
| it was on analog cellular or wireline services without
| significantly impeding law enforcement.
| aliasEli wrote:
| These algorithms are pretty weak but can still be used to
| secure communications with current phones.
| daveguy wrote:
| If the security level is artificially limited to 40 bits as the
| article suggests, then it is not good for securing any
| communications. It was already feasible to crack DES-56 by
| brute force before the turn of the century. A teraflop machine
| can defeat 40-bit encryption on the order of seconds.
| Edit: fixed the incorrect "RSA-56" encryption. Thanks graderjs.
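The brute-force arithmetic is easy to demonstrate: each key bit doubles the search, so a 20-bit toy keyspace falls in well under a second even in pure Python, and 40 bits is only about a million times more work. A sketch using a hypothetical stand-in cipher (SHA-256 as a keystream generator, nothing like GEA-1):

```python
import hashlib

def toy_encrypt(key_int, plaintext):
    """Hypothetical stand-in cipher: stretch a small integer key into a
    keystream with SHA-256 and XOR it into the plaintext."""
    keystream = hashlib.sha256(key_int.to_bytes(4, "big")).digest()
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

KEY_BITS = 20                             # toy keyspace: 2^20 ~ a million keys
secret_key = 0xBEEF5                      # fits in 20 bits
known_plaintext = b"GET / HTTP/1.1\r\n"   # predictable header, as in GPRS/IP
ciphertext = toy_encrypt(secret_key, known_plaintext)

# Exhaustive search. 2^20 is instant; 2^40 is ~10^6x more work (feasible
# with modest dedicated hardware); 2^64 is another ~10^7x beyond that.
recovered = next(k for k in range(1 << KEY_BITS)
                 if toy_encrypt(k, known_plaintext) == ciphertext)
assert recovered == secret_key
```

The same loop at 2^40 is out of reach for interpreted Python but trivially parallel across GPUs or FPGAs, which is the regime "export grade" keys were designed to land in.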
| graderjs wrote:
| I think you might mean DES-56, or RC5, but not RSA.
| jhgb wrote:
| I imagine that _technically_ RSA-56 would be easy to crack
| as well.
| axoltl wrote:
| To be fair you could probably break 56 bit RSA with a pen
| and paper.
| daveguy wrote:
| Yes, good catch. I did mean DES-56. I got the encryption
| name confused with the company that issued the challenge.
| The RSA algorithm is completely different.
| meepmorp wrote:
| Also, note, GEA-2 doesn't seem to have the same kind of key
| weakening. I have no idea about the relative prevalence and
| configurability of those algorithms in the wild, though.
___________________________________________________________________
(page generated 2021-06-16 23:00 UTC)