[HN Gopher] End-to-end encrypted messages need more than libsignal
___________________________________________________________________
End-to-end encrypted messages need more than libsignal
Author : tpush
Score : 208 points
Date : 2022-12-10 03:34 UTC (19 hours ago)
(HTM) web link (mjg59.dreamwidth.org)
(TXT) w3m dump (mjg59.dreamwidth.org)
| dvh wrote:
| Why not simply use gpg?
| tao_oat wrote:
| forward secrecy, for one: https://signal.org/blog/asynchronous-
| security/
| upofadown wrote:
| Forward secrecy is somewhat overrated in end-to-end
| encrypted messaging. Most people do not want a truly
| off-the-record experience but instead keep their old
| messages around indefinitely. As long as those old
| messages exist and are accessible to the user, they will
| be just as accessible to any attacker who gets access to
| the secret key material.
|
| The Signal Protocol somewhat excessively provides forward
| secrecy for each and every message sent. That is sort of
| pointless while the messages still exist on the screen. Most
| people would be happy getting rid of their old messages every
| week or so. You could totally do that in an instant messaging
| system that used OpenPGP formatted messages. The reason that
| no one bothers is because few people want to dump their old
| encrypted emails. No one wants to dump their old encrypted
| files. Instead they take advantage of the greater security
| inherent in an offline encryption system and avoid getting
| their keys leaked in the first place.
|
| If you really wanted to do message by message forward secrecy
| using a hash ratchet using OpenPGP formatted messages you
| could do that too. There is nothing magical about the Signal
| Protocol for stuff like that...
|
| Relevant discussion:
|
| * https://articles.59.ca/doku.php?id=pgpfan:forward_secrecy
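The message-by-message hash ratchet the comment above alludes to can be sketched in a few lines. This is an illustrative toy, not the Signal Protocol's actual KDF chain; the labels and hash choice are assumptions made for the example:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a per-message key and the next chain key from the
    current chain key; the caller discards the old chain key,
    which is what yields forward secrecy."""
    # Domain-separate the two derivations with distinct labels.
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

# Both parties start from a shared secret and step in lockstep.
ck = hashlib.sha256(b"shared secret").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)   # the old ck is overwritten, so past
    keys.append(mk)        # message keys can't be recomputed
```

Because SHA-256 is one-way, an attacker who steals the current chain key cannot walk the chain backwards to recover earlier message keys, which is the property being discussed, and nothing about it requires the Signal wire format.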
| wizeman wrote:
| I thought the point of forward secrecy in end-to-end
| encrypted messaging was to protect past conversations at
| the transport layer against the key being compromised?
|
| In other words, the point being that a man-in-the-middle
| attacker cannot decrypt past conversations that he recorded
| even if in the future he is able to determine the key?
|
| It's kind of obvious that you cannot prevent the other
| party from saving the messages, but from what I understand
| I don't think that's what forward secrecy is even trying to
| do (disclaimer: I'm not a cryptographer).
| kevdev wrote:
| You're correct.
| upofadown wrote:
| >I thought the point of forward secrecy in end-to-end
| encrypted messaging was to protect past conversations at
| the transport layer against the key being compromised?
|
| Yes. Exactly. Only the transport part is protected.
| Contrast, say, TLS with messaging. TLS is basically an
| encrypted pipe. Plaintext goes in and plaintext comes
| out. If someone saves some of that plaintext and it gets
| leaked, well, that isn't your job to prevent that but you
| can at least provide forward secrecy. After all, people
| don't normally save their sensitive web pages for
| extended periods of time...
|
| With messaging, saving old messages is more or less the
| default. When that happens the value of forward secrecy
| is negated. If you want your old messages to be gone, you
| (and your correspondent) actually have to get rid of
| them.
|
| >...man-in-the-middle attacker...
|
| Terminology quibble. I think this would normally be
| described as recording the encrypted messages off the
| wire. MITM implies that an attacker is impersonating one
| or more correspondents.
| wizeman wrote:
| > Yes. Exactly. Only the transport part is protected.
|
| Well, isn't that valuable?
|
| > With messaging, saving old messages is more or less the
| default. When that happens the value of forward secrecy
| is negated.
|
| It's not negated because a passive attacker that records
| communications and then, in the (potentially far) future,
| somehow can obtain the key (say, by exploiting some
| weakness and/or brute-forcing), still cannot decrypt your
| past communications, regardless of whether everybody
| saves old messages or not.
|
| By passive attacker, I mean someone like the NSA, your
| ISP, your messaging provider, the server/P2P host that
| relays your messages, etc.
|
| > If you want your old messages to be gone, you (and your
| correspondent) actually have to get rid of them.
|
| But that's not what forward secrecy is designed to do, is
| it? It's designed to prevent _third parties_ who can
| record the encrypted end-to-end communication from
| decrypting past messages when /if they can obtain your
| key.
|
| It's not designed for making old messages be gone.
|
| > Terminology quibble. I think this would normally be
| described as recording the encrypted messages off the
| wire. MITM implies that an attacker is impersonating one
| or more correspondents.
|
| Yes, sorry, I meant a "passive man-in-the-middle
| attacker".
| upofadown wrote:
| >It's not negated because a passive attacker that records
| communications and then, in the (potentially far) future,
| somehow can obtain the key (say, by exploiting some
| weakness and/or brute-forcing), still cannot decrypt your
| past communications, regardless of whether everybody
| saves old messages or not.
|
| If someone breaks the encryption somehow then forward
| secrecy is also negated. They get the encrypted material
| directly. Forward secrecy is only effective in messaging
| if the attacker does something like break into your
| device to get the secret key material. At that point they
| will also get any saved messages that are still
| accessible to you, in whatever way they are accessible.
| RunSet wrote:
| > The Signal Protocol somewhat excessively provides forward
| secrecy for each and every message sent. That is sort of
| pointless while the messages still exist on the screen.
|
| Or if the message's recipient took a screenshot.
| aborsy wrote:
| You use hardware keys and don't care about forward secrecy.
| Private key never leaks.
|
| For offline file encryption (PGP isn't intended for
| messaging).
| hiq wrote:
| > You use hardware keys and don't care about forward
| secrecy. Private key never leaks.
|
| I guess you mean "leak" as in "being copied", which relies
| on how good the hardware actually is at preventing this,
| but what if I just lose my hardware key, isn't that the
| same?
| [deleted]
| NotYourLawyer wrote:
| 3np wrote:
| Just like not aligning with the views of Donald Trump means you
| are anti-freedom?
|
| /s
| klabb3 wrote:
| Anecdotal, but I'm a stronger-than-most believer in free
| speech, and I don't think Musk is doing anything other
| than lip service and a clever anchoring of free speech as
| a value that he, quite successfully, associated with
| himself. It's textbook manipulative behavior: claim you
| strongly believe in something early, and then people
| won't question it. If you pretend he never claimed to be
| pro-free-speech and judge his actions alone, does he feel
| like someone who genuinely strives towards free speech?
| He surely wants to be contrarian, but free speech is much
| more than edginess.
| pcwalton wrote:
| The author wasn't stating whether the bulk of the
| cryptography community _should_ be aligned with Musk's
| views. He's saying that they _aren't_. That's a
| descriptive statement, not a normative one.
|
| I'm not a member of the cryptography community, but the author
| is a prominent cryptographer [edit: not accurate; see below],
| and so right now I have every reason to believe that his
| statement is correct.
| tptacek wrote:
| The author isn't a prominent cryptographer, but he definitely
| knows a lot of them, and is a prominent security engineer and
| researcher.
| pcwalton wrote:
| Thanks for the correction.
| NotYourLawyer wrote:
| Cryptographers as a group are pretty pro-speech and anti-
| censorship.
| throwaway0x7E6 wrote:
| >He's saying that they aren't
|
| I would like a few examples of the people the author
| considers to be "members of the cryptography community".
|
| It's a pretty wild claim, considering that we owe the
| current state of cryptography to the 1A, and I assume
| most people involved in the field are aware of that.
| joshuamorton wrote:
| Supporting the first amendment, and thinking Elon's changes
| to Twitter policy are bad are completely coherent beliefs.
| Any 1A lawyer will tell you that Twitter has never been
| bound by the first amendment.
| throwaway0x7E6 wrote:
| and ISPs are not prohibited by any law from blackholing
| encrypted traffic in the name of combatting CSAM and hate
| speech to make the internet safer and more inclusive for
| everyone
|
| would you be cool with it if they did that?
| joshuamorton wrote:
| I think that does violate common carrier statutes.
|
| But even if it didn't, such an ISP would immediately go
| out of business, since 90+% of web traffic is encrypted.
|
| But yeah, if an ISP did that, I'd be fine with it
| because multiple ISPs serve my area, so I'd pick a
| different one. Insofar as there are people who only have
| access to one ISP, I think the common carrier statutes
| apply. (And actually there are ISPs that advertise
| filtered DNS resolvers that apply even stricter content
| filtering, and I think that's fine. I just choose not to
| use those.)
|
| But Twitter isn't a monopoly in any sense. It's one of
| the smaller social media sites, and even if you try to
| exclude Facebook from Twitter's niche, TikTok,
| truth.social, and others are similar enough. As has been
| clearly demonstrated by Trump's choice not to return to
| Twitter.
| habinero wrote:
| Neither Twitter nor Elon are bound by the 1A, so it's
| irrelevant.
|
| Also, why is the claim surprising? Elon has really only
| paid lip service to free speech, his actions have been all
| over the place.
| djbusby wrote:
| Can ActivityPub deliver E2E? I've read docs but don't have enough
| experience to fully grok. Do I still have to figure out key-
| sharing?
| LanternLight83 wrote:
| I don't think it's in the spec (because I don't see it
| implemented), but I also don't see any blocking issues
| regarding an extension to the spec to provide this (aside from
| the usual issues with "server-side" E2E and roaming/multi-
| device access)
| franky47 wrote:
| Related: https://soatok.blog/2022/11/22/towards-end-to-end-
| encryption...
| 9wzYQbTYsAIc wrote:
| Right now you could PGP encrypt what you post and only share
| your key with your intended audience, but that would seem to
| defeat the purpose of publishing activities to the open web.
| thaumasiotes wrote:
| Seems like the author has an axe to grind.
|
| > When you want to send a message to someone, you ask the server
| for one of their one-time prekeys and use that. Decrypting this
| message requires using the private half of the one-time prekey,
| and the recipient deletes it afterwards. This means that an
| attacker who intercepts a bunch of encrypted messages over the
| network and then later somehow obtains the long-term keys still
| won't be able to decrypt the messages, since they depended on
| keys that no longer exist.
|
| > Since these one-time prekeys are only supposed to be used once
| (it's in the name!) there's a risk that they can all be consumed
| before they're replenished. The spec regarding pre-keys says that
| servers should consider rate-limiting this, but the protocol also
| supports falling back to just not using one-time prekeys if
| they're exhausted (you lose the forward secrecy benefits, but
| it's still end-to-end encrypted).
|
| > This implementation not only implemented no rate-limiting,
| making it easy to exhaust the one-time prekeys, it then also
| failed to fall back to running without them. Another easy way to
| force DoS.
|
| You just described a security/convenience tradeoff in the use of
| prekeys. OK. Then, the app's choice to go with security over
| convenience is a security problem, because you're not allowed to
| send messages without forward secrecy. For greater security, the
| app should have allowed messages at a lower level of security.
|
| If you want to make the case that someone did the wrong thing, do
| it in a way where they wouldn't also have been wrong if they did
| the opposite of what you're complaining about.
| klooney wrote:
| Also, Twitter is a great big centralized system- it seems like
| it ought to make public key distribution very easy and
| straightforward.
| djbusby wrote:
| Wasn't that what KeyBase was supposed to do?
| ttyprintk wrote:
| And GitHub is beating Keybase at
| tptacek wrote:
| This is a blog post relating flaws the author found in systems
| that naively used libsignal. The "axe" to grind here is that
| libsignal can be misused.
| thaumasiotes wrote:
| The axe to grind here is that whatever the author mentions
| is, ipso facto, a flaw. I've highlighted an example where the
| author presents as unreasonable a choice to go with greater
| security over lesser security, and labels that choice a
| security issue. This is not a reasoned or reasonable decision
| on the part of the author. I'm going to generalize the
| quality of argument here, which is terrible, to whatever else
| the author says.
|
| Want to make the argument that libsignal can be misused?
| First, go through all of your examples and make sure they
| actually show misuse.
| [deleted]
| tptacek wrote:
| All of these examples show flaws in applications that
| libsignal didn't intrinsically mitigate. I think your
| response here is knee-jerk and superficial.
| thaumasiotes wrote:
| By a generous standard in which "they are more secure
| than I claim they should be" is considered a security
| issue, all of these examples show flaws. I don't find
| that convincing, no.
|
| > I think your response here is knee-jerk and
| superficial.
|
| You're not exactly in a position to criticize that.
| https://news.ycombinator.com/item?id=33918022
| mjg59 wrote:
| First, it's not clear that the failure to fall back to
| losing forward secrecy in the event of one-time prekey
| exhaustion is a decision - if you simply error in your
| client code when you try to get a prekey and there aren't
| any, that could also be because you didn't understand
| that libsignal will perform that fallback. But the more
| important point is that the spec is explicit about the
| importance of trying to prevent exhaustion of one-time
| prekeys, and their failure to do that is absolutely an
| issue. I was trying to talk about the consequences of
| that issue, rather than saying that the decision not to
| fall back is inherently itself a security concern.
| javajosh wrote:
| A timely reminder of the non-cryptographic issues you must get
| right, in addition to the cryptographic ones, in order to build a
| robustly secured system!
| tptacek wrote:
| See the recent Matrix fiasco for a more vivid example.
| 2Gkashmiri wrote:
| What?
| ttyprintk wrote:
| A complete break, at most academically:
|
| https://news.ycombinator.com/item?id=33009721
| tptacek wrote:
| What does "at most academically" mean?
| ttyprintk wrote:
| At the time, there was contention about whether all
| implementations needed to be patched. I only remember
| agreement about the academic scope of the protocol
| problem.
| TheCycoONE wrote:
| Can you link to whatever you're referring to?
| tptacek wrote:
| https://nebuchadnezzar-megolm.github.io/
|
| The cryptography itself didn't come out unscathed, but the
| basic nuts and bolts semantics of the system itself had
| devastating vulnerabilities.
| est31 wrote:
| It's a nice piece of research but I wouldn't call it a
| fiasco. Many parts were well engineered, they just found
| some edge cases that yes had devastating effects. It's
| the old attacker's advantage again, in security, even
| edge cases can get you in. Matrix is a complicated
| protocol and tries to do things that other messengers
| can't, like multi device support. The researchers also
| write on the website that Matrix has fixed most of the
| issues (4 of the listed 6 on the website), and for the
| remaining ones there are plans for fixes.
| tptacek wrote:
| All Matrix messages (in practice) are group messages, and
| servers control the keys for all Matrix groups.
| "Devastating" isn't strong enough. They killed it. It's
| dead.
|
| The response (or lack thereof) to this research is pretty
| fascinating!
| est31 wrote:
| > The response (or lack thereof) to this research is
| pretty fascinating!
|
| What about: https://matrix.org/blog/2022/09/28/upgrade-
| now-to-address-en...
|
| > "Homeserver Control of Room Membership" - A malicious
| homeserver can fake invites on behalf of its users to
| invite malicious users into their conversations, or add
| malicious devices into its users accounts. However, when
| this happens, we clearly warn the user: if you have
| verified the users you are talking to, the room and user
| will be shown with a big red cross to mark if malicious
| devices have been added. Similarly, if an unexpected user
| is invited to a conversation, all users can clearly see
| and take evasive action. Therefore we consider this a low
| severity issue. That said, this behaviour can be
| improved, and we've started work on switching our trust
| model to trust-on-first-use (so that new untrusted
| devices are proactively excluded from conversations, even
| if you haven't explicitly verified their owner) - and
| we're also looking to add cryptographic signatures to
| membership events in E2EE rooms to stop impersonated
| invites. These fixes will land over the coming months.
| tptacek wrote:
| I think this is an amazing response. The researchers
| demonstrated that a malicious server can inject itself
| into arbitrary Matrix groups, and thus get all the keys
| required to decrypt messages. The response: "we'll just
| try to have an alert pop up that tells you the server is
| reading your messages".
|
| Anyways, my point is just: this is a good illustration of
| the phenomenon described by the root comment.
| est31 wrote:
| > The response: "we'll just try to have an alert pop up
| that tells you the server is reading your messages".
|
| First, they have not said they will add alerts, they say
| that such alerts are already there. Second, they did
| admit that the current situation can be improved, and
| have said they will add cryptographic signatures to
| membership events in the future. To my reading that would
| be precisely what the researchers were missing, aka if
| new folks show up, even with support of the homeserver,
| they still need to present a signed invite from one of
| the group members to be given the keys.
| shp0ngle wrote:
| An honest question, not trying to be argumentative.
|
| If the Signal server is evil and starts to MITM a
| communication between two parties, replacing each
| party's keys with its own and decrypting in the middle,
| both participants will just get the warning "the other
| party changed their key", right?
|
| That was the gist of the issue of yesteryear with
| Guardian and WhatsApp, right?
|
| How is this different in practice? Users see a warning
| and ignore it (as people change their phones, and thus
| their keys, all the time).
|
| I'm not trying to be snarky but actually asking a
| question
| bscphil wrote:
| From the page:
|
| > In environments where cross-signing and verification
| are enabled, adding a new unverified user adds a warning
| to the room to indicate that unverified devices are
| present. However, it is possible for a homeserver to add
| a verified user to rooms without changing the security
| properties of the room. This allows a colluding
| homeserver and verified user to eavesdrop on rooms not
| intended for them. In other words, the warning regarding
| unverified devices is independent to whether the device
| is intended to participate in the specific room. Users
| may, of course, simply ignore warnings.
|
| If I'm not mistaken this is not just a "warning" - Matrix
| clients will actively refuse to encrypt messages to the
| new recipient until they are verified in this situation.
|
| The last time I seriously looked at Matrix
| (2019-2020ish), cross-signing and verification were
| essentially required for real security in chats. It
| sounds to me like (a) the worst case scenario vis-a-vis
| the "rogue server" is that things regress back to where
| they were in 2019, and (b) there's an issue where if
| _everyone_ in a room has a rogue _user_ validated, a
| rogue _server_ can add the _rogue user_ to a room that
| the user is not supposed to be invited to, without
| triggering a warning. This latter issue strikes me as
| something that should definitely be fixed (if it hasn't
| already been), but far from fatal for smaller chats.
|
| "They killed it. It's dead." strikes me as an enormous
| reach.
| Arathorn wrote:
| As others have pointed out...
|
| > They killed it. It's dead.
|
| ...is ridiculous hyperbole. This is a debate around the
| "system warns you when something bad happens" versus
| "system stops something bad happening in the first
| place". It's like saying SSL is "killed dead" because
| browsers let you connect to sites with bad certs albeit
| with a big red warning.
|
| And much as this was addressed for SSL with HSTS etc,
| we're working on the fix for Matrix. For what it's worth,
| the current approach is https://github.com/matrix-
| org/matrix-spec-proposals/blob/fay...
|
| Screaming about Matrix being "killed dead" helps no-one,
| and risks completely sabotaging our efforts to improve
| open communication, just in order to score a rhetorical
| point.
| tptacek wrote:
| You're changing the protocol!
|
| Nobody has ever disputed that it's possible for the
| Matrix team to build a secure messaging protocol. I'm
| sure you can. But: have you yet?
|
| By way of example: TLS 1.0 is dead as a doornail, too. If
| you like, we can use a different word to capture the
| equivalence class here. TLS 1.0 and the current Matrix
| protocol: "ex-parrots".
|
| This whole situation is wild. What we're discussing on
| this thread is the single most impactful set of
| cryptographic vulnerability findings against any secure
| messaging system ever. And this thread is, I think, one
| of a very few places on the whole Internet where there's
| even a conversation about it. Everybody else is just
| rolling along as if nothing happened.
|
| If I sound mean about this, I don't mean to be. But, fair
| warning, I'm in full-on "wake up, sheeple" mode over
| this.
| pseudo0 wrote:
| It has been a bit since I read the paper, but my
| recollection was that the attacks weren't very practical.
| It's great that they responsibly disclosed issues and
| that Matrix tightened up the spec and implementations,
| but requiring a malicious homeserver plus additional
| preconditions made the attacks quite slim edge cases.
| tptacek wrote:
| In practice, all Matrix messages are group messages.
| Servers control group membership. That's it, that's the
| tweet.
|
| People have been dismissing security research with this
| "not very practical" epithet for as long as I've been
| working. Literally, at least since I got into the field.
|
| https://web.archive.org/web/20000818162612/http://www.ent
| era...
| pseudo0 wrote:
| My background is more in the applied end of things, not
| research, so I suppose that colors my perspective.
|
| But let's say I'm playing red team, and I get root on the
| box where my target is running their Matrix server.
| Practically, what can I do to extract information without
| setting off giant red flags? From my read of the paper
| the answer is "not much", since a bunch of big warnings
| popping up would probably alert the users that something
| sketchy is happening.
| tptacek wrote:
| If you're red-teaming a Matrix server and you own the
| server, you can decrypt all the messages of all the
| people using that Matrix server, and you can do it
| trivially, just using the basic protocol messages. Users
| might see extra group members in their groups and they
| might not, depending in part on whether they're running
| the most current version of the software.
|
| You can _decrypt all the messages_. I sometimes feel like
| I must have read a different paper than everybody else.
| RonoDas wrote:
| dfgdfgffd wrote:
| You can't decrypt all the messages, only the ones sent
| despite the warning.
| tptacek wrote:
| This is an amazing assertion.
| pseudo0 wrote:
| You can decrypt all _future_ messages in a noisy way that
| alerts all the users that something weird is going on.
|
| > While the Matrix specification does not require a
| mitigation of this behaviour, when a user is added to a
| room, Element will display this as an event in the
| timeline. Thus, to users of Element this is detectable.
|
| Virtually everyone uses Element, so it's blatantly
| tamper-evident. That makes it a really bad option if red-
| teaming.
| tptacek wrote:
| "Tamper-evident", meaning, Element will list the string
| name of the member the server injected into your group in
| the group list.
|
| (Here it's worth noting that this is the behavior _post-
| fix_ for this paper. Read the paper! It was so much
| worse.)
| pseudo0 wrote:
| I have used Element, and new users joining a room are
| displayed as an event in the timeline. The paper confirms
| that in section III-A, and based on the context I'm 99%
| sure that they mean pre-fix, not post-fix.
|
| With III-B, they can add an unverified device, but that
| causes a warning for every other room participant. Again,
| tamper evident.
| tptacek wrote:
| We're clear that this is "an event in your timeline" that
| says "the server might be decrypting all your messages or
| maybe a new participant joined your group", right?
| pseudo0 wrote:
| If some random user joins my E2EE groupchat, yes, my
| expectation is that some unknown actor is now reading my
| messages... It would be no different than some rando
| getting added to a Signal group. At least in the groups
| I'm in, that would cause a flurry of messages asking who
| they are and who added them.
| dodgerdan wrote:
| What if someone already in the group adds a new device?
| That would hardly cause a flurry or surprise, it's a
| common thing to happen. But using Matrix it could also
| mean whoever controls the server could be reading all
| your messages. And the alert has to cover both scenarios.
| I'm not sure how you could word that in a way that makes
| it actionable, especially for non-tech users. Even if
| they did understand, it would require a mini
| investigation/"flurry" multiple times a month for even a
| modest-sized group. My personal observation is that a
| group of about 50 has 1 new phone every two weeks, with
| seasonal increases around Sept/Oct and January 1st.
| dfgdfgffd wrote:
| If verified, that device will show up red. And you can
| enable sending keys only to verified contacts.
| martinralbrecht wrote:
| This isn't correct. Several of our attacks succeeded
| without any warnings popping up.
|
| Furthermore, you need to distinguish between what Element
| happened to do (which users may or may not watch out for)
| and what the standard demanded.
|
| Note that at this point, as far as I understand it,
| there is no dispute between us - the authors of this
| research - and the Matrix developers that leaving group
| membership under the control of the server was a bad and
| avoidable design decision. The Matrix developers are
| working on a spec change/fix which resolves this, linked
| elsewhere in this thread.
| pkulak wrote:
| Someone needs to invite a malicious homeserver to the
| group.
|
| So what's the scenario here? You have an encrypted group
| chat with so many members that some rando nefarious user
| can slip in? And then the big deal is that the E2E breaks
| down, so the home server that they probably own can read
| the messages that they are already reading?
| tptacek wrote:
| We are talking about end-to-end encryption. "Malicious
| homeserver" is literally the table stakes. If you trust
| your server, you can use anything. Go ahead and use
| Slack.
| dodgerdan wrote:
| This is the issue distilled. And the retort seems to be
| "but we give a warning".
|
| Awfully weak stuff for a cypherpunk-ish protocol. The
| CCC crowd that rabidly hates anything centralised and
| thinks it's insecure and corporate is probably having an
| existential crisis. Matrix is doing a terrible job
| fixing the issues; worse, they seem to be downplaying
| and denying them too. And the tech press seems to
| dismiss the issue, believing Matrix's claims that there
| isn't one.
| Arathorn wrote:
| I'm sorry - _how_ are we doing a terrible job fixing the
| issues? We are working solidly to switch Element over to
| the newly audited vodozemac crypto implementation
| (https://matrix.org/blog/2022/05/16/independent-public-
| audit-...), and then implementing both TOFU and client-
| controlled group membership. https://github.com/matrix-
| org/matrix-spec-proposals/blob/fay...
|
| We are not denying these issues - we just dare to
| disagree that they are as catastrophic as some suggest.
| tptacek wrote:
| Just so we're clear, I didn't say that you're doing a
| terrible job fixing the issues (I know the comment you're
| responding to said that, I'm just being careful not to
| cosign that.)
| martinralbrecht wrote:
| We had working exploits for those vulnerabilities where
| exploiting them wasn't immediately obvious. We shared
| those with the Matrix developers but didn't publish them
| because there was no dispute on whether our attacks were
| practical. So we meant it when we wrote "practically-
| exploitable".
|
| In an end-to-end encrypted setting a malicious server is
| precisely the adversary you defend against, not an edge
| case.
| cvwright wrote:
| I would love to hear what you and your team think about
| Matrix's proposed fixes, especially MSC3917.
|
| To me, it looks pretty good on the surface, but I don't
| know if I can convince myself that it's secure. I'm not
| even sure if I could write down a precise definition of
| security here, without banging my head on it for a while.
| iampivot wrote:
| Maybe it's this? https://www.theregister.com/2022/09/28/mat
| rix_encryption_fla...
| Doorstep2077 wrote:
| walterbell wrote:
| IETF MLS (https://datatracker.ietf.org/wg/mls/about/) was
| ratified in September 2022 by messenger service vendors
| (including Wire, Matrix, Mozilla, Cisco, Google, Facebook), who
| have been working for several years on an E2EE group messaging
| protocol to live alongside TLS. MLS is already deployed in
| production by Cisco WebEx.
|
| _> Messaging applications are increasingly making use of end-to-
| end security mechanisms to ensure that messages are only
| accessible to the communicating endpoints, and not to any servers
| involved in delivering messages. Establishing keys to provide
| such protections is challenging for group chat settings, in which
| more than two clients need to agree on a key but may not be
| online at the same time. In this document, we specify a key
| establishment protocol that provides efficient asynchronous group
| key establishment with forward secrecy and post-compromise
| security for groups in size ranging from two to thousands._
|
| There is now a follow-up IETF project to use MLS for inter-
| messenger interoperability, with Matrix as an early participant,
| https://news.ycombinator.com/item?id=33420112
|
| Review of 2020 draft protocol, https://liu.diva-
| portal.org/smash/get/diva2:1388449/FULLTEXT...
|
| _> Work is now ongoing to introduce the Messaging Layer Security
| (MLS) protocol as an efficient standard with high security
| guarantees for messaging in big groups. This thesis examines
| whether current MLS implementations live up to the promised
| performance properties and compares them to the popular Signal
| protocol. In general the performance results of MLS are promising
| and in line with expectations, providing improved performance
| compared to the Signal protocol as group sizes increase._
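[Editor's note: the forward secrecy the quoted draft promises can be sketched with a toy hash ratchet. This is an illustration only, not MLS itself — MLS derives keys from a key schedule over a ratchet tree, and every name below is hypothetical. The core idea is the same: each message key is derived from a chain key that is then advanced one-way and deleted, so compromising the current state does not reveal earlier keys.]

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive a one-time message key and the next chain key.

    Toy construction for illustration; real MLS uses an HKDF-based
    key schedule over a ratchet tree, not a bare SHA-256 chain.
    """
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

# Members advance the chain for each message and delete old chain keys.
ck = hashlib.sha256(b"initial group secret").digest()
message_keys = []
for _ in range(3):
    mk, ck = ratchet(ck)
    message_keys.append(mk)

# Forward secrecy: the hash cannot be run backwards, so an attacker who
# steals the *current* chain key cannot recompute the deleted message
# keys of earlier messages.
assert len(set(message_keys)) == 3
```

Post-compromise security is the reverse property — fresh entropy mixed in after a compromise locks the attacker back out — and is where MLS's tree structure does real work that a plain hash chain cannot.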
| dane-pgp wrote:
| > a follow-up IETF project to use MLS for inter-messenger
| interoperability
|
| It's worth highlighting (as a comment in the linked discussion
| does) that the EU's newly-passed Digital Markets Act requires
| interoperability between the messaging apps of large
| "gatekeeper" platforms.
|
| If Twitter does end up supporting E2EE messaging, it may soon
| be forced to implement MLS, and if Facebook and Google are
| really supportive of MLS then soon(TM) most people online will
| be able to communicate via that technology.
| saurik wrote:
| Notably, this design lacks reputability, which for some reason
| they didn't even want (as it might be used by "terrorists"),
| which led to arguments with Ian Goldberg, the developer of
| Off-the-Record messaging. The arguments on the bug tracker
| about power imbalances were maybe a bit better, but I still
| personally disagree.
|
| https://mailarchive.ietf.org/arch/msg/mls/ZJ4e78obXSdYWnxmsN...
|
| https://github.com/mlswg/mls-architecture/issues/50
| walterbell wrote:
| While deniability would be welcome, the last few years have
| seen rumblings from various Australia/UK/US stakeholders
| about possible regulation for E2EE messenger intercept, i.e.
| even encryption would be compromised in that scenario.
| Hopefully the recent Apple announcement on E2EE iCloud means
| that regulators have seen the folly of mass interception.
|
| If China (e.g. WeChat) is a predictor of future messaging
| systems, there may be convergence between messaging and
| payments. Musk has publicly suggested that Twitter follow in
| WeChat's footsteps as an "Everything App". If payments and
| messaging converge, and governments adopt CBDCs for digital
| legal tender, then we may be looking down the barrel of
| state-issued digital IDs for online and offline activity.
|
| Linux Foundation is promoting blockchain for both software
| signing and globally interoperable travel/health credentials.
| Big tech companies each have their own island of semi-
| verified identities, depending on their proximity to
| financial services. It's not clear which groups are willing
| or able to effectively lobby for pseudonymous digital
| identity.
| bentley wrote:
| > reputability
|
| The word you're looking for is "repudiability."
|
| Although one might say that lacking repudiability could be
| enough for a messaging system to lack reputability ;)
| JustSomeNobody wrote:
| > Messaging applications are increasingly making use of end-to-
| end security mechanisms to ensure that messages are only
| accessible to the communicating endpoints, and not to any
| servers involved in delivering messages.
|
| You have to put trust in the app (and the company that owns
| that app), though.
|
| In Twitter, I type a message in plain text. The Twitter app
| encrypts it and sends it to the recipient. I am trusting
| Twitter not to encrypt it twice, once with my key and once
| with theirs, capturing their copy in flight.
|
| With current trends at Twitter, I wouldn't trust them an inch.
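[Editor's note: the trust problem described above can be made concrete with a toy sketch. Everything here is hypothetical — the field names, the keys, and the "cipher" (a SHA-256 keystream XOR, which no real app should use). The point is only that an honest upload and a malicious upload carrying a server-readable shadow copy look identical from the recipient's side, so E2EE guarantees are only as good as the client code you are running.]

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher keyed by hashing; illustration only.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

recipient_key = b"shared-with-recipient"   # hypothetical E2EE key
operator_key = b"known-only-to-operator"   # key the server holds
msg = b"meet at noon"

honest_upload = {"to_recipient": toy_encrypt(recipient_key, msg)}

# A compromised client can silently attach a second ciphertext that
# the operator can decrypt; the recipient-facing payload is identical.
malicious_upload = {
    "to_recipient": toy_encrypt(recipient_key, msg),
    "shadow_copy": toy_encrypt(operator_key, msg),
}

assert honest_upload["to_recipient"] == malicious_upload["to_recipient"]
# XOR is its own inverse, so the operator recovers the plaintext:
assert toy_encrypt(operator_key, malicious_upload["shadow_copy"]) == msg
```

This is why reproducible builds and client audits matter as much as the protocol: nothing in the wire format distinguishes the two uploads.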
| Null-Set wrote:
| How is Twitter intending to use libsignal? I doubt it would be
| via the primary AGPL license[1], forcing them to publish the
| source code of their backend service. Does signal sell private
| licenses?
|
| [1] https://github.com/signalapp/libsignal/blob/main/LICENSE
| samwillis wrote:
| I believe they already have a license (it may have expired):
|
| > "agreement to license Signal protocol was reached"
|
| https://mobile.twitter.com/bhcarpenter/status/15966919162503...
| threeseed wrote:
| Signal requires all contributors to license their work to the
| company [1]
|
| So they are able to offer Twitter a non-AGPL license, usually
| for a sizeable fee.
|
| [1] https://signal.org/cla/
| getcrunk wrote:
| What are people's thoughts on dual licensing in this manner?
| Is it compatible with FOSS or no?
| icelancer wrote:
| It's not compatible with FOSS from a dogmatic point of
| view, but I think it's a pragmatic way forward given how
| AWS and other actors act. I dual license my company's
| products in this manner.
| sam_bristow wrote:
| Even _Stallman_ isn't completely opposed to selling
| exceptions to copyleft licences.
|
| https://www.fsf.org/blogs/rms/selling-exceptions
| sparkie wrote:
| It is compatible. The people who have a say are the people
| who write the code. If you are not comfortable with dual
| licensing, don't contribute upstream.
|
| You can fork such a dual-licensed project, add your own
| contributions, and then Signal cannot use your code in
| anything they license to third parties under non-AGPL
| terms. The fork becomes AGPL-only.
|
| Example of where this has happened: MySQL was forked to
| MariaDB, so developments to MariaDB cannot be used by
| Oracle in their proprietary licensing of MySQL.
| threeseed wrote:
| I don't think it's compatible with the spirit of FOSS.
|
| You're giving the company (in this case Signal) an open-
| ended license for your contributions whilst they don't
| return the favour. So whilst they can offer a proprietary
| version (e.g. a "Signal Pro") under a friendlier license
| such as BSD, you can't.
|
| And what underpins all of this is that many companies,
| especially in the enterprise, will not touch anything that
| is GPL or AGPL.
| varajelle wrote:
| With BSD, you're giving every company the right to take
| your code and use it without returning any favour at all.
|
| With the CLA it's the same. But at least Signal gives back
| by still providing libsignal as free software.
|
| It depends on your motivations for contributing. If you
| don't like the CLA or any other things, you can still
| fork. Even if it may not be compatible with your own
| motivations, it is still in the spirit of free software.
| tptacek wrote:
| Huh? This is like saying that the AGPL isn't compatible
| with FOSS. I've heard stories that there was a window of
| time where RMS was advocating for the AGPL terms to be
| incorporated into GPL3. If your definition of FOSS
| excludes the GPL --- as your last sentence suggests ---
| your definition is... unusual.
| the_gipsy wrote:
| The key of the comment you're responding to is signing
| away copyright, not AGPL/GPL.
| threeseed wrote:
| FOSS is a very big umbrella that incorporates a wide
| variety of licenses.
|
| But for me AGPL is not in the spirit of FOSS. It's like
| you're an unpaid employee of some company who is happy to
| take your work as long as you don't compete with them.
|
| I understand why AGPL was invented i.e. to prevent AWS
| ripping your project off but I don't think it's a
| particularly fair license for individual contributors.
| tptacek wrote:
| At the point where you've defined the FSF out of FOSS,
| you've departed the ordinary definition of the term.
| adament wrote:
| Interestingly the FSF also requires copyright assignment
| for contributions to GCC, Emacs and some other GNU
| software.
| ediblelint wrote:
| The FSF does appear to prefer copyright assignment over
| DCOs[1], but as of last year GCC contributions only
| require a DCO[2].
|
| [1] https://www.fsf.org/blogs/licensing/FSF-copyright-
| handling [2] https://lwn.net/Articles/857791/
| [deleted]
| hiq wrote:
| Aren't you talking about copyright assignments rather
| than AGPL? You can definitely compete using AGPL, you
| just have to publish your changes (without necessarily
| assigning the copyrights), just like the company you're
| competing with has to if they start using your changes
| and you haven't assigned your copyrights to them.
| shp0ngle wrote:
| agpl was definitely not invented to prevent AWS stealing
| your project.
|
| Affero predates AWS by a lot. It was invented because FSF
| wants users to be able to see software they are using,
| but they cannot if it's a remote server.
|
| It has been used recently as magic anti-AWS wand, but it
| wasn't built for that
|
| edit: I am wrong in my dating! Affero is 2007, AWS was
| launched 2006. But yeah it was never meant as anti-AWS
| magic wand.
| reachableceo wrote:
| I think you are being pedantic.
|
| AWS, TiVo, whatever. Lots of orgs exploited the contractual
| logic gap in GPL v2. AWS is the latest in a long list of
| freeloaders.
|
| I license all my / my startup's code under AGPL v3. I don't
| offer anyone a proprietary license and I don't ask anyone
| for a CLA.
|
| Defense against freeloaders and also tyrants. My operating
| agreement for my LLC has language about this stuff. It's
| that core to our DNA.
| jrochkind1 wrote:
| It is compatible if one of the licenses offered is actually
| a real open source license that is available to all. The
| AGPL is really an open source license. Licenses which
| restrict based on type of use or characteristics of user
| are not.
| bsaul wrote:
| IANAL but my understanding is that the AGPL requires you to
| publish server code if that code uses AGPL-licensed code,
| because consuming an API is treated as equivalent to
| distributing the code. I.e. "calling an API which uses AGPL
| code" is equivalent to using the code itself (which means
| you should, as an API consumer, have access to the source
| even though you actually downloaded no binary).
|
| E2E means the server never does any encryption or
| decryption, so I don't think they'll have any Signal code
| running on their server.
|
| They will however be required to open source their application
| code.
|
| Edit: and I think open sourcing the Twitter app would be a
| stunning move, one that Elon is totally capable of.
| mjg59 wrote:
| There seems to be a bunch of code in the Twitter app that's
| licensed from other third parties, so unless they're fine
| with their code being re-licensed under AGPL it's not
| possible to release the app under those terms.
| eadmund wrote:
| > "calling an api which uses agpl code" is equivalent to
| using the code itself
|
| No, not at all. If a server is AGPL-licensed, clients may
| have whatever license they want. If a client is AGPL-
| licensed, servers may have whatever license they want. What
| the AGPL does is provide that if code under the AGPL is used
| to provide network services, then clients of those servers
| are entitled to the server source code. You may read the
| terms here: https://www.gnu.org/licenses/agpl-3.0.en.html
|
| There is no copyright mechanism by which the AGPL could apply
| to clients of an AGPL server.
| bsaul wrote:
| I don't think we understood each other. I'm saying Twitter
| should have to release their client code *if their client
| code uses libsignal*.
|
| What I'm saying about server code is the logic of the AGPL.
| The GPL is about providing the means for users to obtain
| the source code of the binary they're running. There was a
| gap with APIs, where a given library feature could be
| exposed through network services, and since no binary was
| downloaded, the code wasn't required to be open sourced.
| The AGPL fixes that.
|
| But we both agree that client code and server code are
| independent relative to their licensing terms.
___________________________________________________________________
(page generated 2022-12-10 23:01 UTC)