[HN Gopher] Website Fingerprinting on Early QUIC Traffic
       ___________________________________________________________________
        
       Website Fingerprinting on Early QUIC Traffic
        
       Author : pueblito
       Score  : 280 points
       Date   : 2021-01-30 15:37 UTC (7 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | tyingq wrote:
       | _" allowing adversaries to infer the users' visited websites by
       | eavesdropping on the transmission channel."_
       | 
       | Ah, ok, that kind of fingerprinting. I suppose then this might be
        | where our local ISPs find a way to replace their now threatened
       | DNS query sniffing. Assuming 95.4% accuracy means what I think it
       | does, that's pretty impressive.
        
         | wolfium3 wrote:
         | Hah. I accidentally read "advertisers" instead of
         | "adversaries". Still valid I guess...?
        
           | lrossi wrote:
           | That's exactly how I read it as well!
        
             | kibwen wrote:
             | In the same way that we use "i18n" to abbreviate
             | "internationalization", I propose we start using "adver5s"
             | for both of these. :P
        
         | londons_explore wrote:
         | I bet you could get 95.4% accuracy by simply looking at the IP
         | addresses that users connect to.
         | 
         | Each webpage comes from an IP, but includes subresources from a
         | whole set of domains. I would guess that for the vast majority
         | of web sites, that set of domains is unique.
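A toy sketch of that guess, with entirely made-up page profiles: treat each page as the set of domains it pulls subresources from, and have the eavesdropper look for an exact match against the observed set.

```python
# Toy sketch of the idea above: fingerprint a page by the set of
# domains it loads subresources from. All profiles here are made up.
page_profiles = {
    "news.example": {"news.example", "cdn.example", "ads.example"},
    "shop.example": {"shop.example", "cdn.example", "pay.example"},
    "blog.example": {"blog.example", "cdn.example"},
}

def guess_page(observed_domains):
    """Return the pages whose subresource-domain set matches exactly."""
    observed = set(observed_domains)
    return [page for page, profile in page_profiles.items()
            if profile == observed]

# An eavesdropper who sees lookups for these three domains gets a
# unique match:
print(guess_page(["news.example", "cdn.example", "ads.example"]))
```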
        
           | tyingq wrote:
           | I'm guessing not, as they say _" whereas only 60.7% when on
           | HTTPS"_.
        
         | bostik wrote:
         | I skimmed the paper and read it as an improved passive attack
         | against ESNI and Do{T,H} setups. There's nothing really new,
         | other than QUIC making it easier to identify the destinations.
         | 
         | Traffic analysis is a harsh villain.
        
           | jeroenhd wrote:
           | Sadly, eSNI is dead in the water, but an alternative is being
           | worked on. It'll just take longer for it to come to market.
        
       | [deleted]
        
       | jpcsmith wrote:
       | We had also investigated the differences in fingerprintability
       | between HTTP/QUIC and HTTPS (WF in the Age of QUIC, PETS'21). We
       | had found equivalent fingerprintability with deep-learning
       | classifiers when eavesdropping on the traffic through Wireguard.
       | It's interesting though to see the stark difference they found
       | between fingerprinting HTTP/QUIC and HTTPS when using only the
       | first ~40 packets. The trends in those early packets had also
       | allowed us to easily differentiate between the two types of
       | traffic over the VPN.
       | 
       | Our paper, in case you want to read more on this area:
       | https://petsymposium.org/2021/files/papers/issue2/popets-202...
        
       | mdale wrote:
        | It's an odd state of the world where we will have to add significant
       | amounts of noise to prevent browsers from revealing what site
       | they are visiting because of traffic request protocols getting
       | too efficient and browsers trying to be efficient in predictable
       | ways.
        
         | zdragnar wrote:
         | Isn't predictability the problem? It seems that many hardware
         | and software attacks, at a very very high level, amount to
         | being too predictable.
        
           | monadic3 wrote:
           | No, the problem is the business model that incentivized
           | selling their users up the river.
        
           | infogulch wrote:
           | I don't find it particularly surprising that optimizing for
           | efficiency makes systems more predictable -- efficient
            | implementations often lie in local minima that are
            | independently discoverable.
        
         | nine_k wrote:
         | Remember how truly secure channels keep transmitting random
         | noise at a constant rate when they are not transmitting
         | encrypted messages at the same rate.
         | 
         | Anything which is non-uniform is an attack surface. If an
         | efficiency improvement makes things more predictable, then it
         | adds a possibility of a new snooping attack. All your
         | interactions have to be as inefficient as your most inefficient
          | interaction, else an adversary can tell them apart.
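A minimal sketch of the uniform-traffic idea, assuming an arbitrary 512-byte cell size (the length framing needed to strip padding on the receiving side is elided): every message, and even silence, goes out as equal-sized cells, so sizes alone reveal nothing.

```python
# Minimal sketch of the constant-rate idea above. The 512-byte cell
# size is an arbitrary choice for this example; real systems pick
# their own, and also pace cells at a constant rate.
CELL = 512

def to_cells(payload: bytes) -> list[bytes]:
    """Split a payload into fixed-size cells, zero-padding the last.
    When there is nothing to send, emit one all-padding cell anyway,
    so idle periods look identical to real traffic."""
    cells = [payload[i:i + CELL].ljust(CELL, b"\x00")
             for i in range(0, len(payload), CELL)]
    return cells or [b"\x00" * CELL]

assert all(len(c) == CELL for c in to_cells(b"GET /inbox"))
assert to_cells(b"") == [b"\x00" * CELL]  # idle looks like traffic
```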
        
           | anticristi wrote:
           | I never thought about this. Also, I find it depressing.
           | 
           | Basically my browser needs to "keep watching some YouTube
           | stream", so an attacker can't figure out that I switched to
           | reading emails.
        
       | 1vuio0pswjnm7 wrote:
       | Pipelining, HTTP/1.1-style, not necessarily SPDY, HTTP/2 or QUIC-
       | style, can effectively counter this sort of fingerprinting that
       | relies on analysis of request-response sizes.^1 I have used
       | HTTP/1.1 pipelining outside the browser for bulk data retrieval
       | for decades. Although I do not normally randomise requests, the
       | UNIX-style filters I wrote to do pipelining could easily be used
       | for this purpose.
       | 
       | 1. https://blog.torproject.org/experimental-defense-website-
       | tra...
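For illustration, a pipelined batch is just several requests concatenated into one write; the hostname and paths below are placeholders.

```python
def pipeline(host: str, paths: list[str]) -> bytes:
    """Concatenate several HTTP/1.1 GET requests into a single write,
    in the pipelining style described above. The server answers them
    in order on the same connection; randomising the order of `paths`
    before calling this would perturb the size/timing pattern an
    observer sees."""
    return "".join(
        f"GET {p} HTTP/1.1\r\nHost: {host}\r\n\r\n" for p in paths
    ).encode()

batch = pipeline("example.com", ["/a.css", "/b.js", "/c.png"])
# Three requests leave in one send() instead of three round trips.
```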
        
       | jamescun wrote:
       | What are the proposed benefits of QUIC?
       | 
       | May be misguided, but I feel a little uneasy about bundling TCP
       | functionality, TLS and HTTP into a single protocol over UDP.
        
         | drewg123 wrote:
         | As a counterpoint to the advantages, the disadvantage is that
          | QUIC is much more expensive on the server side. You lose 25
         | years of stateless hardware offloads done by the NIC.
         | 
         | Measuring the workload that I care about (Netflix CDN serving
          | static videos), losing TSO, LRO, and software kTLS, like you
         | would with QUIC, takes a server from serving 90Gb/s @ 62% CPU
         | to serving 35Gb/s and CPU limited. So we'd need essentially 3x
         | as many servers to serve via QUIC as we do to serve via TCP.
         | That has real environmental costs.
         | 
         | And these are old numbers, from servers that don't support
          | hardware kTLS, so the impact of losing hardware kTLS would
         | make things even worse.
        
           | vasilvv wrote:
           | (I work for Google, opinions my own)
           | 
           | For YouTube, the CPU cost of QUIC is comparable to TCP,
            | though we did spend years optimizing it. [0] has a nice deep
           | dive.
           | 
            | Other CDN vendors like Fastly seem to have a similar
           | experience [1].
           | 
           | I believe sendmmsg combined with UDP GSO (as discussed, for
           | instance, in [2]) has solved most of the problems that were
           | caused by the absence of features like TSO. From what I
           | understand, most of the benefit of TSO comes not from the
           | hardware acceleration, but rather from the fact that it
            | processes multiple packets as one for most of the transmission
           | code path, meaning that all per-packet operations are only
           | invoked once per chunk (as opposed to once per individual IP
           | packet sent).
           | 
           | [0] https://atscaleconference.com/videos/networking-
           | scale-2019-i...
           | 
           | [1] https://www.fastly.com/blog/measuring-quic-vs-tcp-
           | computatio...
           | 
           | [2] https://blog.cloudflare.com/accelerating-udp-packet-
           | transmis...
        
             | drewg123 wrote:
              | GSO is better than nothing, but it's still far more
             | expensive than actual hardware TSO that swallows a 64KB
             | send in hardware. At least it was when I did a study of it
             | when I was at Google. This is because it allocates an skb
             | for each MTU sized packet to hold the headers. I think it
             | costs something like 1.5-2.0x real hardware backed TSO,
             | mostly in memory allocation, freeing, and cache misses
             | around header replication.
             | 
             | Also, sendmmsg still touches data, and this has a huge
             | cost. With inline kTLS and sendfile, the CPU never touches
             | data we serve. If nvme drives with big enough controller
             | memory buffers existed, we would not even have to DMA NVME
             | data to host RAM, it could all just be served directly from
             | NVME -> NIC with peer2peer DMA.
             | 
             | Granted, we serve almost entirely static media. I imagine a
             | lot of what YouTube serves is long-tail, and transcoded on
             | demand, and is thus hot in cache. So touching data is not
             | as painful for YouTube as it is for us, since our hot path
             | is already more highly optimized. (eg, our job is easier)
             | 
             | I tried to look at the Networking@Scale link, but I just
             | get a blank page. I wonder if Firefox is blocking something
             | to do with facebook..
        
           | jeffbee wrote:
           | Horses for courses. Is anyone suggesting QUIC for gigantic
           | bulk transfers? TCP isn't great for latency-sensitive small
           | transfers so we will have QUIC. QUIC will not be perfect for
           | everything, like TCP is not perfect for everything.
        
             | drewg123 wrote:
             | I'm pretty sure YouTube uses QUIC for video streaming
             | today, which is a similar workload to ours.
        
               | virattara wrote:
               | > I'm pretty sure YouTube uses QUIC for video streaming
               | today
               | 
               | Is there any media transfer protocol based on QUIC?
        
               | elithrar wrote:
               | Lucas Pardue, who is the IETF QUIC WG chair, has a great
               | talk from Mux's Demuxed conf last year on using QUIC as a
               | transport for UDP streaming: https://www.youtube.com/watc
               | h?v=Zdkjd7-EWmQ&feature=emb_titl...
               | 
               | This is not dissimilar from SRT.
        
               | drewg123 wrote:
               | I found an article from 4 years ago saying that 20% of
               | youtube's traffic was QUIC:
               | https://www.fiercewireless.com/wireless/youtube-driving-
               | more...
        
               | detaro wrote:
                | YouTube is delivering video over HTTP - specifically
                | HTTP/3, i.e. it is transported over QUIC.
        
               | elithrar wrote:
               | Yep: YouTube currently delivers over gQUIC and h3 (IETF
               | QUIC).
               | 
               | We advertise the following alt-svc for most devices:
               | 
               | > h3-29=":443"; ma=2592000,h3-T051=":443";
               | ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443";
               | ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443";
               | ma=2592000; v="46,43"
               | 
               | HTTP/3 (over IETF QUIC) is still not as widely deployed
               | due to lesser client support, but that's growing as both
               | Chrome[1], Firefox and Apple (in iOS Safari, when
               | enabled) adopt it and the latest drafts.
               | 
               | [1]: https://blog.chromium.org/2020/10/chrome-is-
               | deploying-http3-...
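As an aside, a rough sketch of pulling the protocol ids out of an Alt-Svc value like the one quoted above. A production parser must honor quoted strings: the naive comma split below cuts the v="46,43" field in half, though the stray fragment is harmlessly discarded here.

```python
def parse_alt_svc(value: str) -> dict[str, str]:
    """Rough sketch: map protocol-id -> authority from an Alt-Svc
    header value. Per-entry parameters such as ma= (max age) are
    ignored, and the comma split would need quote-awareness to be
    fully correct."""
    services = {}
    for entry in value.split(","):
        first = entry.split(";")[0].strip()
        if "=" in first:
            proto, authority = first.split("=", 1)
            services[proto] = authority.strip('"')
    return services

header = ('h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,'
          'h3-Q050=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"')
print(parse_alt_svc(header))
```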
        
               | jeffbee wrote:
               | I think that indicates more about the economics of
               | YouTube than anything. Probably their frontend CPU costs
               | would have to go up by a factor of ten thousand before it
               | even began to approach their costs for compression and
               | storage of user uploads.
        
             | masklinn wrote:
             | > Horses for courses. Is anyone suggesting QUIC for
             | gigantic bulk transfers?
             | 
             | Anyone who's looking to replace HTTP/1.1 and HTTP/2 with
              | HTTP/3? AFAIK QUIC is completely integral to that; in order
              | to avoid this issue you'd need a mixed deployment of
              | HTTP/3 and an older version of the standard.
        
               | anderspitman wrote:
               | I just hope we don't remove support for HTTP/1.1 from our
               | stacks entirely. 2/3 are great for performance but an
               | order of magnitude more complex.
        
               | M2Ys4U wrote:
               | I imagine that HTTP/2 use will almost entirely move to
               | HTTP/3, and HTTP/1.1 usage will decline but there'll
               | still be plenty of it around.
        
               | cpeterso wrote:
               | Cloudflare's Radar dashboard has recent measurements of
               | HTTP/1, 2, and 3 traffic (near the bottom of the page):
               | 
               | https://radar.cloudflare.com/
               | 
               | Right now, HTTP/1.x is 23% of traffic, HTTP/2 is 75%, and
               | HTTP/3 is just 2%. Since this is about overall traffic
               | (visible to Cloudflare), big sites like Google and
               | Facebook dominate. All that HTTP/3 traffic is probably
               | Chrome users watching YouTube.
        
               | jeffbee wrote:
               | I'm positive that an organization the size of Netflix
               | which has dedicated itself absolutely to pushing 200gbps
               | out of every individual box can figure out how to run
               | HTTP 1.1 forever.
        
               | otabdeveloper4 wrote:
               | "Whee, the magic number went up, better upgrade".
               | 
               | Despite Google meddling with the standards, "HTTP/3"
               | isn't HTTP.
        
               | chrismorgan wrote:
               | > _Despite Google meddling with the standards_
               | 
               | Please let that misconception die. I'm not at all fond of
               | Google, but HTTP/3 is not a Google thing: Google is just
               | one (admittedly significant) party involved in making a
               | thing that _lots_ of parties want and have worked
               | together to make.
               | https://news.ycombinator.com/item?id=25286488 is a
               | relevant comment I wrote a couple of months ago.
        
           | justicezyx wrote:
            | > You lose 25 years of stateless hardware offloads done by
           | the NIC.
           | 
           | That's a feature.
           | 
            | Otherwise the industry will die off slowly. Case in point:
            | OpenStack first, and later Kubernetes, were exactly this kind
            | of innovation. They offered very tiny benefits and a whole
            | slew of plugin points for industry players to hook onto; that
            | was beneficial enough for nearly everyone to comfortably
            | endorse a piece of technology that at the time was already
            | starting to decay... Of course in this case, OpenStack turned
            | out to be a much worse technology because it was developed by
            | people who weren't in the mainstream...
        
           | anderspitman wrote:
           | Interesting. Could this be mitigated at all with QUIC
           | hardware?
           | 
           | It seems like if the tradeoffs are this bad at scale Google
           | wouldn't be pushing QUIC.
        
             | drewg123 wrote:
             | It can be mitigated, and companies are working on it.
             | However, there is no NIC offering hardware QUIC protocol
             | offload that I'm currently aware of. Part of what makes it
             | challenging is one of the features of QUIC: That the
             | protocol headers are themselves encrypted.
        
               | jeffbee wrote:
               | The Intel 700 and 800 series NICs have a DDP for QUIC.
        
               | anderspitman wrote:
               | But do the headers have much to do with power efficiency?
               | Seems like that would mostly hinge on encryption and
               | state machine efficiency.
        
               | detaro wrote:
               | They have to do with the viability of offloading. You
               | can't give the NIC an encrypted stream and have it handle
               | the protocol stuff if it needs to put things _inside_ the
               | encryption to do so.
        
           | littlecranky67 wrote:
            | Not a big fan of QUIC (for other reasons) but I hadn't
            | considered your very valid points. Not being able to utilize
            | TCP offload engines is another big disadvantage of QUIC - but
            | to be fair, I think we will see QUIC offload engines develop
            | over time just as they did with TCP. Maybe those Intel/Altera
            | or AMD/Xilinx deals will pay off for the datacenter, for
            | exactly those reasons (quick adoption of emerging network
            | protocols by programming offload engines onto FPGAs).
        
           | infogulch wrote:
           | How many of these downsides are inherent to the protocol
           | compared to simply not being optimized yet? I get that a QUIC
           | optimized NIC is unlikely to be right around the corner
           | (these things take time), but is there anything that makes
           | such a thing impossible?
        
           | me551ah wrote:
           | Most applications on the internet are CRUD apps and not
           | everyone is serving Netflix CDN data. TCP hasn't really
           | changed much in the past 25 years due to routers on the
           | internet being strict about its standards. Just look at 'TCP
           | Fast Open', while it was supposed to drastically cut down on
           | connection time in TCP, it can't be used on the internet due
           | to routers behaving erratically. Apple tried it in production
           | and then reverted it.
           | 
           | Since QUIC is based on UDP, it doesn't face the same
           | limitations that TCP does. TCP also suffers from head of line
           | blocking, which QUIC also eliminates. It is also more secure
           | due to header being encrypted, unlike TCP. Plus you also get
           | multiplexing built-in. QUIC is far better than TCP and for
           | serving websites & APIs, it is a much better alternative.
        
             | rini17 wrote:
             | But CRUDs also benefit from load balancing that is going to
             | be much more expensive without hardware support. For the
             | other problems, mostly satisfactory solutions exist.
        
               | me551ah wrote:
               | You can't really solve the head of line blocking problem
               | in TCP
        
               | rini17 wrote:
               | In CRUD apps all head of line problems are caused by
               | abject bloat. Protocol isn't going to fix that.
        
           | amelius wrote:
           | > So we'd need essentially 3x as many servers to serve via
           | QUIC as we do to serve via TCP. That has real environmental
           | costs.
           | 
           | So do ads, and no government seems to care much.
        
             | txdv wrote:
              | With 3x as many servers, it is probably going to be roughly
              | 3x more expensive. Money is something that everyone cares
              | about.
        
             | mlthoughts2018 wrote:
             | Such false equivalence. _Everything_ costs resources to
             | produce. Simply stating _that_ it costs resources, or even
             | just stating _how much_ resources it costs, is completely
             | meaningless on its own.
             | 
             | Pixar movies cost resources, prolonging life with medical
             | treatment costs resources, chocolate candy bars cost
             | resources, operating the Olympic Games costs resources.
             | 
             | In each case, various people believe the resource costs are
             | worth it to get the thing that is produced. Same for
             | bitcoin, same for ads.
             | 
             | You can't _just_ say they cost resources. If you're
             | advocating these things shouldn't exist, you either need to
             | get into the specifics of why they offer value but just not
             | enough to offset their resource costs, or why they don't
             | offer value, _and_ persuade others to agree with you.
             | 
             | Judging by the success of ads and bitcoin, I think, "these
             | things consume resources" is a pretty weak and useless
             | argument. It would not matter if the resource costs were
             | astronomical or tiny, it's a specious chain of reasoning
             | _in principle_.
        
             | detaro wrote:
             | Because other things with environmental cost (which are
             | widely criticized for it!) exist, one shouldn't consider
             | environmental costs?
        
         | dhdhhdd wrote:
         | 0rtt handshake (or 1rtt for new connections) and stream
         | multiplexing (no head of line blocking) come to my mind.
         | 
         | Oh, and connection migration (wifi to cell), but that's a bit
         | more tricky.
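Back-of-the-envelope arithmetic for the handshake point, under a simplified model that ignores TCP Fast Open, TLS-over-TCP session resumption, and packet loss:

```python
def time_to_first_byte(rtt_ms: float, handshake_rtts: int) -> float:
    """Round trips before the first response byte arrives:
    handshake_rtts round trips to set up the connection, plus one
    more for the actual request/response exchange."""
    return rtt_ms * (handshake_rtts + 1)

RTT = 100.0  # e.g. a 100 ms mobile link
print(time_to_first_byte(RTT, 2))  # TCP + TLS 1.3: 300.0 ms
print(time_to_first_byte(RTT, 1))  # QUIC, new connection: 200.0 ms
print(time_to_first_byte(RTT, 0))  # QUIC 0-RTT resumption: 100.0 ms
```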
        
         | chaz6 wrote:
          | Most of what QUIC offers is already supported by SCTP, which
          | could have made IPv6 an even more attractive proposition. The
          | reason SCTP has been poorly adopted is the number of
          | middleboxes, mostly for NAT.
        
           | zlynx wrote:
           | There were a few experimental HTTP/2 over SCTP
           | implementations. It worked pretty well.
           | 
           | But since the current NAT infested Internet requires
           | smuggling SCTP over UDP anyway you may as well go all out and
           | use QUIC which collapses the various protocol layers into a
           | single, more efficient protocol.
        
         | Jonnax wrote:
         | Here's a comparison of HTTP2 vs HTTP3 (QUIC) from Cloudflare:
         | 
         | https://blog.cloudflare.com/http-3-vs-http-2/
        
         | jsnell wrote:
         | Proper stream multiplexing, working connection migration on IP
         | changes, faster connection setup for the common case with
         | 0-RTT, solving the TCP ossification issues by decoupling
         | protocol upgrades from OS upgrades and by preventing
         | middleboxes from messing up flows when new options are
         | introduced.
        
           | the8472 wrote:
           | > working connection migration on IP changes
           | 
           | MPTCP is available in iOS and recent linux kernels. So you
           | don't need QUIC for that.
           | 
           | > decoupling protocol upgrades from OS upgrades
           | 
           | This is as much a downside as it is an upside. Now you have
           | to potentially juggle hundreds of applications each shipping
           | their own transport protocol implementation. And maybe their
           | own DNS client too.
        
             | infogulch wrote:
              | Dynamic linking has lost. Static linking of fully self-
              | contained binaries / containers etc., _combined with_
              | robust, deterministic build automation, is the future.
        
               | simias wrote:
               | I do think that in many situations dynamic linking
               | doesn't make a whole lot of sense these days (like, say,
               | a JPEG decoding library for instance). But having a
               | central point where you can configure, test and
               | administer networking makes a lot of sense to me.
               | 
               | The idea of every application coming up with its own DNS
               | implementation for instance annoys me greatly. I can
               | already anticipate the amount of time I'm going to lose
               | hunting down the way application X handles DNS because it
               | won't apply my system-wide settings.
               | 
               | I also firmly believe that SSL/TLS should've been handled
               | at the OS level instead of everybody shipping their own
               | custom OpenSSL implementations (complete with various
               | vulnerabilities). I wish I could just use some
               | setsockopt() syscalls to tell the OS I want a secure
               | socket and let it figure out the rest.
               | 
               | Moving stuff to higher layers is, IMO, mainly because
               | Google wants to become the new Microsoft/Apple and moving
               | that stuff up is a way to take the control away from
               | Windows/Mac OS. The browser is the OS. You use web apps
               | instead of apps.
               | 
               | You turn every device out there into a glorified
               | ChromeBook.
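For contrast, this is what per-application TLS setup looks like today with Python's stdlib; the setsockopt-style one-liner wished for above does not exist on mainstream OSes, so it appears only as a hypothetical comment.

```python
import socket
import ssl

# What per-application TLS looks like today: each program builds its
# own context rather than flipping an OS-level switch.
ctx = ssl.create_default_context()           # system trust store
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# The wished-for kernel-level API would be roughly (hypothetical!):
#   sock.setsockopt(SOL_SOCKET, SO_SECURE, 1)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tls = ctx.wrap_socket(sock, server_hostname="example.com")
# No handshake happens until connect(); the default context verifies
# certificates and hostnames.
```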
        
               | skissane wrote:
               | > I also firmly believe that SSL/TLS should've been
               | handled at the OS level instead of everybody shipping
               | their own custom OpenSSL implementations (complete with
               | various vulnerabilities). I wish I could just use some
               | setsockopt() syscalls to tell the OS I want a secure
               | socket and let it figure out the rest.
               | 
               | IBM z/OS actually does this with a feature called AT-TLS
               | (Application Transparent TLS) [0]. You don't even need to
               | call setsockopt (actually z/OS uses ioctl instead of
               | setsockopt for this, but same idea). You can turn a non-
               | TLS service into a TLS service simply by modifying the OS
               | configuration.
               | 
               | [0] https://www.ibm.com/support/knowledgecenter/en/SSLTBW
               | _2.2.0/...
        
         | uluyol wrote:
         | QUIC is TCP+TLS. HTTP is done as a separate layer on top.
         | 
         | The benefits are: future protocol evolution (TCP is extremely
         | difficult to change because of middleboxes, QUIC encrypts
         | almost all of the header to stop middleboxes from messing
          | things up), faster connection establishment (fewer RTTs thanks
          | to bundling the TCP+TLS handshakes), and no head-of-line
          | blocking when sending multiple streams over a single
          | connection.
        
           | anticensor wrote:
           | QUIC is more similar to UDP+SCTP+TLS. Or, if you skip UDP and
           | directly layer it over IP, to SCTP+TLS.
        
           | paulryanrogers wrote:
           | Is this sustainable though? Won't the middle boxes adapt?
        
             | judge2020 wrote:
              | Almost nothing is plaintext, so iteration can happen
              | within the encrypted layer. Protocol iterations could start
              | moving at the same speed that HTTP evolves today.
        
               | oarsinsync wrote:
               | Middleboxes by design MITM connections. They'll terminate
               | the connection from the client and the server, unencrypt
               | and reencrypt as necessary.
               | 
               | That's the whole point of the troublesome middleboxes.
               | They're not passive listeners. They're active
               | participants, with fragile implementations.
        
               | cmckn wrote:
               | One does not simply "unencrypt" traffic, though. That's
               | the whole point of TLS.
        
               | oarsinsync wrote:
               | Indeed, one intercepts and terminates the TCP (or now
               | UDP) session. No version of TLS is immune to these
               | middleboxes.
               | 
               | I suspect I'm being misunderstood. I don't intend to
                | suggest these pesky middleboxes spring up randomly on the
               | Internet and anyone is at risk. These are deliberately
               | installed on the edge of corporate networks, and their
               | sole purpose is to intercept, unencrypt and filter
               | traffic.
               | 
               | There's no technological solution for a middleware box
               | that emulates a client to servers, and emulates a server
               | to clients, and is operating exactly as designed.
               | 
               | This shouldn't be new or surprising or groundbreaking to
               | anyone. It's just a fancy name for a poorly implemented
               | proxy.
        
               | ocdtrekkie wrote:
               | Expect middleboxes to aggressively block QUIC and similar
               | harmful adtech products designed to circumvent inspection
               | and security.
        
         | isbvhodnvemrwvn wrote:
          | From my understanding it's mostly head-of-line blocking.
          | 
          | HTTP/2 was introduced to solve head-of-line blocking at the
          | application level (one HTTP/1.1 connection could only handle
          | one request at a time, barring poorly supported pipelining).
         | 
          | TCP itself also only allows one stream of data to flow at a
          | time (in each direction) - in case of poor network quality you
         | are going to lose some TCP segments, and if you fill in the
         | receiver's window, the transfer might halt until retransmission
         | succeeds (this includes ALL streams within HTTP/2). Using UDP
         | allows you to circumvent that.
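A toy model of that difference (abstract delivery ticks, no real I/O): one early packet is lost and retransmitted late; under TCP's single in-order byte stream every later packet waits for the hole to fill, while QUIC streams only wait on their own losses.

```python
# Toy model of head-of-line blocking. Stream "a" loses its first
# packet, whose retransmission finally lands at tick RETX. Streams
# "b" and "c" share the connection but lose nothing themselves.
packets = [("a", 0), ("b", 0), ("c", 0), ("b", 1), ("c", 1)]
lost = ("a", 0)
RETX = 5

def delivered_at(mode: str) -> dict:
    """Tick at which each (stream, seq) packet reaches the app."""
    times = {}
    for t, pkt in enumerate(packets):
        if pkt == lost:
            times[pkt] = RETX           # arrives only after retransmit
        elif mode == "tcp":
            times[pkt] = max(t, RETX)   # stuck behind the in-order hole
        else:                           # "quic": only the losing stream
            times[pkt] = t              # waits (and "a" has no other pkts)
    return times

print(delivered_at("tcp"))   # everything waits until tick 5
print(delivered_at("quic"))  # streams b and c deliver on arrival
```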
        
           | anderspitman wrote:
           | I've wondered about this. It makes sense in theory, but it
           | seems like if you have data loss on one channel and need
           | retransmission, you probably also have data loss on other
           | channels so they'll end up waiting for their own
           | retransmissions anyway. It seems more like an inherent issue
           | with building reliable streams on an unreliable channel.
           | 
           | I'm sure there's hard research on it. Anyone know some good
           | papers?
        
       | bluedino wrote:
       | Is it still true that QUIC avoids inspection/filtering, so it's
       | blocked by a lot of corporate firewalls?
        
         | ReactiveJelly wrote:
         | Since it's a new protocol, they will probably err on the side
         | of blocking it until they figure out how to MITM it.
         | 
         | (Recently had to deal with a client who said nothing about
         | MITMing our HTTPS, and a coworker who then installed a root
         | cert for them anyway... really pissed me off.)
        
         | Jonnax wrote:
         | It's UDP which is generally blocked
        
       | superkuh wrote:
        | QUIC is for corporations. Integrating TLS into a fundamental
       | protocol means that cert authorities are integrated. Cert
       | authorities mean that individual human people cannot communicate
       | with each other over the protocol unless a corporation allows it
       | on their whim.
       | 
        | If QUIC is widely adopted it will do much to improve the
        | performance of video streaming from corporate services, but it
        | will absolutely destroy the decentralization of the internet and
        | provide single points of technical and political failure.
        
         | fhrow4484 wrote:
         | I don't quite follow your argument, I could s/QUIC/HTTPS/ and
         | s/will do much to improve the performance video streaming from
         | corporate services/will do much to improve the security of
         | corporate applications/
         | 
          | Can you elaborate on why QUIC is different from HTTPS-only
          | in that regard?
        
         | pradn wrote:
          | We can hope that initiatives like Let's Encrypt will continue
          | to exist so there's an alternative to paying for certs.
          | Let's Encrypt does rely on a private company for its certs,
          | but at least it exists.
        
         | jrockway wrote:
         | Nothing is stopping you from starting your own CA and
         | distributing your own web browser that trusts the certificates.
         | What underlying transport protocol is used doesn't change the
         | complexity of removing corporate influence from the Internet.
         | There is one root DNS namespace. Certificates prove that the
         | stream your user agent receives from example.com is the
         | example.com that is pointed to by the DNS system. That's it. No
         | part of the Internet requires DNS or trusted-by-default CAs.
         | It's just a convenience.
         | 
         | Before widespread TLS adoption, your ISP could (and did!)
         | inject ads into arbitrary websites. Now they can't. That's a
         | good thing. It is actually amazing how little power your ISP
          | has to tamper with your network traffic -- all they can do is
          | shut it off completely.
        
           | fibers wrote:
           | > distributing your own web browser
           | 
           | network effects
        
             | simias wrote:
             | I think the network effect is much weaker here. While
             | interacting with a website without certificate (or with an
             | invalid one) adds friction, it's still wholly possible, at
             | least for now.
             | 
             | It's not like switching from WhatsApp to Telegram, where
             | you have to go from one hermetic silo to another.
        
             | jrockway wrote:
             | That's the underlying issue, I think -- "nobody else wants
             | to do my unusual thing". Most people are happy with the
             | corporate internet, or at least indifferent. If you want to
             | change that, you have to add some major value. (And, people
             | have added such value; there are users of Tor, after all.)
             | 
             | My point is, switching from HTTP/1.1 to QUIC doesn't really
             | make a difference here. The battle against centralized
             | control of parts of the Internet has been lost and no
             | protocol spec is going to change that outcome. Instead,
             | there needs to be major innovation from the camp that wants
             | to fundamentally change the world.
        
         | cakoose wrote:
          | The idea of certificates is built into QUIC, but certificate
          | _authorities_ are the client software's responsibility. Client
         | software can ignore certificate authorities entirely, just like
         | with HTTPS.
         | 
         | How exactly does widespread adoption of QUIC destroy
         | decentralization?
         | 
          | Is it any different from widespread adoption of HTTPS, e.g.
          | with browsers increasingly discouraging unencrypted HTTP?
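The point that certificate authorities live in the client, not the protocol, can be made concrete with Python's standard `ssl` module. This is a hedged sketch of client-side trust configuration (the CA bundle path is a placeholder), not anything specific to QUIC stacks:

```python
import ssl

# Trust decisions belong to the client. Option (a): trust only a
# private CA bundle instead of the system's default authorities
# ("my-private-ca.pem" is a hypothetical path):
#     ctx = ssl.create_default_context(cafile="my-private-ca.pem")

# Option (b): ignore certificate authorities entirely -- encrypted
# but unauthenticated, fine for a lab, unsafe on the open internet.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False          # must be disabled before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE     # no CA check at all
```

Either way, the TLS handshake itself is unchanged; only the client's policy about what certificates to accept differs.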
        
           | userbinator wrote:
            | _Is it any different from widespread adoption of HTTPS,
            | e.g. with browsers increasingly discouraging unencrypted
            | HTTP?_
           | 
           | I'm not the one you're asking, but I think it's not, and I
           | have the same concerns with HTTPS too.
        
         | tialaramex wrote:
         | TLS, and thus QUIC, doesn't care what the certificate is. It's
         | just an opaque bag of bytes as far as the protocol is
         | concerned. If servers and clients for some particular protocol
         | or service agreed it should be a 16 color GIF of any prime
         | numbered XKCD comic that would work with no changes to QUIC or
         | TLS.
         | 
         | So the reason you want certificates from the Web PKI for public
         | facing servers is only that those are widely trusted for
         | identity, whereas that "XKCD comic" idea is clearly
         | untrustworthy and doesn't identify much of anything.
        
         | cogman10 wrote:
          | The internet hasn't been decentralized since pretty much its
          | inception. ICANN very much introduces a single point of failure
         | that could wipe out the internet if it were disrupted.
         | 
         | It could be replaced, but it would take years. In many ways,
         | cert authorities are more decentralized than ICANN.
         | 
         | This isn't to say we shouldn't worry about these single points
         | of failure, rather, I think it's just an exaggeration of QUIC's
         | impacts on decentralization.
        
           | ori_b wrote:
           | It's worth trying to reduce the amount of centralization, and
           | make baby steps to improving the situation.
        
       | tialaramex wrote:
       | At least some of this document is based on much older versions of
       | QUIC. In particular it mentions RST_STREAM which went away by the
       | end of 2018 in favour of RESET_STREAM.
       | 
       | In fact it's possible it isn't even talking about the proposed
       | IETF protocol QUIC at all, but instead Google's QUIC ("gQUIC" in
       | modern parlance) in which case this might as well be a paper
       | saying the iPhone is vulnerable to an attack but it turns out it
       | means a 1980s device named "iPhone" not the Apple product.
       | 
        | It certainly references a bunch of gQUIC papers, which could
        | mean those are blind references from a hasty Google search by
        | researchers who don't read their own references - but it could
        | equally mean they really did do this work on gQUIC.
        
         | legulere wrote:
         | > in which case this might as well be a paper saying the iPhone
         | is vulnerable to an attack but it turns out it means a 1980s
         | device named "iPhone" not the Apple product.
         | 
         | More like it means the iPhone X instead of the newer iPhone 11
        
           | eli wrote:
            | Do any user agents still use gQUIC? People still use old
            | phones, but rarely old versions of Chrome.
        
         | 1vuio0pswjnm7 wrote:
         | As you know from carefully reading the paper, it states that
          | they cloned the sites with HTTrack and served them on their
          | LAN with "Caddy Server 2". Perhaps then we could guess that
          | the version of QUIC they tested is the version of QUIC that
          | Caddy is using: https://github.com/lucas-clemente/quic-go
          | (v0.19.3)
        
       | quotemstr wrote:
       | Anything other than a pipenet will be fingerprintable at some
       | level. A pipenet is a network in which all links run at constant
       | utilization, with dummy traffic (indistinguishable from useful
       | traffic) sent over the links during idle periods. Pipenets are of
       | course inefficient, but everything else is going to reveal _some_
       | kind of signal distinguishable from noise at _some_ level.
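The constant-rate padding idea behind a pipenet can be sketched in a few lines. This is an illustrative toy, not a real mixer: one fixed-size packet is emitted per clock tick, filled with real bytes when any are queued and with dummy padding otherwise; the packet size and names are made up.

```python
PACKET = 64  # fixed wire size in bytes (arbitrary for the sketch)

def next_packet(queue):
    """One packet per tick, always exactly PACKET bytes: real data if
    any is queued, zero padding otherwise. After encryption, padding
    and data would be indistinguishable to an observer; only the
    constant rate and constant size remain visible."""
    payload = bytes(queue[:PACKET])
    del queue[:PACKET]
    return payload + b"\x00" * (PACKET - len(payload))

busy = list(b"GET /secret-page HTTP/1.1")
idle = []
assert len(next_packet(busy)) == PACKET  # carries real bytes
assert len(next_packet(idle)) == PACKET  # pure cover traffic
```

The inefficiency is visible directly: the idle link still burns `PACKET` bytes per tick, which is the price of leaving no timing or size signal for a fingerprinting classifier to learn.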
        
         | ReactiveJelly wrote:
         | Since I have broadband, it would be nice to set aside a couple
         | percent of my bandwidth for a pipenet (maybe over Tor, as well)
         | and then leave the rest for low-security stuff.
         | 
         | That way I have a place for high-security low-bandwidth text to
         | go if I ever need it.
        
       ___________________________________________________________________
       (page generated 2021-01-30 23:00 UTC)