[HN Gopher] QUIC and HTTP/3 Support Now in Firefox Nightly and Beta
       ___________________________________________________________________
        
       QUIC and HTTP/3 Support Now in Firefox Nightly and Beta
        
       Author : caution
       Score  : 462 points
       Date   : 2021-04-16 20:21 UTC (1 day ago)
        
 (HTM) web link (hacks.mozilla.org)
 (TXT) w3m dump (hacks.mozilla.org)
        
       | Hard_Space wrote:
       | I wish this trend to start articles with 'tl;dr' would stop.
       | Opening with a summary paragraph predates the attention-
       | deficit/time-starved age by many years, and is not an innovation.
        
       | nextaccountic wrote:
       | does it use neqo? https://github.com/mozilla/neqo
        
       | jeffrogers wrote:
       | Are the QUIC HTTP/3 Connection Migration IDs an end run around
       | limitations with cookies, ad blockers, MAC spoofing, and other
       | forms of anti-tracking measures? My understanding is that these
       | IDs survive network switching, sessions, etc.
        
         | Matthias247 wrote:
         | QUIC connection IDs are transport-level IDs, and are valid
         | for as long as the connection is established. They will
         | survive network path switches (by design, to enable
         | migration), but they are not related to any sessions at the
         | application level.
         | 
         | QUIC connection IDs can also change while the connection is
         | active, so there is no 1:1 relation between connection ID
         | and connection either.
        
       | codetrotter wrote:
       | Is anyone else hyped about the possibility of tunneling traffic
       | over QUIC?
       | 
       | Public networks in coffee shops and libraries that block outbound
       | SSH and outbound Wireguard are the bane of my existence when I am
       | traveling and I try to get some god damn work done.
       | 
       | With HTTP/3, most public networks in coffee shops, libraries,
       | etc. are eventually going to allow QUIC. This means UDP tunnels
       | masquerading as QUIC traffic, as well as tunneling over QUIC
       | itself, are probably going to work. Eventually.
       | 
       | I am cautiously optimistic :)
        
         | ocdtrekkie wrote:
         | Any good enterprise environment will aggressively block QUIC.
         | The benefits to end users (or network admins) aren't there, and
         | the limitations upon network management are massive.
         | 
         | Sure, your family-run coffee shop won't bother to block QUIC,
         | but if Starbucks' IT is doing their job, they will.
        
           | partyboy wrote:
           | How is this going to work when a lot of sites/apps start to
           | use QUIC? It's like blocking HTTPS because "the limitations
           | upon network management are massive"
        
             | sjwright wrote:
             | HTTPS is necessary. QUIC is not.
             | 
             | It's going to be a long time before any site _requires_
             | QUIC in order to work.
        
             | tialaramex wrote:
             | It's going to make things work slightly less well from the
             | corporate office than they do at home on your cheap setup.
             | 
             | That's a continuing trend, and you might recognise it from
             | other aspects of life. Paying more while deliberately
             | choosing a worse service is pretty much life-as-usual for
             | corporations.
             | 
             | One of my previous employers had a deal to get train
             | tickets. In my country train tickets for actual consumers,
             | even ordered online, don't have a service fee. After all,
             | in choosing to order online you're actually saving them
             | money; why would it cost extra? But the corporation paid
             | about 5%
             | fees to get train tickets _from a company which also
             | provides those zero fee services to individuals_.
             | 
             | Or think about all the companies where Windows XP desktops
             | were still being used years after they were obviously the
             | wrong choice.
             | 
             | Monday to Friday the world IPv6 utilisation goes down, but
             | every weekend it's back up. Because at home people have
             | IPv6 (even if they've never thought about it) while at work
             | somebody explicitly turned that off.
             | 
             | One of the nice things about working from home is that my
             | network works very well, whereas whether I worked for a
             | startup or a billion dollar corporation it was always one
             | problem or another when I was in the office. Which reminds
             | me, I should really look for a new job as this pandemic
             | begins to slacken off.
        
               | Sebb767 wrote:
               | > Or think about all the companies where Windows XP
               | desktops were still being used years after they were
               | obviously the wrong choice.
               | 
               | I like that you used past tense there, but I'm pretty
               | sure XP is not dead yet. Unfortunately :/
        
             | pimterry wrote:
             | Many sites will support QUIC soon, but they won't strictly
             | require it for a long time: it's negotiated, and backward
             | compatibility isn't irrelevant quite yet.
        
             | Matthias247 wrote:
             | They will always have to support a fallback (HTTP/1.1 or
             | HTTP/2), because not all users are guaranteed to have
             | success with QUIC. Blocked UDP traffic and ports are some
             | reasons why it won't work; too-small supported MTUs are
             | another.
        
           | ppoint wrote:
           | curious why you think there are no benefits to QUIC?
        
             | ocdtrekkie wrote:
             | QUIC is purpose-built to circumvent middleboxes. Enterprise
             | networking benefits greatly from providing security and
             | accountability via such middleboxes. Properly securing a
             | network whose traffic is designed to circumvent your
             | network security is costlier and more complex.
        
               | aseipp wrote:
               | I mean, that's a trend that's been ongoing in both the
               | HTTP and TLS world for a while now, independent of QUIC.
               | Similar complaints arose with the announcement of HTTP/2
               | being TLS only, if I recall, and TLS 1.3 would probably
               | already be widely deployed right now if not for some
               | complications with deployed middleboxes causing
               | connection failures in the wild (which I don't entirely
               | blame on them; after all, bugs happen...)
        
               | tialaramex wrote:
               | _If_ you simply obey the standard, everything works
               | perfectly well. For example, the only _standard_ way to
               | build HTTPS middleboxes is to fasten a standards
               | compliant client and server together, when a corporate
               | user tries to visit google.com the server presents them
               | with its Certificate for google.com from your corporate
               | CA, and the client connects to the real google.com, now
               | the flow is naturally plaintext inside your middlebox and
               | you can do whatever you want.
               | 
               | If you did that, when TLS 1.3 comes along, your middlebox
               | client connects to google.com, the client says it only
               | knows TLS 1.2, google.com are OK with that, everything
               | works fine. When the corporate user runs Chrome, they
               | connect to the middlebox server, it says it only knows
               | TLS 1.2, that's fine with Chrome, everything works fine.
               | The middlebox continues to work exactly as before,
               | unchanged.
               | 
               | So what happened in the real world? Well of course it's
               | _cheaper_ to just sidestep the standards. Sure, doing so
               | means you fatally compromise security, but who cares? So
               | you don't actually have a separate client and server
               | wired together, you wire pieces of the two connections
               | together so that you can scrimp on hardware overhead and
               | reduce your BOM while still charging full price for the
               | product.
               | 
               | The nice shiny Chrome says "Hi, google.com I'm just
               | continuing that earlier connection with a TLS 1.2
               | conversation we had, you remember, also I know about Fly
               | Casual This is Really TLS 1.3 and here are some random
               | numbers I am mentioning"
               | 
               | The middlebox pretends to be hip to teen slang, sure,
               | pass this along and we'll pretend we know what "Fly
               | Casual This Is Really TLS 1.3" means, maybe we can ask
               | another parent later and it sends that to the real
               | google.com
               | 
               | Google.com says "Hi there, I remember your previous
               | connection _wink wink_ and I know Fly Casual This Is
               | Really TLS 1.3 too" and then everything goes dark,
               | because it's encrypted.
               | 
               | The middlebox figures, well, I guess this was a previous
               | connection. I must definitely have decided previously
               | whether connecting to Google was OK, so no need to worry
               | about the fact it mysteriously went dark before I could
               | make a decision and it takes itself out of the loop.
               | 
               | Or worse, the middlebox now tries to join in on both
               | conversations, even though it passed along these "Fly
               | Casual This Is Really TLS 1.3" messages yet it doesn't
               | actually know TLS 1.3, so nothing works.
        
               | bawolff wrote:
               | Some say "security and accountability" others say launch
               | a MITM attack to spy on employees.
        
               | ocdtrekkie wrote:
               | Depends on the company, obviously. I don't care if
               | employees do whatever on work computers, I care about
               | protecting my network and my users.
               | 
               | In practice, all of this effort to bypass middleboxes
               | does not help my users browse securely. It makes it
               | harder to block ads, it makes it harder to detect
               | phishing sites and other threats, and that's about it.
        
           | CameronNemo wrote:
           | Doesn't Google sponsor Starbucks' WiFi?
        
         | cmeacham98 wrote:
         | In the meantime, consider trying udp2raw[1], which lets you
         | tunnel UDP over fake TCP (by colluding on both sides of the
         | tunnel) and works pretty well on coffee shop WiFi in my
         | experience.
         | 
         | 1: https://github.com/wangyu-/udp2raw-tunnel
        
           | moonchild wrote:
           | See also udptunnel, an older implementation of the same idea
           | - http://www.cs.columbia.edu/~lennox/udptunnel/
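           | 
           | For the curious, the plain-TCP-socket variant boils down
           | to framing each datagram onto the byte stream. A minimal
           | Python sketch of the idea (not udptunnel's actual code;
           | the framing and names here are just illustrative):
           | 
           |     import socket
           |     import struct
           | 
           |     def send_datagram(tcp_sock, datagram):
           |         # Frame one UDP payload as <2-byte length><payload>
           |         # on the TCP byte stream.
           |         frame = struct.pack("!H", len(datagram)) + datagram
           |         tcp_sock.sendall(frame)
           | 
           |     def recv_exact(tcp_sock, n):
           |         buf = b""
           |         while len(buf) < n:
           |             chunk = tcp_sock.recv(n - len(buf))
           |             if not chunk:
           |                 raise ConnectionError("tunnel closed")
           |             buf += chunk
           |         return buf
           | 
           |     def recv_datagram(tcp_sock):
           |         # Undo the framing: read the 2-byte length prefix,
           |         # then read exactly that many payload bytes.
           |         header = recv_exact(tcp_sock, 2)
           |         (length,) = struct.unpack("!H", header)
           |         return recv_exact(tcp_sock, length)
           | 
           | One side reads datagrams from a local UDP socket and frames
           | them onto a TCP connection to the colluding peer; the peer
           | unframes them and re-emits them as UDP, and vice versa for
           | the return path.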
        
             | wangyu wrote:
             | The idea is pretty different. This one is implemented with
             | a regular TCP socket; udp2raw is implemented with a raw
             | socket. A raw-socket-based implementation bypasses
             | congestion control, retransmission, and head-of-line
             | blocking, and thus has much better theoretical performance.
        
               | moonchild wrote:
               | Oh, really? Interesting, I had thought that udptunnel did
               | as you suggest. Perhaps I was confusing it with something
               | else.
        
         | esaym wrote:
         | It is not hard to set up OpenVPN in TCP mode running on
         | port 443, you know...
        
         | xg15 wrote:
         | What has stopped you from tunneling traffic over https (or more
         | generally TLS on port 443) before?
        
         | achernya wrote:
         | We have built a VPN over QUIC, and the core code is open source
         | already [0].
         | 
         | We're working on standardizing "IP Proxying" over QUIC as part
         | of the MASQUE working group at IETF. So far, we've adopted a
         | requirements document [1] and have started work on an
         | implementation [2].
         | 
         | [0]
         | https://quiche.googlesource.com/quiche/+/refs/heads/master/q...
         | 
         | [1] https://tools.ietf.org/html/draft-ietf-masque-ip-proxy-
         | reqs-...
         | 
         | [2] https://tools.ietf.org/html/draft-cms-masque-connect-ip-00
        
           | tankenmate wrote:
           | I can see a number of jurisdictions around the world either
           | blocking and/or profiling this sort of traffic. Is any form
           | of "chaffing" / plausible deniability built into the
           | protocol?
        
             | achernya wrote:
             | Right now we're focusing on building a functional core
             | protocol and making sure it's sufficiently extensible. It
             | should be possible to build chaffing as an add-on extension
             | down the line.
        
           | saurik wrote:
           | In #2, why is the path hardcoded to /? One of the things I've
           | considered somewhat important in my similar work (on Orchid,
           | using HTTPS+WebRTC) is the ability to "embed" a VPN as a sub-
           | resource on some existing website.
        
             | therein wrote:
             | Fun to imagine some application layer firewall somewhere
             | go: "that /static/css/style.css sure is large and requiring
             | a lot of two way communication".
        
               | giovannibonetti wrote:
               | A firewall would be a great place to add some machine
               | learning
        
               | zrm wrote:
               | A little bit Poe's Law there, but I'll assume you're
               | serious and point out the problem.
               | 
               | Machine learning has non-trivial false positives. We
               | don't need firewalls launching denial of service attacks
               | on legitimate traffic at random.
        
               | Xylakant wrote:
               | I've seen an IDS decide to classify all traffic,
               | including management traffic, as hostile. The result was
               | an outage for one of the larger web shops in Germany.
        
               | meowface wrote:
               | Sounds like an IPS.
               | 
               | An IDS basing its fundamental action (detection) partly
               | on ML can definitely be a good, valuable idea. An IPS
               | basing its fundamental action (blocking traffic) partly
               | on ML is the problem.
        
               | codetrotter wrote:
               | Well, it is said that the only truly secure system is one
               | that is powered off, cast in a block of concrete and
               | sealed in a lead-lined room with armed guards.
               | 
               | Blanket denying _all_ traffic is a good first step to
               | ensuring that the system is really, really secure :P
        
             | avianlyric wrote:
             | It doesn't really make any difference, does it? The path
             | isn't used to indicate an IP packet flow; the "CONNECT-IP"
             | method is what indicates that you want to send an IP packet
             | flow to the server.
             | 
             | You could use the path to indicate different VPN endpoints,
             | but ultimately the path isn't needed at all.
             | 
             | Additionally all of this is gonna be inside a TLS session,
             | so no external viewer will ever see any of the headers,
             | including the path.
             | 
             | TL;DR the RFC doesn't make this VPN endpoint a traditional
             | HTTP resource at all. It's a special new HTTP method (like
             | GET or POST) that indicates the client wants the server to
             | treat the following data as an IP stream.
        
               | aasasd wrote:
               | The Great Firewall has been doing probing for ages now.
               | It pauses the connection and sends its own request,
               | checking what kind of protocol the server speaks.
               | Nothing stops these
               | firewalls from trying 'connect-ip'. The path could be
               | used as a secret 'key' to thwart too-curious firewalls
               | (though the protocol probably won't do much against DPI
               | anyway).
        
               | shawnz wrote:
               | Why not just require authentication?
        
               | tmp538394722 wrote:
               | The goal is to look like normal boring traffic.
        
               | shawnz wrote:
               | The parent is talking about having the firewall attempt
               | its own request, not some kind of sniffing (which
               | shouldn't be possible with TLS)
        
               | saurik wrote:
               | It would then be weirdly important that a mis-
               | authenticated CONNECT-IP look the same as if CONNECT-IP
               | were not implemented, as opposed to returning an
               | authorization error of some kind (which seems weirder
               | than hiding the method via the path).
        
               | shawnz wrote:
               | True. But the path method has issues too, like, how would
               | you implement that with HTTP 1.1? The "path" field is
               | actually part of the target, so you would have to send
               | the request like `CONNECT-IP https://remote-server/mount-
               | point` which seems backwards. (EDIT: and also you still
               | need to make sure you don't give the wrong kind of error
               | if the path is wrong, like a 404 instead of a 501 not
               | implemented or whatever)
               | 
               | As a possible solution for the discovery by authorization
               | error problem, maybe a convention could be established
               | where an authorization error is returned by default if
               | CONNECT-* isn't configured.
        
               | saurik wrote:
               | Huh. It is possible that I simply didn't understand the
               | spec, as I was (and am...) pretty sure that CONNECT-IP
               | (at least at the HTTP semantics level) just takes a path
               | (and only /)--like GET--without an associated "extra"
               | authority, as it is creating an IP tunnel: there is no
               | remote host (as there would be with an old-school
               | CONNECT); like, what is the "target" here? I am just
               | talking to the server and asking it to give me an IP
               | bridge, right? What does "remote-server" represent to an
               | IP tunnel?
        
               | shawnz wrote:
               | You're right, I misunderstood the usage. That does make
               | the possibility of using the path more compelling
        
               | aasasd wrote:
               | I guess authentication via https before 'connect-ip'
               | would work. Or, authenticating with headers in the same
               | first 'connect-ip' request, if the server responds with
               | 'invalid method' when not authenticated.
        
               | aasasd wrote:
               | By the way, probing at connection time, that I mentioned
               | in my original comment, isn't actually necessary. The GFW
               | will just scan known popular-ish hosts, trying 'connect-
               | ip' and banning everything that works as a proxy.
               | (Connection-time probing would just make it easier to
               | discover the hosts, such that collecting the IPs and
               | stats separately is not needed.)
        
               | saurik wrote:
               | FWIW, I do get that it is its own method and I understand
               | the intent of the specification (and thereby why it
               | "wouldn't matter" in that worldview), but it feels
               | weirdly non-HTTP for this functionality to not be treated
               | as a resource... but like, I guess "fair enough" in that
               | (looking at it again) the existing old-school proxy
               | CONNECT method also has this flaw :/. I just feel like
               | people should be able to easily "mount" functionality
               | onto their website into a folder, and by and large most
               | HTTP methods support this, with the ability to then take
               | that sub folder and internal-proxy it to some separate
               | backend origin server (as like, I would want to take the
               | flows for anything under /secret/folder/--whether GET or
               | PUT or PROPPATCH or CONNECT-IP--and have my front-end
               | http terminator be able to transparently proxy them,
               | without having to understand the semantics of the method;
               | it could very well be that I am just living in a dream of
               | trying way too hard to be fully-regular in a world
               | dominated by protocols that prefer to define arbitrary
               | per-feature semantics ;P).
               | 
               | I guess I am also very curious why you aren't defining
               | this over WebTransport (which would allow it to be usable
               | from websites--verified by systems like IPFS--to build
               | e2e-encrypted raw socket-like applications, as I imagine
               | CONNECT-IP won't be something web pages will be able to
               | use). (For anyone who reads this and finds this thought
               | process "cool", talk to me about Orchid some time, where
               | I do e2e multi-hop VPN over WebRTC ;P.)
        
         | georgyo wrote:
         | Many people have answered your comment, but I don't think any
         | of them actually addressed your comment...
         | 
         | If a public access point has a restrictive network policy, then
         | QUIC will change nothing. There is nothing that QUIC enables
         | that people browsing the web will demand. A user cannot even
         | tell whether they are using QUIC (or HTTP/2, for that matter).
         | 
         | The network operator is either non-technical and does not care
         | about QUIC; or technical but trying to actively prevent people
         | from using VPN and/or torrenting.
         | 
         | There are many ways to VPN that look like TCP HTTP traffic.
         | TCP over TCP is okay if the connection is stable.
        
           | shawnz wrote:
           | HTTP CONNECT tunnelling works even with HTTP/1 though; it
           | just might work more efficiently with QUIC. So it could be a
           | seamless fallback.
        
           | aidenn0 wrote:
           | I think the idea is that TCP over QUIC will be better than
           | TCP over TCP, particularly since public wifi can have highly
           | variable latency.
        
             | znpy wrote:
             | I've been using IP over TCP for years when that was the
             | only thing allowed.
             | 
             | For the most part it'll work no problem. Yeah, tunnelling
             | IP over UDP would be better, but if IP over TCP is the
             | only thing you have... it's gonna work, as long as the
             | underlying connection is stable enough.
        
             | midasuni wrote:
             | So your typical coffee shop has the router capability to
             | distinguish a QUIC tunnel over UDP/443 from a WireGuard
             | tunnel over UDP/443?
        
               | littlecranky67 wrote:
               | Typical coffee-shop APs have UDP disabled altogether,
               | mostly because it is not needed for HTTP and the
               | "intended use" of a WiFi provided at a coffee shop.
               | 
               | I would say they disable it deliberately to prevent VPN
               | tunneling; it's probably a sane decision to disable
               | anything not needed for the intended purpose (or to not
               | put effort into enabling it).
        
               | wildfire wrote:
               | Actually my typical coffee shop does not constrain UDP in
               | any way; otherwise DNS would break.
               | 
               | They are not sophisticated, they just do not want people
               | leeching large amounts of data via the connection (which
               | they basically perceive as http/1 or http/2).
               | 
               | Anyone using "something else" is basically noise.
        
         | oliwarner wrote:
         | If it's a bandwidth issue for them, isn't it more likely
         | they'll just block UDP/443?
        
         | legulere wrote:
         | Or they just continue blocking UDP and browsers fall back to
         | http1/2
        
         | CameronNemo wrote:
         | Can you not disguise the wireguard tunnels as something else?
         | E.g. just set the destination/listen port as 443/UDP? Or 53,
         | 123.
        
           | judge2020 wrote:
           | While not usual for places like coffee shops, I've seen
           | cruises and hotels do DPI, since a sizable amount of revenue
           | can probably be generated from fast WiFi access or by
           | unblocking all sites.
        
             | CameronNemo wrote:
             | Yep that is always an issue. Not sure how a hypothetical
             | QUIC tunnel would avoid that, though. Would appreciate any
             | info you can provide on mitigating deep packet inspection
             | blocking techniques.
        
             | grishka wrote:
             | I've been tempted to write a quick-and-dirty websocket
             | tunnel for quite some time. Let's see how they DPI any of
             | that without breaking anything else.
        
               | shawnz wrote:
               | Why not just use an HTTP CONNECT proxy running over
               | HTTPS?
        
               | grishka wrote:
               | You could do that? Well, you probably could.
               | 
               | But there are some "proactive" DPIs that make a request
               | themselves before letting you through. Would this protect
               | against that? It's easy to set up a regular web server
               | that would serve what everyone would see as an ordinary
               | personal website, except when you make a request to a
               | secret URL.
        
               | shawnz wrote:
               | You could require authentication (and as other posters
               | pointed out, make sure to spoof any wrong authentications
               | for maximum deniability).
               | 
               | It is also simple to serve some pages over HTTP alongside
               | the proxy functionality so that the server appears to be
               | a typical web server. Just turn on mod_proxy_connect in
               | addition to your existing httpd configuration if you use
               | Apache for example.
               | 
               | I use this method on my domains, although I've never
               | tried using it from within e.g. China
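               | 
               | A minimal sketch of the client side of that setup (the
               | proxy host, credentials, and target below are
               | placeholders, and a real client should parse the
               | proxy's response more carefully):
               | 
               |     import base64
               |     import socket
               |     import ssl
               | 
               |     PROXY = ("proxy.example.com", 443)
               |     TARGET = "ssh.example.org:22"
               |     CREDS = base64.b64encode(b"user:password").decode()
               | 
               |     ctx = ssl.create_default_context()
               |     raw = socket.create_connection(PROXY)
               |     tls = ctx.wrap_socket(raw, server_hostname=PROXY[0])
               | 
               |     # Ask the proxy to open a raw TCP tunnel to TARGET.
               |     request = (
               |         f"CONNECT {TARGET} HTTP/1.1\r\n"
               |         f"Host: {TARGET}\r\n"
               |         f"Proxy-Authorization: Basic {CREDS}\r\n"
               |         "\r\n"
               |     )
               |     tls.sendall(request.encode())
               | 
               |     # "200 Connection established" means the tunnel is
               |     # up; everything sent from here on is relayed
               |     # verbatim to TARGET (e.g. hand the socket to an
               |     # SSH client or a WireGuard-over-TCP wrapper).
               |     status = tls.recv(4096).split(b"\r\n", 1)[0]
               |     if b" 200" not in status:
               |         raise RuntimeError("proxy refused: %r" % status)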
        
         | mjsir911 wrote:
         | We're already most of the way there with SSTP, although it's a
         | bit fiddly with SNI
         | 
         | https://en.wikipedia.org/wiki/Secure_Socket_Tunneling_Protoc...
        
           | codetrotter wrote:
           | Tunneling over TCP has problems. (The article you linked
           | mentions this as well.)
           | 
           | This TCP meltdown problem is especially prominent in public
           | WiFi networks where they often like to dramatically limit the
           | bandwidth to the clients that connect to their network. So
           | UDP tunneling will work much better there.
           | 
           | That's why I am excited about tunneling over UDP maybe
           | finally becoming a reality on public networks in coffee
           | shops and libraries thanks to HTTP/3 and QUIC; historically,
           | those kinds of networks have often blocked basically all UDP
           | traffic except DNS.
        
         | aduitsis wrote:
         | Hello, it's not like every venue will be hard pressed to
         | accommodate QUIC quickly, right?
         | 
         | Please consider that today, in 2021, IPv4 is still the only
         | thing supported in many venues. It took almost 20 years for
         | IPv6 to become universally supported by operating systems,
         | ISPs, providers, etc., but you still have to have IPv4.
         | 
         | And I dare say IPv6 was much much more critical to have than
         | QUIC.
         | 
         | But having said all that, what is there to allow in QUIC? Isn't
         | it just using UDP?
        
           | livre wrote:
           | > Hello, it's not like every venue will be hard pressed to
           | accommodate QUIC quickly, right?
           | 
           | If browsers fall back to regular HTTPS when the QUIC
           | connection fails, there won't be much incentive to make sure
           | QUIC works or isn't blocked.
        
           | lxgr wrote:
           | > Isn't it just using UDP?
           | 
           | Yes, but there are still networks blocking everything but
           | TCP 80/443.
        
             | midasuni wrote:
             | In which case quic won't work. But your vpn over tcp/443
             | will be fine.
             | 
             | Is udp/53 blocked too?
        
           | Quekid5 wrote:
           | > Hello, it's not like every venue will be hard pressed to
           | accommodate QUIC quickly, right?
           | 
           | You aren't wrong about that, given the amount of fallback
           | browsers do, but IPv6 actually required upgrading key pieces
           | of hardware at critical points in the infrastructure[0]. Not
           | sure if there's any hardware released in the last 10+ years
           | that doesn't support it, but who knows?
           | 
           | ... and yes, QUIC is "just UDP".
           | 
           | [0] ... and dare I say: a more compelling _immediate_ use
           | case. At the time it was designed, the address shortage
           | wasn't actually looking all that dire. It might actually
           | have been invented/designed _too soon_... like if solutions
           | to Y2K had come out in 1980 or something.
        
         | jk7tarYZAQNpTQa wrote:
         | What stops you from tunneling VPN inside TLS, or directly over
         | TCP? Something like
         | https://github.com/moparisthebest/wireguard-proxy
        
           | pixl97 wrote:
           | A TCP-over-TCP VPN is a bad idea; UDP is better.
        
             | jk7tarYZAQNpTQa wrote:
             | Isn't ipsec just TCP over TCP? UDP might be better, but the
             | TCP solution works today, no need to wait until HTTP/3 gets
             | widely adopted.
        
               | IcePic wrote:
               | No, ipsec is its own protocol, so it would be most
               | similar to "over UDP"
        
         | saurik wrote:
         | I already support VPN over WebRTC in Orchid, and intend to
         | support WebTransport (though the premise of that protocol irks
         | me a bit) to get unreliable packets over QUIC as these
         | standards finish forming.
         | 
         | https://github.com/OrchidTechnologies/orchid
        
         | kyriakos wrote:
         | Why do they block ssh and wireguard? Can't think of a good
         | reason
        
         | GormanFletcher wrote:
         | A surprising number of limited-access networks only do port
         | blocking, not protocol sniffing. This means many will
         | cheerfully let you connect to an SSH server over port 443
         | (useful for a SOCKS proxy).
        
         | thayne wrote:
         | Running a TLS-based VPN (such as OpenVPN) on TCP port 443 will
         | already look a lot like HTTPS. Although it would probably be
         | possible to guess it is a VPN from traffic patterns. The
         | same might be true of HTTP/3 though.
        
         | the8472 wrote:
         | TCP is far from optimal for tunneling multiple flows, but you
         | can still tweak it a lot by using a recent(ish) linux kernel
         | which will allow you to use FQ_CODEL + BBR + NOTSENT_LOWAT to
         | reduce loss sensitivity and minimize the excess buffering in
         | the presence of bulk flows over the tunnel. Well, and by not
         | using SSH for tunneling, since that brings yet another layer of
         | flow control. If you're on a fixed-bandwidth connection
         | disabling SSR might help too. These are all sender-side
         | changes, so you need to control the remote end of the VPN too.
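         | 
         | The per-socket pieces of that are straightforward to set on
         | the sending side; a rough sketch (Linux-only; the numeric
         | fallbacks are the option numbers from linux/tcp.h, the
         | endpoint is a placeholder, and FQ_CODEL / slow-start-restart
         | are host-level settings via tc and sysctl, not shown here):
         | 
         |     import socket
         | 
         |     TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)
         |     TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)
         | 
         |     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         | 
         |     # Use BBR for this connection (the tcp_bbr module must be
         |     # available on the sending host).
         |     sock.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"bbr")
         | 
         |     # Keep the amount of unsent data queued in the kernel
         |     # small, so the tunnel process stays in control of what
         |     # gets transmitted next (less excess buffering).
         |     sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 16384)
         | 
         |     sock.connect(("vpn.example.net", 443))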
        
         | partyboy wrote:
         | Take a look at my project? https://github.com/tobyxdd/hysteria/
        
         | netflixandkill wrote:
         | Been doing wireguard over udp 443 for a long time. Works pretty
         | consistently.
         | 
         | Probably take a few years for the low-effort network blocks to
         | figure out how to deal with that.
        
           | midasuni wrote:
           | It's possible that QUIC will mean that in some places
           | UDP/443 will be blocked but other UDP won't be.
        
           | adamfisk wrote:
           | Wireguard traffic is easy to identify and therefore easy to
           | block.
        
             | bawolff wrote:
             | But typically coffee shops aren't really putting much
             | effort into blocking VPNs; they're doing some silly "block
             | everything that isn't 80 or 443". It's not like they're
             | trying to bypass the Great Firewall of China.
        
           | lxgr wrote:
           | Interesting, I wonder if this is because of low effort
           | firewall rules (just mirror the TCP settings for UDP) or
           | explicitly to allow QUIC.
        
             | netflixandkill wrote:
             | Quic has been around long enough that I suspect it's simply
             | a common policy for both tcp and udp on 443. If you just
             | want to charge for access, rate limiting is easier and
             | there is almost never any local person involved in managing
             | it.
             | 
             | A lot of VPN blocking isn't a specific, deliberate policy
             | choice by the network owner; it comes in from default
             | policy options on whatever awful vendor they use. The
             | people seriously concerned about outgoing tunnels
             | providing access back into protected networks while
             | hiding from normal packet inspection are a different case.
        
         | jlokier wrote:
         | If you're already encountering blocks, I don't think you can
         | assume there won't be blocks on QUIC, unfortunately. Here is
         | just a selection of vendors' advice. There are plenty of them,
         | and they are probably used by some schools, libraries, etc.
         | 
         | https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?...
         | 
         | > Our recommendations:
         | 
         | > Palo Alto Networks recommends creating a security policy in
         | the firewall to block the QUIC application.
         | 
         | https://community.spiceworks.com/topic/2146987-do-you-block-...
         | 
         | > Do You Block QUIC
         | 
         | > Thinking about it, ... > Yes, ... > Yes, ... > Yes, ... > No,
         | ...
         | 
         | https://www.fastvue.co/fastvue/blog/googles-quic-protocols-s...
         | 
         | > How Google's QUIC Protocol Impacts Network Security and
         | Reporting
         | 
         | > ... inadvertently created a security hole for many
         | organizations.
         | 
         | > ... At the time of writing, the advice from most firewall
         | vendors is to block QUIC until support is officially added to
         | their products.
         | 
         | https://help.zscaler.com/zia/managing-quic-protocol
         | 
         | > Managing the QUIC Protocol
         | 
         | > Zscaler best practice is to block QUIC
         | 
         | https://www3.trustwave.com/support/kb/print20269.aspx
         | 
         | > Blocking web traffic over QUIC
         | 
         | > The recommended fix is to block outbound UDP on ports 80 and
         | 443 ...
        
         | crazygringo wrote:
         | You can already tunnel traffic over HTTPS, and I've never been
         | to a coffee shop or library that blocked _that_. Have you tried
         | that when traveling? I used to do it frequently something like
         | 15 years ago to use SSH, Remote Desktop, anything. (Back then
         | it was a software package called  "orenosp" that you set up on
         | your client laptop and then a server of your choice, like your
         | home computer -- not sure what tool is popular now.)
         | 
         | On the other hand, public networks would only have to allow
         | QUIC if regular HTTP(S) fallback ever stopped being supported
         | by most webservers, which... who knows if that would _ever_
         | happen.
        
         | api wrote:
         | We've looked at QUIC transport for ZeroTier for this reason.
        
       | willhoyle wrote:
       | I spent the good part of last night compiling a previous version
       | of nodejs to try out quic. I wanted to see if it could replace
       | webrtc for a multiplayer game.
       | 
       | You have to compile a previous commit from the repo because the
       | experimental implementation of quic was removed according to this
       | issue: https://github.com/nodejs/node/pull/37067
       | 
       | Not sure when we can expect it to come back. You can just feel
       | the pain in those comments. All that hard work.
        
       | mleonhard wrote:
       | I think HTTP/2 and HTTP/3 are unnecessarily complex and offer
       | little benefit for most web users and websites.
       | 
       | We need our technology to be secure against criminals and spies.
       | To accomplish that, we must reduce complexity. Moving from
       | HTTP/1.1 to HTTP/3 adds a lot of complexity:
       | 
       | - TCP -> UDP. Applications, proxies, and servers must implement
       | complicated request stream processing logic. In TCP, this is
       | handled by the OS kernel. The switch to UDP will bring many
       | exploits and privacy leaks due to the huge amount of new code
       | written for stream processing. Application performance will
       | likely go down due to bugs and misconfiguration.
       | 
       | - Text -> Protocol Buffers. Protocol buffer parsing requires a
       | large amount of new code. Applications & servers must either
       | increase the complexity of their build systems by adding code
       | generation or increase the complexity of their codebase by adding
       | hand-coded parsing and serde (serializing & deserializing). There
       | are plenty of ways to achieve the precision of protobufs with
       | text-based protocols. Even JSON would be better. Debugging
       | protobuf-based applications is difficult, increasing the costs of
       | deploying applications. These costs disproportionately affect
       | startups and small organizations.
       | 
       | - Uniplex -> Multiplex. With HTTP/1.1, application servers are
       | straightforward to code: read a request from the stream, process
       | it, and write a response (a toy example of this loop is sketched
       | below). With multiplexing, servers must handle concurrent
       | requests over a single stream. This is an order of
       | magnitude increase in complexity. It affects all aspects of the
       | server. There will be many exploits and privacy leaks due to bugs
       | in this new concurrent request processing code. Many production
       | systems are more difficult to implement for multiplex protocols:
       | firewalls, monitoring, bandwidth controls, DoS mitigation, and
       | logging. For Google, these are not new costs since they already
       | use Stubby (gRPC) internally for everything. But for the rest of
       | the world, especially startups and small organizations, the
       | switch to multiplex adds huge development and operational
       | complexity for negligible benefit.
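       | 
       | (The sketch mentioned above: a toy uniplex HTTP/1.1 server
       | really is just a read/process/write loop per connection. This
       | ignores keep-alive, request bodies, and robust parsing; the
       | port is arbitrary.)
       | 
       |     import socket
       | 
       |     def serve(host="127.0.0.1", port=8080):
       |         listener = socket.create_server((host, port))
       |         while True:
       |             conn, _ = listener.accept()
       |             with conn:
       |                 # Uniplex: one request in flight per connection,
       |                 # so the handler is straight-line code.
       |                 request = b""
       |                 while b"\r\n\r\n" not in request:
       |                     chunk = conn.recv(4096)
       |                     if not chunk:
       |                         break
       |                     request += chunk
       |                 body = b"hello\n"
       |                 head = (
       |                     "HTTP/1.1 200 OK\r\n"
       |                     f"Content-Length: {len(body)}\r\n"
       |                     "Connection: close\r\n\r\n"
       |                 )
       |                 conn.sendall(head.encode() + body)
       | 
       |     serve()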
       | 
       | I think we need a better text-based HTTP protocol. The HTTP
       | working group is solving Google's problems. We need it to solve
       | our problems. Here's what we should do:
       | 
       | - Remove the unused features from HTTP/1.1. For example, multi-
       | line headers.
       | 
       | - Eliminate ambiguity (and several classes of exploits):
       |   - Make header names case-sensitive. Define all well-known
       |     headers with lower-case names.
       |   - Forbid duplicate headers.
       |   - Forbid leading zeros and sign indicators on numeric fields.
       |   - Allow only lowercase hexadecimal.
       |   - Require content-type header and forbid user-agents from
       |     inferring content-type.
       | 
       | - Make parsing simpler (a toy parser illustrating this shape
       |   appears at the end of this comment):
       |   - Make standard header values mandatory and move them to the
       |     request/status line: content-length, content-encoding,
       |     content-type, transfer-encoding, and server name.
       |   - Put the target path on its own line, encoded in UTF-8.
       |     Remove percent-encoding.
       |   - Separate header names and values with a character that
       |     cannot occur in header names or values, like '\t'.
       |   - Use UTF-8 for header values.
       |   - Specify maximum sizes for request/response (sans body),
       |     request/status line, method name, content-type, server
       |     name, content-length field, target path, header key,
       |     header value, total header length, and number of headers.
       |   - Use only one transfer encoding: chunked.
       |   - Require content-length. Do not support uploads or
       |     responses of indeterminate length.
       |   - Represent numeric values (content-length, chunk-length)
       |     using one format, either decimal or hex.
       | 
       | - Allow clients to detect tampering and corruption:
       |   - Replace the 'etag' header with a 'digest-sha-256' header.
       |   - Add 'checksum-xxhash' headers for bodies and chunks. Pick
       |     an excellent hash function like xxHash.
       | 
       | - Allow browsers to control authentication:
       |   - All connections are anonymous by default.
       |   - Forbid the "cookie" header. Replace it with a single-valued
       |     auth token that is part of the request line. Or even
       |     better, use the client authentication protocol described
       |     below. Browsers can now add a "log out" button which
       |     switches back to anonymous mode.
       |   - Servers can request authentication (401).
       |   - Support password-less challenge & response. This prevents
       |     phishing. It supports hardware tokens.
       |   - Add a way for browsers to enroll a new "identity" with the
       |     server. This could be stored on a hardware token or just
       |     software. The browser creates a new identity for each
       |     website the user logs into. When the user wants to log in
       |     to a site for the first time, they click the "Login" button
       |     next to the URL bar. The browser re-requests the page with
       |     the enrollment header. The server can either accept the new
       |     identity (2xx) or show a form asking for a security code.
       |     The server returns the form with a 4xx status so it can get
       |     the enrollment headers again. The "Logout" button appears
       |     next to the Login button. It switches back to anonymous
       |     mode for that site.
       |   - Remove HTTP Basic authentication (cleartext password).
       |   - Remove HTTP Digest authentication (passwords).
       | 
       | - Replace TLS. It is mind-numbingly complex. Because of that,
       |   there are only a handful of implementations and most are
       |   buggy. TLS leaks private data. Replace it with three new
       |   protocols:
       |   - An unauthenticated privacy protocol. This does not protect
       |     against MITM. Signs every data block with session key.
       |   - Server authentication protocol. Client requests certificate
       |     by name. Server sends its certificate and signature of
       |     client-generated nonce. Every sent data block gets signed.
       |     Server sends another message with its certificate and
       |     signature of the privacy protocol session key and a nonce.
       |     Corporate MITM firewalls replace this message with their
       |     own certificate, which clients are configured to accept.
       |   - Client authentication protocol. Every sent data block gets
       |     signed. A browser can generate a new identity with a new
       |     key pair and send the public key to the server in the
       |     'enroll-rsa-2048' header. The server generates a
       |     certificate for the public key and returns it to the
       |     browser in the 'certificate' header. The browser uses this
       |     certificate in the client authn protocol.
       |   - Do not support resuming sessions.
       |   - All three protocols get a new version every 3 months with
       |     updated lists of supported algorithms, parameters, and
       |     features. No "extension" data fields. Public internet hosts
       |     must support only the eight latest versions. This means
       |     that unmaintained software will stop working after 2 years.
       |     This will prevent many data thefts, ransomware attacks,
       |     malicious hacks, and Internet worms.
       |   - Encode certificates with human-readable JSON instead of
       |     ASN.1 DER. JER is not human-readable since it encodes DNS
       |     names in Base64. The ASN.1 format specification and DER
       |     encoding are unnecessarily complex.
       |   - Provide test servers and clients that exercise all of the
       |     edge cases of the algorithms and try known attacks.
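       | 
       | (The toy parser referenced above, for a request shaped like
       | the "Make parsing simpler" items: tab-separated mandatory
       | fields on the first line, the target path on its own line,
       | then one tab-separated header per line, a blank line, and a
       | body of exactly content-length bytes. The exact field order
       | here is illustrative, not a finished spec.)
       | 
       |     def parse_request(data: bytes):
       |         head, _, body = data.partition(b"\n\n")
       |         lines = head.decode("utf-8").split("\n")
       |         method, length_field, content_type = lines[0].split("\t")
       |         target = lines[1]              # no percent-encoding
       |         headers = {}
       |         for line in lines[2:]:
       |             name, _, value = line.partition("\t")
       |             if name != name.lower() or name in headers:
       |                 # lowercase-only names, no duplicates
       |                 raise ValueError("bad header: " + name)
       |             headers[name] = value
       |         length = int(length_field)
       |         if length < 0 or length_field != str(length):
       |             # no sign indicators or leading zeros
       |             raise ValueError("bad content-length")
       |         return method, target, content_type, headers, body[:length]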
        
         | userbinator wrote:
         | I agree with the massive increase in complexity brought on by
         | newer versions of HTTP and the attempt at essentially
         | reimplementing TCP over UDP, but I think a binary protocol will
         | be much simpler and more efficient than a text one.
         | 
         |  _Eliminate ambiguity (and several classes of exploits):_
         | 
         | It's easier to "start small" (i.e. with a short binary
         | encoding) that won't be ambiguous in the first place, than
         | to try to add restrictions (and thus additional complexity) to
         | a freeform text protocol. Or put another way, instead of adding
         | error cases, it's better when there are no error cases in the
         | first place.
         | 
         |  _Encode certificates with human-readable JSON instead of ASN.1
         | DER._
         | 
         | Hell no. DER is prefix length-delimited, JSON (and text in
         | general) is not.
         | 
         |  _This means that unmaintained software will stop working after
         | 2 years._
         | 
         | Hell _fucking_ no! The last thing that anyone who cares about
         | things working wants is planned obsolescence and even more
         | constant churn (which just leads to more and more bugs). If
         | anything, that will only make the "handful of implementations"
         | of TLS turn into perhaps one (i.e. Google's), or maybe two at
         | most. That's even worse.
         | 
         |  _Replace TLS. It is mind-numbingly complex._
         | 
         | Replace it with what? Your proposal essentially reinvents the
         | main pieces of TLS (authentication, confidentiality), and TLS
         | is already quite modular with respect to that.
        
           | mleonhard wrote:
           | Have you tried to write an X.509 DER PEM file parser? I
           | started one. Just reading the specs (X.509, ASN.1, DER, &
           | PEM) takes two days. If writing a JSON parser is a walk down
           | the block, then writing an X.509 cert parser is a marathon.
           | 
           | How is TLS modular? It mixes encrypted and unencrypted
           | message types [0], leading to implementation vulnerabilities
           | [1]. It contains many unused and unnecessary features and
           | extensions, some of which are very bad ideas (renegotiation,
           | client_certificate_url). We all deserve a good protocol to
           | protect our data and systems. The complexity of TLS helps the
           | incumbents, stifles innovation, and harms privacy and
           | security.
           | 
           | Many organizations spend a lot of money and effort to upgrade
           | software and prevent obsolete software from running. This
           | will become a bigger need in the future as ransomware attacks
           | become more common. Building mandatory upgrades into an
           | Internet protocol would take boldness and courage. The
           | benefits will accrue to everyone in society: Less of our data
           | will get stolen by criminals and spies. Fewer organizations
           | will get extorted by criminals. Fewer hospitals will get shut
           | down.
           | 
           | [0] https://tools.ietf.org/html/rfc8446#section-5
           | 
           | [1] https://mitls.org/pages/attacks/SMACK
        
             | peterkelly wrote:
             | > _Have you tried to write an X.509 DER PEM file parser? I
             | started one. Just reading the specs (X.509, ASN.1, DER, &
             | PEM) takes two days. If writing a JSON parser is a walk
             | down the block, then writing an X.509 cert parser is a
             | marathon._
             | 
             | I have. It wasn't particularly difficult. Yes the specs
             | took about two days to get my head around but that's not a
             | large amount of time. I certainly wouldn't describe it as a
             | marathon.
        
             | userbinator wrote:
             | PEM is not DER, it's base64'd DER, and not necessary to
             | implement TLS (which only uses DER for X509.) I haven't
             | written any code for it, but I've debugged some, and read
             | the spec and hexdumps, and I'd much rather work with DER
             | than JSON. Everything length-prefixed, no "keep reading
             | until you find the end" horrors. JSON is simple on the
             | surface, complex on the inside; ASN.1 is somewhat like the
             | opposite of that. You may find this article enlightening:
             | https://news.ycombinator.com/item?id=20724672
             | 
             |  _The complexity of TLS helps the incumbents, stifles
             | innovation, and harms privacy and security._
             | 
             | ...and your "solution" to replace it with something that
             | churns even more insanely frequently won't "help the
             | incumbents"? We already have the horrible Chrome/ium
             | monopoly situation as evidence. I've just about had it with
             | the fucking "innovation"; I want stability.
             | 
             |  _The benefits will accrue to everyone in society: Less of
             | our data will get stolen by criminals and spies. Fewer
             | organizations will get extorted by criminals. Fewer
             | hospitals will get shut down._
             | 
             | Every utopian vision turns into a dystopia. That "think of
             | the security!" doublespeak grandstanding is becoming
             | disturbingly Google-ish.
             | 
             | I'm done here --- clearly you can't be convinced (or are
             | deliberately trolling), and neither can I.
        
               | mleonhard wrote:
               | I did not intend for you to become frustrated. I am being
               | earnest.
               | 
               | Most software systems change only by adding complexity
               | and features. This is because owners rarely have the
               | courage to break any users' stuff. This is OK for a
               | growing company since they earn more money as they add
               | complexity. But it's counter-productive for security.
               | Security depends on simplicity.
               | 
               | Evolving systems means adding support for new things. To
               | evolve systems and maintain simplicity, we must also
               | remove support for old things.
               | 
               | Dropping support for a feature is currently a political
               | ordeal. Folks maintaining old systems don't want to
               | upgrade them to use the new thing. So they push back
               | against the drop. Regular dropping will force these
               | organizations to adopt regular upgrade processes. With
               | regular dropping, protocol/library/language maintainers
               | can drop features without incurring much extra work for
               | users. This will let them get rid of technical debt and
               | iterate much faster, increasing the rate of innovation.
        
               | slver wrote:
               | New features get introduced, and old features slowly get
               | dropped as they cease to be used. All seems fine.
        
           | marktangotango wrote:
           | > Hell no. DER is prefix length-delimited, JSON (and text in
           | general) is not.
           | 
           | This is a bit disingenuous; the Content-Length header is
           | required, or de facto required, and every HTTP/1.1 server
           | out there has a reasonable maximum allowed request header
           | size.
        
         | [deleted]
        
         | bawolff wrote:
         | So just throw everything out and start again?
         | 
         | I think i'll stick with TLS and http/2.
        
         | Matthias247 wrote:
         | That's a long post, and I will not comment on all of it.
         | However some very short thoughts:
         | 
         | HTTP/3 has nothing to do with protocol buffers. Yes, it uses a
         | binary encoding for request headers on top of QUIC streams. And
         | those QUIC streams use another binary encoding to get QUIC
         | frame data into UDP datagrams. But neither of those uses
         | protocol buffers. If a user uses HTTP/3 in the end (most likely
         | through a library), they can store anything in the request/
         | response bodies - whether it's binary data or just text data.
         | 
         | Multiplexing and binary encoding already exist in an HTTP/1.1
         | stack. The difference is that it happens at the kernel level
         | (it multiplexes IP datagrams in the end, and associates them
         | with sockets). With QUIC, this multiplexing layer now lives in
         | application space. But since
         | most users are not going to write their own library, they will
         | rely on someone else to get that layer right. Either they want
         | to have a solid kernel plus a solid TLS library on top, or they
         | want a solid QUIC library. There is obviously a risk related to
         | introducing anything new into a system, but once things mature
         | it's not that different anymore.
         | 
         | Once users have a QUIC library, the APIs don't really have to
         | be harder to use than those for handling HTTP/1.1 requests. One
         | can easily offer blocking send() and receive() calls to
         | interact with request bodies; the encoding and multiplexing
         | don't have to leak into application code.
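         | 
         | For illustration, here is a rough sketch (in Go) of what such a
         | blocking API could look like. The types and names are
         | hypothetical, not taken from any real library:
         | 
         |   // Hypothetical blocking HTTP/3 client facade; every name
         |   // here is illustrative only.
         |   package sketch
         | 
         |   import (
         |       "context"
         |       "io"
         |   )
         | 
         |   // RequestStream is what a QUIC/HTTP/3 library could hand to
         |   // application code: one bidirectional stream per request,
         |   // with frame encoding and multiplexing hidden inside.
         |   type RequestStream interface {
         |       io.Reader // receive(): read the response body
         |       io.Writer // send(): write the request body
         |       io.Closer
         |   }
         | 
         |   // Client opens one stream per request over a shared QUIC
         |   // connection.
         |   type Client interface {
         |       Open(ctx context.Context, method, url string) (RequestStream, error)
         |   }
         | 
         |   // Application code just reads and writes, as it would with
         |   // an HTTP/1.1 connection; QUIC frames, flow control and
         |   // connection IDs never show up at this level.
         |   func fetch(c Client, url string) ([]byte, error) {
         |       st, err := c.Open(context.Background(), "GET", url)
         |       if err != nil {
         |           return nil, err
         |       }
         |       defer st.Close()
         |       return io.ReadAll(st)
         |   }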
        
           | mleonhard wrote:
           | > HTTP/3 has nothing to do with protocol buffers.
           | 
           | I don't know why I thought it was protocol buffers. I see now
           | that it's a TLV protocol. I think that is worse, since it
           | requires custom binary parsers.
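           | 
           | To make "custom binary parser" concrete: even reading the
           | variable-length integers that HTTP/3's type-length-value
           | frames are built from needs dedicated code. A small sketch in
           | Go, following the encoding described in RFC 9000 section 16:
           | 
           |   // Decoder for QUIC variable-length integers: the two top
           |   // bits of the first byte give the total length (1, 2, 4
           |   // or 8 bytes); the remaining bits, big-endian, are the
           |   // value.
           |   package sketch
           | 
           |   import "errors"
           | 
           |   func readVarint(b []byte) (val uint64, n int, err error) {
           |       if len(b) == 0 {
           |           return 0, 0, errors.New("empty input")
           |       }
           |       n = 1 << (b[0] >> 6) // 1, 2, 4 or 8 bytes in total
           |       if len(b) < n {
           |           return 0, 0, errors.New("truncated varint")
           |       }
           |       val = uint64(b[0] & 0x3f) // drop the 2-bit length prefix
           |       for _, c := range b[1:n] {
           |           val = val<<8 | uint64(c)
           |       }
           |       return val, n, nil
           |   }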
           | 
           | > Multiplexing and binary encoding already exist in a
           | HTTP/1.1 stack.
           | 
           | This is disingenuous. The majority of code supporting
           | HTTP/1.1 is in text processing. Binary encoding only happens
           | when transfer-encoding bodies. TCP is multiplexed over IP,
           | but very few engineers ever think about that because our
           | tools deal with TCP connections.
        
             | slver wrote:
             | I don't know why you seem to completely disregard the idea
             | of libraries and abstractions outside the OS kernel.
             | 
             | Your HTTP library can hide the complexity of HTTP/3 from
             | you and give you HTTP/1.1 style streams to deal with,
             | WITHOUT being part of the OS.
             | 
             | Said engineers won't have to learn or do anything, just
             | update their dependencies.
        
               | mleonhard wrote:
               | I need to understand the technology my systems depend on
               | so I can solve problems. I accept complexity when it is
               | necessary or when it brings strong benefits.
               | 
               | Things that are necessary complexity: JPEG libraries,
               | database engines, Rust's borrowing rules, and TLS (only
               | because no straightforward alternative exists).
               | 
               | Why is the extra complexity of HTTP/3 necessary?
        
               | slver wrote:
               | You don't need to understand every implementation detail
               | of a technology to solve problems, you need to understand
               | the abstraction it exposes. In this case HTTP/2 and
               | HTTP/3 expose the same abstraction that HTTP/1.1 exposes.
               | 
               | The only thing you need to know is that the cost of each
               | request/response is much smaller than before, so you can
               | make more of them with less overhead. That's it.
               | 
               | Without abstraction this world is so complex, we wouldn't
               | be able to get out of bed in the morning and put our
               | pants on.
        
         | rini17 wrote:
         | _Add a way for browsers to enroll a new "identity" with the
         | server._
         | 
         | You believe that redoing login in all the applications that
         | have hardwired password authentication is remotely feasible?
        
       | crypt0x wrote:
       | Does someone know if this includes the QuicTransport JS API?
       | 
       | I'm still salty that Google scrapped the p2p mode from Chrome,
       | which could have been a nice alternative to WebRTC, too.
        
       | throwaway189262 wrote:
       | I'm not as excited about this as I was for HTTP/2. Every
       | benchmark I've seen shows a <5% performance improvement.
       | 
       | I've heard the difference is larger on cellular networks with
       | packet loss but haven't seen examples.
       | 
       | What we need more than HTTP/3 is ESNI. Exposed server names are a
       | real security risk.
        
         | bawolff wrote:
         | If you're looking at benchmarks without packet loss, then I'm
         | not sure what you expect. HTTP/3 is almost entirely about that
         | situation.
         | 
         | ESNI is totally unrelated. People can work on more than one
         | thing.
        
           | throwaway189262 wrote:
            | Unrelated, but a ton of work is being put into HTTP/3 while
            | ESNI has been lingering in draft-spec form for years. As far
            | as protocol work goes, I think their priorities are wrong.
           | 
            | Most wireless protocols hide packet loss from upper layers
            | anyway, using retry/FEC. I can't think of a common situation
            | where wireless packet loss is even visible to layer 4, so
            | efforts to build tolerance to it are usually pointless.
        
             | bawolff wrote:
              | Sure, but would the people working on HTTP/3 otherwise be
              | working on ESNI? It's a different problem space requiring
              | different skills.
             | 
              | > Most wireless protocols hide packet loss from upper
              | layers anyway, using retry/FEC. I can't think of a common
              | situation where wireless packet loss is even visible to
              | layer 4, so efforts to build tolerance to it are usually
              | pointless.
             | 
              | It's attempting to build tolerance to sudden transient
              | latency spikes interacting badly with congestion control
              | algorithms. You can hide packet loss, but you can't hide
              | some random packet taking a lot longer to be delivered due
              | to having to be retried.
        
               | throwaway189262 wrote:
                | > It's attempting to build tolerance to sudden transient
                | latency spikes interacting badly with congestion control
                | algorithms. You can hide packet loss, but you can't hide
                | some random packet taking a lot longer to be delivered
                | due to having to be retried.
               | 
                | Good point. By using UDP you can get around head-of-line
                | blocking. I didn't consider that.
        
       | chrissnell wrote:
       | What server-side options do we have for QUIC?
        
         | [deleted]
        
         | pgjones wrote:
          | Hypercorn, a Python ASGI server, supports HTTP/3. It is built
          | on aioquic, a Python library that implements QUIC.
         | 
         | (I'm the author of Hypercorn).
        
         | jeffbee wrote:
         | Depending on what you want to accomplish, an option is running
         | your services in a cloud that supports terminating QUIC on
         | their front-end load balancers. Google Cloud does, for example.
        
         | dragonwriter wrote:
          | Here's a list of implementations. There are quite a few
          | backend options (either servers or server libraries).
         | 
         | https://github.com/quicwg/base-drafts/wiki/Implementations
        
         | ainar-g wrote:
         | Do you mean libraries for backend languages, like Go and Java,
         | or servers, like Nginx and Caddy? Because for Go there is the
         | quic-go module[1], and since Caddy is written in Go, it already
         | has (experimental) support for HTTP/3[2].
         | 
         | [1]: https://github.com/lucas-clemente/quic-go
         | 
         | [2]: https://caddyserver.com/docs/caddyfile/options#protocol
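          | 
          | For a rough idea of the server side, here is a minimal sketch
          | against quic-go's http3 package. The helper name and signature
          | are from memory, so treat this as an approximation and check
          | the project's docs for the version you use:
          | 
          |   // Minimal HTTP/3-only server sketch using quic-go's http3
          |   // package (API from memory; may differ between versions).
          |   package main
          | 
          |   import (
          |       "log"
          |       "net/http"
          | 
          |       "github.com/lucas-clemente/quic-go/http3"
          |   )
          | 
          |   func main() {
          |       mux := http.NewServeMux()
          |       mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          |           w.Write([]byte("hello via HTTP/3\n"))
          |       })
          | 
          |       // QUIC always runs over TLS 1.3, so a certificate is
          |       // required; this listens on UDP port 443 only.
          |       log.Fatal(http3.ListenAndServeQUIC(":443", "cert.pem",
          |           "key.pem", mux))
          |   }
          | 
          | Note that browsers generally only switch to HTTP/3 after
          | seeing an Alt-Svc header on a regular HTTPS response, so in
          | practice you would run something like this alongside a TCP
          | listener that advertises it.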
        
         | [deleted]
        
         | moderation wrote:
         | Envoy proxy has initial support using Google's Quiche. Docs and
         | example configs are being worked on. I wrote up some h3 client
         | and Envoy proxy testing this week [0].
         | 
         | 0. https://dev.to/moderation/experiments-with-h3-clients-
         | envoy-...
        
         | isomorphic wrote:
         | nginx has experimental QUIC support:
         | 
         | https://quic.nginx.org/
        
       | newman314 wrote:
       | QUIC breaks Zscaler. See https://help.zscaler.com/zia/managing-
       | quic-protocol and Zscaler's recommended solution.
       | 
       | I look forward to no longer using Zscaler once Infosec finally
       | realizes this.
        
       | Denatonium wrote:
       | I'd be interested in knowing how QUIC deals with networks where
       | an upstream link has a lower MTU than the LAN, and some
       | intermediate routers are blocking ICMP.
       | 
       | With TCP, the router would be set up to clamp TCP MSS, but how's
       | this going to work with a UDP-based transport?
        
         | Matthias247 wrote:
         | Both peers will start with a minimum MTU that is deemed to be
         | safe (around 1200 bytes) and will independently start a path
         | MTU discovery workflow from there. That allows each side to
         | increase the MTU without having a dependency on the other end.
          | For path MTU discovery, peers can send PING frames in larger
          | datagrams and detect their loss (a missing acknowledgement).
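          | 
          | Roughly, the probing idea looks like this (a simplified sketch
          | in Go; sendProbe and ackedWithin are hypothetical stand-ins
          | for the real transport machinery):
          | 
          |   // Simplified sketch of QUIC-style path MTU probing, in the
          |   // spirit of DPLPMTUD (RFC 8899).
          |   package sketch
          | 
          |   import "time"
          | 
          |   const (
          |       baseMTU = 1200 // conservative starting datagram size
          |       maxMTU  = 1500 // typical Ethernet ceiling
          |       step    = 50   // probe size increment
          |   )
          | 
          |   // sendProbe sends a padded PING frame in a datagram of
          |   // exactly size bytes and returns an ID to track its ACK.
          |   func sendProbe(size int) (probeID uint64) { return 0 }
          | 
          |   // ackedWithin reports whether the probe's ACK arrived in
          |   // time.
          |   func ackedWithin(id uint64, d time.Duration) bool { return false }
          | 
          |   // discoverPathMTU walks the datagram size upwards and keeps
          |   // the largest size whose probe was acknowledged. A lost
          |   // probe only caps the MTU; it is not treated as congestion.
          |   func discoverPathMTU() int {
          |       mtu := baseMTU
          |       for cand := baseMTU + step; cand <= maxMTU; cand += step {
          |           id := sendProbe(cand)
          |           if !ackedWithin(id, 250*time.Millisecond) {
          |               break // probe too big for some link on the path
          |           }
          |           mtu = cand
          |       }
          |       return mtu
          |   }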
        
         | jeffbee wrote:
         | https://tools.ietf.org/html/rfc8899
        
       | ivan4th wrote:
       | I have recently improved my rural multi-LTE setup by means of
       | OpenMPTCPRouter [1]. MPTCP does a really great job at aggregating
       | multiple paths for better throughput and reliability. Problem is,
       | QUIC can't do this, and while multipath QUIC exists [2], right
       | now it's more of a research project.
       | 
       | I'm afraid I'll have to block UDP/443 at some point to prevent
       | browsers from downgrading to 1/3 of available throughput...
       | 
       | [1] https://www.openmptcprouter.com/ [2] https://multipath-
       | quic.org/
        
         | avianlyric wrote:
          | Why would HTTP/3 not work with OpenMPTCPRouter? Looking at the
          | documentation, OpenMPTCPRouter makes itself the gateway for
          | your network, so all traffic, TCP or UDP, gets routed to and
          | over its multipath network; UDP packets will be encapsulated
          | in TCP if you're using a TCP-based multipath layer.
         | 
         | Then on your VPC the UDP packets will be un-encapsulated and
         | sent on as normal.
        
           | [deleted]
        
           | ivan4th wrote:
           | There are several possibilities of dealing with UDP traffic
           | in OMR.
           | 
            | 1) Multipath VPN (glorytun). The thing is that UDP spread over
            | multiple paths will not work as well as MPTCP. MPTCP creates
           | multiple subflows, one for each available path. Each subflow
           | behaves (roughly) as a separate TCP connection, carrying some
           | part of the traffic that goes through the pipe (byte-level
           | splitting of the traffic, not packet-level). What's important
           | here is that each subflow has its own separate congestion
            | control. The congestion control mechanism [1] is a very
            | important aspect of TCP that basically controls its speed so
            | that your internet connection is not overloaded and
            | concurrent connections aren't disrupted. QUIC also includes a
            | congestion control mechanism. But with MPTCP, each path is
            | treated individually; with QUIC-over-multipath-VPN, where we
            | just spread UDP packets somehow over multiple paths, the
            | congestion control mechanism will get confused, because in
            | reality congestion can happen separately on each path, which
            | will be hard to see; also, there will be packet reordering,
            | which will not improve the situation either.
           | Basically, this is what happens when you try to spread plain
           | TCP over multiple paths naively, and it's the reason why
           | MPTCP was invented in the first place.
           | 
            | 2) UDP-over-TCP. In this case, I'm afraid that, given that
            | QUIC has its own congestion control, we might get unpleasant
            | side effects similar to the "TCP meltdown" seen with TCP-
            | over-TCP [2].
           | 
           | [1] https://en.wikipedia.org/wiki/TCP_congestion_control
           | 
           | [2] http://sites.inka.de/bigred/devel/tcp-tcp.html
        
       | enz wrote:
        | I have been using HTTP/3 since a few releases ago. I had to set
        | the "network.http.http3.enabled" pref to "true". No issues so
        | far (tested on a high-quality FTTH home connection and on low-
        | quality coax) with Google's websites (including YouTube) and
        | some websites behind Cloudflare's HTTP/3-ready CDN.
       | 
       | I use my own fork of the "http-and-ip-indicator" extension to
       | know when HTTP/3 is in use[1]. It shows an orange bolt when
       | HTTP/3 is in use for the top-level document.
       | 
       | [1]: https://github.com/enzzc/http-and-ip-indicator (Beware, I
       | don't maintain it actively)
        
       | est31 wrote:
       | Btw, the QUIC library that Firefox relies on is written in Rust:
       | https://github.com/mozilla/neqo
       | 
        | A bit weird, though, that they didn't cooperate with existing
        | Rust QUIC stacks like quinn. https://github.com/quinn-rs/quinn
        
         | pas wrote:
         | The reason is the need to have total flexibility (control). [0]
         | 
          | I reckon it's to make it as painless as possible to integrate
          | into Firefox. Also probably a tiny bit of not-invented-here
         | syndrome too :)
         | 
         | [0] https://github.com/mozilla/neqo/issues/81
        
       | yonrg wrote:
        | Isn't Syncthing using QUIC as one possible transport?
        
         | BitPirate wrote:
         | Yes, as UDP allows hole punching. They're using
         | https://github.com/lucas-clemente/quic-go
        
       | pmarreck wrote:
       | I've recently made Firefox Nightly my "main" browser. I'm quite
        | impressed with it. Even imported my Chrome cookies AND logins.
        
       ___________________________________________________________________
       (page generated 2021-04-17 23:01 UTC)