[HN Gopher] End-to-end congestion control cannot avoid latency spikes (2022)
       ___________________________________________________________________
        
       End-to-end congestion control cannot avoid latency spikes (2022)
        
       Author : fanf2
       Score  : 102 points
       Date   : 2024-07-09 08:42 UTC (14 hours ago)
        
 (HTM) web link (blog.apnic.net)
 (TXT) w3m dump (blog.apnic.net)
        
       | westurner wrote:
       | - "MOOC: Reducing Internet Latency: Why and How" (2023)
       | https://news.ycombinator.com/item?id=37285586#37285733 ; sqm-
       | autorate, netperf, iperf, flent, dslreports_8dn
       | 
       | Bufferbloat > Solutions and mitigations:
       | https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...
        
       | mrtesthah wrote:
        | macOS Sonoma supports L4S for this purpose:
       | 
       | https://www.theverge.com/23655762/l4s-internet-apple-comcast...
        
         | fmbb wrote:
         | What do you mean "this purpose"? The text linked in OP says
         | 
         | > Congestion signalling methods cannot work around this problem
         | either, so our analysis is also valid for Explicit Congestion
         | Notification methods such as Low Latency Low Loss Scalable
         | Throughput (L4S).
        
       | jchanimal wrote:
       | It's fun to think about the theoretical maximum in a multi-link
       | scenario. One thing that pops out of the analysis -- there are
        | diminishing returns to capacity from adding more links, which
        | means at some point an additional link can begin to reduce net
        | value, as it eats more into the maintenance and uptime budget
        | than it offers in capacity.
        
         | 1992spacemovie wrote:
         | There are multiple SD-WAN vendors with active-active multipath
         | functionality. Typically the number of parallel active paths is
         | capped at something like 2 or 4. A few esoteric vendors do high
          | numbers (12-16). Fundamentally your premise is correct, but the
          | overhead is in the single-digit percent range as I understand
          | it. Slightly different from Amdahl's law in my eyes
          | (transmission of data vs. computation).
        
       | flapjack wrote:
       | One of the solutions they mention is underutilizing links. This
       | is probably a good time to mention my thesis work, where we
       | showed that streaming video traffic (which is the majority of the
       | traffic on the internet) can pretty readily underutilize links on
       | the internet today, without a downside to video QoE!
       | https://sammy.brucespang.com
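        | 
        | A rough sketch of the idea (the bitrate ladder and the target
        | utilization below are made-up numbers, not from the paper):
        | 
        |     # Pick the highest video rendition that keeps the link
        |     # below a target utilization, leaving headroom for
        |     # capacity drops and competing flows.
        |     RENDITIONS_KBPS = [235, 750, 1750, 4300, 8000]
        | 
        |     def pick_rendition(capacity_kbps, target_util=0.6):
        |         budget = capacity_kbps * target_util
        |         fitting = [r for r in RENDITIONS_KBPS if r <= budget]
        |         return fitting[-1] if fitting else RENDITIONS_KBPS[0]
        | 
        |     print(pick_rendition(10_000))  # -> 4300, ~43% utilization
        | 
        | The point is that the chosen bitrate sits well below capacity,
        | so short capacity drops do not immediately stall playback.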
        
         | clbrmbr wrote:
         | Can you comment on latency-sensitive video (Meet, Zoom) versus
         | latency-insensitive video (YouTube, Netflix)? Is only the
         | latter "streaming video traffic"?
        
           | flapjack wrote:
           | We looked at latency-insensitive like YouTube and Netflix
           | (which were a bit more than 50% of internet traffic last year
           | [1]).
           | 
            | I'd bet you could do something similar with Meet and Zoom;
            | my understanding is that video bitrates for those services
            | are lower than for e.g. Netflix, which we showed to be much
            | lower than network capacities. But it might be tricky
            | because of the latency-sensitivity angle, and we did not
            | look into it in our paper.
           | 
           | [1] https://www.sandvine.com/hubfs/Sandvine_Redesign_2019/Dow
           | nlo...
        
         | aidenn0 wrote:
          | Packet switching won over circuit switching because the cost-
          | per-capacity was so much lower; if you end up having to over-
          | provision/under-utilize links anyway, why not use circuit
          | switching?
        
       | bobmcnamara wrote:
        | This result is explainable by Danish tandem queue theory.
        
       | fmbb wrote:
       | Can someone explain this to a layman? Because it seems to me the
       | four solutions proposed are:
       | 
        | 1. Seeing the future
        | 
        | 2. Building a ten times higher capacity network
        | 
        | 3. Breaking net neutrality by deprioritizing traffic that
        | someone deems "not latency sensitive"
        | 
        | 4. Flooding the network with more redundant packets
       | 
       | Or is the whole text a joke going over my head?
        
         | DowsingSpoon wrote:
         | Hi! I'm also a layman who doesn't really know what he's talking
         | about.
         | 
         | The article ends with " I will leave you with a question: Are
         | we trying to make end-to-end congestion control work for cases
         | where it can't possibly work?"
         | 
         | So, it seems to me that there may not _be_ any good solutions
         | to latency spikes. The article is basically pointing out that
         | you either pursue one of the unfortunate solutions mentioned,
         | or be resigned to accept that no congestion control mechanism
         | will ever be sufficient to eliminate the spikes. This seems a
         | valuable message to the people who might be involved in
          | developing the sort of congestion control mechanisms they're
         | talking about.
        
           | fmbb wrote:
            | Haha, joke's on me I guess: I skipped the _headline_ and
            | only skimmed the intro, so I missed the point.
        
         | Sohcahtoa82 wrote:
         | I think the author is saying that there ARE solutions, but none
         | of them are really viable.
         | 
         | Seeing the future obviously can't happen. Building a higher
         | capacity network is just wasted money. Breaking NN is going to
          | be unpopular, not to mention that determining what counts as
          | "not latency sensitive" will be difficult to impossible unless
          | there's a "not latency sensitive" flag on IP packets that
          | people actually use in good faith. And flooding the network
         | with more redundant packets is just going to be a colossal
         | waste of bandwidth and could easily make congestion issues
         | _worse_.
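          | 
          | For what it's worth, a weak form of that flag already
          | exists: the Lower-Effort DSCP from RFC 8622, which asks the
          | network to deprioritize marked traffic. A minimal sketch of
          | setting it on a socket (many networks ignore or re-mark
          | DSCP, so this is only a hint, and the good-faith problem
          | remains):
          | 
          |     import socket
          | 
          |     # RFC 8622 Lower-Effort PHB is DSCP 000001; the TOS
          |     # byte carries the DSCP in its upper six bits.
          |     LE_TOS = 1 << 2
          | 
          |     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          |     sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LE_TOS)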
        
       | comex wrote:
       | Sounds like a good argument for using a CDN. Or to phrase it more
       | generally, an intermediary that is as close as possible to the
       | host experiencing the fluctuating bandwidth bottleneck, while
       | still being on the other side of the bottleneck. That way it can
       | detect bandwidth drops quickly and handle them more intelligently
       | than by dropping packets - for instance, by switching to a lower
       | quality video stream (or even re-encoding on the fly).
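        | 
        | A minimal sketch of how such an intermediary might react (the
        | smoothing constant and the bitrate ladder are assumptions for
        | illustration):
        | 
        |     # Smooth the delivered throughput with an EWMA and step
        |     # the stream down as soon as the estimate falls below the
        |     # bitrate currently being served.
        |     class StreamAdapter:
        |         def __init__(self, ladder_kbps):
        |             self.ladder = sorted(ladder_kbps)
        |             self.estimate = None
        | 
        |         def on_sample(self, throughput_kbps, alpha=0.3):
        |             if self.estimate is None:
        |                 self.estimate = throughput_kbps
        |             else:
        |                 self.estimate = (alpha * throughput_kbps +
        |                                  (1 - alpha) * self.estimate)
        |             fits = [r for r in self.ladder
        |                     if r <= self.estimate]
        |             return fits[-1] if fits else self.ladder[0]
        | 
        | The closer this logic sits to the bottleneck, the sooner its
        | throughput samples reflect a capacity drop, which is the
        | advantage described above.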
        
       | scottlamb wrote:
       | > That is not true for networks where link capacity can change
       | rapidly, such as Wi-Fi and 5G.
       | 
       | Is this problem almost exclusively to do with the "last-mile" leg
       | of the connection to the user? (Or the two legs, in the case of
       | peer-to-peer video chats.) I would expect any content provider or
       | Internet backbone connection to be much more stable (and
       | generally over-provisioned too). In particular, there may be
       | occasional routing changes but a given link should be a fiber
        | line with fixed total capacity. Or do changes in how many other
        | users are contending for that capacity amount to effectively the
        | same problem?
        
         | lo0dot0 wrote:
          | The entire network is usually oversubscribed, so there can be
          | congestion. It is usually not over-provisioned because that
          | would be too expensive: an access network typically sells
          | customers far more aggregate bandwidth than its uplinks could
          | carry at once.
        
       | dilyevsky wrote:
       | There's a paper by Google where they demonstrated that you can
        | successfully push utilization far beyond what is suggested
        | here [0].
       | 
       | [0] -
       | https://www.cs.princeton.edu/courses/archive/fall17/cos561/p...
        
       ___________________________________________________________________
       (page generated 2024-07-09 23:00 UTC)