[HN Gopher] There isn't much point to HTTP/2 past the load balancer
       ___________________________________________________________________
        
       There isn't much point to HTTP/2 past the load balancer
        
       Author : ciconia
       Score  : 195 points
       Date   : 2025-02-25 05:33 UTC (2 days ago)
        
 (HTM) web link (byroot.github.io)
 (TXT) w3m dump (byroot.github.io)
        
       | chucky_z wrote:
       | gRPC?
        
         | agf wrote:
         | Surprised not to see this mentioned in the article.
         | 
         | Lots of places (including a former employer) have done tons of
         | work to upgrade internal infrastructure to support HTTP/2 just
         | so they could use gRPC. The performance difference from JSON-
         | over-HTTP APIs was meaningful for us.
         | 
         | I realize there are other solutions but this is a common one.
        
           | 0x457 wrote:
            | Probably because it only works correctly outside of the
            | browser. Browsers don't support "native" gRPC. You normally
            | use something with specific gRPC support rather than just h2
            | in a spherical vacuum.
        
         | nosequel wrote:
          | This entirely. When I first read the title, I thought, let's
          | see what they say about gRPC. gRPC is so much nicer to work
          | with across applications compared to simple REST
          | servers/clients.
        
       | awinter-py wrote:
       | plus in my experience some h2 features behave oddly with load
       | balancers
       | 
       | I don't understand this super well, but could not get keepalives
       | to cross the LB boundary w/ GCP
        
         | _ache_ wrote:
          | HTTP keepalive is a feature from HTTP/1.1, not HTTP/2.
        
           | awinter-py wrote:
           | ping frames?
        
       | Animats wrote:
       | The amusing thing is that HTTP/2 is mostly useful for sites that
       | download vast numbers of tiny Javascript files for no really good
       | reason. Like Google's sites.
        
         | paulddraper wrote:
         | Or small icon/image files.
         | 
         | Anyone remember those sprite files?
        
           | cyberpunk wrote:
           | You ever had to host map tiles? Those are the worst!
        
             | SahAssar wrote:
             | Indeed, there is a reason most mapping libraries still
             | support specifying multiple domains for tiles. It used to
              | be common practice to set up a.tileserver.test,
             | b.tileserver.test, c.tileserver.test even if they all
             | pointed to the same IP/server just to get around the
             | concurrent request limit in browsers.
        
         | theandrewbailey wrote:
         | I've seen noticeable, meaningful speed improvements with HTTP/2
         | on pages with only 1 Javascript file.
         | 
         | But I'd like to introduce you/them to tight mode:
         | 
         | https://docs.google.com/document/d/1bCDuq9H1ih9iNjgzyAL0gpwN...
         | 
         | https://www.smashingmagazine.com/2025/01/tight-mode-why-brow...
        
       | lmm wrote:
       | I think this post gets the complexity situation backwards. Sure,
       | you _can_ use a different protocol between your load balancer and
        | your application and it won't do _too_ much harm. But you're
       | adding an extra protocol that you have to understand, for no real
       | benefit.
       | 
       | (Also, why do you even want a load balancer/reverse proxy, unless
       | your application language sucks? The article says it "will also
       | take care of serving static assets, normalize inbound requests,
       | and also probably fend off at least some malicious actors", but
       | frankly your HTTP library should already be doing all of those.
       | Adding that extra piece means more points of failure, more
       | potential security vulnerabilities, and for what benefit?)
        
         | pixelesque wrote:
         | > Sure, you can use a different protocol between your load
         | balancer and your application and it won't do too much harm.
         | But you're adding an extra protocol that you have to
         | understand, for no real benefit.
         | 
         | Well, that depends...
         | 
         | At a certain scale (and arguably, not too many people will ever
         | need to think about this), using UNIX sockets (instead of HTTP
          | over TCP) between the application and load balancer can be
          | faster in some cases, as you don't go through the TCP stack...
         | 
         | > Also, why do you even want a load balancer/reverse proxy,
         | unless your application language sucks?
         | 
         | Erm... failover... ability to do upgrades without any
         | downtime... it's extra complexity yes, but it does have some
         | benefits...
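
For the Unix-socket point above, here is a minimal sketch (Python standard library only; paths and names are illustrative) of serving plain HTTP/1.1 over a Unix domain socket, the kind of LB-to-app hop being described:

```python
import http.client
import os
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class UnixHTTPServer(HTTPServer):
    """HTTPServer bound to a filesystem socket rather than host:port."""
    address_family = socket.AF_UNIX

    def server_bind(self):
        # Skip HTTPServer's host/port bookkeeping, which assumes TCP.
        socketserver.TCPServer.server_bind(self)
        self.server_name, self.server_port = "localhost", 0


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello over a unix socket"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def unix_get(sock_path, url):
    """GET `url` from the server listening on `sock_path`."""
    conn = http.client.HTTPConnection("localhost")
    conn.sock = socket.socket(socket.AF_UNIX)
    conn.sock.connect(sock_path)
    conn.request("GET", url)
    return conn.getresponse().read()
```

A reverse proxy pointed at the same socket path would talk to this server the same way, with no TCP handshake or port allocation on the hop.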
        
           | lmm wrote:
           | > At a certain scale (and arguably, not too many people will
           | ever need to think about this), using UNIX sockets (instead
           | of HTTP TCP) between the application and load balancer can be
           | faster in some cases, as you don't go through the TCP
           | stack...
           | 
           | Sure (although as far as I can see there's no reason you
           | can't keep using HTTP for that). You can go even further and
           | use shared memory (I work for a company that used Apache with
           | Jk back in the day). But that's an argument for using a
           | faster protocol because you're seeing a benefit from it, not
           | an argument for using a slower protocol because you can't be
           | bothered to implement the latest standard.
        
             | tuukkah wrote:
              | > _using a slower protocol because you can't be bothered
             | to implement the latest standard._
             | 
             | I thought we were discussing HTTP/2 but now you seem to be
             | invoking HTTP/3? It's even faster indeed but brings a whole
             | lot of baggage with it. Nice comparison point though: Do
             | you want to add the complexity of HTTP/2 or HTTP/3 in your
             | backend? (I don't.)
        
               | lmm wrote:
               | > I thought we were discussing HTTP/2 but now you seem to
               | be invoking HTTP/3?
               | 
               | The article talks about HTTP/2 but I suspect they're
               | applying the same logic to HTTP/3.
               | 
               | > Do you want to add the complexity of HTTP/2 or HTTP/3
               | in your backend? (I don't.)
               | 
               | I'd like to use the same protocol all the way through. I
               | wouldn't want to implement any HTTP standard by hand (I
                | _could_, but I wouldn't for a normal application), but
               | I'd expect an established language to have a solid
               | library implementation available.
        
         | tuukkah wrote:
         | Answers from the article - the "extra" protocol is just
         | HTTP/1.1 and the reason for a load balancer is the ability to
         | have multiple servers:
         | 
          | > _But also the complexity of deployment. HTTP/2 is fully
         | encrypted, so you need all your application servers to have a
         | key and certificate, that's not insurmountable, but is an extra
         | hassle compared to just using HTTP/1.1, unless of course for
         | some reasons you are required to use only encrypted connections
         | even over LAN._
         | 
         | > _So unless you are deploying to a single machine, hence don't
          | have a load balancer, bringing HTTP/2 all the way to the Ruby
         | app server is significantly complexifying your infrastructure
         | for little benefit._
        
           | conradludgate wrote:
            | I've deployed h2c (cleartext) in many applications. No TLS
            | complexity needed.
        
             | tuukkah wrote:
             | Good to know - neither the parent nor the article mention
             | this. h2c seems to have limited support by tooling (e.g.
             | browsers, curl), which is a bit discouraging.
             | 
             | EDIT: Based on the HTTP/2 FAQ, pure h2c is not allowed in
             | the standard as it requires you to implement some HTTP/1.1
             | upgrade functionality: https://http2.github.io/faq/#can-i-
             | implement-http2-without-i...
        
               | _ache_ wrote:
                | Why do you think that curl doesn't support h2c?
                | 
                | It does. Just use `--http2` or `--http2-prior-knowledge`;
                | curl deduces cleartext or TLS from the `http` or `https`
                | URL scheme (cleartext for `http`).
        
               | tuukkah wrote:
               | I said limited support and gave curl as an example
                | because curl --http2 sends an HTTP/1.1 upgrade request
                | first and so fails in a purely HTTP/2 environment.
               | 
               | Thanks for bringing up --http2-prior-knowledge as a
               | solution!
        
         | Galanwe wrote:
         | > Also, why do you even want a load balancer/reverse proxy,
         | unless your application language sucks
         | 
         | - To terminate SSL
         | 
         | - To have a security layer
         | 
         | - To load balance
         | 
         | - To have rewrite rules
         | 
         | - To have graceful updates
         | 
         | - ...
        
           | lmm wrote:
           | > To terminate SSL
           | 
           | To make sure that your connections can be snooped on over the
           | LAN? Why is that a positive?
           | 
           | > To have a security layer
           | 
           | They usually do more harm than good in my experience.
           | 
           | > To load balance
           | 
           | Sure, if you're at the scale where you want/need that then
           | you're getting some benefit from that. But that's something
           | you can add in when it makes sense.
           | 
            | > To have rewrite rules
            | 
            | > To have graceful updates
           | 
            | Again I would expect an HTTP library/framework to handle that.
        
             | fragmede wrote:
             | You terminate SSL as close to the user as possible, because
             | that round trip time is greatly going to affect the user
             | experience. What you do between your load balancer and
             | application servers is up to you, (read: should still be
             | encrypted) but terminating SSL asap is about user
             | experience.
        
               | lmm wrote:
               | > You terminate SSL as close to the user as possible,
               | because that round trip time is greatly going to affect
               | the user experience. What you do between your load
               | balancer and application servers is up to you, (read:
               | should still be encrypted) but terminating SSL asap is
               | about user experience.
               | 
               | That makes no sense. The latency from your load balancer
               | to your application server should be a tiny fraction of
               | the latency from the user to the load balancer (unless
               | we're talking about some kind of edge deployment, but at
               | that point it's not a load balancer but some kind of
               | smart proxy), and the load balancer decrypting and re-
               | encrypting almost certainly adds more latency compared to
               | just making a straight connection from the user to the
               | application server.
        
               | fragmede wrote:
               | It depends on your deployment and where your database and
               | app servers and POPs are. If your load balancer is right
                | next to your application server, which is right next to
                | your database, you're right. And it's fair to point out that
               | most people have that kind of deployment. However there
               | are some companies, like Google, that have enough of a
               | presence that the L7 load balancer/smart proxy/whatever
               | you want to call it is way closer to you, Internet-
               | geographically, than the application server or the
               | database. For their use case and configuration, your
                | "almost certainly" isn't what was seen empirically.
        
               | ndriscoll wrote:
               | Say your application and database are in the US West and
               | you want to serve traffic to EU or AUS, or even US East.
               | Then you want to terminate TCP and TLS in those regions
               | to cut down on handshake latency, slow start time, etc.
               | Your reverse proxy can then use persistent TLS
               | connections back to the origin so that those connection
               | startup costs are amortized away. Something like nginx
               | can pretty easily proxy like 10+ Gb/s of traffic and 10s
               | of thousands of requests per second on a couple low power
               | cores, so it's relatively cheap to do this.
               | 
               | Lots of application frameworks also just don't bother to
               | have a super high performance path for static/cached
               | assets because there's off-the-shelf software that does
               | that already: caching reverse proxies.
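
The latency argument above can be put in rough numbers. A fresh TCP + TLS 1.3 connection costs about two round trips before the request itself can go out, so terminating near the user and reusing a warm upstream connection saves those round trips at long-haul latency (the millisecond figures below are illustrative, not measurements):

```python
def first_byte_rtts(handshake_rtts, request_rtts=1):
    # Round trips before the response starts: TCP (1 RTT) plus TLS 1.3
    # (1 RTT) for a cold connection, then 1 RTT for request/response.
    return handshake_rtts + request_rtts

USER_TO_EDGE_MS = 10      # nearby reverse proxy (illustrative)
EDGE_TO_ORIGIN_MS = 140   # long-haul hop to origin (illustrative)

# Direct to origin: handshake and request all pay long-haul latency.
direct = first_byte_rtts(2) * (USER_TO_EDGE_MS + EDGE_TO_ORIGIN_MS)

# Via edge: handshake terminates nearby; the warm, persistent upstream
# connection means only the request itself crosses the long haul.
via_edge = first_byte_rtts(2) * USER_TO_EDGE_MS + EDGE_TO_ORIGIN_MS
```

Under these assumed numbers the edge-terminated path reaches first byte in 170 ms versus 450 ms direct.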
        
             | harshreality wrote:
             | > To make sure that your connections can be snooped on over
             | the LAN? Why is that a positive?
             | 
             | No, to keep your _app_ from having to deal with SSL.
             | Internal network security is an issue, but sites that need
              | multi-server architectures can't really be passing SSL
             | traffic through to the application servers anyway, because
             | SSL hides stuff that's needed for the load balancers to do
             | their jobs. Many websites need load balancers for
             | performance, but are not important enough to bother with
             | the threat model of an internal network compromise (whether
             | it's on the site owner's own LAN, or a bare metal or VPS
             | hosting vlan).
             | 
             | > Sure, if you're at the scale where you want/need that
             | then you're getting some benefit from that. But that's
             | something you can add in when it makes sense.
             | 
             | So why not preface your initial claims by saying you trust
             | the web app to be secure enough to handle SSL keys, and a
             | single instance of the app can handle all your traffic, and
             | you don't need high availability in failure/restart cases?
             | 
             | That would be a much better claim. It's still unlikely,
             | because you don't control the internet. Putting your
             | website behind Cloudflare buys you some decreased
             | vigilance. A website that isn't too popular or attention-
             | getting also reduces the risk. However, Russia and China
             | exist (those are examples only, not an exclusive list of
             | places malicious clients connect from).
        
               | lmm wrote:
               | > So why not preface your initial claims by saying you
               | trust the web app to be secure enough to handle SSL keys,
               | and a single instance of the app can handle all your
               | traffic, and you don't need high availability in
               | failure/restart cases?
               | 
               | Yeah, I phrased things badly, I was trying to push back
               | on the idea that you should always put your app behind a
               | load balancer even when it's a single instance on a
               | single machine. Obviously there are use cases where a
               | load balancer does add value.
               | 
               | (I do think ordinary webapps should be able to gracefully
               | reload/restart without losing connections, it really
               | isn't so hard, someone just has to make the effort to
               | code the feature in the library/framework and that's a
               | one-off cost)
        
             | Galanwe wrote:
             | > > To terminate SSL
             | 
             | > To make sure that your connections can be snooped on over
             | the LAN? Why is that a positive?
             | 
              | Usually your "LAN" uses whole-link encryption, so that
              | whatever is accessed in your private infrastructure network
              | is encrypted (be it postgres, NFS, HTTP, etc.). If that is
              | not the case, then you have to configure encryption at each
              | service level, which is error prone, time consuming, and
              | not always possible. In that case you can have internal SSL
              | certificates for the traffic between RP and workers,
              | between workers and postgres, etc.
             | 
              | Also, as far as possible you don't want your SSL server
              | key to be accessible from business logic; early termination
              | and isolated workers achieve that.
             | 
             | Also, you generally have workers access private resources,
             | which you don't want exposed on your actual termination
             | point. It's just much better to have a public termination
             | point RP with a private iface sending requests to workers
             | living in a private subnet accessing private resources.
             | 
             | > > To have a security layer
             | 
             | > They usually do more harm than good in my experience.
             | 
             | Right, maybe you should detail your experience, as your
             | comments don't really tell much.
             | 
              | > > To have rewrite rules
              | 
              | > > To have graceful updates
              | 
              | > Again I would expect an HTTP library/framework to handle
              | > that.
             | 
              | HTTP frameworks handle routing _for themselves_; this is
              | not the same as rewrite rules, which are often used to glue
              | multiple heterogeneous parts together.
             | 
             | HTTP frameworks are not handling all the possible rewriting
             | and gluing for the very reason that it's not a good idea to
             | do it at the logic framework level.
             | 
              | As for graceful updates, there's a chicken-and-egg problem
              | to solve. You want graceful updates between multiple
              | versions of your own code / framework. How could that work
              | without a third party balancing old / new requests to the
              | new workers one at a time?
        
             | unification_fan wrote:
             | Tell me you never built an infrastructure without telling
             | me you never built an infrastructure
             | 
             | The point being that all the code on the stack is not
             | necessarily _yours_
        
               | lmm wrote:
               | I've built infrastructure. Indeed I've built
               | infrastructure exactly like this, precisely because
               | maintaining encryption all the way to the application
               | server was a security requirement (this was a system that
               | involved credit card information). It worked well.
        
             | ahoka wrote:
             | You usually re-encrypt your traffic after the GW, either by
             | using an internal PKI and TLS or some kind of encapsulation
             | (IPSEC, etc).
             | 
              | Security and availability requirements might vary, so
              | there's much to argue about. Usually you have some kind of
              | 3rd-party service you want to hide, or want to control
              | CORS, Cache-Control, and other headers uniformly. If you
              | are fine with 5-30 minutes
             | of outage (or until someone notices and manually restores
             | service), then of course you don't need to load balance.
             | But you can imagine this not being the case at most
             | companies.
        
           | tuukkah wrote:
           | - To host multiple frontends, backends and/or APIs under one
           | domain name
        
             | 0x457 wrote:
             | > backends and/or APIs under one domain name
             | 
             | On one IP, sure, for one domain you could use an API
             | gateway.
        
               | SahAssar wrote:
               | API gateway is a fancy term for a configurable reverse
               | proxy often bought as a service.
        
         | toast0 wrote:
         | Load balancers are nice to have if you want to move traffic
         | from one machine to another. Which sometimes needs to happen
         | even if your application language doesn't suck and you can
         | hotload your changes... You may still need to manage hardware
         | changes, and a load balancer can be nice for that.
         | 
         | DNS is usable, but some clients and recursive resolvers like to
         | cache results for way beyond the TTL provided.
        
         | harshreality wrote:
         | > why do you even want a load balancer/reverse proxy, unless
         | your application language sucks?
         | 
         | Most load balancer/reverse proxy applications also handle TLS.
         | Security-conscious web application developers don't want TLS
         | keys in their application processes. Even the varnish authors
         | (varnish is a load balancer/caching reverse proxy) refused to
         | integrate TLS support because of security concerns; despite
         | being reverse-proxy authors, they didn't trust themselves to
         | get it right.
         | 
         | An application can't load-balance itself very well. Either you
         | roll your own load balancer as a separate layer of the
         | application, which is reinventing the wheel, or you use an
         | existing load balancer/reverse proxy.
         | 
         | Easier failover with fewer (ideally zero) dropped requests.
         | 
         | If the app language isn't compiled, having it serve static
         | resources is almost certainly much slower than having a reverse
         | proxy do it.
        
           | lmm wrote:
           | > Security-conscious web application developers don't want
           | TLS keys in their application processes.
           | 
           | If your application is in a non-memory-safe language, sure
           | (but why would you do that?). Otherwise I would think the
           | risk is outweighed by the value of having your connections
           | encrypted end-to-end. If your application process gets fully
           | compromised then an attacker already controls it, by
           | definition, so (given that modern TLS has perfect forward
           | secrecy) I don't think you really gain anything by keeping
           | the keys confidential at that point.
        
             | nickelpro wrote:
             | I write application servers for a living, mostly for Python
             | but previously for other languages.
             | 
              | Nobody, _nobody_, writes application servers with the
             | intent of having them exposed to the public internet. Even
             | if they're completely memory safe, we don't do DOS
             | protections like checking for reasonable header lengths,
             | rewriting invalid header fields, dropping malicious
             | requests, etc. Most application servers will still die to
             | slowloris attacks. [1]
             | 
             | We don't do this because it's a performance hog and we
             | assume you're already reverse proxying behind any
             | responsible front-end server, which all implement these
             | protections. We don't want to double up on that work. We
             | implement the HTTP spec with as low overhead as possible,
             | because we expect to have pipelined HTTP/1.1 connections
             | from a load balancer or other reverse proxy.
             | 
             | Your application server, Gunicorn, Twisted, Uvicorn,
             | whatever, does not want to be exposed to the public
             | internet. Do not expose it to the public internet.
             | 
             | [1]: https://en.wikipedia.org/wiki/Slowloris_(cyber_attack)
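
To make the point concrete, here is a sketch of the kind of slow-client guard a front-end proxy applies and a bare application server typically skips (the limits and names are invented for illustration, not taken from any real server):

```python
import time

# Illustrative limits; real proxies make these configurable.
MAX_HEADER_BYTES = 8192
HEADER_DEADLINE_SECONDS = 10.0

def read_headers(recv_chunk, now=time.monotonic):
    """Read an HTTP/1.1 header section, rejecting slowloris-style clients.

    `recv_chunk` is any callable returning the next chunk of bytes
    (empty bytes on EOF). Raises ValueError when a client trickles or
    bloats the header section instead of finishing it promptly.
    """
    start = now()
    buf = b""
    while b"\r\n\r\n" not in buf:
        if now() - start > HEADER_DEADLINE_SECONDS:
            raise ValueError("slow client: header deadline exceeded")
        if len(buf) > MAX_HEADER_BYTES:
            raise ValueError("oversized header section")
        chunk = recv_chunk()
        if not chunk:
            raise ValueError("client closed before headers completed")
        buf += chunk
    return buf.split(b"\r\n\r\n", 1)[0]
```

A slowloris client defeats a server without such a deadline by sending one header byte at a time, pinning a worker per connection indefinitely.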
        
               | mr_person wrote:
               | As someone who designs load-balancer solutions for a
               | living I cannot agree with this more.
               | 
               | I likewise assume that all servers are insecure, always,
               | and we do not want them exposed without a sane load
               | balancer layer.
               | 
               | Your server was probably not made to be exposed to the
               | public internet. Do not expose it to the public internet.
        
               | koakuma-chan wrote:
               | > We don't do this because it's a performance hog and we
               | assume you're already reverse proxying behind any
               | responsible front-end server
               | 
               | What application servers have you written? I have never
               | seen an application server readme say DON'T EXPOSE
               | DIRECTLY TO THE INTERNET, WE ASSUME YOU USE REVERSE
               | PROXY.
        
               | SahAssar wrote:
               | > Nobody, nobody, writes application servers with the
               | intent of having them exposed to the public internet
               | 
               | For rust, go, lua (via nginx openresty) and a few others
               | this is a viable path. I probably wouldn't do it with
               | node (or bun or deno), python, or similar but there are
               | languages where in certain circumstances it is reasonable
               | and might be better.
        
             | harshreality wrote:
             | Speculative execution cpu bugs. Or whatever the next class
             | of problems is that can expose bits of process memory
             | without software memory bugs.
             | 
             | That's already a fringe case. Do you really think
             | everyone's writing web applications in a language like rust
             | without any unsafe (or equivalent)?
        
             | kreetx wrote:
          | You use a reverse proxy because whenever you "deploy to
          | prod", you'll be using one anyway; thus by not having TLS
          | in your app, you avoid building something you don't actually
          | need.
        
         | guappa wrote:
         | C fast, the rest slow. You don't want to serve static assets in
         | non-C.
        
           | SahAssar wrote:
           | If you call sendfile with kTLS I imagine it'd be fast in any
           | language
        
       | treve wrote:
       | First 80% of the article was great, but it ends a bit handwavey
       | when it gets to its conclusion.
       | 
        | One thing the article gets wrong: non-encrypted HTTP/2 does
        | exist. Not for browsers, but it works well between a load
        | balancer and your application.
        
         | fragmede wrote:
         | Not according to Edward Snowden, if you're Yahoo and Google.
        
           | ChocolateGod wrote:
           | You can just add encryption to your backend private network
           | (e.g. Wireguard)
           | 
           | Which has the benefit of encrypting everything and avoids the
           | overhead of starting a TLS socket for every http connection.
        
             | jeroenhd wrote:
             | If you're going that route, you may as well just do HTTPS
             | again. If you configure your TLS cookies and session
             | resumption right, you'll get all of the advantages of fancy
             | post-quantum crypto without having to go back to the days
             | of manually setting up encrypted tunnels like when IPSec
             | did the rounds.
        
         | tuukkah wrote:
         | Do you want to risk the complexity and potential performance
         | impact from the handshake that the HTTP/2 standard requires for
         | non-encrypted connections? Worst case, your client and server
          | tooling clash in a way that every request becomes two requests
         | (before the actual h2c request, a second one for the required
         | HTTP/1.1 upgrade, which the server closes as suggested in the
         | HTTP/2 FAQ).
        
           | arccy wrote:
           | most places where you'd use it use h2c prior knowledge, that
           | is, you just configure both ends to only speak h2c, no
           | upgrades or downgrades.
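
Concretely, "prior knowledge" means the client skips the Upgrade dance and opens the connection with the fixed HTTP/2 preface. A sketch of what hits the wire first (the preface and frame layout are from RFC 9113; the helper names are made up):

```python
# The 24-octet client connection preface every HTTP/2 connection starts
# with (RFC 9113, section 3.4). With h2c prior knowledge, the client
# sends this immediately instead of an HTTP/1.1 Upgrade request.
H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def empty_settings_frame():
    # 9-byte frame header: 24-bit length (0), type 0x4 = SETTINGS,
    # flags 0, 31-bit stream id 0. The preface must be followed by a
    # SETTINGS frame, and an empty one is valid.
    return (0).to_bytes(3, "big") + bytes([0x04, 0x00]) + (0).to_bytes(4, "big")

def h2c_prior_knowledge_opener():
    return H2_PREFACE + empty_settings_frame()
```

A server configured for prior-knowledge h2c simply expects these bytes; a server expecting HTTP/1.1 will reject them, which is why both ends must agree up front.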
        
         | byroot wrote:
         | > One thing the article gets wrong is that non-encrypted HTTP/2
         | exists
         | 
         | Indeed, I misread the spec, and added a small clarification to
         | the article.
        
       | feyman_r wrote:
       | CDNs like Akamai still don't support H2 back to origins.
       | 
       | That's likely not because of the wisdom in the article per se,
       | but because of rising complexity in managing streams and
       | connections downstream.
        
       | wczekalski wrote:
       | It is very useful for long lived (bidirectional) streams.
        
         | m00x wrote:
         | Only if you're constrained on connections. The reason that
         | HTTP2 is much better for websites is because of the slow starts
         | of TCP connections. If you're already connected, you don't
         | suffer those losses, and you benefit from kernel muxing.
        
           | 0x457 wrote:
           | You've missed the bidirectional part.
        
             | nightpool wrote:
             | How is http2 bidirectional streams better than websockets?
             | I thought they were pretty much equivalent.
        
               | 0x457 wrote:
                | Well, IMO h2 streams are more fleshed out and offer
                | better control than websockets, but that's just my
                | opinion. In fact, websockets are your only "proper"
                | option if you want that bidirectional stream to be
                | binary: browsers don't expose that portion of h2 to JS.
               | 
               | Here is a silly thing that is possible with h2 over a
               | single connection, but not with websockets:
               | 
               | Multiple page components (Islands) each have their own
               | stream of events over a single h2 connection. With
               | websockets, you will need to roll your own
               | multiplexing[1].
               | 
               | [1]: I think you can multiplex multiple websockets over a
               | single h2 connection tho, but don't quote me on this.
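
The roll-your-own multiplexing mentioned in [1] usually amounts to tagging every message with a channel id; a toy sketch (this framing scheme is invented here, not any standard):

```python
import json

def mux_encode(channel, payload):
    # Wrap a payload with the id of the per-component "stream" it
    # belongs to, so many components can share one websocket.
    return json.dumps({"ch": channel, "data": payload})

def mux_decode(message):
    # Recover the channel id and payload on the other side.
    obj = json.loads(message)
    return obj["ch"], obj["data"]
```

h2 provides this demultiplexing at the protocol level, plus per-stream flow control and prioritization, which this toy version lacks.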
        
       | LAC-Tech wrote:
       | _Personally, this lack of support doesn't bother me much, because
       | the only use case I can see for it, is wanting to expose your
       | Ruby HTTP directly to the internet without any sort of load
       | balancer or reverse proxy, which I understand may seem tempting,
       | as it's "one less moving piece", but not really worth the trouble
       | in my opinion._
       | 
       | That seems like a massive benefit to me.
        
       | miyuru wrote:
       | The TLS requirement of HTTP/2 also hindered HTTP/2 origin
       | uptake. The TLS handshake adds latency and is unnecessary in
       | some cases. (This is mentioned under the "Extra Complexity"
       | heading in the article.)
        
         | tankenmate wrote:
         | For HTTP/3 you get 0-RTT however which largely mitigates this.
        
           | stavros wrote:
            | 0-RTT _resumption_ (unless I'm mistaken), which doesn't help
           | with the first connection (but that might be OK).
        
             | rochacon wrote:
              | Correct; to achieve 0-RTT the application needs to
              | perform the handshake/certificate exchange at least
              | once - otherwise, how would it encrypt the payload? This
              | could be cached preemptively IIRC, but it is not worth
              | it.
             | 
             | The problem will be that QUIC uses more userland code and
             | UDP is not as optimized as TCP inside kernels. So far, the
             | extra CPU penalty has discouraged me from adopting QUIC
             | everywhere, I've kept it mostly on the edge-out where the
             | network is far less reliable.
        
           | bawolff wrote:
            | Umm, don't you get 0-RTT resumption in all versions of
            | HTTP? It's a TLS feature, not an HTTP feature. It does not
            | require QUIC.
        
       | Guthur wrote:
       | The RFC said "SHOULD NOT", not "MUST NOT" - couldn't we have
       | just ignored the 2-connection limit?
        
         | KingMob wrote:
         | That's what browsers actually did.
        
       | jchw wrote:
       | Personally, I'd like to see more HTTP/2 support. I think HTTP/2's
       | duplex streams would be useful, just like SSE. In theory,
       | WebSockets do cover the same ground, and there's also a way to
       | use WebSockets over HTTP/2 although I'm not 100% sure how that
       | works. HTTP/2, though, elegantly handles all of it, and although
       | it's a bit complicated compared to HTTP/1.1, it's actually
       | simpler than WebSockets, at least in some ways, and follows the
       | usual conventions for CORS/etc.
       | 
       | The problem? Well, browsers don't have a JS API for bidirectional
       | HTTP/2 streaming, and many don't see the point, like this article
       | expresses. NGINX doesn't support end-to-end HTTP/2. Feels like a
       | bit of a shame, as the streaming aspect of HTTP/2 is a more
       | natural evolution of the HTTP/1 request/response cycle versus
       | things like WebSockets and WebRTC data channels. Oh well.
        
         | withinboredom wrote:
         | Why would you use nginx these days anyway? Caddy is ~4x the
         | performance on proxying in my tests.
        
           | evalijnyi wrote:
            | Is it just me, or did anyone else completely dismiss Caddy
            | because of its opening sentence?
           | 
           | >Caddy is a powerful, extensible platform to serve your
           | sites, services, and apps, written in Go.
           | 
           | To me it reads that if your application is not written in Go,
           | don't bother
        
             | unification_fan wrote:
             | Why should a reverse proxy give a single shit about what
             | your lang application is written in
        
               | evalijnyi wrote:
                | It shouldn't, which is why I think the wording there
                | is strange. Nginx doesn't market itself as a "platform
                | to serve your sites, services, and apps, written in
                | C". Reading the first sentence, I don't even know what
                | Caddy is - what does "platform" mean in this context?
                | Arriving on Nginx's site, the first sentence visible
                | to me is
               | 
               | >nginx ("engine x") is an HTTP web server, reverse proxy,
               | content cache, load balancer, TCP/UDP proxy server, and
               | mail proxy server.
               | 
               | Which is perfect
        
               | suraci wrote:
               | when it says 'written in Go', the subtext is - i'm fast,
               | i'm new, i'm modern, go buddies love me please
               | 
               | the better one is 'written in rust', the subtext is - i'm
               | fast, i'm new, i'm futurism, and i'm memory-safe, rust
               | buddies love me please
               | 
               | --- cynicism end ---
               | 
                | i do think sometimes it's worth noting the underlying
                | tech stack. for example, when a web server claims it's
                | based on libev, i know it's non-blocking
        
               | jchw wrote:
               | Back when Caddy first came out over 10 years ago, the
               | fact that it was written in Go was just simply more
               | notable. For Go, it also at least tells you the software
               | is in a memory-safe programming language. Now neither of
               | those things is really all that notable, for new
               | software.
        
               | PeterCorless wrote:
               | I didn't even read the article, but I love the comments
               | on the thread.
               | 
               | Yes. The implementation language of a system should not
               | matter to people in the least. However, they are used as
               | a form of prestige by developers and, sometimes, as a
               | consumer warning label by practitioners.
               | 
               | "Ugh. This was written in <language-I-hate>."
               | 
               | "Ooo! This was written in <language-I-love>!"
        
               | cruffle_duffle wrote:
               | Pretty sure Ruby on Rails sites were the same way.
        
               | eichin wrote:
               | Python certainly was too, back in the day. It feels like
               | it's roughly a "first 10 years of the language" thing,
               | maybe stretched another 5 if there's an underdog aspect
                | (like being interpreted).
        
               | p_ing wrote:
               | For an open source product, it's fun to say "written in X
               | language". It also advertises the project to developers
               | who may be willing to contribute.
               | 
               | If you put "product made with Go", I'm not going to
               | contribute as I don't know Go, though that wouldn't
               | prevent me from using it should it fit my needs. But if
               | you wrote your project in .NET, I may certainly be
               | willing to contribute.
        
             | jeroenhd wrote:
             | The Go crowd, like the Rust crowd, likes to advertise the
              | language their software is written in. I agree that that
              | specific sentence is a bit ambiguous, though - it reads
              | as if Caddy were some kind of middleware that hooks into
              | Go applications.
             | 
             | It's not, it's just another standalone reverse proxy.
        
               | SteveNuts wrote:
               | > The Go crowd, like the Rust crowd, likes to advertise
               | the language their software is written in.
               | 
               | Probably because end users appreciate that usually that
               | means a single binary + config file and off you go. No
               | dependency hell, setting up third party repos, etc.
        
               | 0x457 wrote:
               | > Probably because end users appreciate that usually that
               | means a single binary + config file and off you go. No
               | dependency hell, setting up third party repos, etc.
               | 
               | Until you have to use some plugin (e.g. cloudflare to
               | manage DNS for ACME checks), now it's exactly "dependency
               | hell, setting up third party repos, etc."
               | 
               | I also fully expect to see a few crashes from unchecked
               | `err` in pretty much any Go software. Also, nginx
               | qualifies for `single binary + config`, it's just NGINX
               | is for infra people and Caddy is for application
               | developers.
        
               | airstrike wrote:
               | Fortunately I don't think any of that applies to Rust ;-)
        
               | 0x457 wrote:
                | Actually, all of it applies to Rust. The only stable
                | ABI in Rust is the C ABI, and IMO at that point it
                | stops being Rust. Even dynamically loading a Rust
                | library in a Rust application is unsafe and only
                | expected to work when both are compiled with the same
                | version. In a plugin context, it's the same as what
                | Caddy makes you do.
               | 
                | However, the Rust Evangelical Strike Force
                | successfully infiltrated the WASM committee, and when
                | WASM Components stabilize, they can be used for
                | plugins in some cases (see Zed and zellij). Go can use
                | them as well; Rust is just the first (only?) to
                | support the preview-2 component model.
        
               | airstrike wrote:
               | Yeah, I don't really do dynamic loading in my corner of
               | Rust. And I can always target some MSRV, cargo package
               | versions, and be happy with it. Definitely beats the
               | dependency hell I've had to deal with elsewhere
        
               | 0x457 wrote:
               | Don't get me wrong, I love rust and use it almost every
                | day. Doing `cargo run` in a project and having it
                | handle everything is good. This gets lost once you
                | start working in a plugin context, because now you're
                | not dealing with your own neatly organized workspace;
                | you're working across multiple workspaces from
                | different people.
               | 
               | IIRC it's more than just MSRV or even matching version
               | exactly. It also requires flags that were used to compile
               | rustc match (there is an escape hatch tho).
        
               | 0x457 wrote:
               | When I see software written in Go, I know that it has a
               | very sad plugin support story.
        
               | kbolino wrote:
               | Terraform providers seem to work pretty well, but as far
               | as I know, they're basically separate executables and the
               | main process communicates with them using sockets.
        
               | 0x457 wrote:
               | Yes, works very well for terraform. You probably can see
               | why it's not going to work for a webserver?
        
           | theflyinghorse wrote:
           | I have an nginx running on my VPS supporting my startup. Last
           | time I had to touch it was about 4 years ago. Quality
           | software
        
           | jasonjayr wrote:
           | nginx's memory footprint is tiny for what it delivers. A
           | common pattern I see for homelab and self-hosted stuff is a
           | lightweight bastion VPS in a cloud somewhere proxying
            | requests to more capable on-premise hardware over a VPN
            | link. Using a cheap < $5/mo VPS means 1GB or less of RAM,
            | so you have to watch closely what is running on that host.
        
             | neoromantique wrote:
             | To be fair 1GB is a lot, both caddy and nginx would feel
             | pretty good with it I'd imagine.
        
               | Aurornis wrote:
               | 1GB would be for everything running on the server, not
               | just the reverse proxy.
               | 
               | For small personal projects, you don't usually buy a
               | $5/month VPS just to use as a dedicated reverse proxy.
        
               | sophacles wrote:
                | 1GB for a VPS that runs an HTTP load balancer/reverse
                | proxy and a handful of IPsec or WireGuard tunnels back
                | to the app servers (origin) is overkill. You could
                | successfully run that in 512MB, and probably even
                | 256MB. (That's the scenario described.)
               | 
                | What needs to run on this that's a memory hog, making
                | 512MB too small? By my (very rough) calculations you'd
                | need 50-100MB for kernel + systemd + sshd + nginx base
                | needs + tunnels home. That leaves the rest for per-
                | request processing.
               | 
               | Each request starts needing enough RAM to parse the https
               | headers into a request object, open a connection back to
               | the origin, and buffer a little bit of traffic that comes
               | in while that request is being processed/origin
                | connection opens. After that you only need to maintain
                | 2 connections plus some buffer space - generously 50KB
               | initially and 10KB ongoing. There's enough space for a
               | thousand concurrent requests in the ram not used by the
               | system. Proxying is fairly cheap - the app servers (at
               | the origin) may need much much more, but that's not the
               | point of the VPS being discussed.
               | 
               | Also worth noting that the cheap VPS is not a per-project
               | cost - that is the reverse proxy that handles all HTTP
               | traffic into your homelab.
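The budget in the comment above works out roughly as follows; every number here is that comment's rough estimate, not a measurement.

```python
# Back-of-envelope check of the RAM budget described above. All figures
# are the commenter's rough estimates, not measurements.

MB = 1024 * 1024
vps_ram        = 512 * MB       # the "512MB is enough" scenario
base_system    = 100 * MB       # kernel + systemd + sshd + nginx + tunnels
per_req_setup  = 50 * 1024      # parse headers, open origin connection
per_req_steady = 10 * 1024      # two sockets + small ongoing buffers

headroom = vps_ram - base_system
print(headroom // per_req_setup)    # thousands of requests mid-setup
print(headroom // per_req_steady)   # tens of thousands in steady state
```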
        
               | eikenberry wrote:
               | 1 GB should be way more than either should need. I run
                | nginx, unbound, postfix, dovecot plus all the normal stuff
               | (ssh, systemd, etc) for a Linux system on a VPS w/ 500MB
               | of RAM. Currently the system has ~270MB used. It actually
                | has 1GB available due to a plan auto-upgrade, but I've
                | never bothered, as I just don't need it.
        
           | Aurornis wrote:
           | I really like Caddy, but these nginx performance comparisons
           | are never really supported in benchmarks.
           | 
           | There have been numerous attempts to benchmark both (One
           | example: https://blog.tjll.net/reverse-proxy-hot-dog-eating-
           | contest-c... ) but the conclusion is almost always that
           | they're fairly similar.
           | 
           | The big difference for simple applications is that Caddy is
           | easier to set up, and nginx has a smaller memory footprint.
           | Performance is similar between the two.
        
             | otterley wrote:
             | AFAIK both proxies are capable of serving at line rate for
             | 10Gbps or more at millions of concurrent connections. I
             | can't possibly see how performance would significantly
             | differ if they're properly configured.
        
           | otterley wrote:
           | You're going to need to show your homework for this to be a
           | credible claim.
        
           | p_ing wrote:
           | Why would you use either when there is OpenBSD w/ carp +
           | HAProxy?
           | 
           | There's lots of options out there. I mean, even IIS can do RP
           | work.
           | 
           | Ultimately, I would prefer a PaaS solution over having to run
           | a couple of servers.
        
         | KingMob wrote:
         | Yeah, it's a shame you can't take advantage of natural HTTP/2
         | streaming from the browser. There's the upcoming WebTransport
         | API (https://developer.mozilla.org/en-
         | US/docs/Web/API/WebTranspor...), but it could have been added
         | earlier.
        
         | jonwinstanley wrote:
         | I thought http/2 was great for reducing latency for JS
         | libraries like Turbo Links and Hotwire.
         | 
         | Which is why the Rails crowd want it.
         | 
         | Is that not the case?
        
           | the_duke wrote:
           | H2 still suffers from head of line blocking on unstable
           | connections (like mobile).
           | 
           | H3 is supposed to solve that.
        
         | timewizard wrote:
         | > elegantly
         | 
         | There is a distinct lack of elegance in the HTTP/2 protocol.
         | It's exceptionally complex and it has plenty of holes in it.
         | That it simply does a job does not earn it "elegant."
        
           | mikepurvis wrote:
           | The same arguments apply to WebSockets probably-- yes the
           | implementation is a little hairy, but if the end result is a
           | clean abstraction that does a good job of hiding that
           | complexity from the rest of the stack, then it's elegant.
        
           | jchw wrote:
           | Honestly, I don't understand this critique. The actual
            | protocol is pretty straightforward for what it does. I'm not
           | sure it can be much simpler given the inflexible
           | requirements. I find it more elegant than HTTP/1.1.
           | 
           | Versus HTTP/1.1, some details are simplified by moving the
           | request and status line parts into headers. The same HEADERS
           | frame type can be used for both the headers and trailers on a
           | given stream. The framing protocol itself doesn't really have
           | a whole lot of cruft, and versus HTTP/1 it entirely
           | eliminates the need for the dancing around with Content-
           | Length, chunked Transfer-Encoding, and trailers.
           | 
           | In practice, a lot of the issues around HTTP/2
           | implementations really just seem to be caused by trying to
           | shoehorn it into existing HTTP/1.1 frameworks, where the
           | differences just don't mesh very well (e.g. Go has some ugly
           | problems here) or just simply a lack of battle-testing due to
           | trouble adopting it (which I personally think is mainly
           | caused by the difficulty of configuring it. Most systems will
           | only use HTTP/2 by default over TLS, after all, so in many
           | cases end-to-end HTTP/2 wasn't being tested.)
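As a small illustration of the framing simplicity described above (this sketch is editorial, not from the thread): the fixed 9-byte HTTP/2 frame header from RFC 7540 section 4.1 can be parsed in a few lines, and every frame declares its own length, which is what removes the Content-Length/chunked dance.

```python
# The 9-byte HTTP/2 frame header (RFC 7540 section 4.1):
# 24-bit payload length, 8-bit type, 8-bit flags, then 1 reserved bit
# and a 31-bit stream identifier.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}

def parse_frame_header(buf):
    length = int.from_bytes(buf[0:3], "big")
    ftype, flags = buf[3], buf[4]
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id

# A 13-byte HEADERS frame on stream 1 with the END_HEADERS flag (0x4):
hdr = (13).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(hdr))  # (13, 'HEADERS', 4, 1)
```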
        
             | SlightlyLeftPad wrote:
              | From the perspective of a user, where it starts to seem
              | inelegant is when gRPC comes into the picture. You get
              | gRPC to function, but then plain HTTP traffic breaks,
              | and vice versa. It seems to come down to odd
              | implementation details in specific load balancer
              | products, when in theory all of it should operate the
              | same way - but it doesn't.
        
               | jrockway wrote:
               | grpc doesn't really do anything special on top of http/2.
               | Load balancers that are aware of http/2 on both sides
               | shouldn't have any trouble with either.
               | 
               | The problem that people run into load balancing grpc is
               | that they try to use a layer 4 load balancer to balance
               | layer 7 requests; that is, if there are 4 backends, the
               | load balancer tells you the address of one of them, and
               | then you wonder why the other 3 backends don't get 25% of
               | the traffic. That's because grpc uses 1 TCP connection
               | and it sends multiple requests over that connection
               | ("channel"). If your load balancer tells you the
               | addresses of all 4 servers, then you can open up 4
               | channels and load balance inside your application (this
               | was always the preferred approach at google, with a
               | control channel to gracefully drain certain backends,
               | etc.). If your load balancer is aware of http/2 at the
               | protocol level (layer 7), then you open up one channel to
               | your load balancer, which already has one channel for
               | each backend. When a request arrives, it inspects it and
               | picks a backend and proxies the rest of the exchange.
               | 
               | Ordinary http/2 works like this, it's just that you can
               | get away with a network load balancer because http
               | clients open new connections more regularly (consider the
               | lifetime of a browser page with the lifetime of a backend
               | daemon). Each new connection is a load balancing
               | opportunity for the naive layer 4 balancer. If you never
               | make new connections, then it never has an opportunity to
               | load balance.
               | 
               | grpc has plenty of complexity for "let applications do
               | their own load balancing", including built-in load
               | balancing algorithms and built-in service discovery and
               | health discovery (xDS); http/2 doesn't have any of this.
               | Whether these are actually part of grpc or just random
               | add-ons to popular client libraries is somewhat up for
               | debate, however.
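The imbalance described above can be shown with a toy simulation (an editorial sketch; the function names and numbers are made up): a layer-4 balancer picks a backend once per connection, so a gRPC client holding one long-lived channel sends everything to a single backend, while a layer-7 balancer picks per request.

```python
import random

# Layer-4 vs layer-7 balancing of long-lived gRPC channels, toy model.
# L4: backend chosen once at connect time; all requests follow it.
# L7: backend chosen per request (round-robin here).

def l4_counts(n_clients, reqs_per_client, n_backends, rng):
    counts = [0] * n_backends
    for _ in range(n_clients):
        pinned = rng.randrange(n_backends)   # one choice per connection
        counts[pinned] += reqs_per_client
    return counts

def l7_counts(n_clients, reqs_per_client, n_backends):
    counts = [0] * n_backends
    for i in range(n_clients * reqs_per_client):
        counts[i % n_backends] += 1          # one choice per request
    return counts

rng = random.Random(7)
# Two clients, four backends: with L4 at least two backends sit idle.
print(l4_counts(2, 1000, 4, rng))
print(l7_counts(2, 1000, 4))  # [500, 500, 500, 500]
```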
        
         | bawolff wrote:
         | I think the problem is that duplex communication on the web is
         | rarely useful except in some special cases, and usually harder
         | to scale as you have to keep state around and can't as easily
         | rotate servers.
         | 
         | For some applications it is important, but for most websites
         | the benefits just don't outweigh the costs.
        
         | KaiserPro wrote:
         | HTTP2 works great on the LAN, or if you have a really good
         | network.
         | 
         | It starts to really perform badly when you have dropped
         | packets. So any kind of medium quality wifi or 4/5g kneecaps
         | performance.
         | 
         | It was always going to do this, and as webpages get bigger, the
         | performance degradation increases.
         | 
         | HTTP2 fundamentally underperforms in the real world, and
         | noticeably so on mobile. (My company enthusiastically rolled
         | out http2 support when akamai enabled it.)
         | 
         | Personally I feel that websockets are a hack, and frankly
         | HTTP/3 should have been split into three: a file access
         | protocol, an arbitrary TCP-like pipe, and a metadata channel.
         | But web people love hammering workarounds onto workarounds,
         | so we are left with HTTP/3.
        
         | commandlinefan wrote:
         | It seems like the author is agreeing that HTTP/2 is great (or
         | at least good) for browser -> web server communication, but not
         | useful for the REST-style APIs that pervade modern app design.
         | He makes a good case, but HTTP was never really a good choice
         | for API transport _either_, it just took hold because it was
         | ubiquitous.
        
       | a-dub wrote:
       | i think it probably varies from workload to workload. reducing
       | handshake time and compressing headers can have substantial
       | effects.
       | 
       | it's a shame server-side hinting/push never caught on. that was
       | always one of the more interesting features.
        
         | KingMob wrote:
         | It didn't catch on because it was hard to see the benefits, and
         | if done incorrectly, could actually make things slightly
         | _slower_.
         | 
         | Essentially, the server had to do things like compute RTT and
         | understand the status of the browser's cache to do optimal
         | push.
        
           | a-dub wrote:
           | hmm, maybe the client could include a same origin cache state
           | bloom filter in the request.
           | 
           | although i suppose it's a solved problem these days.
        
             | KingMob wrote:
             | Bloom filters are small relative to the amount of data they
             | can hash, but aren't realistic bloom filters still tens of
             | kB at minimum? Might be too heavyweight to send up.
        
               | a-dub wrote:
                | 1024 capacity, 1-in-1M false positive rate (false
                | positives fail safe - sending something the client
                | already has): 3.6KB
               | 
               | https://hur.st/bloomfilter/?n=1024&p=1.0E-6&m=&k=
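The 3.6KB figure follows from the standard bloom filter sizing formulas, m = -n ln(p) / (ln 2)^2 bits and k = (m/n) ln 2 hash functions:

```python
import math

# Standard bloom filter sizing: bits m = -n*ln(p)/(ln 2)^2 and optimal
# hash count k = (m/n)*ln(2), for n items and false-positive rate p.
def bloom_size(n, p):
    m = -n * math.log(p) / (math.log(2) ** 2)
    k = (m / n) * math.log(2)
    return math.ceil(m), round(k)

bits, hashes = bloom_size(1024, 1e-6)
print(bits / 8 / 1024, hashes)  # ~3.6 KB, using 20 hash functions
```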
        
               | KingMob wrote:
               | Ahh, nice. That's not too bad.
        
       | hiAndrewQuinn wrote:
       | The maximum number of connections thing in HTTP/1 always makes me
       | think of queuing theory, which gives surprising conclusions like
       | how adding a single extra teller at a one-teller bank can cut
       | wait times by 50 times, not just by 2.
       | 
       | However, I think the problem is the Poisson process isn't really
       | the right process to assume. Most websites which would run afoul
       | of the 2/6/8/etc connections being opened are probably trying to
       | open up a lot of connections _at the same time_. That's very
       | different from situations where only 1 new person arrives every 6
       | minutes on average, and 2 new people arriving within 1 second of
       | each other is a considerably rarer event.
       | 
       | [1]: https://www.johndcook.com/blog/2008/10/21/what-happens-
       | when-...
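The linked claim can be reproduced with a rough M/M/c simulation. This is an editorial sketch; the arrival and service rates (5.8/hr and 6/hr, i.e. a single teller running near saturation) are assumptions chosen to match the flavor of the linked post, not figures from it.

```python
import random

# Rough M/M/c queue simulation: Poisson arrivals, exponential service.
# With lam close to mu, one teller is nearly saturated; a second teller
# then cuts the average wait by far more than half.

def avg_wait(n_servers, lam=5.8, mu=6.0, n_customers=100_000, seed=42):
    rng = random.Random(seed)
    t, free_at, total_wait = 0.0, [0.0] * n_servers, 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)                       # next arrival
        i = min(range(n_servers), key=free_at.__getitem__)
        start = max(t, free_at[i])                      # wait for a free teller
        total_wait += start - t
        free_at[i] = start + rng.expovariate(mu)        # service time
    return total_wait / n_customers

one, two = avg_wait(1), avg_wait(2)
print(one / two)  # far more than a 2x improvement
```

(Theory agrees: M/M/1 gives a mean queueing delay of rho/(mu - lam), about 4.8 hours at these rates, while the M/M/2 delay is on the order of minutes.)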
        
         | tome wrote:
         | Can't it cut wait times by infinity? For example, if the
         | arrivals are at 1.1 per minute, and a teller processes 1 per
         | minute.
        
           | hinkley wrote:
           | You've forgotten about banker's hours :)
        
           | SJC_Hacker wrote:
            | Could be that; it could also be that the people who take a
            | long time at least aren't causing a bottleneck (assuming
            | there aren't two of them at the same time). So you have a
            | situation like this: the first person takes 10 minutes,
            | while there are 9 waiting in line who take only one minute
            | apiece. With one teller, the average wait time is ~13
            | minutes. With two tellers, it's now ~4 minutes.
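The arithmetic in that example can be checked directly; the exact averages come out to 12.6 and 3.6 minutes, so the second teller cuts the average wait by about 3.5x, not just 2x.

```python
# FIFO wait times for the example above: one 10-minute customer plus
# nine 1-minute customers, all in line at t=0.

def waits(service_times, n_tellers):
    free_at = [0.0] * n_tellers
    out = []
    for s in service_times:
        i = min(range(n_tellers), key=free_at.__getitem__)
        out.append(free_at[i])        # wait until that teller frees up
        free_at[i] += s
    return out

jobs = [10] + [1] * 9
one = sum(waits(jobs, 1)) / len(jobs)
two = sum(waits(jobs, 2)) / len(jobs)
print(one, two)  # 12.6 3.6
```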
           | 
            | Which is why it is highly annoying when there's only one
            | worker at the coffee stand, and there's always this one
            | jerk at the front of the queue who orders a latte when you
            | just want a coffee. With two workers, the people who just
            | want coffee won't have to wait 15 minutes for the latte
            | people.
            | 
            | And I've also noticed a social effect: when people wait a
            | long time, it seems to raise their expectations of the
            | eventual service - they want more out of the interaction,
            | so they take longer. Which makes the situation even worse.
        
         | hinkley wrote:
         | And if memory serves, if you care about minimizing latency
         | you want all of your workers running at an average of 60%
         | occupancy.
         | (Which is also pretty close to when I saw P95 times dog-leg on
         | the last cluster I worked on).
         | 
         | Queuing theory is _really_ weird.
        
       | _ache_ wrote:
       | I remember being bashed on HN for saying that HTTP is hard.
       | Yet I see nonsense here in the comments about HTTP. The whole
       | article is good, but:
       | 
       | > HTTP/2 is fully encrypted, so you need all your application
       | servers to have a key and certificate
       | 
       | Nope. h2c is a thing and is official. But the article is right:
       | the value HTTP/2 provides isn't for the LAN, so HTTP 1.1 or
       | HTTP/2 doesn't matter much there.
       | 
       | HTTP/3, however, is fully encrypted. "h3c" doesn't exist. So
       | yeah, HTTP/3 slows your connection; it isn't suited for the LAN
       | and should not be used there.
       | 
       | BUT if you actually want to encrypt even in your LAN, use
       | HTTP/3, not encrypted HTTP/2. You will have a small but not
       | negligible gain from 0-RTT.
        
         | gmokki wrote:
         | I would not use HTTP/3 on a LAN. Even the latest Linux
         | kernels struggle with it. HTTP/1 (i.e. TCP) has fully
         | supported encryption and other offloads. UDP still consumes
         | much more CPU for the same amount of traffic.
        
           | _ache_ wrote:
            | Do you have a source for that? I'm very interested. There
            | is no technical reason for UDP to be slower than TCP (at
            | the CPU level).
            | 
            | The only field that is computed in UDP is the checksum,
            | and the same exists in TCP; it must be recomputed each
            | time something actually re-routes the packet (e.g. bridge
            | to VM), since the TTL is decreased.
            | 
            | So I doubt your assertion.
            | 
            | _____
            | 
            | Writing my comment, I understood what you are talking
            | about. There is a bunch of encryption in HTTP/3 that has
            | to be done in user mode. In HTTP/2 it was sometimes done
            | in kernel mode (kTLS), so it was quicker. The slowness
            | comes from the CPU cost of copying data out of kernel
            | mode. I didn't follow the whole story, so I trust you on
            | this.
        
             | 0x457 wrote:
             | https://docs.kernel.org/networking/tls-offload.html
        
             | SahAssar wrote:
              | Encryption is one thing (if you run kTLS, which is still
              | not done in most manual setups), but the much bigger
              | factor IIRC is how much of the networking stack needs to
              | run in userspace and has not been given the optimization
              | love of TCP. If you
             | compared non-kTLS h2 with non-kTLS h3 over a low-latency
             | link the h2 connection could handle a lot more traffic
             | compared to h3.
             | 
             | That is not to say that h3 does not have its place, but the
             | networking stacks are not optimized for it yet.
        
             | koakuma-chan wrote:
             | https://news.ycombinator.com/item?id=41890784
        
       | kam1kazer wrote:
       | nah, I'm using HTTP/3 everywhere
        
       | kittikitti wrote:
       | If we ever get to adopting this, I will send every byte to a
       | separate IPv6 address. Big Tech surveillance wouldn't work, so
       | many don't see a point, like the author.
        
       | fulafel wrote:
       | There's a security angle: load balancers have big problems with
       | request smuggling. HTTP/2 changes the picture - maybe someone
       | more up to date can say whether it's currently better or worse?
       | 
       | ref: https://portswigger.net/web-security/request-smuggling
        
         | albinowax_ wrote:
         | Yes HTTP/2 is much less prone to exploitable request smuggling
         | vulnerabilities. Downgrading to H/1 at the load balancer is
         | risky.
        
         | nitely wrote:
         | In theory request smuggling is not possible with end-to-end
         | HTTP/2. It's only possible if there is a downgrade to HTTP/1 at
         | some point.
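       To make the downgrade risk above concrete, here is an illustrative
       CL.TE desync request (hypothetical bytes in the style of the
       PortSwigger reference; not a working exploit). A front-end that
       honors Content-Length sees one 13-byte body, while a back-end that
       honors Transfer-Encoding sees an empty chunked body and leaves the
       trailing bytes in its buffer as the start of the *next* request.
       HTTP/2 frames carry an explicit length and have no
       Transfer-Encoding, so this ambiguity needs an HTTP/1 downgrade to
       arise.

```python
# Hypothetical CL.TE request: the two length mechanisms disagree.
smuggle = (
    b"POST / HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n\r\n"   # chunked body: just the terminating zero-length chunk
    b"SMUGGLED"    # bytes a chunked-parsing back-end leaves in its buffer
)
body = smuggle.split(b"\r\n\r\n", 1)[1]
per_content_length = body[:13]                      # front-end's view
per_chunked = body[:body.index(b"0\r\n\r\n") + 5]   # back-end's view
leftover = body[len(per_chunked):]  # prefixes the next request on the wire
print(leftover)  # b'SMUGGLED'
```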
        
           | SahAssar wrote:
            | An h2 proxy usually wouldn't proxy through the http2
            | connection; it would instead accept h2 and load-balance each
            | request to a backend over an h2 (or h1) connection.
           | 
           | The difference is that you have a h2 connection to the proxy,
           | but everything past that point is up to the proxies routing.
           | End-to-end h2 would be more like a websocket (which runs over
           | HTTP CONNECT) where the proxy is just proxying a socket
           | (often with TLS unwrapping).
        
       | sluongng wrote:
       | Hmm it's weird that this submission and comments are being shown
       | to me as "hours ago" while they are all 2 days old
        
         | layer8 wrote:
         | https://news.ycombinator.com/item?id=26998309
        
         | wglb wrote:
         | Sometimes the moderators will effectively boost a post that
         | they think is interesting so it gets more views.
        
       | wiggidy wrote:
       | Yeah yeah, whatever, just make it work in the browser so I can do
       | gRPC duplex streams, thank you very much.
        
       | nitwit005 wrote:
       | I'd agree it's not critical, but discard the assumption that
       | requests within the data center will be fast. People have to send
        | requests to third parties, which will often be slow. Hopefully
        | not as slow as across the Atlantic, but still orders of
        | magnitude worse than an internal query.
       | 
       | You will often be in the state where the client uses HTTP2, and
       | the apps use HTTP2 to talk to the third party, but inside the
       | data center things are HTTP1.1, fastcgi, or similar.
        
         | nightpool wrote:
         | Why does HTTP2 help with this? Load balancers use one keepalive
         | connection per request and don't experience head of line
         | blocking. And they have slow start disabled. So even if the
         | latency of the _final_ request is high, why would HTTP2 improve
         | the situation?
        
       | najmlion wrote:
        | HTTP/2 is needed for a gRPC route on OpenShift.
        
       | jiggawatts wrote:
       | Google _measured_ their bandwidth usage and discovered that
       | something like half was just HTTP headers! Most RPC calls have
       | small payloads for both requests and responses.
       | 
       | HTTP/2 compresses headers, and that alone can make it worthwhile
       | to use throughout a service fabric.
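       A back-of-the-envelope sketch of the claim above (the header set
       and payload here are made up, not Google's measurements): for a
       small JSON RPC payload, uncompressed HTTP/1.1 request headers
       dominate the bytes on the wire, which is the overhead that HPACK
       header compression in HTTP/2 targets.

```python
# Hypothetical small internal RPC request: headers vs payload bytes.
headers = (
    b"GET /api/v1/users/42 HTTP/1.1\r\n"
    b"Host: users.internal.example\r\n"
    b"User-Agent: service-client/1.0\r\n"
    b"Accept: application/json\r\n"
    b"Authorization: Bearer " + b"a" * 64 + b"\r\n"
    b"\r\n"
)
payload = b'{"id": 42, "name": "Ada"}'
header_share = len(headers) / (len(headers) + len(payload))
print(f"{len(headers)} header bytes vs {len(payload)} payload bytes "
      f"({header_share:.0%} headers)")
```

       HPACK shrinks this by indexing headers that repeat across requests
       on a connection, so the second and later requests send only a few
       bytes per repeated header.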
        
       | gwbas1c wrote:
       | > So the main motivation for HTTP/2 is multiplexing, and over the
       | Internet ... it can have a massive impact.
       | 
       | > But in the data center, not so much.
       | 
        |  _That's a very bold claim._
       | 
       | I'd like to see some data that shows little difference with and
       | without HTTP/2 in the datacenter before I believe that claim.
        
         | 0xbadcafebee wrote:
         | Datacenters don't typically have high latency, low bandwidth,
         | and varying availability issues. If you have a saturated
         | http/1.1 network (or high CPU use) within a DC you can usually
         | just add capacity.
        
       | immibis wrote:
       | If your load balancer is converting between HTTP/2 and HTTP/1.1,
       | it's a reverse proxy.
       | 
       | Past the reverse proxy, is there a point to HTTP at all? We could
       | also use SCGI or FastCGI past the reverse proxy. It does a better
       | job of passing through information that's gathered at the first
       | point of entry, such as the client IP address.
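       For illustration, a minimal SCGI request encoder following the
       spec's netstring framing (the header values are made up). Note how
       REMOTE_ADDR carries the client IP the front-end gathered, so the
       backend needs no X-Forwarded-For parsing.

```python
def scgi_encode(headers, body=b""):
    """Frame an SCGI request: netstring of NUL-separated header pairs.

    Per the SCGI spec, CONTENT_LENGTH must come first and SCGI: 1 must
    be present among the headers.
    """
    items = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")] + list(headers)
    blob = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in items)
    # Netstring: "<length>:<data>," followed by the raw body bytes.
    return str(len(blob)).encode() + b":" + blob + b"," + body

# Hypothetical request carrying the client IP gathered at the edge:
req = scgi_encode(
    [("REQUEST_METHOD", "GET"),
     ("REQUEST_URI", "/"),
     ("REMOTE_ADDR", "203.0.113.9")]
)
```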
        
       | monus wrote:
       | > bringing HTTP/2 all the way to the Ruby app server is
       | significantly complexifying your infrastructure for little
       | benefit.
       | 
        | I think the author wrote it with encryption-is-a-must in mind,
        | and after he corrected those parts, the article just ended up
        | with these weird statements. What complexity is introduced apart
        | from changing the serving library in your main file?
        
         | byroot wrote:
          | You are correct about the first assumption, but even without
          | encryption, dealing with multiplexing significantly
          | complexifies things, so I still stand by that statement.
         | 
         | If you assume no multiplexing, you can write a much simpler
         | server.
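       A sketch of the difference byroot describes (illustrative, not
       code from the article or any real server): without multiplexing,
       a connection is a strict read-one-request / write-one-response
       loop with no per-stream state, while HTTP/2 obliges the server to
       demultiplex interleaved frames into per-stream buffers.

```python
def serve_serial(requests):
    """HTTP/1.1-style handling: one request fully processed at a time."""
    responses = []
    for req in requests:            # no interleaving, no stream table
        responses.append(b"echo:" + req)
    return responses

def demux_frames(frames):
    """HTTP/2-style handling: frames from many streams interleave, so
    the server must keep a table of partially received streams."""
    streams, complete = {}, []
    for stream_id, end_stream, data in frames:
        streams[stream_id] = streams.get(stream_id, b"") + data
        if end_stream:              # stream finished: emit and drop state
            complete.append((stream_id, streams.pop(stream_id)))
    return complete
```

       Even this toy demuxer needs bookkeeping that the serial loop
       avoids entirely; a real h2 server adds flow control, priorities,
       and HPACK state on top.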
        
         | hinkley wrote:
         | In a language that uses forking to achieve parallelism,
         | terminating multiple tasks at the same endpoint will cause
         | those tasks to compete. For some workflows that may be a
         | feature, but for most it is not.
         | 
         | So that's Python, Ruby, Node. Elixir won't care and C# and
         | Java... well hopefully the HTTP/2 library takes care of the
         | multiplexing of the replies, then you're good.
        
       ___________________________________________________________________
       (page generated 2025-02-27 23:00 UTC)