[HN Gopher] Server-Sent Events (SSE) Are Underrated
       ___________________________________________________________________
        
       Server-Sent Events (SSE) Are Underrated
        
       Author : Igor_Wiwi
       Score  : 300 points
       Date   : 2024-12-25 21:35 UTC (1 day ago)
        
 (HTM) web link (igorstechnoclub.com)
 (TXT) w3m dump (igorstechnoclub.com)
        
       | recursivedoubts wrote:
       | https://data-star.dev is a hypermedia-oriented front end library
       | built entirely around the idea of streaming hypermedia responses
       | via SSE.
       | 
       | It was developed using Go & NATS as backend technologies, but
       | works with any SSE implementation.
       | 
       | Worth checking out if you want to explore SSE and what can be
       | achieved w/it more deeply. Here is an interview with the author:
       | 
       | https://www.youtube.com/watch?v=HbTFlUqELVc
        
         | sudodevnull wrote:
         | Datastar author here, happy to answer any questions!
        
         | andersmurphy wrote:
         | +1 for recommending data-star. The combination of idiomorph
         | (thank you), SSE and signals is fantastic for making push based
         | and/or multiplayer hypermedia apps.
        
       | ramon156 wrote:
       | They're underrated when they work(tm)
       | 
        | Currently at work I'm having issues because:
        | 
        | - Auth between an embedded app and JavaScript's EventSource is
        | not working, so I have to resort to a Microsoft package which
        | doesn't always work.
        | 
        | - Not every tunnel is fond of keep-alive (Cloudflare), so I
        | had to switch to ngrok (until I found out they have a limit of
        | 20k requests).
       | 
       | I know this isn't the protocol's fault, and I'm sure there's
       | something I'm missing, but my god is it frustrating.
        
         | nchmy wrote:
          | Try this SSE client:
          | https://github.com/rexxars/eventsource-client
        
       | condiment wrote:
       | So it's websockets, only instead of the Web server needing to
       | handle the protocol upgrade, you just piggyback on HTTP with an
       | in-band protocol.
       | 
       | I'm not sure this makes sense in 2024. Pretty much every web
       | server supports websockets at this point, and so do all of the
       | browsers. You can easily impose the constraint on your code that
       | communication through a websocket is mono-directional. And the
       | capability to broadcast a message to all subscribers is going to
       | be deceptively complex, no matter how you broadcast it.
        
          | realPubkey wrote:
          | Yes, most servers support websockets. But unfortunately most
          | proxies and firewalls do not, especially in big company
          | networks. Suggesting that my users use SSE for my database
          | replication stream solved most of their problems. Also,
          | setting up an SSE endpoint is like 5 lines of code.
          | WebSockets require much more, and you also have to do things
          | like pings to ensure that the connection automatically
          | reconnects. SSE with the JavaScript EventSource API has all
          | you need built in:
          | 
          | https://rxdb.info/articles/websockets-sse-polling-webrtc-web...
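To illustrate the point about how little code an SSE endpoint needs, here is a minimal sketch using only Python's standard library. The `sse_frame` helper and `SSEHandler` names are ours, not from any library; a production server would add keepalives, `id:` fields, and reconnection handling.

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def sse_frame(data, event=None):
    """Serialize one event in the SSE wire format."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Multi-line payloads become multiple data: fields.
    for part in str(data).split("\n"):
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(3):  # a real server would loop until disconnect
            self.wfile.write(sse_frame(f"tick {i}").encode())
            self.wfile.flush()
            time.sleep(1)

# To serve: HTTPServer(("", 8000), SSEHandler).serve_forever()
```

On the client side, `new EventSource("/endpoint")` plus an `onmessage` handler is all that's needed, including automatic reconnection.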
        
           | the_mitsuhiko wrote:
           | SSE also works well on HTTP/3 whereas web sockets still
           | don't.
        
             | apitman wrote:
             | I don't see much point in WebSockets for HTTP/3.
             | WebTransport will cover everything you would need it for an
             | more.
        
               | the_mitsuhiko wrote:
               | That might very well be but the future is not today.
        
               | apitman wrote:
               | But why add it to HTTP/3 at all? HTTP/1.1 hijacking is a
               | pretty simple process. I suspect HTTP/3 would be
               | significantly more complicated. I'm not sure that effort
                | is worth it when WebTransport will make it obsolete.
        
               | the_mitsuhiko wrote:
               | It was added to HTTP/2 as well and there is an RFC.
               | (Though a lot of servers don't support it even on HTTP/2)
               | 
               | My point is mostly that SSE works well and is supported
                | and that has a meaningful benefit today.
        
               | leni536 wrote:
               | To have multiple independent websocket streams, without
               | ordering requirements between streams.
        
            | deaf_coder wrote:
            | Going slightly off on a tangent here: do FaaS cloud
            | providers like AWS, Cloudflare, etc. support SSE?
            | 
            | Last time I checked, they don't really support it.
        
       | programmarchy wrote:
       | Great post. I discovered SSE when building a chatbot and found
       | out it's what OpenAI used rather than WebSockets. The batteries-
       | included automatic reconnection is huge, and the format is
       | surprisingly human readable.
        
       | dugmartin wrote:
       | It doesn't mention the big drawback of SSE as spelled out in the
       | MDN docs:
       | 
       | "Warning: When not used over HTTP/2, SSE suffers from a
       | limitation to the maximum number of open connections, which can
       | be especially painful when opening multiple tabs, as the limit is
       | per browser and is set to a very low number (6)."
        
         | k__ wrote:
            | And over HTTP/2 and 3, are they efficient?
        
           | apitman wrote:
           | HTTP/2+ only uses a single transport connection (TCP or QUIC)
           | per server, and multiplexes over that. So there's essentially
           | no practical limit.
        
             | toomim wrote:
             | Except that browsers add a limit of ~100 connections even
              | with HTTP/2, for no apparent good reason.
        
         | RadiozRadioz wrote:
         | That is a very low number. I can think of many reasons why one
         | would end up with more. Does anyone know why it is so low?
        
           | apitman wrote:
            | Historical reasons. The HTTP/1.1 spec actually recommends
            | limiting to 2 connections per domain. That said, I'm not sure
           | why it's still so low. I would guess mostly to avoid
           | unintended side effects of changing it.
        
             | gsnedders wrote:
              | > The HTTP/1.1 spec actually recommends limiting to 2
              | connections per domain.
             | 
             | This is no longer true.
             | 
              | From RFC 9112, Section 9.4
             | (https://httpwg.org/specs/rfc9112.html#rfc.section.9.4):
             | 
             | > Previous revisions of HTTP gave a specific number of
             | connections as a ceiling, but this was found to be
             | impractical for many applications. As a result, this
             | specification does not mandate a particular maximum number
             | of connections but, instead, encourages clients to be
             | conservative when opening multiple connections.
        
               | apitman wrote:
                | If this were a MUST, would it have required a version
                | bump from 1.1?
        
              | dontchooseanick wrote:
              | Because you're supposed to use a single connection with
              | HTTP pipelining for all your resources [1].
              | 
              | When index.html loads 4 CSS and 5 JS files, those 10
              | resources in HTTP/1.0 needed 10 connections, with 10 TLS
              | negotiations (unless one resource loaded fast and you
              | could reuse its released connection).
              | 
              | With HTTP/1.1 pipelining you open only one connection,
              | including a single TLS negotiation, and ask for all 10
              | resources.
              | 
              | So why not only 1 per domain? IIRC it's because the first
              | resource, index.html, may take a lot of time to complete,
              | and race conditions suggest you use another connection
              | than the 'main thread', more or less. So basically 2 are
              | sufficient.
              | 
              | [1] https://en.m.wikipedia.org/wiki/HTTP_pipelining
        
               | immibis wrote:
               | HTTP pipelining isn't used by clients.
        
           | foota wrote:
           | Probably because without http/2 each would require a TCP
           | connection, which could get expensive.
        
           | raggi wrote:
           | The number was set while Apache was dominant and common
           | deployments would get completely tanked by a decent number of
           | clients opening more conns than this. c10k was a thing once,
           | these days c10m is relatively trivial
        
           | giantrobot wrote:
           | Because 30 years ago server processes often (enough) used
           | inetd or served a request with a forked process. A browser
           | hitting a server with a bunch of connections, especially over
           | slow network links where the connection would be long lived,
           | could swamp a server. Process launches were expensive and
           | could use a lot of memory.
           | 
           | While server capacity in every dimension has increased the
           | low connection count for browsers has remained. But even
           | today it's still a bit of a courtesy to not spam a server
           | with a hundred simultaneous connections. If the server
           | implicitly supports tons of connects with HTTP/2 support
           | that's one thing but it's not polite to abuse HTTP/1.1
           | servers.
        
         | SahAssar wrote:
         | There is little reason to not use HTTP/2 these days unless you
         | are not doing TLS. I can understand not doing HTTP/3 and QUIC,
         | but HTTP/2?
        
           | jiggawatts wrote:
            | Corporate proxy servers often downgrade connections to
            | HTTP/1.1 because of inertia and lazy vendors.
        
             | SahAssar wrote:
             | To do that they need to MITM _and tamper with_ the inner
             | protocol.
             | 
             | In my experience this is quite rare. Some MITM proxies
             | analyze the traffic, restrict which ciphers can be used,
             | block non-dns udp (and therefore HTTP/3), but they don't
             | usually downgrade the protocol from HTTP/2 to HTTP/1.
        
               | arccy wrote:
               | "tamper" sounds much more involved than what they (their
               | implementation) probably do: the proxy decodes the http
               | request, potentially modifies it, and uses the decoded
               | form to send a new request using their client, which only
               | speaks http/1
        
               | geoffeg wrote:
               | That hasn't been my experience at large corporations.
               | They usually have a corporate proxy which only speaks
               | HTTP 1.1, intercepts all HTTPS, and doesn't support
               | websockets (unless you ask for an exception) and other
               | more modern HTTP features.
        
               | dilyevsky wrote:
               | That's exactly what they're doing and it's still very
               | common in private networks
        
         | jesprenj wrote:
         | You can easily multiplex data over one connection/event stream.
         | You can design your app so that it only uses one eventstream
         | for all events it needs to receive.
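The SSE `event:` field gives you this multiplexing for free: name each logical stream and dispatch on the name. A rough sketch of the framing and a dispatcher follows (simplified parser with hypothetical event names; in a browser you would simply call `EventSource.addEventListener` per event name):

```python
def parse_sse(stream_text):
    """Parse a raw SSE stream into (event, data) pairs.
    Minimal illustration: ignores id:, retry:, and comment lines."""
    events = []
    event, data = "message", []
    for line in stream_text.split("\n"):
        if line.startswith(":"):          # comment / keepalive, skip
            continue
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:         # blank line dispatches the event
            events.append((event, "\n".join(data)))
            event, data = "message", []
    return events

# Three logical channels multiplexed over one stream.
stream = (
    "event: chat\ndata: hello\n\n"
    "event: presence\ndata: alice joined\n\n"
    "data: plain message\n\n"
)
for name, payload in parse_sse(stream):
    print(name, payload)
```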
        
           | raggi wrote:
           | This, it works well in a service worker for example.
        
             | nikeee wrote:
             | How does this work with a service worker? I've only managed
             | to do this via SharedWorker (which is not available on
             | Chrome on Android).
        
               | raggi wrote:
               | You can just open a stream in the service worker and push
               | events via postMessage and friends.
               | 
               | Another nice thing to do is to wire up a simple
               | filesystem monitor for all your cached assets that pushes
               | path & timestamp events to the service worker whenever
               | they change, then the service worker can refresh affected
               | clients too (with only a little work this is endgame
               | livereload if you're not constrained by your environment)
        
             | tomsmeding wrote:
             | The caniuse link in the OP, under Known Issues, notes that
             | Firefox currently does not support EventSource in a service
             | worker. https://caniuse.com/?search=EventSource
        
         | atombender wrote:
         | One of my company's APIs uses SSE, and it's been a big support
         | headache for us, because many people are being corporate
         | firewalls that don't do HTTP/2 or HTTP/3, and people often open
         | many tabs at the same time. It's unfortunately not possible to
         | detect client-side whether the limit has been reached.
         | 
         | Another drawback of SSE is lack of authorization header
         | support. There are a few polyfills (like this one [1]) that
         | simulate SSE over fetch/XHR, but it would be nice to not need
         | to add the bloat.
         | 
         | [1] https://github.com/EventSource/eventsource
        
           | robocat wrote:
           | Presumably you try SSE, and on failure fallback to something
           | else like WebSockets?
           | 
           | Push seems to require supporting multiple communication
           | protocols to avoid failure modes specific to one protocol -
           | and libraries are complex because of that.
        
             | mardifoufs wrote:
             | But then why not just use websockets?
        
               | virtue3 wrote:
               | From what I understand websockets are great until you
               | have to load balance them. And then you learn why they
               | aren't so great.
        
               | com2kid wrote:
                | I've scaled websockets before; it isn't that hard.
                | 
                | You need to scale up before your servers become
                | overloaded, and new connections go to the newly
                | brought-up server. It is a different mentality than
                | scaling stateless services, but it isn't super duper
                | hard.
        
               | hooli_gan wrote:
               | Can you suggest some resources to learn more about
               | Websocket scaling? Seems like an interesting topic
        
               | com2kid wrote:
               | Honestly I just flipped the right bits in the aws load
               | balancer (maintain persistent connections, just the first
               | thing you are told to do when googling aws load balancers
               | and web sockets) and setup the instance scaler to trigger
               | based upon "# open connections / num servers >
               | threshold".
               | 
               | Ideally it is based on the rate of incoming connections,
               | but so long as you leave enough headroom when doing the
               | stupid simple scaling rule you should be fine. Just
               | ensure new instances don't take too long to start up.
        
               | hamandcheese wrote:
               | My understanding is the hard part about scaling
               | WebSockets is that they are stateful and long lived
                | connections. That is also true of SSE. Is there some
                | other aspect of WebSockets that makes them harder to
                | scale than SSE?
               | 
               | I guess with WebSockets, if you choose to send messages
               | from the client to the server, then you have some
               | additional work that you wouldn't have with SSE.
        
           | nchmy wrote:
           | FYI, the dev of that library created a new, better Event
           | Source client
           | 
           | https://github.com/rexxars/eventsource-client
        
             | atombender wrote:
             | Yes, I know. We both work at Sanity, actually! The reason I
             | didn't mention it was that the newer library isn't a
             | straight polyfill; it offers a completely different
             | interface with async support and so on.
        
            | fitsumbelay wrote:
            | I hate to suggest a solution before testing it myself, so
            | apologies in advance, but I have a hunch that the Broadcast
            | Channel API can help you detect browser tab opens on the
            | client side. New tabs won't connect to the event source and
            | will instead listen for localStorage updates that the first
            | loaded tab makes.
            | 
            | https://www.google.com/search?q=can+I+use+BroadcastChannel+A.
            | ..
            | 
            | The problem in this case is how to handle the first tab
            | closing and re-assigning which tab then becomes the new
            | "first" tab that connects to the event source, but it may
            | be a low-LOE problem to solve.
            | 
            | Again, apologies for suggesting unproven solutions, but at
            | the same time I'm interested in the feedback this gets, to
            | see if it's near the right track.
        
             | b4ckup wrote:
             | This sounds like a good use case for using a service
             | worker. All tabs talk to the service worker and the worker
             | is the single instance that talks to the backend and can
             | use only one connection. Maybe there are some trade offs
             | for using SSE in web workers, I'm not sure.
        
               | Keithamus wrote:
               | BroadcastChannel is a better solution for a couple of
               | reasons. Service Workers are better at intercepting
               | network requests and returning items from a cache,
               | there's some amount of additional effort to do work
               | outside of that. The other thing is they're a little more
               | difficult to set up. A broadcast channel can be handled
               | in a couple lines of code, easily debuggable as they run
               | on the main thread, and they're more suited to the
               | purpose.
        
               | fitsumbelay wrote:
               | agreed. Workers was one of my first thoughts but I think
               | BroadcastChannel delivers with _much_ lower LOE
        
           | leni536 wrote:
           | Supposedly websockets (the protocol) support authorization
           | headers, but often there are no APIs for that in websocket
           | libraries, so people just abuse the subprotocols header in
           | the handshake.
        
             | apitman wrote:
             | I don't think the problem is libraries. Browsers don't
             | support this.
        
               | leni536 wrote:
               | Sure, I didn't mean to distinguish browsers and the JS
               | websocket API and websocket libraries in other languages.
        
          | nhumrich wrote:
          | HTTP/2 is controllable by you, since it's supported in every
          | browser. So the way to fix this limitation is to use HTTP/2.
        
           | jillesvangurp wrote:
            | Yes, use a proper load balancer that can do that. And use
            | HTTP/3, which is also supported by all relevant browsers at
            | this point. There's no good reason to build new things on
            | top of old things.
        
           | lolinder wrote:
           | This was already suggested and someone pointed out that some
           | corporate networks MITM everything without HTTP/2 support:
           | 
           | https://news.ycombinator.com/item?id=42512072
        
       | apitman wrote:
       | > Perceived Limitations: The unidirectional nature might seem
       | restrictive, though it's often sufficient for many use cases
       | 
       | For my use cases the main limitations of SSE are:
       | 
       | 1. Text-only, so if you want to do binary you need to do
       | something like base64
       | 
       | 2. Browser connection limits for HTTP/1.1, ie you can only have
       | ~6 connections per domain[0]
       | 
       | Connection limits aren't a problem as long as you use HTTP/2+.
       | 
       | Even so, I don't think I would reach for SSE these days. For less
       | latency-sensitive and data-use sensitive applications, I would
       | just use long polling.
       | 
       | For things that are more performance-sensitive, I would probably
       | use fetch with ReadableStream body responses. On the server side
       | I would prefix each message with a 32bit integer (or maybe a
       | variable length int of some sort) that gives the size of the
       | message. This is far more flexible (by allowing binary data), and
       | has less overhead compared to SSE, which requires 7 bytes
       | ("data:" + "\n\n") of overhead for each message.
       | 
       | [0]: https://stackoverflow.com/a/985704
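A sketch of the length-prefixed framing described above (Python; `frame` and `deframe` are illustrative names; a real client would feed chunks from a fetch `ReadableStream` into the buffer as they arrive):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its length as a 32-bit big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buf: bytes):
    """Split a byte buffer into complete messages.
    Returns (messages, leftover) so partial data can be retried later."""
    messages = []
    while len(buf) >= 4:
        (size,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + size:
            break  # message not fully received yet
        messages.append(buf[4:4 + size])
        buf = buf[4 + size:]
    return messages, buf

# Binary payloads pass through untouched, unlike text-only SSE.
wire = frame(b"hello") + frame(b"\x00\x01binary ok")
msgs, rest = deframe(wire)
```

The per-message overhead here is a fixed 4 bytes, versus the 7 bytes of `data:` plus the terminating blank line in SSE, and no base64 inflation for binary data.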
        
         | nchmy wrote:
         | You can do fetch and readable stream with SSE - here's an
         | excellent client library for that
         | 
         | https://github.com/rexxars/eventsource-client
        
         | nhumrich wrote:
         | ReadableStream appears to be SSE without any defined standards
         | for chunk separation. In practice, how is it any different from
         | using SSE? It appears to use the same concept.
        
           | tomsmeding wrote:
           | Presumably, ReadableStream does not auto-reconnect.
        
       | kdunglas wrote:
       | A while ago I created Mercure: an open pub-sub protocol built on
       | top of SSE that is a replacement for WebSockets-based solutions
       | such as Pusher. Mercure is now used by hundreds of apps in
       | production.
       | 
       | At the core of Mercure is the hub. It is a standalone component
       | that maintains persistent SSE (HTTP) connections to the clients,
       | and it exposes a very simple HTTP API that server apps and
        | clients can use to publish. POSTed updates are broadcast to all
       | connected clients using SSE. This makes SSE usable even with
       | technologies not able to maintain persistent connections such as
       | PHP and many serverless providers.
       | 
       | Mercure also adds nice features to SSE such as a JWT-based
       | authorization mechanism, the ability to subscribe to several
       | topics using a single connection, events history, automatic state
       | reconciliation in case of network issue...
       | 
       | I maintain an open-source hub written in Go (technically, a
       | module for the Caddy web server) and a SaaS version is also
       | available.
       | 
       | Docs and code are available on https://mercure.rocks
        
         | apitman wrote:
         | The site mentions battery-efficiency specifically. I'm curious
         | what features does Mercure provide in that direction?
        
            | kdunglas wrote:
            | SSE/Mercure (like WebSockets) is much more battery-efficient
            | than polling (push vs poll, less bandwidth used).
            | 
            | Additionally, in controlled environments, SSE can use a
            | "push proxy" to wake up the device only when necessary:
            | https://html.spec.whatwg.org/multipage/server-sent-
            | events.ht...
        
           | pests wrote:
           | It comes down to all the extra bytes sent and processed
           | (local and remote, and in flight) by long polling. SSE events
           | are small while other methods might require multiple packets
           | and all the needless headers throughout the stack, for
           | example.
        
         | Dren95 wrote:
         | Cool didn't know this. I used a similar solution called
         | Centrifugo for a while. It allows you to choose which transport
         | to use (ws, sse, others)
         | 
         | https://github.com/centrifugal/centrifugo
        
          | tonyhart7 wrote:
          | It's cool, but it's in Go. Do you know of other
          | implementations in Rust?
        
       | piccirello wrote:
       | I utilized SSE when building automatic restart functionality[0]
       | into Doppler's CLI. Our api server would send down an event
       | whenever an application's secrets changed. The CLI would then
       | fetch the latest secrets to inject into the application process.
       | (I opted not to directly send the changed secrets via SSE as that
       | would necessitate rechecking the access token that was used to
       | establish the connection, lest we send changed secrets to a
       | recently deauthorized client). I chose SSE over websockets
       | because the latter required pulling in additional dependencies
       | into our Golang application, and we truly only needed
       | server->client communication. One issue we ran into that hasn't
       | been discussed is HTTP timeouts. Some load balancers close an
       | HTTP connection after a certain timeout (e.g. 1 hour) to prevent
       | connection exhaustion. You can usually extend this timeout, but
       | it has to be explicitly configured. We also found that our server
       | had to send intermittent "ping" events to prevent either
       | Cloudflare or Google Cloud Load Balancing from closing the
       | connection, though I don't remember how frequently these were
       | sent. Otherwise, SSE worked great for our use case.
       | 
       | [0] https://docs.doppler.com/docs/automatic-restart
        
          | Xenoamorphous wrote:
          | I also used SSE 6 or so years ago, and had the same issue
          | with our load balancer; a bit hacky, but what I did was to
          | set a timer that would periodically send a single colon
          | character (which is the comment delimiter, IIRC) to the
          | client. Is that what you meant by "ping"?
        
         | apitman wrote:
         | Generally you're going to want to send ping events pretty
         | regularly (I'd default to every 15-30 seconds depending on
         | application) whether you're using SSE, WebSockets, or something
         | else. Otherwise if the server crashes the client might not know
         | the connection is no longer live.
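One way to implement such pings server-side is an SSE comment line (a line starting with `:` is ignored by EventSource clients), emitted whenever no real event arrives within the interval. A queue-based sketch in Python (the function name and the `max_frames` bound are ours, for demonstration only):

```python
import queue

def event_stream(q, ping_interval=30, max_frames=None):
    """Yield SSE frames from a queue, emitting a ': ping' comment
    whenever no real event arrives within ping_interval seconds.
    max_frames only bounds the loop for this demo; a real server
    would yield until the client disconnects."""
    sent = 0
    while max_frames is None or sent < max_frames:
        try:
            msg = q.get(timeout=ping_interval)
            yield f"data: {msg}\n\n"
        except queue.Empty:
            yield ": ping\n\n"  # comment line, ignored by EventSource
        sent += 1
```

Because the ping is a comment rather than an event, it keeps intermediaries like Cloudflare from timing out the idle connection without triggering any client-side message handlers.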
        
           | sabareesh wrote:
              | Yeah, with Cloudflare you need to do it every 30 seconds
              | as the timeout is 60 seconds.
        
             | loloquwowndueo wrote:
             | Then why not do it every 59 seconds :)
        
               | virtue3 wrote:
               | You'd probably want to do it every 29 seconds in case a
               | ping fails to send/deliver.
        
           | robocat wrote:
            | What do you do for mobile phones, where using data/radio
            | for pings would kill the battery?
            | 
            | After locking the phone, how is the ping restarted when the
            | phone is unlocked? Or when backgrounding the browser/app?
        
             | erinaceousjones wrote:
             | The way I've implemented SSE is to make use of the fact it
             | can also act like HTTP long-polling when the GET request is
             | initially opened. The SSE events can be given timestamps or
             | UUIDs and then subsequent requests can include the last
             | received ID or the time of the last received event, and
             | request the SSE endpoint replay events up until the current
             | time.
             | 
             | You could also add a ping with a client-requestable
             | interval, e.g. 30 seconds (for foreground app) and 5
             | minutes or never (for backgrounded app), so the TCP
             | connection is less frequently going to cause wake events
             | when the device is idle. As client, you can close and
             | reopen your connection when you choose, if you think the
             | TCP connection is dead on the other side or you want to
             | reopen it with a new ping interval.
             | 
             | Tradeoff of `?lastEventId=` - your SSE serving thing needs
             | to keep a bit of state, like having a circular buffer of up
             | to X hours worth of events. Depending on what you're doing,
             | that may scale badly - like if your SSE endpoint is
             | multiple processes behind a round-robin load balancer...
             | But that's a problem outside of whether you're choosing to
             | use SSE, websockets or something else.
             | 
             | To be honest, if you're worrying about mobile drain, the
             | most battery efficient thing I think anyone can do is admit
             | defeat and use one of the vendor locked-in things like
             | firebase (GCM?) or apple's equivalent notification things:
             | they are using protocols which are more lightweight than
             | HTTP (last I checked they use XMPP same as whatsapp?), can
             | punch through firewalls fairly reliably, batch
             | notifications from many apps together so as to not wake
             | devices too regularly, etc etc...
             | 
             | Having every app keep their own individual connections open
             | to receive live events from their own APIs sucks battery in
             | general, regardless of SSE or websockets being used.
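The `?lastEventId=` replay described above can be sketched with a bounded buffer. `ReplayBuffer` is a hypothetical name; note that the EventSource API's built-in reconnect already sends the `Last-Event-ID` header automatically, so the server only needs to honor it:

```python
from collections import deque

class ReplayBuffer:
    """Keep the last N events so a reconnecting client can catch up
    from the ID it last saw (events older than the window are lost)."""
    def __init__(self, maxlen=1000):
        self.events = deque(maxlen=maxlen)  # (id, data) pairs
        self.next_id = 1

    def publish(self, data):
        self.events.append((self.next_id, data))
        self.next_id += 1

    def since(self, last_id):
        """Events the client has not seen yet."""
        return [(i, d) for i, d in self.events if i > last_id]

buf = ReplayBuffer(maxlen=3)
for msg in ["a", "b", "c", "d"]:
    buf.publish(msg)
# Client reconnects with Last-Event-ID: 2 -> replay events 3 and 4.
missed = buf.since(2)
```

As the comment notes, behind a round-robin load balancer this state would need to live somewhere shared (or sticky sessions used), which is a problem common to any push transport.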
        
        | yu3zhou4 wrote:
        | I had no idea these existed until I began to use APIs serving
        | LLM outputs. They work pretty well for this purpose in my
        | experience. An alternative to SSE for this purpose is
        | websockets, I suppose.
        
       | sbergjohansen wrote:
       | Previous related discussion (2022):
       | 
       | https://news.ycombinator.com/item?id=30403438 (100 comments)
        
       | schmichael wrote:
       | I've never understood the use of SSE over ndjson. Builtin browser
       | support for SSE might be nice, but it seems fairly easy to handle
       | ndjson? For non-browser consumers ndjson is almost assuredly
       | easier to handle. ndjson works over any transport from HTTP/0.9
       | to HTTP/3 to raw TCP or unix sockets or any reliable transport
       | protocol.
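For comparison, consuming ndjson is indeed simple; here is a sketch of an incremental parser that survives records split across network chunks (the function name is ours, and a real consumer would feed it chunks from a socket or HTTP body):

```python
import json

def parse_ndjson(chunk_iter):
    """Incrementally parse newline-delimited JSON from byte chunks,
    tolerating records split across chunk boundaries."""
    buf = b""
    for chunk in chunk_iter:
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():
                yield json.loads(line)

# Second record is deliberately split across two chunks.
chunks = [b'{"n": 1}\n{"n"', b': 2}\n{"n": 3}\n']
records = list(parse_ndjson(chunks))
```

What ndjson does not give you, as the replies note, is the browser's built-in EventSource client with automatic reconnection and `Last-Event-ID` resumption.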
        
         | apitman wrote:
          | Manually streaming an XHR and parsing the messages is
         | significantly more work, and you lose the built-in browser API.
         | But if you use a fetch ReadableStream with TLV messages I'm
         | sold.
        
           | nchmy wrote:
           | Here's SSE with fetch and streams
           | https://github.com/rexxars/eventsource-client
        
        | Tiberium wrote:
        | One thing I dislike about SSE, which is not its fault but
        | probably a side effect of its perceived simplicity: lots of
        | developers do not actually use proper implementations and
        | instead just parse the data chunks with a regex, or something
        | of the sort! This is bad because SSE, for example, supports
        | comments (": text") in streams, which most of those
        | hand-rolled implementations don't support.
       | 
       | For example, my friend used an LLM proxy that sends
       | keepalive/queue data as SSE comments (just for debugging mainly),
       | but it didn't work for Gemini, because someone at Google decided
       | to parse SSE with a regex: https://github.com/google-
       | gemini/generative-ai-js/blob/main/... (and yes, if the regex
       | doesn't match the complete line, the library will just throw an
       | error)
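[Editor's note: a spec-shaped SSE parser is only slightly longer than a regex. A minimal Python sketch of the line-based rules — `data:`, `event:`, `id:` fields and `:` comment lines — not a full implementation of the EventSource processing model:]

```python
def parse_sse(text):
    """Parse a finished SSE stream into events. Lines starting with ':'
    are comments/keepalives; a blank line dispatches the pending event."""
    events, data, event_type, last_id = [], [], "message", None
    for line in text.split("\n"):
        if line == "":                       # blank line: dispatch event
            if data:
                events.append({"event": event_type, "id": last_id,
                               "data": "\n".join(data)})
            data, event_type = [], "message"
        elif line.startswith(":"):           # comment/keepalive, ignored
            continue
        else:
            field, _, value = line.partition(":")
            if value.startswith(" "):        # spec: strip one leading space
                value = value[1:]
            if field == "data":
                data.append(value)
            elif field == "event":
                event_type = value
            elif field == "id":
                last_id = value
    return events
```

A parser like this ignores comment lines instead of throwing, which is exactly the failure mode described above.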
        
       | Tiberium wrote:
       | Also, another day, another mostly AI-written article on HN's top
       | page :)
        
         | emmanueloga_ wrote:
         | It's funny how HN has a mix of people who think AGI is just
         | around the corner, people trying to build/sell stuff that uses
         | LLMs, and others who can't stand LLM-generated content. Makes
         | me wonder how much overlap there is between these groups.
        
           | Tiberium wrote:
           | I don't have anything against LLMs, I use them daily myself,
           | but publishing content that's largely AI-generated without a
           | disclaimer just feels dishonest to me. Oh, and also when
           | people don't spend at least some effort making the style
           | more natural, avoiding those bullet-point lists, like the
           | ones in the article, that e.g. Claude loves so much.
        
           | remram wrote:
           | Those are not incompatible positions at all. You can think
           | great AI is around the corner and still dislike today's not-
           | great AI writing.
        
         | slow_typist wrote:
         | What makes you think the article is AI-written?
        
           | Tiberium wrote:
           | I've just spent too much time with different LLMs, and for
           | example Claude really loves such bullet point lists. The
           | article is full of them.
           | 
           | The whole structure with numbered sections really gives it
           | away, humans don't write blog posts like that.
        
       | whatever1 wrote:
       | Can Django with vanilla gunicorn do this?
        
         | mfalcao wrote:
         | Yes, I've done it using StreamingHttpResponse. You'll want to
         | use an asynchronous worker type though.
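[Editor's note: a hedged sketch of what that looks like. The generator below produces SSE frames; in Django you would hand it to StreamingHttpResponse with the text/event-stream content type. The messages are made up for illustration:]

```python
def sse_frames(messages):
    """Yield SSE-formatted frames, one per message. In Django this
    generator would be passed to
    StreamingHttpResponse(sse_frames(...),
                          content_type="text/event-stream")."""
    for i, msg in enumerate(messages):
        yield f"id: {i}\ndata: {msg}\n\n"

frames = list(sse_frames(["tick", "tock"]))
```

As the comment notes, a synchronous worker will tie up one worker process per open stream, which is why an asynchronous worker type matters here.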
        
       | upghost wrote:
       | Does anyone have a good trick for figuring out when the client
       | side connection is closed? I just kill the connection on the
       | server every N minutes and force the client to reconnect, but
       | it's not exactly graceful.
       | 
       | Secondly, on iOS mobile, I've noticed that the EventSource seems
       | to fall asleep at some point and not wake up when you switch back
       | to the PWA. Does anyone know what's up with that?
        
         | jesprenj wrote:
         | Send a dummy event and see if you get an ACK in response.
         | Depends on the library you're using.
        
           | upghost wrote:
           | There's no ack on a raw SSE stream, unfortunately -- unless
           | you mean send an event and expect the client to issue an HTTP
           | request to the server like a keepalive?
        
             | jauco wrote:
             | There should be an ACK on the tcp packet (IIRC it's not a
             | lateral ACK but something like it) and the server should
             | handle a timeout on that as the connection being "closed"
             | which can be returned to the connection opener.
             | 
             | You might want to look into timeouts or error callbacks on
             | your connection library/framework.
        
               | upghost wrote:
               | Interesting, hadn't checked at the TCP level. Will need
               | to look into that.
        
               | jauco wrote:
               | I remembered wrong. In most circumstances a tcp
               | connection will be gracefully terminated by sending a FIN
               | message. The timeout I talked about is on an ACK for a
               | keepalive message. So after x time of not receiving a
               | keepalive message the connection is closed. This handles
               | cases where a connection is ungracefully dropped.
               | 
               | All this is done at the kernel level, so at the
               | application level you should be able to just verify if
               | the connection is open by trying a read from the socket.
        
               | upghost wrote:
               | Thanks for clarifying, that would've sent me on a long
               | wild goose chase. Most libraries only provide some sort
               | of channel to send messages to. They generally do not
               | indicate any FIN or ACK received at the TCP level.
               | 
               | If anyone knows any library or framework in any language
               | that solves this problem, I'd love to hear about it.
        
         | nhumrich wrote:
         | The socket closes. Most languages bubble this back up to you
         | with a connection closed exception. In python async world, it
         | would be a cancelled error.
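[Editor's note: in Python's asyncio that looks roughly like the following. This is a simulated sketch — `send` and `cleanup` stand in for whatever your framework provides — showing the CancelledError surfacing at the await point when the task is cancelled on disconnect:]

```python
import asyncio

async def event_stream(send, cleanup):
    """Push events until the connection drops; frameworks typically
    cancel this task, surfacing asyncio.CancelledError here."""
    try:
        while True:
            await send("data: ping\n\n")
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        cleanup()   # deregister the subscriber, release resources
        raise       # re-raise so the cancellation completes

async def main():
    log = []
    async def send(_): pass
    task = asyncio.create_task(
        event_stream(send, lambda: log.append("closed")))
    await asyncio.sleep(0.05)
    task.cancel()   # simulate the client going away
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log

result = asyncio.run(main())
```

Re-raising after cleanup is the important detail: swallowing the CancelledError would leave the task looking like it completed normally.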
        
       | henning wrote:
       | They are handy for implementing simple ad-hoc hot reloading
       | systems as well. E.g. you can have whatever file watcher you are
       | using call an API when a file of interest changes that sends an
       | event to listening clients on the frontend. You can also trigger
       | an event after restarting the backend if you make an API change
       | by triggering the event at boot time. Then you can just add a
       | dev-only snippet to your base template that reloads the page or
       | whatever. Better than nothing if your stack doesn't support it
       | out of the box and doesn't take very much code or require adding
       | any additional project dependencies. Not as sophisticated as
       | React environments that will only reload a component that changed
       | and only do a full page refresh if needed, but it still gives a
       | nice, responsive feeling when paired with tools that recompile
       | your backend when it changes.
        
       | hamandcheese wrote:
       | I tried implementing SSE in a web project of mine recently, and
       | was very surprised when my website totally stopped working when I
       | had more than 6 tabs open.
       | 
       | It turns out, Firefox counts SSE connections against the 6 host
       | max connections limit, and gives absolutely no useful feedback
       | that it's blocking the subsequent requests due to this limit (I
       | don't remember the precise error code and message anymore, but it
       | left me very clueless for a while). It was only when I stared at
       | the lack of corresponding server side logs that it clicked.
       | 
       | I don't know if this same problem happens with websockets or not.
        
         | uncomplexity_ wrote:
         | wait let's check this
         | 
         | https://news.ycombinator.com/item?id=42511562
         | 
         | at https://developer.mozilla.org/en-US/docs/Web/API/Server-
         | sent... it says
         | 
         | "Warning: When not used over HTTP/2, SSE suffers from a
         | limitation to the maximum number of open connections, which can
         | be especially painful when opening multiple tabs, as the limit
         | is per browser and is set to a very low number (6). The issue
         | has been marked as "Won't fix" in Chrome and Firefox. This
         | limit is per browser + domain, which means that you can open 6
         | SSE connections across all of the tabs to www.example1.com and
         | another 6 SSE connections to www.example2.com (per Stack
         | Overflow). When using HTTP/2, the maximum number of
         | simultaneous HTTP streams is negotiated between the server and
         | the client (defaults to 100)."
         | 
         | so the fix is just use http/2 on server-side?
        
           | remram wrote:
           | Or a SharedWorker that creates a single SSE connection for
           | all your tabs.
           | 
           | SharedWorker is not very complicated but it's another
           | component to add. It would be cool if this was built into SSE
           | instead.
        
             | uncomplexity_ wrote:
             | okay wtf this is amazing, seems usable with websockets too.
             | 
             | usage of Shared Web Workers https://dev.to/ayushgp/scaling-
             | websocket-connections-using-s...
             | 
             | caniuse Shared Web Workers 45%
             | https://caniuse.com/sharedworkers
             | 
             | caniuse BroadcastChannel 96%
             | https://caniuse.com/broadcastchannel
        
               | nchmy wrote:
               | Yeah, the issue with SharedWorkers is that Android
               | Chromium doesn't support it yet.
               | https://issues.chromium.org/issues/40290702
               | 
                | But rather than Broadcast Channel, you can use the
                | Web Locks API (https://developer.mozilla.org/en-
                | US/docs/Web/API/Web_Locks_A...)
               | 
               | This library (https://github.com/pubkey/broadcast-
               | channel/blob/master/src/...) from the fantastic RxDB
               | javascript DB library uses WebLocks with a fallback to
               | Broadcast Channel. But, WebLocks are supported on 96% of
               | browsers, so probably safe to just use it exclusively
               | now.
        
             | hamandcheese wrote:
             | Ultimately this is what I did. But if you need or want per-
             | tab connection state it will get complicated in a hurry.
        
           | ksec wrote:
           | Even if they don't change the default of 6 open
           | connections, they could at least have made it per tab
           | rather than per site. [1] [2] And I don't understand why
           | this hasn't been done in the past 10 years.
           | 
           | What am I missing?
           | 
           | [1] https://bugzilla.mozilla.org/show_bug.cgi?id=906896
           | 
           | [2] https://issues.chromium.org/issues/40329530
        
         | mikojan wrote:
         | That's only if not used over HTTP/2 and it says so in the docs
         | too[0]
         | 
         | [0]: https://developer.mozilla.org/en-
         | US/docs/Web/API/EventSource
        
           | hamandcheese wrote:
           | AFAIK browsers require https with http2. This is a locally
           | running server/app which will probably never have https.
           | Maybe there is an exception for localhost, I'm not sure.
        
       | _caw wrote:
       | > SSE works seamlessly with existing HTTP infrastructure
       | 
       | This is false. SSE is not supported on many proxies, and isn't
       | even supported on some common local proxy tooling.
        
       | ksajadi wrote:
       | I'm curious as to how everyone deals with HTTP/2 requirements
       | between the backend servers and the load balancer? By default,
       | HTTP/2 requires TLS, which means either no SSL termination at
       | the load balancer or an SSL cert generated per server, with a
       | different one for the front-end load balancer. This all seems
       | very inefficient.
        
         | kcb wrote:
         | Not sure how widespread this is but AWS load balancers don't
         | validate the backend cert in any way. So I just generate some
         | random self signed cert and use it everywhere.
        
         | nhumrich wrote:
         | You don't need http2 on the actual backend. All limitations for
         | SSE/http1 are browser level. Just downgrade to http1 from the
         | LB to backend, even without SSL. As long as LB to browser is
         | http2 you should be fine.
        
           | ksajadi wrote:
           | Isn't that going to affect the whole multiplexing / multiple
           | connection of SSEs?
        
             | seabrookmx wrote:
             | I don't think so?
             | 
             | HTTP 1.1 still supports Keep-Alive.
        
             | nhumrich wrote:
             | No. That's all handled in the browser and load balancer.
        
       | est wrote:
       | I built several internal tools to tail logs using SSE with
       | Flask/FastAPI. Easy to implement and maintain.
       | 
       | For FastAPI if you want some hooks when client disconnects aka
       | nginx 499 errors, follow this simple tip
       | 
       | https://github.com/encode/starlette/discussions/1776#discuss...
        
       | fitsumbelay wrote:
       | Finding use cases for SSE and reading about others doing the same
       | brings me great joy. Very easy to set up -- you just set 2 or 3
       | response headers and off you go.
       | 
       | I have a hard time imagining the tech's limits outside of testing
       | scenarios so some of the examples brought up here are interesting
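[Editor's note: the "2 or 3 response headers" mentioned above are, concretely, the following. The first two are the essential ones; X-Accel-Buffering is an nginx-specific extra that disables proxy buffering for the stream:]

```python
# Response headers that mark a stream as Server-Sent Events.
SSE_HEADERS = {
    "Content-Type": "text/event-stream",  # required: tells the browser it's SSE
    "Cache-Control": "no-cache",          # keep intermediaries from caching
    "X-Accel-Buffering": "no",            # nginx-specific: don't buffer
}
```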
        
       | crowdyriver wrote:
       | http streaming is even more underrated.
        
       | anshumankmr wrote:
       | SSE is not underrated. In fact it's being used by Open AI for
       | streaming completions. It's just not always needed unlike the
       | very obvious use cases for normal REST APIs and Websockets.
       | 
       | It was a pain to figure out how to get it working in a ReactJS
       | codebase I was working on at the time; from what I remember,
       | Axios didn't support it, so I had to use native fetch to get it
       | to work.
        
         | jpc0 wrote:
         | How long ago was this?
         | 
         | I seem to remember not having too many issues with useEffect
         | and context for this.
         | 
         | Maybe the issue is you wanted to implement it in a single
         | React component, when in reality you should treat it like any
         | other state library: it's something long-lived that should
         | live outside of React and pass data into React.
        
           | anshumankmr wrote:
           | Pretty much a year and a half back (I think it was March
           | 2023). We had a real complicated set up back then, since I
           | couldn't put our OpenAI client key on the client side, so
           | I wrote an endpoint to call OpenAI's GPT-3.5 API and then
           | send that back to the front end to get the "typewriter"
           | effect they had on the frontend. It was quite broken back
           | then because sometimes random artifacts would pop up in the
           | response, and some chunks came glued together, requiring me
           | to write some really convoluted deserializing logic.
        
       | lakomen wrote:
       | No they're not. They're limited to 6 connections per browser
       | per domain on HTTP/1.1, which matters because nginx can't
       | reverse-proxy HTTP/2 or higher, so you end up with very weird
       | behavior; essentially you can't use nginx with SSE.
       | 
       | https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...
       | 
       | Edit: I see someone already posted about that
        
       | benterix wrote:
       | The topic is interesting but the ChatGPT style of presenting
       | information as bullet points is tiring.
        
       | aniketchopade wrote:
       | I had some trouble implementing this when the server has to
       | wait for another endpoint (a webhook) to feed output to the
       | browser. During request processing (thread 1) I had to store
       | the SSE context in a native Java object, to be retrieved later
       | when the webhook (thread 2) is called. But then with multiple
       | service instances you wouldn't know which instance had stored
       | it, so the webhook had to publish something the others
       | subscribe to.
        
       | RevEng wrote:
       | OpenAI's own documentation makes note of how difficult it is to
       | work with SSE and to just use their library instead. My team
       | wrote our own parser for these streaming events from an OpenAI
       | compatible LLM server. The streaming format is awful. The double
       | newline block separator also shows up in a bunch of our text,
       | making parsing a nightmare. The "data:" signifier is slightly
       | better, but when working with scientific software, it still
       | occurs too often. Instead we've had to rely on the totally-not-
       | reliable fact that the server returns each as a separate packet
       | and the receiving end can be set up to return each packet in the
       | stream.
       | 
       | The suggestions I've found online for how to deal with the
       | newline issue are to fold together consecutive newlines, but this
       | loses formatting of some documents and otherwise means there is
       | no way to transmit data verbatim. That might be fine for HTML or
       | other text formats where newlines are pretty much optional, but
       | it sucks for other data types.
       | 
       | I'm happy to have something like SSE but the protocol needs more
       | time to cook.
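[Editor's note: the SSE format's own answer to embedded newlines is to split the payload across multiple `data:` lines, which the receiver rejoins with `\n` — that round-trips text verbatim, blank lines included. A minimal symmetric sketch (a real client follows the fuller field-parsing rules):]

```python
def encode_event(payload):
    """Encode arbitrary text as one SSE event: each payload line becomes
    its own 'data:' line, so embedded newlines never collide with the
    blank-line event delimiter."""
    lines = payload.split("\n")
    return "".join(f"data: {line}\n" for line in lines) + "\n"

def decode_event(frame):
    """Rejoin the data: lines with '\\n', recovering the payload verbatim."""
    return "\n".join(l[len("data: "):] for l in frame.split("\n")
                     if l.startswith("data: "))

doc = "line one\n\nline three"   # contains a blank line
```

With this framing, consecutive newlines in the source text survive as empty `data:` lines rather than being folded away.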
        
       | deaf_coder wrote:
       | The part where it says:
       | 
       | > SSE works seamlessly with existing HTTP infrastructure:
       | 
       | I'd be careful with that assumption. I have tried using SSE
       | through some 3rd party load balancer at my work and it doesn't
       | work that well. Because SSE is long-lived and doesn't normally
       | close immediately, this load balancer will keep collecting and
       | collecting bytes from the server and not forward it until server
       | closes the connection, effectively making SSEs useless. I had to
       | use WebSockets instead to get around this limitation with the
       | load balancer.
        
         | jpc0 wrote:
         | I had a similar issue at one point but if I remember correctly
         | I just had to have my webserver send the header section without
         | closing the connection.
         | 
         | Usually things would just get streamed through but for some
         | reason until the full header was sent the proxy didn't forward
         | and didn't acknowledge the connection.
         | 
         | Not saying that is your issue but definitely was mine.
        
           | sudhirj wrote:
           | Not entirely. If a load balancer is set to buffer say 4kb of
           | data all the time, your SSE is stuck until you close the
           | connection.
           | 
           | I think there is an HTTP/2 flush instruction, but no load
           | balancer is obligated to handle it and your SSE library might
           | not be flushing anyway.
        
             | deaf_coder wrote:
             | In my case with this load balancer, I think it's just badly
             | written. I think it is set to hold ALL data until the
             | server ends the connection. I have tried leaving my SSE
             | open to send over a few megabytes worth of data and the
             | load balancer never forwarded it at all until I commanded
             | the server to close the connection.
             | 
             | The dev who wrote that code probably didn't think too much
             | about memory efficiency of proxying HTTP connections or
             | case of streaming HTTP connections like SSE.
        
         | Igor_Wiwi wrote:
         | thanks, updated the article with your comment
        
         | motorest wrote:
         | > SSE works seamlessly with existing HTTP infrastructure:
         | 
         | To stress how important it is to correct this error, even
         | Mozilla's introductory page on server-sent events displays a
         | prominent red warning box stating that server-sent events
         | are broken when not used over HTTP/2, due to browsers' hard
         | limit on open connections.
         | 
         | https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...
         | 
         | Edit: I just saw the issue pointed out further down in the
         | discussion
         | 
         | https://news.ycombinator.com/item?id=42511562
        
       | cle wrote:
       | > Automatic Reconnection
       | 
       | I wouldn't characterize this as "automatic", you have to do a lot
       | of manual work to support reconnection in most cases. You need to
       | have a meaningful "event id" that can be resumed on a new
       | connection/host somewhere else with the Last-Event-Id header. The
       | plumbing for this event id is the trivial part IMO. The hard part
       | is the server-side data synchronization, which is left as an
       | exercise for the reader.
       | 
       | Also, God help you if your SSE APIs have side effects. If the API
       | call is involved in a sequence of side-effecting steps then
       | you'll enter a world of pain by using SSE. Use regular HTTP calls
       | or WebSockets. (Mostly b/c there's no cancellation ack, so
       | retries are often racy.)
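[Editor's note: the server-side bookkeeping the comment calls the hard part can at least be illustrated. A toy in-memory sketch of resuming from the Last-Event-Id header — real deployments need durable, shared storage so any host can resume any client:]

```python
class EventLog:
    """In-memory event log a server could replay from on reconnection.
    Ids are list indexes for simplicity; the header arrives as a string."""
    def __init__(self):
        self.events = []          # list of (id, data)

    def append(self, data):
        self.events.append((len(self.events), data))

    def replay_after(self, last_event_id):
        """SSE frames for events newer than the client's Last-Event-Id
        (None on a fresh connection, which replays everything)."""
        start = 0 if last_event_id is None else int(last_event_id) + 1
        return [f"id: {i}\ndata: {d}\n\n" for i, d in self.events[start:]]

log = EventLog()
for msg in ["a", "b", "c"]:
    log.append(msg)
# Client reconnects sending "Last-Event-Id: 0" -> it missed events 1 and 2.
missed = log.replay_after("0")
```

The browser only handles resending the id; deciding how long to retain events, and sharing that log across hosts, is the part left to the server.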
        
       ___________________________________________________________________
       (page generated 2024-12-26 23:02 UTC)