https://germano.dev/sse-websockets/
Server-Sent Events: the alternative to WebSockets you should be using
February 12, 2022 * 15 min read
This work is licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International License.
When developing real-time web applications, WebSockets might be the
first thing that comes to mind. However, Server-Sent Events (SSE)
are a simpler alternative that is often superior.
Contents
1. Prologue
2. WebSockets?
3. What is wrong with WebSockets
1. Compression
2. Multiplexing
3. Issues with proxies
4. Cross-Site WebSocket Hijacking
4. Server-Sent Events
5. Let's write some code
1. The Reverse-Proxy
2. The Frontend
3. The Backend
6. Bonus: Cool SSE features
7. Conclusion
Prologue
Recently I have been curious about the best way to implement a
real-time web application: that is, an application containing one or
more components which automatically update, in real time, in reaction
to some external event. The most common example of such an application
would be a messaging service, where we want every message to be
immediately broadcast to everyone that is connected, without
requiring any user interaction.
After some research I stumbled upon an amazing talk by Martin Chaov,
which compares Server-Sent Events, WebSockets and Long Polling. The
talk, which is also available as a blog post, is entertaining and
very informative. I really recommend it. However, it is from 2018 and
some small things have changed since, so I decided to write this
article.
WebSockets?
WebSockets enable the creation of two-way low-latency communication
channels between the browser and a server.
This makes them ideal in certain scenarios, like multiplayer games,
where the communication is two-way, in the sense that both the
browser and server send messages on the channel all the time, and it
is required that these messages be delivered with low latency.
In a First-Person Shooter, the browser could be continuously
streaming the player's position, while simultaneously receiving
updates on the location of all the other players from the server.
Moreover, we definitely want these messages to be delivered with as
little overhead as possible, to avoid the game feeling sluggish.
This is the opposite of the traditional request-response model of
HTTP, where the browser is always the one initiating the
communication, and each message has a significant overhead, due to
establishing TCP connections and HTTP headers.
However, many applications do not have requirements this strict. Even
among real-time applications, the data flow is usually asymmetric:
the server sends the majority of the messages, while the client mostly
just listens and only once in a while sends some updates. For
example, in a chat application a user may be connected to many rooms,
each with tens or hundreds of participants. Thus, the volume of
messages received far exceeds that of messages sent.
What is wrong with WebSockets
Two-way channels and low latency are extremely good features. Why
bother looking further?
WebSockets have one major drawback: they do not work on top of HTTP,
at least not fully. They require their own TCP connection. They use
HTTP only to establish the connection, but then upgrade it to a
standalone TCP connection on top of which the WebSocket protocol can
be used.
This may not seem like a big deal, but it means that WebSockets cannot
benefit from any HTTP feature. That is:
* No support for compression
* No support for HTTP/2 multiplexing
* Potential issues with proxies
* No protection from Cross-Site Hijacking
At least, this was the situation when the WebSocket protocol was
first released. Nowadays, there are some complementary standards that
try to improve upon it. Let's take a closer look at the current
situation.
Note: If you do not care about the details, feel free to skip the
rest of this section and jump directly to Server-Sent Events or the
demo.
Compression
On standard connections, HTTP compression is supported by every
browser, and is super easy to enable server-side. Just flip a switch
in your reverse-proxy of choice. With WebSockets the question is more
complex, because there are no requests and responses, but one needs
to compress the individual WebSocket frames.
RFC 7692, released in December 2015, tries to improve the situation
by defining "Compression Extensions for WebSocket". However, to the
best of my knowledge, no popular reverse-proxy (e.g. nginx, Caddy)
implements this, making it impossible to enable compression
transparently.
This means that if you want compression, it has to be implemented
directly in your backend. Luckily, I was able to find some libraries
supporting RFC 7692. For example, the websockets and wsproto Python
libraries, and the ws library for nodejs.
However, the latter suggests not to use the feature:
The extension is disabled by default on the server and enabled by
default on the client. It adds a significant overhead in terms of
performance and memory consumption so we suggest to enable it
only if it is really needed.
Note that Node.js has a variety of issues with high-performance
compression, where increased concurrency, especially on Linux,
can lead to catastrophic memory fragmentation and slow
performance.
On the browser side, Firefox supports WebSocket compression since
version 37. Chrome supports it as well. However, apparently Safari
and Edge do not.
I did not take the time to verify the situation on mobile.
Multiplexing
HTTP/2 introduced support for multiplexing, meaning that multiple
request/response pairs to the same host no longer require separate
TCP connections. Instead, they all share the same TCP connection,
each operating on its own independent HTTP/2 stream.
This is, again, supported by every browser and is very easy to
transparently enable on most reverse-proxies.
On the contrary, the WebSocket protocol has no support, by default,
for multiplexing. Multiple WebSockets to the same host will each open
their own separate TCP connection. If you want to have two separate
WebSocket endpoints share their underlying connection you must add
multiplexing in your application's code.
RFC 8441, released in September 2018, tries to fix this limitation by
adding support for "Bootstrapping WebSockets with HTTP/2". It has
been implemented in Firefox and Chrome. However, as far as I know, no
major reverse-proxy implements it. Unfortunately, I could not find
any implementation in Python or JavaScript either.
Issues with proxies
HTTP proxies without explicit support for WebSockets can prevent
unencrypted WebSocket connections to work. This is because the proxy
will not be able to parse the WebSocket frames and close the
connection.
However, WebSocket connections happening over HTTPS should be
unaffected by this problem, since the frames will be encrypted and
the proxy should just forward everything without closing the
connection.
To learn more, see "How HTML5 Web Sockets Interact With Proxy
Servers" by Peter Lubbers.
Cross-Site WebSocket Hijacking
WebSocket connections are not protected by the same-origin policy.
This makes them vulnerable to Cross-Site WebSocket Hijacking.
Therefore, WebSocket backends must check the correctness of the
Origin header, if they use any kind of client-cached authentication,
such as cookies or HTTP authentication.
I will not go into the details here, but consider this short example.
Assume a Bitcoin Exchange uses WebSockets to provide its trading
service. When you log in, the Exchange might set a cookie to keep
your session active for a given period of time. Now, all an attacker
has to do to steal your precious Bitcoins is make you visit a site
under her control, and simply open a WebSocket connection to the
Exchange. The malicious connection is going to be automatically
authenticated. That is, unless the Exchange checks the Origin header
and blocks the connections coming from unauthorized domains.
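A minimal sketch of such an Origin check might look like the following. The allow-list and helper name here are hypothetical, not taken from any particular framework:

```python
# Hypothetical allow-list: only pages served from our own site should
# be able to open an authenticated WebSocket connection.
ALLOWED_ORIGINS = {"https://exchange.example"}

def origin_allowed(headers: dict) -> bool:
    # Browsers always include the Origin header in WebSocket
    # handshakes, so a missing or unknown value means the request is
    # cross-site and must not be allowed to reuse cookie-based auth.
    return headers.get("origin") in ALLOWED_ORIGINS
```

A backend would run this check on the handshake request and reject the connection (e.g. with a 403 response) before accepting the upgrade.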
I encourage you to check out the great article about Cross-Site
WebSocket Hijacking by Christian Schneider, to learn more.
Server-Sent Events
Now that we know a bit more about WebSockets, including their
advantages and shortcomings, let us learn about Server-Sent Events
and find out if they are a valid alternative.
Server-Sent Events enable the server to send low-latency push events
to the client, at any time. They use a very simple protocol that is
part of the HTML Standard and supported by every browser.
Unlike WebSockets, Server-Sent Events flow only one way: from the
server to the client. This makes them unsuitable for a very specific
set of applications, namely those that require a communication
channel that is both two-way and low-latency, like real-time games.
However, this trade-off is also their major advantage over
WebSockets, because being one-way, Server-Sent Events work seamlessly
on top of HTTP, without requiring a custom protocol. This gives them
automatic access to all of HTTP's features, such as compression or
HTTP/2 multiplexing, making them a very convenient choice for the
majority of real-time applications, where the bulk of the data is
sent from the server, and where a little overhead in requests, due to
HTTP headers, is acceptable.
The protocol is very simple. It uses the text/event-stream
Content-Type and messages of the form:
data: First message
event: join
data: Second message. It has two
data: lines, a custom event type and an id.
id: 5
: comment. Can be used as keep-alive
data: Third message. I do not have more data.
data: Please retry later.
retry: 10
Events are separated by a blank line (i.e. two consecutive \n
characters) and consist of various optional fields.
The data field, which can be repeated to denote multiple lines in the
message, is unsurprisingly used for the content of the event.
The event field allows specifying custom event types which, as we
will show in the next section, can be used to fire different event
handlers on the client.
The other two fields, id and retry, are used to configure the
behaviour of the automatic reconnection mechanism. This is one of the
most interesting features of Server-Sent Events. It ensures that when
the connection is dropped or closed by the server, the client will
automatically try to reconnect, without any user intervention.
The retry field is used to specify the minimum amount of time, in
milliseconds, to wait before trying to reconnect. It can also be sent
by a server, immediately before closing the client's connection, to
reduce its load when too many clients are connected.
The id field associates an identifier with the current event. When
reconnecting the client will transmit to the server the last seen id,
using the Last-Event-ID HTTP header. This allows the stream to be
resumed from the correct point.
Finally, the server can stop the automatic reconnection mechanism
altogether by returning an HTTP 204 No Content response.
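To make the wire format above concrete, here is a small helper that serializes one event. The sse_event name and its signature are my own invention for illustration, not part of any standard API:

```python
from typing import Optional

def sse_event(data: str, event: Optional[str] = None,
              id: Optional[int] = None,
              retry: Optional[int] = None) -> str:
    """Serialize a single Server-Sent Event; a blank line ends it."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    # Multi-line payloads become repeated data: fields.
    for part in data.split("\n"):
        lines.append(f"data: {part}")
    if id is not None:
        lines.append(f"id: {id}")
    if retry is not None:
        lines.append(f"retry: {retry}")
    return "\n".join(lines) + "\n\n"
```

For example, sse_event("First message") produces exactly the first event shown above, blank-line terminator included.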
Let's write some code!
Let us now put into practice what we learned. In this section we will
implement a simple service both with Server-Sent Events and
WebSockets. This should enable us to compare the two technologies. We
will find out how easy it is to get started with each one, and verify
by hand the features discussed in the previous sections.
We are going to use Python for the backend, Caddy as a reverse-proxy
and of course a couple of lines of JavaScript for the frontend.
To make our example as simple as possible, our backend is just going
to consist of two endpoints, each streaming a unique sequence of
random numbers. They are going to be reachable at /sse1 and /sse2
for Server-Sent Events, and at /ws1 and /ws2 for WebSockets. Our
frontend, in turn, is going to consist of a single index.html file,
with some JavaScript which will let us start and stop WebSocket and
Server-Sent Events connections.
The code of this example is available on GitHub.
The Reverse-Proxy
Using a reverse-proxy, such as Caddy or nginx, is very useful, even
in a small example such as this one. It gives us very easy access to
many features that our backend of choice may lack.
More specifically, it allows us to easily serve static files and
automatically compress HTTP responses; to provide support for HTTP/2,
letting us benefit from multiplexing, even if our backend only
supports HTTP/1; and finally to do load balancing.
I chose Caddy because it automatically manages HTTPS certificates
for us, letting us skip a very boring task, especially for a quick
experiment.
The basic configuration, which resides in a Caddyfile at the root of
our project, looks something like this:
localhost
bind 127.0.0.1 ::1
root ./static
file_server browse
encode zstd gzip
This instructs Caddy to listen on the local interface on ports 80 and
443, enabling support for HTTPS and generating a self-signed
certificate. It also enables compression and serving static files
from the static directory.
As the last step we need to ask Caddy to proxy our backend services.
Server-Sent Events is just regular HTTP, so nothing special here:
reverse_proxy /sse1 127.0.1.1:6001
reverse_proxy /sse2 127.0.1.1:6002
To proxy WebSockets our reverse-proxy needs explicit support for
them. Luckily, Caddy can handle this without problems, even though
the configuration is slightly more verbose:
@websockets {
    header Connection *Upgrade*
    header Upgrade websocket
}

handle /ws1 {
    reverse_proxy @websockets 127.0.1.1:6001
}

handle /ws2 {
    reverse_proxy @websockets 127.0.1.1:6002
}
Finally, start Caddy with:
$ sudo caddy start
The Frontend
Let us start with the frontend, by comparing the JavaScript APIs of
WebSockets and Server-Sent Events.
The WebSocket JavaScript API is very simple to use. First, we need to
create a new WebSocket object, passing the URL of the server. Here wss
indicates that the connection happens over HTTPS. As mentioned above,
using HTTPS is strongly recommended to avoid issues with proxies.
Then, we can listen for some of the possible events (i.e. open,
message, close, error), by either setting the corresponding onevent
property or by using addEventListener().
const ws = new WebSocket("wss://localhost/ws");
ws.onopen = e => console.log("WebSocket open");
ws.addEventListener("message", e => console.log(e.data));
The JavaScript API for Server-Sent Events is very similar. It
requires us to create a new EventSource object passing the URL of the
server, and then allows us to subscribe to the events in the same way
as before.
The main difference is that we can also subscribe to custom events.
const es = new EventSource("https://localhost/sse");
es.onopen = e => console.log("EventSource open");
es.addEventListener("message", e => console.log(e.data));

// Event listener for a custom event
es.addEventListener("join", e => console.log(`${e.data} joined`));
We can now use all this freshly acquired knowledge about the JS APIs
to build our actual frontend.
To keep things as simple as possible, it is going to consist of only
one index.html file, with a bunch of buttons that let us start and
stop our WebSockets and EventSources.
We want more than one WebSocket/EventSource so we can test whether
HTTP/2 multiplexing works and how many connections are opened.
Now let us implement the two functions needed by those buttons to
work:
const wss = [];

function startWS(i) {
    if (wss[i] !== undefined) return;
    const ws = wss[i] = new WebSocket("wss://localhost/ws" + i);
    ws.onopen = e => console.log("WS open");
    ws.onmessage = e => console.log(e.data);
    ws.onclose = e => closeWS(i);
}

function closeWS(i) {
    if (wss[i] !== undefined) {
        console.log("Closing websocket");
        wss[i].close();
        delete wss[i];
    }
}
The frontend code for Server-Sent Events is almost identical. The
only difference is the extra onerror event handler: in case of error
a message is logged, while the browser automatically attempts to
reconnect.
const ess = [];

function startES(i) {
    if (ess[i] !== undefined) return;
    const es = ess[i] = new EventSource("https://localhost/sse" + i);
    es.onopen = e => console.log("ES open");
    es.onerror = e => console.log("ES error", e);
    es.onmessage = e => console.log(e.data);
}

function closeES(i) {
    if (ess[i] !== undefined) {
        console.log("Closing EventSource");
        ess[i].close();
        delete ess[i];
    }
}
The Backend
To write our backend, we are going to use Starlette, a simple async
web framework for Python, and Uvicorn as the server. Moreover, to
make things modular, we are going to separate the data-generating
process, from the implementation of the endpoints.
We want each of the two endpoints to generate a unique random
sequence of numbers. To accomplish this we will use the stream id
(i.e. 1 or 2) as part of the random seed.
Ideally, we would also like our streams to be resumable. That is, a
client should be able to resume the stream from the last message it
received, in case the connection is dropped, instead of re-reading
the whole sequence. To make this possible we will assign an ID to
each message/event, and use it to initialize the random seed,
together with the stream id, before each message is generated. In our
case, the ID is just going to be a counter starting from 0.
With all that said, we are ready to write the get_data function,
which is responsible for generating our random numbers:
import random

def get_data(stream_id: int, event_id: int) -> int:
    rnd = random.Random()
    rnd.seed(stream_id * event_id)
    return rnd.randrange(1000)
Let's now write the actual endpoints.
Getting started with Starlette is very simple. We just need to
initialize an app and then register some routes:
from starlette.applications import Starlette
app = Starlette()
To write a WebSocket service both our web server and framework of
choice must have explicit support. Luckily Uvicorn and Starlette are
up to the task, and writing a WebSocket endpoint is as convenient as
writing a normal route.
This is all the code we need:
import asyncio
import itertools

from websockets.exceptions import WebSocketException

@app.websocket_route("/ws{id:int}")
async def websocket_endpoint(ws):
    id = ws.path_params["id"]
    try:
        await ws.accept()
        for i in itertools.count():
            data = {"id": i, "msg": get_data(id, i)}
            await ws.send_json(data)
            await asyncio.sleep(1)
    except WebSocketException:
        print("client disconnected")
The code above will make sure our websocket_endpoint function is
called every time a browser requests a path starting with /ws and
followed by a number (e.g. /ws1, /ws2).
Then, for every matching request, it will wait for a WebSocket
connection to be established and subsequently start an infinite loop
sending random numbers, encoded as a JSON payload, every second.
For Server-Sent Events the code is very similar, except that no
special framework support is needed. In this case, we register a
route matching URLs starting with /sse and ending with a number (e.g.
/sse1, /sse2). However, this time our endpoint just sets the
appropriate headers and returns a StreamingResponse:
from starlette.responses import StreamingResponse

@app.route("/sse{id:int}")
async def sse_endpoint(req):
    return StreamingResponse(
        sse_generator(req),
        headers={
            "Content-type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        },
    )
StreamingResponse is a utility class, provided by Starlette, which
takes a generator and streams its output to the client, keeping the
connection open.
The code of sse_generator is shown below, and is almost identical to
the WebSocket endpoint, except that messages are encoded according to
the Server-Sent Events protocol:
async def sse_generator(req):
    id = req.path_params["id"]
    for i in itertools.count():
        data = get_data(id, i)
        data = b"id: %d\ndata: %d\n\n" % (i, data)
        yield data
        await asyncio.sleep(1)
We are done!
Finally, assuming we put all our code in a file named server.py, we
can start our backend endpoints using Uvicorn, like so:
$ uvicorn --host 127.0.1.1 --port 6001 server:app &
$ uvicorn --host 127.0.1.1 --port 6002 server:app &
Bonus: Cool SSE features
Ok, let us now conclude by showing how easy it is to implement all
those nice features we bragged about earlier.
Compression can be enabled by changing just a few lines in our
endpoint:
@@ -32,10 +33,12 @@ async def websocket_endpoint(ws):
async def sse_generator(req):
id = req.path_params["id"]
+ stream = zlib.compressobj()
for i in itertools.count():
data = get_data(id, i)
data = b"id: %d\ndata: %d\n\n" % (i, data)
- yield data
+ yield stream.compress(data)
+ yield stream.flush(zlib.Z_SYNC_FLUSH)
await asyncio.sleep(1)
@@ -47,5 +50,6 @@ async def sse_endpoint(req):
"Content-type": "text/event-stream",
"Cache-Control": "no-cache",
"Connection": "keep-alive",
+ "Content-Encoding": "deflate",
},
)
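The Z_SYNC_FLUSH in the diff above is the important detail: it forces zlib to emit a complete, decodable chunk after every event instead of buffering, which is what keeps the stream real-time despite compression. A quick round-trip sketch (the two events are made-up examples):

```python
import zlib

comp = zlib.compressobj()
decomp = zlib.decompressobj()

received = []
for event in [b"id: 0\ndata: 42\n\n", b"id: 1\ndata: 7\n\n"]:
    # Flush with Z_SYNC_FLUSH so the chunk is complete on its own.
    chunk = comp.compress(event) + comp.flush(zlib.Z_SYNC_FLUSH)
    # Each chunk can be decompressed as soon as it arrives, without
    # waiting for the end of the stream.
    received.append(decomp.decompress(chunk))

assert received == [b"id: 0\ndata: 42\n\n", b"id: 1\ndata: 7\n\n"]
```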
We can then verify that everything is working as expected by checking
the DevTools:
[Screenshot: SSE Compression]
Multiplexing is enabled by default since Caddy supports HTTP/2. We
can confirm that the same connection is being used for all our SSE
requests using the DevTools again:
[Screenshot: SSE Multiplexing]
Automatic reconnection on unexpected connection errors is as simple
as reading the Last-Event-ID header in our backend code:
-    for i in itertools.count():
+    start = int(req.headers.get("last-event-id", 0))
+    for i in itertools.count(start):
Nothing has to be changed in the front-end code.
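Putting the diff together with the generator from the backend section, the resumable endpoint looks roughly like this. This is a self-contained sketch: the payload is just the counter itself, standing in for the get_data() helper:

```python
import asyncio
import itertools

async def sse_generator(req):
    # Resume from where the client left off: the browser sends the
    # Last-Event-ID header automatically when it reconnects.
    start = int(req.headers.get("last-event-id", 0))
    for i in itertools.count(start):
        # Payload simplified to the counter for this sketch.
        yield b"id: %d\ndata: %d\n\n" % (i, i)
        await asyncio.sleep(1)
```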
We can test that it is working by starting the connection to one of
the SSE endpoints and then killing uvicorn. The connection will drop,
but the browser will automatically try to reconnect. Thus, if we
restart the server, we will see the stream resume from where it left
off!
Notice how the stream resumes from message 243. Feels like magic!
[Screenshot: the stream resuming after a reconnection]
Conclusion
WebSockets are a big piece of machinery built on top of HTTP and TCP
to provide a set of extremely specific features: two-way, low-latency
communication.
In order to do that they introduce a number of complications, which
end up making both client and server implementations more complicated
than solutions based entirely on HTTP.
These complications and limitations have been addressed by new specs
(RFC 7692, RFC 8441), whose fixes will slowly make their way into
client and server libraries.
However, even in a world where WebSockets have no technical
downsides, they would still be a fairly complex technology, involving
a large amount of additional code on both clients and servers.
Therefore, you should carefully consider whether the added complexity
is worth it, or whether you can solve your problem with a much
simpler solution, such as Server-Sent Events.
---------------------------------------------------------------------
That's all, folks! I hope you found this post interesting and maybe
learned something new.
Feel free to check out the code of the demo on GitHub, if you want to
experiment a bit with Server-Sent Events and WebSockets.
I also encourage you to read the spec, because it is surprisingly
clear and contains many examples.
#websockets #server-sent-events #eventsource
Comments
You can comment this post on HN!
bullen on Feb 12, 2022 at 3:26 pm [-]
I made the backend for this MMO on SSE over HTTP/1.1:
https://store.steampowered.com/app/486310/Meadow/
We have had a total of 350.000 players over 6 years and the backend
out-scales all other multiplayer servers that exist and it's open
source:
https://github.com/tinspin/fuse
You don't need HTTP/2 to make SSE work well. Actually the HTTP/2 TCP
head-of-line issue and all the workarounds for that probably make it
harder to scale without technical debt.
---------------------------------------------------------------------
bastawhiz on Feb 12, 2022 at 4:01 pm [-]
Can you explain how H2 would make it harder to scale SSE?
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:21 pm [-]
The mistake they did was to assume only one TCP socket should be
used; the TCP has it's own head-of-line limitations just like HTTP/
1.1 has if you limit the number of sockets (HTTP/1.1 had 2 sockets
allowed per client, but Chrome doesn't care...) it's easily solvable
by using more sockets but then you get into concurrency problems
between the sockets.
That said if you, like SSE on HTTP/1.1; use 2 sockets per client
(breaking the RFC, one for upstream and one for downstream) you are
golden but then why use HTTP/2 in the first place?
HTTP/2 creates more problems than solutions and so does HTTP/3
unfortunately until their protocol fossilizes which is the real
feature of a protocol, to become stable so everyone can rely on
things working.
In that sense HTTP/1.1 is THE protocol of human civilization until
the end of times; together with SMTP (the oldest protocol of the
bunch) and DNS (which is centralized and should be replaced btw).
---------------------------------------------------------------------
jupp0r on Feb 12, 2022 at 4:33 pm [-]
The issues with TCP head-of-line blocking are resolved in HTTP/3
(QUIC).
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:44 pm [-]
Sure but then HTTP/3 is still binary and it's in flux meaning most
routers don't play nice with it yet and since HTTP/1.1 works great
for 99.9% of the usecases I would say it's a complete waste of time,
unless you have some new agenda to push.
Really people should try and build great things on the protocols we
have instead of always trying to re-discover the wheel, note: NOT the
same as re-inventing the wheel: http://move.rupy.se/file/wheel.jpg
---------------------------------------------------------------------
HWR_14 on Feb 12, 2022 at 4:53 pm [-]
Your license makes some sense, but it seems to include a variable
perpetual subscription cost via gumroad. Without an account (assuming
I found the right site), I have no idea what you would be asking for.
I recommend making it a little clearer on the landing page.
That's said, it's very cool. Do you have a development blog for
Meadow?
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 5:23 pm [-]
Added link in the readme! Thx.
No, no dev log but I'll tell you some things that where incredible
during that project:
- I started the fuse project 4 months before I set foot in the Meadow
project office (we had like 3 meetings during those 4 months just to
touch base on the vision)! This is a VERY good way of making things
smooth, you need to give tech/backend/foundation people a head start
of atleast 6 months in ANY project.
- We spent ONLY 6 weeks (!!!) implementing the entire games
multiplayer features because I was 100% ready for the job after 4
months. Not a single hickup...
- Then for 7 months they finished the game client without me and
released without ANY problems (I came back to the office that week
and that's when I solved the anti-virus/proxy cacheing/buffering
problem!).
I think Meadow is the only MMO in history so far to have ZERO
breaking bugs on release (we just had one UTF-8 client bug that we
patched after 15 minutes and nobody noticed except the poor person
that put a strange character in their name).
---------------------------------------------------------------------
Aeolun on Feb 12, 2022 at 6:03 pm [-]
> MIT but [bunch of stuff]
Not MIT then. The beauty of MIT is that there is no stuff.
---------------------------------------------------------------------
stavros on Feb 12, 2022 at 4:49 pm [-]
Probably not the same person, but did you ever play on RoD by any
chance?
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:52 pm [-]
Probably not, since I dont know what RoD is.
---------------------------------------------------------------------
stavros on Feb 12, 2022 at 4:55 pm [-]
I thought so, thanks!
---------------------------------------------------------------------
herodoturtle on Feb 12, 2022 at 4:19 pm [-]
Nice work, thanks for sharing.
---------------------------------------------------------------------
mmcclimon on Feb 12, 2022 at 4:38 pm [-]
SSEs are one of the standard push mechanisms in JMAP [1], and they're
part of what make the Fastmail UI so fast. They're straightforward to
implement, for both server and client, and the only thing I don't
like about them is that Firefox dev tools make them totally
impossible to debug.
1. https://jmap.io/spec-core.html#event-source
---------------------------------------------------------------------
szastamasta on Feb 12, 2022 at 3:59 pm [-]
My experience with sse is pretty bad. They are unreliable, don't
support headers and require keep-alive hackery. In my experience
WebSockets are so much better.
Also ease of use doesn't really convince me. It's like 5 lines of
code with socket.io to have working websockets, without all the
downsides of sse.
---------------------------------------------------------------------
ricardobeat on Feb 12, 2022 at 5:51 pm [-]
Mind expanding on your experience and how are websockets more
reliable than SSE? one of the main benefits of SSE is reliability
from running on plain HTTP.
---------------------------------------------------------------------
88913527 on Feb 12, 2022 at 5:55 pm [-]
HTTP headers must be written before the body; so once you start
writing the body, you can't switch back to writing headers.
Server-sent events appears to me to just be chunked transfer encoding
[0], with the data structured in a particular way (at least from the
perspective of the server) in this reference implementation (tl,dr
it's a stream):
https://gist.github.com/jareware/aae9748a1873ef8a91e5#file-s...
[0]: https://en.wikipedia.org/wiki/Chunked_transfer_encoding
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:27 pm [-]
They don't support headers in javascript, that is more a problem with
javascript than SSE.
Read my comment below about that.
---------------------------------------------------------------------
jFriedensreich on Feb 12, 2022 at 4:25 pm [-]
sounds like you did not really evaluate both technologies at the
heart but only some libraries on top?
---------------------------------------------------------------------
szastamasta on Feb 12, 2022 at 5:11 pm [-]
Yeah, sorry. In socket.io it's 2 lines. You need 5 lines with browser
APIs :).
You simply get stuff like auto-reconnect and graceful failover to
long polling for free when using socket.io
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 5:51 pm [-]
SSE EventSource also has built-in auto-reconnect, and it doesn't even
need to support failover to long polling.
Neither of those being built into a third party websocket library are
actually advantages for websocket... they just speak to the additional
complexity of websocket. Plus, long polling as a fallback mechanism
can only be possible with server side support for both long polling
and websocket. Might as well just use SSE at that point.
---------------------------------------------------------------------
mikojan on Feb 12, 2022 at 4:12 pm [-]
What? How do they not support headers?
You have to send "Content-Type: text/event-stream" just to make them
work.
And you keep the connection alive by sending "Connection: keep-alive"
as well.
I've never had any issues using SSEs.
---------------------------------------------------------------------
szastamasta on Feb 12, 2022 at 5:13 pm [-]
I mean you cannot send stuff from client. If you're using tokens for
auth and don't want to use session cookies, you end with ugly
polyfils.
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 6:00 pm [-]
> If you're using tokens for auth and don't want to use session
cookies
That sounds like a self-inflicted problem. Even if you're using
tokens, why not store them in a session cookie marked with SameSite=
strict, httpOnly, and secure? Seems like it would make everything
simpler, unless you're trying to build some kind of cross-site
widget, I guess.
---------------------------------------------------------------------
Kyro38 on Feb 12, 2022 at 5:16 pm [-]
SSE won't work with tokens, see https://stackoverflow.com/questions/
28176933/http-authorizat...
---------------------------------------------------------------------
mythz on Feb 12, 2022 at 4:16 pm [-]
We use SSE for our APIs Server Events feature https://
docs.servicestack.net/server-events with C#, JS/TypeScript and Java
high-level clients.
It's a beautifully simple & elegant lightweight push events option
that works over standard HTTP, the main gotcha for maintaining
long-lived connections is that server/clients should implement their
own heartbeat to be able to detect & auto reconnect failed
connections which was the only reliable way we've found to detect &
resolve broken connections.
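A server-side heartbeat along those lines could look like this
(assuming a Node http response already set up as an SSE stream; the
15-second default is a guess). Lines starting with ":" are comments in
the SSE wire format: EventSource ignores them, but they keep idle
timeouts in proxies from killing the connection, and a write to a dead
socket surfaces an error the server can react to:

```javascript
// Send an SSE comment line at a fixed interval to keep the
// connection alive, and stop when the client goes away.
function startHeartbeat(res, intervalMs = 15000) {
  const timer = setInterval(() => {
    res.write(":heartbeat\n\n"); // ignored by EventSource
  }, intervalMs);
  res.on("close", () => clearInterval(timer));
  return timer;
}
```

On the client side, the symmetric check is a timer reset on every
message: if nothing (not even a heartbeat) arrives within the window,
close and reopen the EventSource.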
---------------------------------------------------------------------
rawoke083600 on Feb 12, 2022 at 3:17 pm [-]
I like them, they're surprisingly easy to use.
One example where i found it to be not the perfect solution was with
a web turn-based game.
SSE was perfect for pushing game state to all clients, but whenever a
player had to do something it went over a normal Ajax HTTP call, which
hurt latency from the player's point of view.
Eventually I had to switch to uglier websockets and keep the
connection open.
HTTP keep-alive wasn't that reliable.
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 5:03 pm [-]
With HTTP/2, the browser holds a TCP connection open that has various
streams multiplexed on top. One of those streams would be your SSE
stream. When the client makes an AJAX call to the server, it would be
sent through the already-open HTTP/2 connection, so the latency is
very comparable to websocket -- no new connection is needed, no costly
handshakes.
With the downsides of HTTP/1.1 being used with SSE, websockets
actually made a lot of sense, but in many ways they were a kludge
that was only needed until HTTP/2 came along. As you said,
communicating back to the server in response to SSE wasn't great with
HTTP/1.1. That's before mentioning the limited number of TCP
connections that a browser will allow for any site, so you couldn't
use SSE on too many tabs without running out of connections
altogether, breaking things.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:19 pm [-]
You just needed to send a "noop" (no operation) message at regular
intervals.
---------------------------------------------------------------------
jcelerier on Feb 12, 2022 at 4:43 pm [-]
that puts it instantly in the "fired if you ever use it" bin
---------------------------------------------------------------------
leeoniya on Feb 12, 2022 at 4:23 pm [-]
the biggest drawback with SSE, even when unidirectional comm is
sufficient is
> SSE is subject to limitation with regards to the maximum number of
open connections. This can be especially painful when opening various
tabs as the limit is per browser and set to a very low number (6).
https://ably.com/blog/websockets-vs-sse
SharedWorker could be one way to solve this, but lack of Safari
support is a blocker, as usual. https://developer.mozilla.org/en-US/
docs/Web/API/SharedWorke...
also, for websockets, there are various libs that handle
auto-reconnects
https://github.com/github/stable-socket
https://github.com/joewalnes/reconnecting-websocket
https://dev.to/jeroendk/how-to-implement-a-random-exponentia...
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 6:07 pm [-]
This isn't a problem with HTTP/2. You can have as many SSE
connections as you want across as many tabs as the user wants to use.
Browsers multiplex the streams over a handful of shared HTTP/2
connections.
If you're still using HTTP/1.1, then yes, this would be a problem.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:25 pm [-]
It used to be 2 sockets per client, so now it's 6?
Well, it's a non-problem: if you need more bandwidth than one socket in
each direction can provide you have much bigger problems than the
connection limit; which you can just ignore.
---------------------------------------------------------------------
leeoniya on Feb 12, 2022 at 4:45 pm [-]
the problem is multiple tabs. if you have, e.g. a bunch of Grafana
dashboards open on multiple screens in different tabs (on same
domain), you will exhaust your HTTP connection limit very quickly
with SSE.
in most cases this is not a concern, but in some cases it is.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:50 pm [-]
Aha, ok yes then you would need to have many subdomains?
Or make your own tab system inside one browser tab.
I can see why that is a problem for some.
---------------------------------------------------------------------
KaoruAoiShiho on Feb 12, 2022 at 6:01 pm [-]
I have investigated SSE for https://fiction.live a few years back but
stayed with websockets. Maybe it's time for another look. I pay
around $300 a month for the websocket server, it's probably not worth
it yet to try to optimize that but if we keep growing at this rate it
may soon be.
---------------------------------------------------------------------
mmzeeman on Feb 12, 2022 at 3:54 pm [-]
Did research on SSE a short while ago. Found out that the mimetype
"text/event-stream" was blocked by a couple of anti-virus products.
So that was a no-go for us.
---------------------------------------------------------------------
pornel on Feb 12, 2022 at 6:05 pm [-]
It's not blocked. It's just that some very badly written proxies can
try to buffer the "whole" response, and SSE is technically a
never-ending file.
It's possible to detect that, and fall back to long polling. Send an
event immediately after opening a new connection, and see if it
arrives at the client within a short timeout. If it doesn't, make
your server close the connection after every message sent (connection
close will make AV let the response through). The client will
reconnect automatically.
Or run:
while(true) alert("antivirus software is worse than malware")
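The probe described above could be sketched like this, with the stream
constructor injected so the same logic works with the browser's
EventSource or a test double:

```javascript
// Open the stream; if the server's immediate hello event doesn't
// arrive within timeoutMs, assume a buffering proxy and fall back
// to long polling.
function probeSse(openStream, timeoutMs = 2000) {
  return new Promise((resolve) => {
    const es = openStream();
    const timer = setTimeout(() => {
      es.close();
      resolve("long-poll");
    }, timeoutMs);
    es.onmessage = () => {
      clearTimeout(timer);
      resolve("sse");
    };
  });
}

// Browser usage (endpoint name is made up):
//   probeSse(() => new EventSource("/events"), 2000)
```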
---------------------------------------------------------------------
ronsor on Feb 12, 2022 at 4:14 pm [-]
These days I feel like the only way to win against poorly designed
antiviruses and firewalls is to--ironically enough--behave like malware
and obfuscate what's going on.
---------------------------------------------------------------------
captn3m0 on Feb 12, 2022 at 4:19 pm [-]
I was using SSE when they'd just launched (almost a decade ago now)
and never faced any AV issues.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:29 pm [-]
They don't block it, they cache the response until there is enough
data in the buffer... just push more garbage data on the first
chunks...
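The flush-the-buffer trick could be sketched as an SSE comment sent as
the first chunk (the 2 KB default is a guess; real middleboxes vary):

```javascript
// Lead the stream with a large SSE comment so buffering middleboxes
// fill their buffer and flush before the first real event.
function paddingPrelude(bytes = 2048) {
  return ":" + " ".padEnd(bytes) + "\n\n";
}

// res.write(paddingPrelude()); // before the first real event
```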
---------------------------------------------------------------------
bastawhiz on Feb 12, 2022 at 4:02 pm [-]
How did you find that out?
---------------------------------------------------------------------
TimWolla on Feb 12, 2022 at 5:45 pm [-]
> RFC 8441, released on September 2018, tries to fix this limitation
by adding support for "Bootstrapping WebSockets with HTTP/2". It has
been implemented in Firefox and Chrome. However, as far as I know, no
major reverse-proxy implements it.
HAProxy supports RFC 8441 automatically. It's possible to disable it,
because support in clients tends to be buggy-ish: https://
cbonte.github.io/haproxy-dconv/2.4/configuration.htm...
Generally I can second the recommendation of using SSE / long-running
response streams over WebSockets, for the same reasons as the article.
---------------------------------------------------------------------
dpweb on Feb 12, 2022 at 3:34 pm [-]
Very easy to implement - still using code I wrote 8 years ago, which
is like 20 lines client and server, choosing it at the time over ws.
Essentially just new EventSource(), text/event-stream header, and
keep conn open. Zero dependencies in browser and nodejs. Needs no
separate auth.
---------------------------------------------------------------------
kreetx on Feb 12, 2022 at 4:14 pm [-]
SSEs had a severe connection limit, something like 4 connections per
domain per browser (IIRC), so if you had four tabs open, opening new
ones would fail.
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 4:46 pm [-]
Browsers also limit the number of websocket connections. But, if
you're using HTTP/2, as you should be, then the multiplexing means
that you can have effectively unlimited SSE connections through a
limited number of TCP connections, and those TCP connections will be
shared across tabs.
(There's one person in this thread who is just ridiculously opposed
to HTTP/2, but... HTTP/2 has serious benefits. It wasn't developed in
a vacuum by people who had no idea what they were doing, and it
wasn't developed aimlessly or without real world testing. It is used
by pretty much all major websites, and they absolutely wouldn't use
it if HTTP/1.1 was better... those major websites exist to serve
their customers, not to conspiratorially push an agenda of broken
technologies that make the customer experience worse.)
---------------------------------------------------------------------
jcheng on Feb 12, 2022 at 5:04 pm [-]
> Browsers also limit the number of websocket connections
True but the limit for websockets these days is in the hundreds, as
opposed to 6 for regular HTTP requests.
---------------------------------------------------------------------
coder543 on Feb 12, 2022 at 5:08 pm [-]
https://stackoverflow.com/questions/26003756/is-there-a-limi...
It appears to be 30 per domain, not "hundreds", at least as of the
time this answer was written. I didn't see anything more recent that
contradicted this.
In practice, this is unlikely to be problematic unless you're using
multiple websockets per page, but the limit of 6 TCP connections is
even less likely to be a problem if you're using HTTP/2, since those
will be shared across tabs, which isn't the case for the dedicated
connection used for each websocket.
---------------------------------------------------------------------
oplav on Feb 12, 2022 at 4:41 pm [-]
6 connections per domain per browser: https://bugs.chromium.org/p/
chromium/issues/detail?id=275955
There are some hacks to work around it though.
---------------------------------------------------------------------
oneweekwonder on Feb 12, 2022 at 3:33 pm [-]
Personally I use MQTT over websockets; paho[0] is a good JS library.
It supports last-will messages for disconnects, and the message-queue
design makes it easy to reason about and debug. There are also a lot
of MQ brokers that scale well.
[0]: https://www.eclipse.org/paho/index.php?page=clients/js/index...
---------------------------------------------------------------------
tgv on Feb 12, 2022 at 5:48 pm [-]
But SSE is a one-way street, isn't it? The client gets one chance to
send data, and that's it? Or is there some way around it?
---------------------------------------------------------------------
Too on Feb 12, 2022 at 5:45 pm [-]
Can someone give a brief summary of how this differs from long
polling. It looks very similar except it has a small layer of
formalized event/data/id structure on top? Are there any differences
in the lower connection layers, or any added support by browsers and
proxies given some new headers?
What are the benefits of SSE vs long polling?
---------------------------------------------------------------------
TimWolla on Feb 12, 2022 at 5:47 pm [-]
> What are the benefits of SSE vs long polling?
The underlying mechanism effectively is the same: A long running HTTP
response stream. However long-polling commonly is implemented by
"silence" until an event comes in and then performing another request
to wait for the next event, whereas SSE sends you multiple events per
request.
---------------------------------------------------------------------
sb8244 on Feb 12, 2022 at 3:09 pm [-]
I can't find any downsides of SSE presented. My experience is that
they're nice in theory, but the devil's in the details. The biggest
issue is that you basically need HTTP/2 to make them practical.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:14 pm [-]
Absolutely not, HTTP/1.1 is the way to make SSE fly:
https://github.com/tinspin/rupy/wiki/Comet-Stream
Old page, search for "event-stream"... Comet-stream is a collection
of techniques of which SSE is one.
My experience is that SSE goes through anti-viruses better!
---------------------------------------------------------------------
mwcampbell on Feb 12, 2022 at 4:05 pm [-]
> My experience is that SSE goes through anti-viruses better!
Hmm, another commenter says the opposite:
https://news.ycombinator.com/item?id=30313692
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:36 pm [-]
He just needs to push more data on the reply to force the anti-virus
to flush the data. Easy peasy.
---------------------------------------------------------------------
foxbarrington on Feb 12, 2022 at 3:47 pm [-]
I'm a huge fan of SSE. In the first chapter of my book Fullstack
Node.js I use it for the real-time chat example because it requires
almost zero setup. I've also been using SSE on https://rambly.app to
handle all the WebRTC signaling so that clients can find new peers.
Works great.
---------------------------------------------------------------------
viiralvx on Feb 12, 2022 at 4:35 pm [-]
Rambly looks sick, thanks for sharing!
---------------------------------------------------------------------
samwillis on Feb 12, 2022 at 3:39 pm [-]
I have used SSEs extensively, I think they are brilliant and
massively underused.
The one thing I wish they supported was a binary event data type
(mixed in with text events), effectively being able to send in my
case image data as an event. The only way to do it currently is as a
Base64 string.
---------------------------------------------------------------------
keredson on Feb 12, 2022 at 5:25 pm [-]
SSE supports gzip compression, and a gzip-ed base64 is almost as
small as the original jpg:
$ ls -l PXL_20210926_231226615.*
-rw-rw-r-- 1 derek derek 8322217 Feb 12 09:20
PXL_20210926_231226615.base64
-rw-rw-r-- 1 derek derek 6296892 Feb 12 09:21
PXL_20210926_231226615.base64.gz
-rw-rw-r-- 1 derek derek 6160600 Oct 3 15:31
PXL_20210926_231226615.jpg
---------------------------------------------------------------------
samwillis on Feb 12, 2022 at 5:43 pm [-]
Quite true, however from memory Django doesn't (or didn't) support
gzip on streaming responses and as we host on Heroku we didn't want
to introduce another http server such as Nginx into the Heroku Dyno.
As an aside, Django with Gevent/Gunicorn does SSE well from our
experience.
---------------------------------------------------------------------
jtwebman on Feb 12, 2022 at 4:45 pm [-]
Send an event that tells the browser to request the binary image.
---------------------------------------------------------------------
samwillis on Feb 12, 2022 at 4:51 pm [-]
In my case I was aiming for low latency with a dynamically generated
image. To send a url to a saved image, I would have to save it first
to a location for the browser to download it from. That would add at
least 400ms, probably more.
Ultimately what I did was run an SSE request and long polling image
request in parallel, but that wasn't ideal as I had to coordinate
that on the backend.
---------------------------------------------------------------------
bckr on Feb 12, 2022 at 5:26 pm [-]
I'm curious if you could have kept the image in memory (or in Redis)
and served it that way
---------------------------------------------------------------------
samwillis on Feb 12, 2022 at 5:39 pm [-]
That's actually not too far from what we do. The image is created by
a backend service with communication (queue and responses) to the
front end servers via Redis. However rather than saving the image in
its entirety to Redis, it's streamed via it in chunks using LPUSH and
BLPOP.
This lets us then stream the image as a streaming http response from
the front end, potentially before the jpg has finished being
generated on the backend.
So from the SSE we know the url the image is going to be at before
it's ready, and effectively long poll with a 'new Image()'.
---------------------------------------------------------------------
rcarmo on Feb 12, 2022 at 4:39 pm [-]
I have always preferred SSE to WebSockets. You can do a _lot_ with a
minuscule amount of code, and it is great for updating charts and
status UIs on the fly without hacking extra ports, server daemons and
whatnot.
---------------------------------------------------------------------
lima on Feb 12, 2022 at 3:25 pm [-]
One issue with SSE is that dumb enterprise middleboxes and Windows
antivirus software break them :(
They'll try to read the entire stream to completion and will hang
forever.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:29 pm [-]
I managed to get through almost all middle men by using 2 tricks:
1) Push a large amount of data on the pull (the comet-stream SSE
never ending request) response to trigger the middle thing to flush
the data.
2) Using SSE instead of just Comet-Stream since they will see the
header and realize this is going to be real-time data.
We had a 99.6% connection success rate across 350,000 players from
all over the world (even satellite connections in the Pacific and
modems in Siberia), which is a world record for any service.
---------------------------------------------------------------------
Matheus28 on Feb 12, 2022 at 4:12 pm [-]
While 350k simultaneous connections is nice, I'd be extremely
skeptical of that being any kind of world record
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:38 pm [-]
The world record is not the 1,100 concurrent users per machine (T2
small, then medium, on AWS) we had at peak, but the 99.6% connection
success rate we managed. All other multiplayer games have ~80% if they
are lucky!
350,000 was the total number of players over 6 years.
---------------------------------------------------------------------
havkom on Feb 12, 2022 at 4:40 pm [-]
The most compatible technique is long polling (with a re-established
connection after X seconds if no event). Works surprisingly well in
many cases and is not blocked by any proxies.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:46 pm [-]
Long-polling is blocked to almost exactly the same extent as
comet-stream and SSE. The only thing you have to do is to push more
data on the response so that the proxy is forced to flush the
response!
Since IE7 is no longer used we can bury long-polling for good.
---------------------------------------------------------------------
nickjj on Feb 12, 2022 at 5:06 pm [-]
This is why I really really like Hotwire Turbo[0] which is a back-end
agnostic way to do fast and partial HTML based page updates over HTTP
and it optionally supports broadcasting events with WebSockets (or
SSE[1]) only when it makes sense.
So many alternatives to Hotwire want to use WebSockets for
everything, even for serving HTML from a page transition that's not
broadcast to anyone. I share the same sentiment as the author in that
WebSockets have real pitfalls and I'd go even further and say unless
used tastefully and sparingly they break the whole ethos of the web.
HTTP is a rock solid protocol and super optimized / well known and
easy to scale since it's stateless. I hate the idea of going to a
site where after it loads, every little component of the page is
updated live under my feet. The web is about giving users control. I
think the idea of push based updates like showing notifications and
other minor updates are great when used in moderation but SSE can do
this. I don't like the direction of some frameworks around wanting to
broadcast everything and use WebSockets to serve HTML to 1 client.
I hope in the future Hotwire Turbo alternatives seriously consider
using HTTP and SSE as an official transport layer.
[0]: https://hotwired.dev/
[1]: https://twitter.com/dhh/status/1346095619597889536?lang=en
---------------------------------------------------------------------
quickthrower2 on Feb 12, 2022 at 4:26 pm [-]
Is it worth upgrading a long polling solution to SSE? Would I see
much benefit?
What I mean by that is client sends request, server responds in up to
2 minutes with result or a try again flag. Either way client resends
request and then uses response data if provided.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:28 pm [-]
Yes, since IE7 is out of the game long-polling is no longer needed.
Comet-stream and SSE will save you a lot of bandwidth and CPU!
---------------------------------------------------------------------
jFriedensreich on Feb 12, 2022 at 4:27 pm [-]
This is what I have been telling people for years, but it's hard to
get the word out there. Most devs reflexively reach for websockets
whenever anything realtime or push related comes up.
---------------------------------------------------------------------
captn3m0 on Feb 12, 2022 at 4:22 pm [-]
I think SSE might make a lot of sense for Serverless workloads? You
don't have to worry about running a websocket server, any serverless
host with HTTP support will do. Long-polling might be costlier
though?
---------------------------------------------------------------------
ravenstine on Feb 12, 2022 at 3:45 pm [-]
I usually use SSEs for personal projects because they are way more
simple than WebSockets (not that those aren't also simple) and most
of the time my web apps just need to listen for something coming from
the server and not bidirectional communication.
---------------------------------------------------------------------
llacb47 on Feb 12, 2022 at 3:30 pm [-]
Google uses SSE for hangouts/gchat.
---------------------------------------------------------------------
goodpoint on Feb 12, 2022 at 3:08 pm [-]
--- WebSockets cannot benefit from any HTTP feature. That is:
No support for compression
No support for HTTP/2 multiplexing
Potential issues with proxies
No protection from Cross-Site Hijacking
---
Is that true? The web never ceases to amaze.
---------------------------------------------------------------------
__s on Feb 12, 2022 at 3:13 pm [-]
WebSockets support compression (ofc, the article goes on to detail
this & point out flaws. I'd argue that compression is not generally
useful in web sockets in the context of many small messages, so it
makes sense to be default-off for servers as it's something which
should be enabled explicitly when necessary, but the client should be
default-on since the server is where the resource usage decision
matters)
I don't see why WebSockets should benefit from HTTP. Besides the
handshake to setup the bidirectional channel, they're a separate
protocol. I'll agree that servers should think twice about using
them: they necessitate a lack of statelessness & HTTP has plenty of
benefits for most web usecases
Still, this is a good article. SSE looks interesting. I host an
online card game openEtG, which is far enough from real time that SSE
could potentially be a way to reduce having a connection to every
user on the site
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:18 pm [-]
The problem with WebSockets is that they are:
1) More complex and binary, so you cannot debug them as easily,
especially on live systems and especially if you use HTTPS.
2) The implementations don't parallelize the processing; with
Comet-Stream + SSE you just need an application server with
concurrency and you are set to scale across the entire machine's
cores.
3) WebSockets still have more problems with firewalls.
---------------------------------------------------------------------
whazor on Feb 12, 2022 at 3:02 pm [-]
I tried out server-sent events, but they are still quite troubling
with the lack of headers and cookies. I remember I needed some
polyfill version, which gave more issues.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:15 pm [-]
How do you mean lack of headers and cookies?
That is wrong. Edit: actually it seems correct (a JavaScript API
limitation, not an SSE problem), but it's a non-problem if you pass
that data as a query parameter instead and read it on the server.
---------------------------------------------------------------------
tytho on Feb 12, 2022 at 3:31 pm [-]
You cannot send custom headers when using the built-in EventSource[1]
constructor, however you can pass the 'include' value to the
credentials option. Many polyfills allow custom headers.
However you are correct that if you're not using JavaScript and
connecting directly to the SSE endpoint via something else besides a
browser client, nothing is preventing anyone from using custom
headers.
[1] https://developer.mozilla.org/en-US/docs/Web/API/EventSource...
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:35 pm [-]
Aha, well why do you need to send a header when you can just put the
data on the GET URL like so "blabla?cookie=erWR32" for example?
In my example I use this code:

    var source = new EventSource('pull?name=one');
    source.onmessage = function (event) {
        document.getElementById('events').innerHTML += event.data;
    };
---------------------------------------------------------------------
tytho on Feb 12, 2022 at 3:52 pm [-]
I think that works great! The complaint I've heard is that you may
need to support multiple ways to authenticate, opening up more attack
surface.
---------------------------------------------------------------------
kreetx on Feb 12, 2022 at 3:53 pm [-]
What if you use http-only cookies?
---------------------------------------------------------------------
tytho on Feb 12, 2022 at 3:56 pm [-]
You can pass a 'withCredentials' option.
---------------------------------------------------------------------
withinboredom on Feb 12, 2022 at 3:34 pm [-]
I'm pretty sure I saw him sending headers in the talk. Did you watch
the talk?
---------------------------------------------------------------------
tytho on Feb 12, 2022 at 3:50 pm [-]
He was likely using a polyfill. It's definitely not in the spec and
there's an open discussion about trying to get it added: https://
github.com/whatwg/html/issues/2177
---------------------------------------------------------------------
axiosgunnar on Feb 12, 2022 at 3:45 pm [-]
So do I understand correctly that when using SSE, the login cookie of
the user is not automatically sent with the SSE request like it is
with all normal HTTP requests? And I have to redo auth somehow?
---------------------------------------------------------------------
bastawhiz on Feb 12, 2022 at 4:06 pm [-]
It should automatically send first party cookies, though you may need
to specify withCredentials.
---------------------------------------------------------------------
The_rationalist on Feb 12, 2022 at 4:49 pm [-]
For bidirectional communication, RSocket is much better than
websocket; in fact its official support is the best feature of Spring
Boot.
---------------------------------------------------------------------
beebeepka on Feb 12, 2022 at 3:20 pm [-]
So, what are the downsides to using websockets? They are my go-to
solution when I am doing a game, chat, or something else that needs
interactivity.
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:22 pm [-]
See my comment below: https://news.ycombinator.com/item?id=30313403
---------------------------------------------------------------------
herodoturtle on Feb 12, 2022 at 4:23 pm [-]
Been reading all your comments on this thread (thank you) with
interest.
Can you recommend some resources for learning SSE in depth?
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 4:40 pm [-]
I would look at my own app-server: https://github.com/tinspin/rupy
It's not the best documented, but it's the smallest implementation
while still being one of the most performant, so you can learn more
than just SSE.
---------------------------------------------------------------------
pictur on Feb 12, 2022 at 3:16 pm [-]
Does SSE offer support for capturing connect/disconnect situations?
---------------------------------------------------------------------
bullen on Feb 12, 2022 at 3:21 pm [-]
The TCP stack can give you that info if you are lucky with your
topology, but generally you cannot rely on it working 100%.
The way I solve it is to send "noop" messages at regular intervals, so
that the socket write returns -1 when something is off, and then I
know to reconnect.
---------------------------------------------------------------------