[HN Gopher] Engineering for Slow Internet
___________________________________________________________________
Engineering for Slow Internet
Author : jader201
Score : 787 points
Date : 2024-05-31 03:06 UTC (19 hours ago)
(HTM) web link (brr.fyi)
(TXT) w3m dump (brr.fyi)
| 0xWTF wrote:
| I still think engineering for slow internet is really important,
| and massively underappreciated by most software developers, but
| ... LEO systems (like Starlink, especially Starlink) essentially
| solve the core problems now. I did an Arctic transit (Alaska to
| Norway) in September and October of 2023, and we could make
| FaceTime video calls from the ship, way above the Arctic Circle,
| despite cloud cover, sea ice, and being _quite_ far from land.
| This was at the same time OP was in Antarctica. Whatever that
| constraint was, it's just a matter of contracting for the
| service and getting terminals to the sites. The polar coverage
| is relatively sparse, but still plenty, due to the
| extraordinarily low population.
|
| https://satellitemap.space/
| giantrobot wrote:
| Slow Internet isn't just remote places, it also crops up in
| heavily populated urban areas. It's sad that you had better
| connectivity above the Arctic circle than the typical
| connectivity with hotel WiFi. Bad connectivity also happens
| with cellular connections all over the place.
| el_benhameen wrote:
| Not really the point of your post, but that sounds like a
| really cool trip. What were you doing up there?
| drozycki wrote:
| There's a diner in SF I frequent. I usually sit 15 feet from
| the door, on a busy retail corridor, with Verizon premium
| network access. My iPhone XS reports two bars of LTE but
| there's never enough throughput for DNS to resolve. Same at my
| dentist's office. I hope to live in a post slow internet world
| one day, but that is still many years away.
|
| (The XS does have an Intel modem, known to be inferior to the
| Qualcomm flagship of the era)
| radicaldreamer wrote:
| I think this is tough because a lot of bands have been
| repurposed for 5G and an XS doesn't support any of those.
| drozycki wrote:
| I get 400 Mbps down standing at the door of that same
| diner. My understanding is that 4G bands are repurposed for
| 5G in rough proportion to the usage of 4G vs 5G devices at
| that tower, plus there's some way to use a band for both.
| In any case I was having these indoor performance issues
| back in 2019. I'm pretty sure it's an Intel issue, and any
| Qualcomm modem would be fine.
| themoonisachees wrote:
| I see this in my French city: there's a particular spot
| on my commute where my phone (mediatek) will report 2
| bars of 5G but speeds will actually be around 3G. I've
| also noticed other people on the tram having their videos
| buffer at that spot, so it's not just me. The carriers do
| not care, of course.
|
| I think there's just some of these areas where
| operational conditions make the towers break in some
| specific way.
| rsynnott wrote:
| I mean, they're not breaking, they're just overloaded.
| Solution is generally to add more towers, but that's
| expensive.
| kjkjadksj wrote:
| What do we pay them for if not to build out our telecom
| towers?
| polairscience wrote:
| What ship were you on, and was it the Northwest Passage? We
| haven't had good luck north of 80 degrees with Starlink.
| Thlom wrote:
| FYI, Space Norway will launch two satellites this summer on a
| Falcon 9 that will be going into a HEO orbit; among the
| payloads on the satellites is a Viasat/Inmarsat Ka-band
| payload which will provide coverage north of 80 degrees.
| Latency will probably be GEO+, but coverage is coverage, I
| guess. :-)
| chipdart wrote:
| > I still think engineering for slow internet is really
| important, and massively underappreciated by most software
| developers, but ... LEO systems (like Starlink, especially
| Starlink) essentially solve the core problems now.
|
| I don't think that this is a valid assessment of the underlying
| problem.
|
| Slow internet means many things, and one of them is connection
| problems. In connection-oriented protocols like TCP this means
| slowness induced by dropped packets, and in fire-and-forget
| protocols like UDP it means your messages don't get through. So
| slowness can take multiple forms, such as low data rates, or
| moments of high throughput followed by momentary connection
| drops.
|
| One solid approach to dealing with slow networks is supporting
| an offline mode, where all data pushes and pulls are designed
| as transactions that take place asynchronously, and data pushes
| are cached locally to be retried whenever possible. This brings
| additional requirements, such as systems having to support
| versioning and conflict resolution.
|
| Naturally, these requirements permeate into additional UI
| requirements, such as support for manual syncing/refreshing,
| displaying network status, disabling actions that are
| meaningless when the network is down, relying on eager loading
| to remain usable while offline, etc.
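|
| For illustration, a minimal sketch of such an offline outbox in
| TypeScript (the endpoint, field names, and conflict handling
| are all hypothetical, not from any particular product):
|
|     // Offline-first outbox: pushes are queued locally and
|     // retried until the server acknowledges them.
|     interface OutboxItem {
|       id: string;          // client-generated, so retries are idempotent
|       payload: unknown;
|       baseVersion: number; // server version the change was made against
|     }
|
|     const outbox: OutboxItem[] = [];
|
|     function queuePush(item: OutboxItem) {
|       outbox.push(item); // in practice, also persist to IndexedDB
|       void flushOutbox();
|     }
|
|     async function flushOutbox() {
|       while (outbox.length > 0) {
|         const item = outbox[0];
|         try {
|           const res = await fetch("/api/sync", {
|             method: "POST",
|             headers: { "Content-Type": "application/json" },
|             body: JSON.stringify(item),
|           });
|           if (res.status === 409) {
|             await resolveConflict(item); // server has a newer version
|           } else if (!res.ok) {
|             throw new Error(`HTTP ${res.status}`);
|           }
|           outbox.shift(); // acknowledged: drop from the queue
|         } catch {
|           return; // offline or timed out: keep the item, retry later
|         }
|       }
|     }
|
|     async function resolveConflict(item: OutboxItem) {
|       // app-specific: merge, prompt the user, or last-write-wins
|     }
|
|     // Retry whenever connectivity returns.
|     addEventListener("online", () => void flushOutbox());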
| robjan wrote:
| I'd say these days it's more common to deploy in ap-
| southeast-1 (Singapore) rather than Japan to cover most of
| APAC.
| kbr2000 wrote:
| Delay/disruption-tolerant networking (DTN) seeks to address
| these kinds of problems using alternative techniques and
| protocols: store-and-forward, the Bundle Protocol, and the
| Licklider Transmission Protocol. Interesting stuff, enjoy!
| astro-throw wrote:
| Pole doesn't have Starlink. McMurdo does. There are reasons.
|
| Polar coverage from GEO satellites is limited because of how
| close to the horizon GEO satellites are from Pole. Pole uses
| old GEO satellites which are low on fuel and have relatively
| large inclinations, so you can talk to them for ~6 hours out
| of every 24.
|
| Schedule: https://www.usap.gov/technology/1935/
| g15jv2dp wrote:
| How do LEO satellites help me when a commuter train full of
| people connecting to the same AP enters the station I'm in? I
| live in one of the most densely populated places on Earth,
| chock-full of 5G antennas and wifi stations. Yet I still feel
| it when poorly engineered websites trip up on slow/intermittent
| connections.
| kylehotchkiss wrote:
| Idealistic! I think a lot of countries are going to block
| Starlink in the future by interfering with the signals, much
| like the success some countries are having interfering so
| heavily with GPS. Their governments won't want an uncensored
| web, or an American company being the gateway to the internet.
| They'll maintain whatever territorial networks they have now,
| and the speed question is still relevant.
|
| Also, the number of people worldwide whose only access to the
| internet is a $100 Android phone with older software and a
| limited CPU should be considered.
| JeremyNT wrote:
| Even if people want to / are allowed to, I'm trying to
| imagine how well Starlink could plausibly function if 2
| billion people switched from their sketchy terrestrial
| service to Starlink.
|
| As a luxury product used by a few people, maybe it "solves"
| the problem, but I don't think this is a very scalable
| solution.
| sambazi wrote:
| It doesn't take much stress to make Starlink's packet loss
| levels way worse than DOCSIS. It's OK for off-grid, but not
| for the majority.
| NelsonMinar wrote:
| Starlink has its own networking issues thanks to a lot of
| latency jitter and 0.5% or more packet loss. See the discussion
| from last month: https://news.ycombinator.com/item?id=40384959
|
| The biggest issue for Starlink at the poles is, as you say,
| very sparse coverage. Also I suspect Starlink has to usually
| relay polar packets between satellites, not just a simple bent
| pipe relaying to a ground station.
| grishka wrote:
| IMO author shouldn't have censored the app names. The world
| deserves to know.
| koito17 wrote:
| Querying an exact match of a few strings on Google shows me
| that Slack is the very first example given in the blog post.
| For additional confirmation, the "6-byte message" screenshot
| lists an xoxc token and rich_text object, both of which you
| will frequently encounter in the Slack API. To be honest, I was
| expecting it to be Jira at first since I was unaware of Slack's
| size.
|
| Searching for an exact match of "PRL_ERR_WEB_PORTAL_UNEXPECTED"
| gives away Parallels as the first example of a hard-coded HTTPS
| timeout.
|
| So on, so forth.
| Tepix wrote:
| I agree! They call out Apple, Samsung and a few others, but not
| app vendors.
| grishka wrote:
| The messaging app with an updater is clearly Signal. The
| checkmark icons gave it away for me.
| future10se wrote:
| Those I recognize (from interface, branding, strings, etc.):
|
| * Slack -- https://brr.fyi/media/engineering-for-slow-
| internet/load-err...
|
| * Signal (main screen) -- https://brr.fyi/media/engineering-
| for-slow-internet/in-app-d...
|
| * 1Password (config/about page) --
| https://brr.fyi/media/engineering-for-slow-internet/in-app-d...
|
| * Zoom (updater screen on Mac) --
| https://brr.fyi/media/engineering-for-slow-internet/in-app-d...
|
| * Parallels Desktop ("prl_client_app" is the binary name) --
| https://brr.fyi/media/engineering-for-slow-internet/hardcode...
| voyagerfan5761 wrote:
| You got all the same answers I did, which helps me determine
| how good my sleuthing skills are. I used exclusively strings,
| either API routes, error codes, or version/build numbers.
| astro-throw wrote:
| My experience is that Slack worked great last winter, when
| the broadband satellite was up. When it's down, folks use an
| IRC-style client to cope with the very limited & expensive
| bandwidth from Iridium.
| djtango wrote:
| Yes - name and shame. Slack is INFURIATING on intermittent
| connectivity. That is simply not good enough for a product
| whose primary value is communication.
|
| Anyone who has tried to use Slack:
|
| - in the countryside with patchy connection
| - abroad
| - in China
| - on the London Underground
|
| can attest to how poor and buggy Slack is on bad internet.
|
| These aren't weird edgecases - London is a major tech hub.
| Remote workers and open source communities rely on Slack
| around the world.
|
| China is the second largest economy in the world, with a
| population of 1.4B (incidentally Slack is blocked there, at
| least it was when I was last there, but even on VPN it was
| weird and buggy).
|
| How aren't these kinds of metrics tracked by their product
| teams? How isn't WhatsApp the gold standard now for message
| delivery, replicated everywhere?
|
| Neither email nor WhatsApp has the weird consistency issues
| Slack has with simply sending a message over dodgy internet.
| Not to mention the unreliable and sometimes user-hostile
| client state management when Slack can't phone home, which can
| sometimes lead to lost work or an inability to see old
| messages you literally were able to see until you tried to
| interact with them.
| lawgimenez wrote:
| Telegram also works well in remote places.
| djtango wrote:
| I doubt I'll ever work at a place that uses Telegram,
| but yes, it's clear that resilient message delivery is a
| solved problem nowadays, yet Slack is still hopeless at
| the single most important feature of its product. Now
| that WhatsApp also has groups, there's even less of an
| excuse for Slack to perform so badly.
| tecleandor wrote:
| Slack and the Jira suite are terrible.
|
| Slack web downloads 40MB of Javascript. The macOS Slack
| client, which I'd guess should have all that stuff already,
| downloads 10MB of stuff just by starting it (and going
| directly to a private, text-only chat).
| don-code wrote:
| Slack additionally decides to hard-reload itself, seemingly
| without reason.
|
| I work on the road (from a train / parking lot / etc) for
| five or six hours per week. My T-Mobile plan is
| grandfathered in, so I can't "upgrade" to a plan that
| allows full-speed tethering without considerably impacting
| my monthly bill.
|
| Realistically, I hit around 1.5Mbps down. When Slack
| reloads itself, I have to stop _everything else_ that I'm
| doing, immediately, and give Slack full usage of my
| available bandwidth. Oftentimes, it means taking my phone
| out of my pocket, and holding it up near the ceiling of the
| train, which (I've confirmed in Wireshark) reduces my
| packet loss. Even then, it takes two or three tries just to
| get Slack to load.
| djtango wrote:
| I feel your pain - one minute you're reading some
| messages or a note, and the next you're locked out of
| Slack with faded screens and infinite spinnies.
|
| Apparently we must be very niche amongst their user base,
| because these kinds of fixes haven't made it onto their
| roadmap in years.
| fragmede wrote:
| I wonder if you could stick your own root CA into your
| OS's certificate store, MitM the connections Slack makes,
| respond "no, don't update" with Burp Suite, and cache with
| Squid to alleviate the problem.
| don-code wrote:
| I've also found that the AWS and Azure consoles behave this
| way. While not listed in the blog post, they load JavaScript
| bundles in the tens of megabytes, and must have a hard-coded
| timeout that fails the entire load if that JavaScript hasn't
| been downloaded inside of a few minutes.
|
| To Amazon's credit, my ability to load the AWS console has
| improved considerably in recent months, but I can't say the
| same for Azure.
| rlv-dan wrote:
| > Please keep in mind that I wrote the majority of this post ~7
| months ago, so it's likely that the IT landscape has shifted
| since then.
|
| Not sure if this is serious or intended as a joke. It made me
| giggle nonetheless. Which is kind of sad.
| modeless wrote:
| They're being obtuse. What "it's likely the IT landscape has
| shifted" actually means is "they got Starlink and their
| connection is fast now, and I know this for certain but I want
| to downplay it as much as possible because I'm trying to make a
| point".
| sham1 wrote:
| Or they could be making a joke about how quickly trends shift
| in IT. It's like how people joke (or at least used to joke)
| that you'd get a dozen new JavaScript frameworks daily.
|
| Exaggeration for comedic effect, in other words.
| kortilla wrote:
| It's serious presumably because Starlink coverage includes the
| poles now. 7 months ago was around the time they did a demo
| with the McMurdo base IIRC.
| bizzyb wrote:
| McMurdo has Starlink. South Pole doesn't, but not due to
| technical reasons on Starlink's side. From what I understand,
| when they tested at Pole they noticed interference with some
| of the science experiments; it's possible they will engineer
| around that at some point, but for now Starlink is a low
| priority compared to ensuring the science goes on. I forget
| the exact distance, but it's something like 5 miles from Pole
| that they ask traversing groups to turn off their Starlink.
| fragmede wrote:
| That is to say, the Starlink terminals radiate enough EM to
| mess with the sensitive sensors at the South Pole, which is
| fascinating, since they're supposed to have passed compliance
| testing showing they don't do too much of that. But the South
| Pole has a different definition of "too much", it seems.
| danpalmer wrote:
| Having a lot of experience commuting on underground public
| transport (intermittent, congested), and living/working in
| Australia (remote), I can safely say that most services are
| terrible for people without "ideal" network conditions.
|
| On the London Underground it's particularly noticeable that most
| apps are terrible at handling network that comes and goes every
| ~2 minutes (between stops), and which takes ~15s to connect to
| each AP as a train with 500 people on it all try to connect at
| the same time.
|
| In Australia you're just 200ms from everything most of the time.
| That might not seem like much, but it really highlights which
| apps trip up on the N+1 request problem.
|
| The only app that I am always impressed with is WhatsApp. It's
| always the first app to start working after a reconnect, the last
| to get any traffic through before a disconnect, and even with the
| latency, calls feel pretty fast.
| pimeys wrote:
| I guess in London you only get wifi at stops; it's the same in
| Berlin. In Helsinki the wifi connection is available inside the
| trains and in the stations, so you never lose the connection
| while moving. I never understood the decision in Berlin to do
| this - why not just provide internet inside the train...
|
| And yeah, most of the internet works very badly when you drop
| the network all the time...
| pintxo wrote:
| Berlin did not have mobile connections inside the tunnels
| until very recently (this year, I believe). This included the
| trains not being connected to any outside network. Thus wifi
| on the subway was useless to implement.
| saagarjha wrote:
| I was in Berlin earlier this month and the cellular
| connections underground were quite good now. So maybe this
| is less of a problem?
| nicbou wrote:
| It's provider-specific
| alanpearce wrote:
| Not any more?
| https://unternehmen.bvg.de/pressemitteilung/grossprojekt-
| erf...
|
| Summary: since 2024-05-06, users of all networks also get
| LTE in the U-Bahn thanks to a project between BVG and
| Telefonica (not surprising that Telefonica deployed the
| infra because they had the best U-Bahn LTE coverage
| beforehand)
| amaccuish wrote:
| They did if you were on o2, that's why I'm still with Aldi
| Talk (they use the o2 network); they've had LTE through the
| entire network for a while now. The new thing is 5G for
| everyone.
| rkachowski wrote:
| Despite Berlin's general lack of parity with modern
| technology, I've never actually had a problem with internet
| access across the ubahn network in the past decade. I
| noticed that certain carriers used to have very different
| availability when travelling and so switched to a better
| one, but I was always surprised at being able to handle
| mobile data whilst underground.
| nicbou wrote:
| Really? I don't even get consistent internet on the
| Ringbahn. There are lots of holes in the coverage in
| Berlin.
|
| Which provider are you with? Vodafone is still dead in
| large parts of the U-Bahn, but I know that one of them
| works much better.
| avh02 wrote:
| I used to have spotty coverage underground with Vodafone;
| when I switched to Telekom, internet suddenly magically
| worked underground on the routes I used.
|
| I believe someone published a map of the data coverage of
| different providers on the Berlin U-Bahn, but it's probably
| outdated now.
| rkachowski wrote:
| Yeah, admittedly this year I've also started experiencing
| holes on the Ringbahn (strangely and consistently around
| Frankfurter Allee), but the U-Bahn has been fine.
|
| I'm with sim.de, which I believe is essentially an O2
| reseller (the APN references o2).
| brabel wrote:
| Wow! I was in Berlin last week and kept losing
| connection... like all the time. I use 3 with a Swedish
| plan. In Sweden, it literally never drops, not on trains,
| not on metro, not on faraway mountains... it works
| everywhere.
| Liskni_si wrote:
| Yes, right now it's mostly just wifi at stations only.
| However, they're deploying 4G/5G coverage in the tunnels and
| expect 80% coverage by the end of 2024 [1].
|
| So... you can expect apps developed by engineers in London to
| get much worse on slow internet in 2025. :-)
|
| [1]: https://tfl.gov.uk/campaign/station-wifi
| bombcar wrote:
| WiFi at a stop is as easy as putting up a few wireless
| routers, it's a bit more complex than at home but the same
| general idea.
|
| Wifi inside the trains involves much more work, and getting
| it to ALSO be seamless across the entire setup is even
| harder. Easily 10x or 100x the cost.
|
| It's sad, because the Internet shouldn't be that bad when the
| network drops all the time; it should just be slower as it
| waits to send good data.
| Anotheroneagain wrote:
| I think they just put a wire in the tunnel.
| greenish_shores wrote:
| Yes, that's the best way which is often used. A "leaky
| cable" aka "leaky feeder", to be particular.
| chipdart wrote:
| > In Australia you're just 200ms from everything most of the
| time. (...)
|
| > The only app that I am always impressed with is WhatsApp.
| It's always the first app to start working after a reconnect,
| the last to get any traffic through before a disconnect, and
| even with the latency, calls feel pretty fast.
|
| The 200ms is telling.
|
| I bet that WhatsApp is one of the rare services you use which
| actually deployed servers to Australia. To me, 200ms is a
| telltale sign of intercontinental traffic.
|
| Most global companies deploy only to at most three regions:
|
| * the US (us-east, us-central, us-west)
|
| * Europe (west-europe),
|
| * and somewhat rarely the far east (either us-west or Japan)
|
| This means that places such as South Africa, South America, and
| of course Australia typically have to pull data from one of
| these regions, which means latencies of at least 200ms due to
| physics.
|
| Australia is particularly hit because, even with dedicated
| deployments in its theoretical catchment area, often these
| servers are actually located on an entirely separate continent
| (us-west or Japan), and thus users do experience the
| performance impact of having packets cross half the globe.
| toast0 wrote:
| > I bet that WhatsApp is one of the rare services you use
| which actually deployed servers to Australia. To me, 200ms is
| a telltale sign of intercontinental traffic.
|
| So, I used to work at WhatsApp. And we got this kind of
| praise when we only had servers in Reston, Virginia (not at
| AWS us-east-1, but in the same neighborhood). Nowadays,
| Facebook is most likely terminating connections in Australia,
| but messaging most likely goes through another continent.
| Calling within Australia should stay local though (either p2p
| or through a nearby relay).
|
| There's lots of things WhatsApp does to improve experience on
| low quality networks that other services don't (even when we
| worked in the same buildings and told them they should
| consider things!)
|
| In no particular order:
|
| 0) offline first; the phone is the source of truth, although
| there's multi-device now. You don't need to be online to read
| messages you have, or to write messages to be sent whenever
| you're online. Email used to work like this for everyone, and
| it was no big deal to grab mail once in a while, read it and
| reply, and then send in a batch. Online messaging is great,
| if you can, but for things like being on a commuter train
| where connectivity ebbs and flows, it's nice to pick up
| messages when you can.
|
| a) hardcode fallback IPs for _when_ DNS doesn't work (not
| if); see the sketch after this list
|
| b) set up "0rtt" fast resume, so you can start getting
| messages on the second round trip. This is part of Noise
| pipes (or whatever they're called) and TLS 1.3
|
| c) do reasonable-ish things to work with MTU. In the old
| days, FreeBSD reflected the client MSS back to it, which
| helps when there's a tunnel like PPPoE and it only modifies
| outgoing SYNs and not incoming SYN+ACKs. Linux never did
| that, and afaik FreeBSD took it out. Behind Facebook
| infrastructure, they just hardcode the MSS for, I think, a
| 1480 MTU (you can/should check with tcpdump). I did some
| limited testing, and really the best results come from
| monitoring for /24s with bad behavior (it's pretty easy if
| you look for it: never got any large packets, and packet
| gaps are a multiple of MSS minus space for TCP timestamps)
| and then sending back client MSS - 20 to those; you could
| also just always send back client MSS - 20. I think Android
| finally started doing PMTUD blackhole detection a couple
| years back; Apple has been doing it really well for longer.
| Path MTU Discovery is still an issue, and anything you can
| do to make it happier is good.
|
| d) connect in the background to exchange messages when
| possible. Don't post notifications unless the message
| content is on the device. Don't be one of those apps that
| can only load messages from the network when the app is in
| the foreground, because the user might not have
| connectivity then
|
| e) prioritize messages over telemetry. Don't measure
| everything, only measure things when you know what you'll do
| with the numbers. Everybody hates telemetry, but it can be
| super useful as a developer. But if you've got giant
| telemetry packs to upload, that's bad by itself, and if you
| do them before you get messages in and out, you're failing
| the user.
|
| f) pay attention to how big things are on the wire. Not
| everything needs to get shrunk as much as possible, but
| login needs to be very tight, and message sending should be
| too. IMHO, HTTP and JSON and XML are too bulky for those,
| but are OK for multimedia, because the payload is big so
| framing doesn't matter as much, and OK for low-volume
| services because they're low volume.
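|
| As a rough TypeScript (Node) sketch of (a), with a
| placeholder hostname and example addresses rather than
| anything WhatsApp actually uses:
|
|     import net from "node:net";
|     import { Resolver } from "node:dns/promises";
|
|     // Baked-in last-resort addresses for when DNS fails.
|     const FALLBACK_IPS = ["203.0.113.10", "203.0.113.11"];
|
|     async function connectChat(host: string, port: number) {
|       // Give DNS one quick chance, then move on.
|       const resolver = new Resolver({ timeout: 2000, tries: 1 });
|       let candidates: string[];
|       try {
|         candidates = await resolver.resolve4(host);
|       } catch {
|         candidates = []; // DNS is down or too slow
|       }
|       for (const ip of [...candidates, ...FALLBACK_IPS]) {
|         try {
|           return await new Promise<net.Socket>((resolve, reject) => {
|             const s = net.connect({ host: ip, port }, () => resolve(s));
|             s.once("error", reject);
|             s.setTimeout(5000, () => {
|               s.destroy();
|               reject(new Error("connect timeout"));
|             });
|           });
|         } catch {
|           // try the next address
|         }
|       }
|       throw new Error("all addresses failed");
|     }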
| cheema33 wrote:
| > I used to work at WhatsApp..
|
| Do you know why there is a 4 device limit? I run into this
| limit quite a bit, because I have a lot more devices.
|
| And... why is there WhatsApp for most commonly used devices,
| but not iPads?
| swiftcoder wrote:
| > Why is there WhatsApp for most commonly used devices, but
| not iPads?
|
| I was frustrated by this a while back, so I asked the
| PMs. Basically when investing engineering effort WhatsApp
| prioritises the overall number of users connected, and
| supporting iPads doesn't really move that metric, because
| (a) the vast majority of iPad owners also own a
| smartphone, and (b) iPads are pretty rare outside of
| wealthy western cities.
| toast0 wrote:
| I've been gone too long for accurate answers, but I can
| guess.
|
| For iPad, I think it's like the sibling notes; expected
| use is very low, so it didn't justify the engineering
| cost while I was there. But I see some signs it might
| happen eventually [1]; WhatsApp for Android tablets
| wasn't a thing when I was there either, but it is now.
|
| For the four-device limit, there's a few things going on
| IMHO. Synchronization is _hard_, and the more devices are
| playing, the harder it is. Independent devices make it
| easier in some ways, because the user's devices don't have
| to be online together to communicate (like when WhatsApp
| Web was essentially a remote control for your phone), but
| it does mean that all of your communication partners'
| devices have to work harder, and the servers have to work
| harder, too.
|
| Four devices cover your phone, a desktop at home and at
| work, and a laptop; but really, most users only have a
| phone. Allowing more devices makes it more likely that
| you'll lose track of one, or not use it for long enough
| that it's lost sync, etc.
|
| WhatsApp has usually focused on product features that
| benefit the most users, and more than 4 devices isn't
| going to benefit many people; 4 is plenty for internal use
| (phone, prod build, dev build, home computer). I'm sure
| they've got metrics on how many devices are used, and if
| there are a lot of 4-device users and enough requests,
| it's a #define somewhere.
|
| [1] https://www.macworld.com/article/668638/how-to-get-
| whatsapp-...
| erinaceousjones wrote:
| WhatsApp is (or was) using XMPP for the chat part too,
| right?
|
| When I was the IT person on a research ship, WhatsApp was a
| nice easy one to get working with our "50+ people sharing
| two 256kbps uplinks" internet. A big part of that was being
| able to QoS-prioritise the XMPP traffic, which WhatsApp was
| a big part of.
|
| Not having to come up with HTTPS filters for IP ranges
| belonging to general-use CDNs that happened to hit the
| right blocks used by that app was a definite boon. That,
| and the fact XMPP was nice and lightweight.
|
| As far as I know, Google Cloud Messaging (GCN? GCM?
| Firebase? Play notifications? Notifications by Google?
| Google Play Android Notifications Service?) also did/does
| use XMPP, so we often had the bizarre and infuriating
| combination of very fast notifications _where sometimes
| the content was in the notification_, but when you clicked
| on one, other apps would fail to load it due to the
| congestion and latency and hardcoded timeouts TFA
| mentions... argh.
|
| But WhatsApp pretty much always worked, as long as the ship
| had an active WAN connection.... And that kept us all
| happy, because we could reach our families.
| toast0 wrote:
| > WhatsApp is (or was) using XMPP for the chat part too,
| right?
|
| It's not exactly XMPP. It started with XMPP, but XML is
| big, so it's tokenized (some details are published in the
| European Market Access documentation), and there's no
| need for interop with standard XMPP clients, so the login
| sequence is, I think, way different.
|
| But it runs on port 5222 (I think) by default, with
| fallbacks to ports 443 and 80.
|
| I think GCM (or whatever it's called today) is plain XMPP
| (including, optionally, on the server-to-server side),
| and runs on ports 5228-5230. Not sure what protocol Apple
| push is, but they use port 5223, which is affiliated with
| XMPP over TLS.
|
| So I think using a non-443 port was helpful for your QoS?
| But being available on port 443 is helpful for getting
| through blanket firewall rules. AOL used to run AIM on
| _all_ the ports, which is even better at getting through
| firewalls.
| izacus wrote:
| Yeah, it's very, very noticeable that WhatsApp is
| architected for all kinds of poor-connectivity scenarios
| in a way that most other software just... isn't.
| camel-cdr wrote:
| It's not only the services themselves. I have a very slow
| mobile connection, and one thing that bothers me immensely is
| downloading images in the browser: how is it that when I go to
| a .jpg URL to view an image in the browser, it takes way
| longer and sometimes times out, compared to hopping over to
| termux and running wget? I had this problem with both Firefox
| and Chrome-based browsers. Note that even the wget download
| usually takes 10-30 seconds on my mobile connection.
| chris_pie wrote:
| I have the same issue with nearly every static asset.
| bombcar wrote:
| Browsers usually try to multiplex things, sometimes even the
| same image if the server supports "get specific byte range"
| or whatever.
|
| There may be a setting to turn a browser back into a dumb
| wget visual displayer.
| armchair_expert wrote:
| You can try going into proxy settings and setting to "none"
| instead of autodetect. Also, the dns server used by the
| browser could be different (and slower).
| giantrobot wrote:
| Too many services do stupid image transcoding today. While
| the URL says jpg, the service decides that because your
| browser supports WebP, what you _really_ must have wanted was
| WebP. It'll then either transcode, or just send you WebP data
| for the image, or send you a redirect. This is rarely what
| you actually want.
|
| With wget, the server sends you the resource you actually
| requested and doesn't try to get clever (stupid). Google
| likes WebP, so that means everyone needs to join the WebP
| cargo cult, even if it means transcoding a lossy format to
| another lossy format.
| kylehotchkiss wrote:
| WhatsApp has a massive audience in developing countries where
| it's normal for people to have slower internet and much slower
| devices. That perspective being so embedded in their
| development goals certainly has given WhatsApp good reason to
| be the leading messaging platform in many countries around the
| world
| nicbou wrote:
| It works remarkably well when your phone runs out of data and
| you get capped at 8 kbps. Even voice calls work smoothly.
| qingcharles wrote:
| LOL 8kbps. Damn. That takes me back. I built the first
| version of one of the world's largest music streaming sites
| on a 9.6kbps connection.
|
| I was working from home (we had no offices yet) and my
| cable Internet got cut off. My only back up was a serial
| cable to a 2G Nokia 9000i. I had to re-encode a chunk of
| the music catalog at 8kbps so I could test it from home
| before I pushed the code to production.
|
| Psychoacoustic compression is a miracle.
| greenish_shores wrote:
| Nokia 9000i, so you had to work on CSD (which is usually
| billed per-minute, like dial-up), not even GPRS. How much
| did that cost you? :P
|
| BTW, an interesting thing is that some/most carriers allow
| you to use CSD/HSCSD over 3G these days, and you can
| establish a data CSD connection between two phone numbers,
| yielding essentially a dedicated L2 pipe which isn't routed
| over the internet. It can have much lower latency and
| jitter, if that's what you need. Some specialized telemetry
| still uses that; however, as 3G is slowly getting phased
| out, it will probably have to change.
| qingcharles wrote:
| God, the cost was probably horrid, but I was connecting
| in, setting tasks running and logging out. This was late
| 1999 in the UK, so per-minute prices were high. Also,
| these were Windows servers, so I had to sluggishly RDP
| into them, no nice low-bandwidth terminals.
| Scoundreller wrote:
| Even wealthy countries will have dead zones (Toronto subway
| until recently, and like 90% of the landmass), and at least
| in Canada, "running out of data" and just having none left
| (or it being extremely expensive) was relatively common until
| about the last year or two when things got competitive
| (finally!).
|
| Still have an entire territory where everything is satellite
| fed (Nunavut), including its capital.
| greenish_shores wrote:
| Wow. I didn't knew that Nunavut is entirely satellite fed.
| That's very interesting to know, thanks. Do you have some
| more info, though? What kind of satellite - geostationary,
| LEO? Also which constellation has the most share of traffic
| from Nunavut?
| throwaway211 wrote:
| The London Underground not having any connectivity for decades
| after other metro systems got it showed only that high
| connectivity during a commute isn't necessary.
| HPsquared wrote:
| London fails to provide a lot of essentials.
| throwaway211 wrote:
| Of which the need for status updates and short video isn't
| one.
| initramfs wrote:
| Awesome. For the other 6 continents, check out:
| https://solar.lowtechmagazine.com/2015/10/how-to-build-a-low...
| oefrha wrote:
| This is why I find it dreadful that evangelists here are
| heavily promoting live-$whatever technology where every local
| state change requires at least one server roundtrip, or
| claiming "browsers support ESM now, bundling is a thing of the
| past!", etc. You don't need to be in Antarctica to feel the
| latencies caused by the waterfall of roundtrips, or a
| roundtrip on every click; being a mere 200ms from the server,
| or in a heavily congested place, is enough.
| xobs wrote:
| An example of a program that's atrocious about unreliable
| connectivity is `git` -- it has no way to resume downloads, and
| will abort all progress if it fails mid-transfer.
|
| The only way I've found to reliably check out a git repository
| over an unreliable link is to check it out somewhere reliable and
| `rsync` the .git directory over.
| o11c wrote:
| Usually `git clone --depth 1 URL` works, then you can
| incrementally deepen it.
|
| This does cause extra load on the servers, but if it's that big
| a problem for them, they can write the incremental patches
| themselves.
|
| (I suspect that the "dumb http" transport is also incremental
| if you squint hard enough at it, but I've never had reason to
| investigate that closely)
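|
| In practice that looks something like `git clone --depth 1
| <url>`, then `git fetch --deepen=100` as many times as the
| link allows (or `git fetch --unshallow` once, on a good
| connection); each step is a smaller transfer, so less is lost
| when the connection dies.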
| snthpy wrote:
| Have a look at https://offlinefirst.org/ and
| https://www.inkandswitch.com/local-first/ .
| 0xbadcafebee wrote:
| > a lot of the end-user impact is caused by web and app
| engineering which fails to take slow/intermittent links into
| consideration.
|
| Technology today is developed by, and for, privileged people. Has
| been that way for a while. Ever since you had to upgrade your
| computer in order to read the news, there has been a slow, steady
| slog of increasing resource use and conspicuous consumption.
|
| I remember using the 9600 baud modem to get online and do the
| most basic network transactions. It felt blazing fast, because it
| was just some lines of text being sent. I remember the 2.5KBps
| modem, allowing me to stream pictures and text in a new World
| Wide Web. I remember the 5KBps modem making it possible to
| download _an entire movie_! (It took 4 days, and you had to find
| special software to multiplex and resume cancelled downloads,
| because a fax on the dialup line killed the connection) I
| remember movies growing to the size of CDROMs, and later DVDROMs,
| so those who could afford these newer devices could fit the newer
| movies, and those who couldn't afford them, didn't. I remember
| the insane jump from 5KBps to 1.5Mbps, when the future arrived.
| Spending days torrenting hundreds of songs to impress the cool
| kids at school, burning them CDs, movies, compiling whole
| libraries of media [hey, 15 year olds can't afford retail
| prices!].
|
| I remember when my poor friends couldn't use the brand new ride-
| sharing services Uber and Lyft because you had to have an
| expensive new smartphone to hail them. They'd instead have to
| call and then pay for a full fare taxi, assuming one would stop
| for them in the poor neighborhood, or wait an hour and a half to
| catch the bus. I remember when I had to finally ditch my gaming
| laptop, with the world-class video card you could've done crypto
| mining on, because opening more than 5 browser tabs would churn
| the CPU and hard-drive, max out the RAM, and crash the browser. I
| remember having to upgrade my operating system, because it could
| no longer run a new enough browser, that was now required to load
| most web pages. I remember buying smartphone after smartphone
| after smartphone - not because the previous one stopped working,
| but because more apps required more cpu and more memory and more
| storage. I remember trying to download and run a chat app on my
| local machine, and running out of memory, because the chat app
| had an embedded web browser. I remember running out of my data
| cap on my cell phone because some app decided it wanted to stream
| a load of data as if it was just unlimited. I remember running
| out of space on my smartphone because 70% of the space was being
| used just to store the Operating System files.
|
| I'm not complaining, though. It's just how the world works.
| Humanity grows and consumes ever more resources. The people at
| the top demand a newer, better cake, and they get it; everyone
| else picks up the crumbs, until they too get something resembling
| cake. I sure ate my share. Lately I try to eat as little cake as
| possible. Doesn't change the world, but does make me feel better.
| Almost like the cake is a lie.
| self_awareness wrote:
| What hope is there for "engineering for slow internet" to
| happen, when people engineered applications for "fast
| internet" back when all we had was "slow internet"?
|
| Nice thought in theory, but it unnecessarily gives false hope.
|
| Confluence/Jira sometimes need to download _20 megabytes_ in
| order to show a page with only text and some icons. I have a
| friend who tells me that their company had _two Jiras_, one
| for developers and one for the rest of the company, because it
| was that dead slow.
|
| I've lost all faith already that this will change for better.
| Tepix wrote:
| I had a similar experience as the author on a boat in the south
| pacific. Starlink was available but often wasn't used because of
| its high power usage (60+ watts). So we got local SIM cards
| instead which provided 4G internet in some locations and EDGE
| (2G) in others.
|
| EDGE by itself isn't too bad on paper - you get a couple dozen
| kilobits per second. In reality, it was much worse. I ran into
| apps with short timeouts that would have worked just fine, if the
| authors had taken into account that loading can take minutes
| instead of milliseconds.
|
| Low bandwith, high latency connections need to be part of the
| regular testing of software. For Linux, there's netem
| (https://wiki.linuxfoundation.org/networking/netem) that will let
| you do this.
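|
| For example, something like `tc qdisc add dev eth0 root netem
| delay 200ms 40ms loss 1% rate 256kbit` (interface name and
| numbers are just placeholders) approximates an EDGE-ish link
| for local testing.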
|
| An issue that the anonymous blog author didn't have was metered
| connections. Doing OS or even app upgrades was pretty much out of
| the question for cost reasons. Luckily, every few weeks or so, we
| got to a location with an unmetered connection to perform such
| things. But we got very familiar with the various operating
| systems' ways to mark connections as metered/unmetered,
| disable all automatic updates, and save precious bandwidth.
| chipdart wrote:
| > Low bandwith, high latency connections need to be part of the
| regular testing of software.
|
| One size does not fit all. It would be a waste of time and
| effort to architect (or redesign) an app just because a
| residual subset of potential users might find themselves on a
| boat in the middle of the Pacific.
|
| Let's keep things in perspective: some projects even skip
| testing WebApps on more than one browser because they deem
| that wasteful and an unjustified expense, even though it's
| trivial to include them in a test matrix, and that's UI-only.
| fragmede wrote:
| it's not a total redesign, it's just raising a timeout from
| 30 to 3000
| Nevolihs wrote:
| Websites regularly break because I don't have perfect network
| coverage on my phone every single day. In a lot of places, I
| don't even have decent reception. This is in Germany, in and
| around a major city.
|
| Why do you think this only applies to people on a boat?
| chipdart wrote:
| > Websites regularly break because I don't have perfect
| network coverage on my phone every single day.
|
| Indeed, that's true. However, the number of users who go
| through similar experiences is quite low, and even those who
| do are always an F5 away from circumventing the issue.
|
| I repeat: even supporting a browser other than the latest N
| releases of Chrome is a hard sell at some companies.
| Typically the test matrix is limited to N versions of Chrome,
| plus the latest release of Safari when Apple products are
| supported. If budgets don't stretch even to cover the basics,
| of course even rarer edge cases, such as a user accessing a
| service through a crappy network, will be far from the list
| of concerns.
| throwaway2037 wrote:
| The South Pacific should be very sunny. I guess that you didn't
| have enough solar panels to provide 60+ watts. I am genuinely
| surprised.
|
| And "local SIM cards" implies that you set foot on (is)lands to
| buy said SIM cards. Where did you only get 2G in the 2020s? I
| cannot believe any of this is still left in the South Pacific.
| RetroTechie wrote:
| > Where did you only get 2G in the 2020s?
|
| My previous smartphone supported 4G/3G/Edge, but for some
| reason the 4G didn't work. At all, ever, anywhere (_not_ a
| provider/subscription or OS settings issue, and WiFi was
| fine).
|
| In my country 3G was turned off a while ago to free up
| spectrum. So it fell back to Edge all the time.
|
| That phone died recently. I'm temporarily using an older
| phone which also supports 4G/3G/Edge, and where the 4G bit
| works. Except... in many places where I hang out (rural /
| countryside) 4G coverage is spotty or non-existent. So it
| _also_ falls back to Edge most of the time.
|
| Just the other day (while on WiFi) I installed Dolphin as a
| lightweight browser alternative. Out in the countryside, it
| doesn't work ("no connection"), even though Firefox works
| fine there.
|
| Apps won't download unless on WiFi. Not even if you're
| patient: downloads break somewhere, don't resume properly, or
| what's downloaded doesn't install because the download was
| corrupted. None of these issues over WiFi. Same with some
| websites: roundtrips take too long, server drops the
| connection, images don't load, etc etc.
|
| Bottom line: app developers or online services don't (seem
| to) care about slow connections.
|
| But here's the thing: for the average person in this world,
| fast mobile connections are still the exception, _not_ the
| norm. Big city / developed country / 4G or 5G base stations
| 'everywhere' doesn't apply to a large % of the world's
| population (who _do_ own smartphones these days, even if low-
| spec ones).
|
| Note that some low-tier mobile plans also cap connection
| speeds. Read: slow connection even _if_ there's 4G/5G
| coverage. There's a reason internet cafes are still a thing
| around the world.
| kjkjadksj wrote:
| I live in a developed country with 4G/5G everywhere, and it's
| still no better than the 3G era I remember. Modern apps and
| sites have gobbled up the spare bandwidth, so the general UX
| feels the same to the user in terms of latency. On top of
| that, there are frequent connection dropouts, even with the
| device claiming a decent connection to the tower. On mobile
| internet, 4G often can't deliver the speed to load a modern
| junked-up news or recipe site in any amount of time.
| throw46365 wrote:
| As a web developer I actually resisted much faster internet for
| ages.
|
| Until 2022 I had a rock-solid, never-failed 7 megabit/s-ish down,
| 640k up connection and I found it very easy to build sites that
| others describe as blazing fast.
|
| This was slow really by the standards of much of the UK
| population even by 2015.
|
| So all I had to do was make it fast for me.
|
| A change of provider for practical reasons gave me an ADSL2+
| connection that is ten times faster; still arguably slower
| than a lot of residential broadband in the UK, but no longer
| helpfully slow.
|
| So now I test speed on mobile; even in the south east of England
| it is not that difficult to find poor mobile broadband. And when
| it's poor, it's poor in arguably more varied ways.
| oefrha wrote:
| As a web developer you can just throttle your connection in
| developer tools though, no self-limiting required. But nobody
| does that in big corporations building most of the sites needed
| by people with slow connections.
| throw46365 wrote:
| Yeah, though it doesn't quite capture all of the experience
| of working with slower broadband.
|
| For example if you have a website that is meant to be used
| alongside a video call or while watching video, it's
| difficult to really simulate all of that "feel".
|
| Using a link that is slow in practice is an invaluable
| experience.
| xandrius wrote:
| You can install programs at the software level to emulate it
| system-wide. I remember using one for OS X and it worked
| pretty well.
| jeroenhd wrote:
| In my experience, browsers limit speeds in a way that's kind
| of nice and stable. You tell them to stick to 100kbps and
| they'll have 100kbps. Packet loss, jitter, it's all a single
| number, and rather stable. It's like a 250kbps fiber optic
| connection that just happens to be very long.
|
| In my experience, real-life slow internet isn't like that.
| Packet loss numbers jump around, jitter switches second by
| second, speeds vary wildly, and packets arrive out of order
| more than in order. Plus, with satellites, the local router
| sends fake TCP acknowledgements to hide the slow data
| transfer, so the browser thinks it's connected while the
| traffic is still half a second away.
|
| There are software tools to limit connectivity in a more
| realistic way, often using VMs, but they're not used as often
| as the nice browser speed limiter.
| oefrha wrote:
| Good points, but it would still be a major step forward if
| websites start handling browser-simulated 3G well. Right
| now the typical webshit used by regular people more often
| than not ranges from barely usable to completely unusable
| on browser-simulated 3G, let alone browser-simulated 2G or
| real world bad connections. As a first step, make your site
| work well on, say, 200ms and 1Mbps.
| parentheses wrote:
| I recall using a device called the "Mini Maxwell" by this
| company:
|
| https://www.iwl.com/
|
| It enabled you to simulate network slow-downs, packet loss,
| packet corruption, packet reordering and more. It was so
| critical in testing our highly network-sensitive software.
| RedShift1 wrote:
| For browser apps, a network connection simulator is included in
| the Chrome developer tools' Network tab.
| christophilus wrote:
| It doesn't do a great job, though. A good simulator will have
| random drops, jitter, bursts, etc.
| KronisLV wrote:
| I don't think any of our apps are built with slow connections in
| mind at all.
|
| Most of our web libraries and frameworks are indeed quite bloated
| (with features of convenience), downloading 20 MB of JS and 50 MB
| of content in total to render a page is insane when you think
| about it. We'd need to be able to turn off most images or
| visual elements, to focus purely on the elements and their
| functionality, except where displaying an image is critical
| to the function (and even then give the choice of showing a
| low-quality version with a smaller file size). Use web-safe
| fonts that are already present in the browser/OS, and most
| likely no libraries like React or Vue either (maybe Preact or
| Svelte).
|
| We'd need to allow for really long request (say, fetch) timeout
| values, maybe even to choose how to set them based on the
| connection quality (if a user has a really fast connection but a
| request suddenly hangs and is taking upwards of a minute,
| something has probably gone wrong and it'd make sense to fail
| that request, vs a user in a remote area for whom all requests
| are similarly slow), assuming that the server doesn't mind
| connections that linger around for a long time at slow speeds.
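|
| As a rough sketch of that idea (the thresholds and the moving
| average are made up, not from any real product):
|
|     // Pick a timeout from observed connection quality
|     // instead of a fixed value.
|     let avgMs = 500; // moving average of recent request times
|
|     async function fetchAdaptive(url: string): Promise<Response> {
|       // Slow links get patience (up to 5 min); fast links fail fast.
|       const budget = Math.min(Math.max(avgMs * 10, 10_000), 300_000);
|       const ctrl = new AbortController();
|       const timer = setTimeout(() => ctrl.abort(), budget);
|       const start = performance.now();
|       try {
|         const res = await fetch(url, { signal: ctrl.signal });
|         avgMs = 0.8 * avgMs + 0.2 * (performance.now() - start);
|         return res;
|       } finally {
|         clearTimeout(timer);
|       }
|     }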
|
| We'd also need to allow for configuring an arbitrary HTTP
| cache/proxy for any site visited and file requested (say, store
| up to 1 TB on some local server based on file hashes and return
| the same for any user that requests that), but obviously things
| don't usually work that way over privacy/security concerns
| (nowadays different sites even download duplicate copies of the
| same files due to changes in the browsers:
| https://www.peakhour.io/blog/cache-partitioning-firefox-chro...
| ). Maybe even for any web request that the OS might want to do,
| like system updates, basically a full on MitM for the whole
| system.
|
| Speaking of which, no more Electron or large software
| packages. Only native software with Win32/WPF or maybe
| something like GTK/Qt, but nowadays it seems like even phone
| apps, not just desktop software, often don't use the system
| GUI frameworks, and instead ship a bunch of visual fluff,
| which might look nice and work well, but also takes up a
| bunch of space.
|
| I don't think there are incentives out there to guide us towards
| a world like that, which doesn't quite make sense to me.
| Lightweight websites should lead to better customer/user
| retention, but in practice that doesn't seem like something that
| anyone is optimizing for - ads everywhere, numerous tracking
| scripts, even autoplay videos, for everything from news sites to
| e-commerce shops.
|
| People who do optimize for that sort of stuff, seem to be a part
| of a smaller niche enthusiast community (which is still nice to
| see), like:
|
| https://1mb.club/
|
| https://512kb.club/
|
| Admittedly, even I'm guilty of bloating my homepage size from
| ~150 KB to ~ 600 KB due to wanting to use a custom set of fonts
| (that I host myself), even WOFF2 didn't save me there.
| padolsey wrote:
| A lot of this resonates. I'm not in Antarctica, I'm in
| Beijing, but I still struggle with the internet. Being behind
| the great firewall means using creative approaches. VPNs only
| sometimes work, and each leaves a signature that the
| firewall's heuristics and ML can eventually catch onto. Even
| state-mandated ones are 'gently' limited at times of
| political sensitivity. It all ends up meaning that, even if I
| get a connection, it's not stable, and it's painful to sink
| precious packets into pointless web-app-react-crap
| roundtrips.
|
| I feel like some devs need to time-travel back to 2005 or so
| and develop for that era in order to learn how to build
| things nimbly. Absent time travel, if people could just learn
| to open dev tools and use the throttling option: turn it to
| 3G, and see if their webapp is resilient. Please!
| kylehotchkiss wrote:
| I hear you on frontend-only React. But hopefully the newer
| React Server Components are helping? They just send HTML over
| the wire (right?)
| padolsey wrote:
| Yes, server-rendering definitely helps, though I have
| suspicions about its compiled outputs still being very heavy.
| There are also a lot of CSS frameworks that have an inline-
| first paradigm, meaning there's no saving for the browser in
| downloading a single stylesheet. But I'm not sure about that.
| chrisldgk wrote:
| Yes, though server-side rendering is anything but a new thing
| in the React world. NextJS, Remix, Astro and many other
| frameworks and approaches exist (and have done so for at
| least five years) to make sure pages are small and efficient
| to load.
| crote wrote:
| The problem isn't in what is being sent over the wire - it's
| in the request lifecycle.
|
| When it comes to static HTML, the browser will just slowly
| grind along, showing the user what it is doing. It'll
| incrementally render the response as it comes in. Can't
| download CSS or images? No big deal, you can still read text.
| Timeouts? Not a thing.
|
| Even if your JavaScript framework is rendering HTML chunks on
| the server, it's still essentially hijacking the entire
| request. You'll have some button in your app which fires off
| a request when clicked. But it's now up to the individual
| developer to properly implement things like progress
| bars/spinners, timeouts, retries, and all the rest the
| browser normally handles for you.
|
| They _never_ get this right. Often you're stuck with an app
| which gives absolutely zero feedback on user action, only
| updating the UI when the response has been received. Request
| failed? Sorry, gotta F5 that app, because you're now stuck in
| an invalid state!
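|
| For a sense of how much plumbing that is, here's a minimal
| sketch (names and numbers are hypothetical) of what each
| developer ends up rebuilding by hand for a single in-app
| request: feedback, a timeout, and retries with backoff:
|
|     async function appRequest(
|       url: string,
|       onStatus: (msg: string) => void, // drives a spinner/label
|       tries = 4,
|     ): Promise<Response> {
|       for (let attempt = 1; attempt <= tries; attempt++) {
|         onStatus(attempt === 1
|           ? "Loading..."
|           : `Retrying (${attempt}/${tries})...`);
|         const ctrl = new AbortController();
|         const timer = setTimeout(() => ctrl.abort(), 30_000);
|         try {
|           const res = await fetch(url, { signal: ctrl.signal });
|           if (res.ok) return res;
|         } catch {
|           // network error or timeout: fall through and retry
|         } finally {
|           clearTimeout(timer);
|         }
|         if (attempt < tries) {
|           // exponential backoff between attempts
|           await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
|         }
|       }
|       onStatus("Still can't reach the server.");
|       throw new Error(`${url} failed after ${tries} attempts`);
|     }
|
| The browser does all of this for free on a plain navigation.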
| MatthiasPortzel wrote:
| Yep. I'm a JS dev who gets offended when people complain
| about JS-sites being slower because there's zero technical
| reason why interactions should be slower. I honestly
| suspect a large part of it is that people don't expect
| clicking a button to take 300ms and so they feel like the
| website must be poorly programmed. Whereas if they click a
| link and it takes 300ms to load a new version of the page
| they have no ill-will towards the developer because they're
| used to 300ms page loads. Both interactions take 300ms but
| one uses the browser's native loading UI and the other uses
| the webpage's custom loading UI, making the webpage feel
| slow.
|
| This isn't to exonerate SPAs, but I don't think it helps to
| talk about it as a "JavaScript" problem because it's really
| a user experience problem.
| geek_at wrote:
| Sounds like what would benefit you is an HTMX approach to the
| web.
| tleb_ wrote:
| What about plain HTML & CSS for all the websites where this
| approach is sufficient? Then apply HTMX or any other approach
| for the few websites that are and need to be dynamic.
| sethammons wrote:
| That is exactly what htmx is and does. Everything is
| rendered server-side, and the sections of the page that need
| to be dynamic and respond to clicks to fetch more data get
| some added attributes; see the example below.
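|
| For instance, a hypothetical "load more" button is plain
| HTML plus htmx attributes (the URL is made up): `<button
| hx-get="/items?page=2" hx-target="#list" hx-swap="beforeend">
| Load more</button>` fetches a server-rendered fragment and
| appends it to the list, with no client-side framework code.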
| tleb_ wrote:
| I see two differences: (1) the software stack on the
| server side and (2) I guess there is JS to be sent to the
| client side for HTMX support(?). Both those things make a
| difference.
| victorbjorklund wrote:
| The size of HTMX compressed is 10kb and very rarely
| changes which means it can stay in your cache for a very
| long time.
| galangalalgol wrote:
| I'm in embedded, so I don't know much about web stuff, but
| sometimes I create dashboards to monitor services just for
| our team, so thanks for introducing me to htmx. I do think
| HTML+CSS should be used for anything that is a document or
| static for longer than a typical view lasts. Arxiv is leaning
| towards HTML+CSS vs LaTeX in acknowledgement that paper is no
| longer how "papers" are read. And on the other end, eBay
| works really well with no JS right up until you get to an
| item's page, where it breaks. If eBay can work without JS,
| almost anything that isn't monitoring and visualizing
| constantly changing data (the last few minutes of a bid, or
| telemetry from an embedded sensor) can work without JS. I
| don't understand how amazon.com has gotten so slow and
| clunky, for instance.
|
| I have been using wasm and WebGPU for visualization, partly
| to offload any burden from the embedded device being
| monitored, but that could always be a third machine. Htmx
| says it supports WebSockets; is there a good way to have it
| eat a stream and plot data as telemetry, or is that time for
| a new tool?
| mohn wrote:
| It sounds like GP would benefit from satellite internet
| bypassing the firewall, but I don't know how hard the Chinese
| government works to crack down on that loophole.
| devjab wrote:
| We design for slow internet, react is one of the better options
| for it with ssr, code splitting and http2 push, mixed in with
| more off-line friendly clients like Tauri. You can also deploy
| very near people if you work "on the edge".
|
| I'm not necessarily disagreeing with your overall point, but
| modern JS is actually rather good at dealing with slow internet
| for server-client "applications". It's not necessarily easy to
| do, and there are almost no online resources that you can base
| your projects on if you're a Google/GPT programmer. Part of
| this is because of the ocean of terrible JS resources online,
| but a big part of it is also that the organisations which work
| like this aren't sharing. We have 0 public resources for the
| way we work as an example, because why would we hand that info
| to our competition?
| jiggawatts wrote:
| By far the lightest weight JS framework isn't React, it's _no
| javascript at all_.
|
| I regularly talk to developers who aren't even aware that
| this is an option.
| mike_hearn wrote:
| If you're behind an overloaded geosynchronous satellite
| then no JS at all just moves the pain around. At least once
| it's loaded a JS-heavy app will respond to most mouse
| clicks and scrolls quickly. If there's no JS then every
| single click will go back to the server and reload the
| entire page, even if all that's needed is to open a small
| popup or reload a single word of text.
| bayindirh wrote:
| However, getting 6.4KB of data (just tested on my blog)
| or 60KB of data (a git.sr.ht repository with a README.md
| and a PNG) is way better than getting 20MB of frameworks
| in the first place.
| mwcampbell wrote:
| False dichotomy, with what is likely extreme hyperbole on
| the JS side. Are there actual sites that ship 20 MB, or
| even 5 MB or more, of frameworks? One can fit a lot of
| useful functionality in 100 KB or less of JS, especially
| minified and gzipped.
| SpaceNugget wrote:
| Well, in TFA, if you re-read the section labeled
| "Detailed, Real-world Example" yes, that is exactly what
| the person was encountering. So no hyperbole at all
| actually.
| bayindirh wrote:
| I just tried some websites:
|   - https://web.whatsapp.com: 11.12MB compressed / 26.17MB real.
|   - https://www.arstechnica.com: 8.82MB compressed / 16.92MB real.
|   - https://www.reddit.com: 2.33MB compressed / 5.22MB real.
|   - https://www.trello.com (logged in): 2.50MB compressed / 10.40MB real.
|   - https://www.notion.so (logged out): 5.20MB compressed / 11.65MB real.
|   - https://www.notion.so (logged in): 19.21MB compressed / 34.97MB real.
| tecleandor wrote:
| Well, I'm working right now so let me check our daily
| "productivity" sites (with an adblocker installed):
|   - Google Mail: Inbox is ~18MB (~6MB compressed). Of that,
|     2.5MB is CSS (!) and the rest is mostly JS
|   - Google Calendar: 30% lower, but more or less the same
|     proportions
|   - Confluence: Home is ~32MB (~5MB comp.). There's easily
|     20MB of Javascript and at least 5MB of JSON.
|   - Jira: Home is ~35MB (~7MB compressed). I see more than
|     25MB of Javascript
|   - Google Cloud Console: 30MB (~7MB comp.). I see at least
|     16MB of JS
|   - AWS Console: 18MB (~4MB comp.). I think it's at least
|     12MB of JS
|   - New Relic: 14MB (~3MB comp.). 11MB of JS. This is funny
|     because even being way more data heavy than the rest, its
|     weight is way lower.
|   - Wiz: 23MB (~6MB comp.). 10MB of JS and 10MB of CSS
|   - Slack: 60MB (~13MB compressed). Of that, 48MB of JS. No
|     joke.
| tonyhart7 wrote:
| Holy crap, that's too much. And this is the best-case
| scenario, with an adblocker installed.
| jerf wrote:
| I sometimes wish I could spare the time just to tear into
| something like that Slack number and figure out what it
| is all doing in there.
|
| Javascript should even generally be fairly efficient in
| terms of bytes/capability. Run a basic minimizer on it
| and compress it and you should be looking at something
| approaching optimal for what is being done. For instance,
| a variable reference can amortize down to less than one
| byte, unlike compiled code where it ends up 8 bytes (64
| bits) at the drop of a hat. Imagine how much assembler
| "a.b=c.d(e)" can compile into to, in what is likely
| represented in less compressed space than a single 64-bit
| integer in a compiled language.
|
| Yet it still seems like we need 3 megabytes of minified,
| compressed Javascript on the modern web just to clear our
| throats. It's kind of bizarre, really.
| LtWorf wrote:
| js developers had this idea of "1 function = 1 library"
| for a really long time, and "NEVER REIMPLEMENT ANYTHING".
| So they will go and import a library instead of writing a
| 5 line function, because that's somehow more maintainable
| in their mind.
|
| Then of course every library is allowed to pin its own
| dependencies. So you can have 15 different versions of
| the same thing, so they can change API at will.
|
| I poked around some electron applications.
|
| I've found .h files from openssl, executables for other
| operating systems, megabytes of large image files that
| were for some example webpage, in the documentation of
| one project. They literally have no idea what's in there
| at all.
| mike_hearn wrote:
| That's a good question. I just launched Slack and took a
| look. Basically: it's doing everything. There's no
| specialization whatsoever. It's like a desktop app you
| redownload on every "boot".
|
| You talk about minification. The JS isn't minified much.
| Variable names are single letter, but property names and
| more aren't renamed, formatting isn't removed. I guess
| the minifier can't touch property names because it
| doesn't know what might get turned into JSON or not.
|
| There's plenty of logging and span tracing strings as
| well. Lots of code like this:
|     _n.meta = {
|       name: "createThunk",
|       key: "createThunkaddEphemeralMessageSideEffectHandler",
|       description: "addEphemeralMessageSideEffect side effect handler"
|     };
|
| The JS is completely generic. In many places there are if
| statements that branch on all languages Slack was
| translated into. I see checks in there for whether
| localStorage exists, even though the browser told the
| server what version it is when the page was loaded. There
| are many checks and branches for experiments, whether the
| company is in trial mode, whether the code is executing
| in Electron, whether this is GovSlack. These combinations
| could have been compiled server side to a more minimal
| set of modules but perhaps it's too hard to do that with
| their JS setup.
|
| Everything appears compiled using a coroutines framework,
| which adds some bloat. Not sure why they aren't using
| native async/await but maybe it's related to not being
| specialized based on execution environment.
|
| Shooting from the hip, the learnings I'd take from this
| are:
|
| 1. There's a ton of low hanging fruit. A language
| toolchain that was more static and had more insight into
| what was being done where could minify much more
| aggressively.
|
| 2. Frameworks that could compile and optimize with way
| more server-side constants would strip away a lot of
| stuff.
|
| 3. Encoding logs/span labels as message
| numbers+interpolated strings would help a lot. Of course
| the code has to be debuggable but hopefully, not on every
| single user's computer.
|
| 4. Demand loading of features could surely be more
| aggressive.
|
| But Slack is very popular and successful without all
| that, so they're probably right not to over-focus on this
| stuff. Especially for corporate users on corporate networks,
| does anyone really care? Their competition is Teams, after
| all.
| taeric wrote:
| This is mind blowing to me. I expect that the majority of
| any application will be the assets and content. And
| megabytes of CSS is something I can't imagine. Not the
| least for what it implies about the DOM structure of the
| site. Just, what!? Wow.
| b3orn wrote:
| I'm getting almost 2MB (5MB uncompressed) just for a
| google search.
| ezequiel-garzon wrote:
| I agree with adding _very little_ JavaScript, say the 1kB
| https://instant.page/ script, to make it snappier.
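| The underlying idea is just hover-prefetching; a hand-rolled
| sketch of it (not instant.page's actual code) looks roughly
| like:
|
|     // When the user hovers a same-origin link, prefetch the
|     // target page so the eventual click hits a warm cache.
|     document.addEventListener("mouseover", (e) => {
|       if (!(e.target instanceof Element)) return;
|       const a = e.target.closest("a[href]");
|       if (!a || a.origin !== location.origin) return;
|       const link = document.createElement("link");
|       link.rel = "prefetch";            // fetched at idle priority
|       link.href = a.href;
|       document.head.appendChild(link);  // no dedupe here, for brevity
|     });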
| andrepd wrote:
| Yes. It's inexcusable that text and images and video
| pull in megabytes of dependencies from dozens of
| domains. It's wasteful on every front: network, battery,
| and it's also _SLOW_.
| LtWorf wrote:
| The crap is that even themes for static site generators
| like mkdocs link resources from cloudflare rather than
| including them in the theme.
|
| For typedload I've had to use wget+sed to get rid of that
| crap after recompiling the website.
|
| https://codeberg.org/ltworf/typedload/src/branch/master/Make...
| RGamma wrote:
| Also wonder how many savings are still possible with a
| more efficient HTML/CSS/JS binary representation. Text is
| low tech and all but it still hurts to waste so many
| octets for such a relatively low amount of possible
| symbols.
|
| Applies to all formal languages actually. 2^(8x20x10^6)
| ~= 2x10^48164799 is such a ridiculously large space...
| pgraf wrote:
| Shouldn't HTTP compression reap most of the benefits of
| this approach for bigger pages?
| chris_pie wrote:
| Check this proposal out:
| https://github.com/tc39/proposal-binary-ast
| jiggawatts wrote:
| The generalisation of this concept is what I like to call
| the "kilobyte" rule.
|
| A typical web page of text on a screen is about a
| kilobyte. Sure, you can pack more in with fine print, and
| obviously additional data is required to represent the
| styling, but the _actual text_ is about 1 kb.
|
| If you've sent 20 MB, then that is 20,000x more data than
| what was displayed on the screen.
|
| Worse still, an _uncompressed_ 4K still image is only
| 23.7 megabytes. At some point you might be better off
| doing "server side rendering" with a GPU instead of
| sending more JavaScript!
| Lex-2008 wrote:
| > "server side rendering" with a GPU instead of sending
| more JavaScript
|
| Some 7~10 years ago I remember I saw somewhere (maybe
| here on HN) a website which did exactly this: you gave it
| a URL - it downloaded the webpage with all its resources,
| rendered and screenshotted it (probably in headless
| Chrome or something), and compared the size of the PNG
| screenshot versus the size of the webpage with all its
| resources.
|
| For many popular websites, png screenshot of a page
| indeed was several times less than webpage itself!
| jcgrillo wrote:
| If your server renders the image as text we'll be right
| back down towards a kilobyte again. See
| https://www.brow.sh/
| skydhash wrote:
| I read epubs, and they're mostly html and css files
| zipped. The whole book usually comes under a MB if
| there's not a lot of big pictures. Then you come across a
| website and for just an article you have to download tens
| of MBs. Disable JavaScript and the website is broken.
| RGamma wrote:
| Soo.. there should be a standardized web API for page
| content. And suddenly... gopher (with embedded
| media/widgets).
| LtWorf wrote:
| Surely you're aware of gzip encoding on the wire for http
| right?
| RGamma wrote:
| Sure, would be interesting to know how it would fare
| against purpose-made compression under real world
| conditions still...
| mike_hearn wrote:
| Yeah, but your blog is not a full featured chat system
| with integrated audio and video calling, strapped on top
| of a document format.
|
| There are a few architectural/policy problems in web
| browsers that cause this kind of expansion:
|
| 1. Browsers can update large binaries asynchronously
| (=instant from the user's perspective) but this feature
| is only very recently available to web apps via obscure
| caching headers, and most people don't know it exists
| yet/frameworks don't use it (see the sketch at the end of
| this comment).
|
| 2. Large download sizes tend to come from frameworks that
| are featureful and thus widely used. Browsers could allow
| them to be cached but don't because they're over-
| aggressive at shutting down theoretical privacy problems,
| i.e. the browser is afraid that if one site learns you
| used another site that uses React, that's a privacy leak.
| A reasonable solution would be to let HTTP responses opt
| in to being put in the global cache rather than a
| partitioned cache, that way sites could share frameworks
| and they'd stay hot in the cache and not have to be
| downloaded. But browsers compete to satisfy a very noisy
| minority of people obsessed with "privacy" in the
| abstract, and don't want to do anything that could kick
| up a fuss. So every site gets a partitioned cache and
| things are slow.
|
| 3. Browsers often ignore trends in web development. React
| style vdom diffing could be offered by browsers
| themselves, where it'd be faster and shipped with browser
| updates, but it isn't so lots of websites ship it
| themselves over and over. I think the SCIter embedded
| browser actually does do this. CSS is a very inefficient
| way to represent styling logic which is why web devs
| write dialects like sass that are more compact, but
| browsers don't adopt it.
|
| I think at some pretty foundational level the way this
| stuff works architecturally is wrong. The web needs a
| much more modular approach and most JS libraries should
| be handled more like libraries are in desktop apps. The
| browser is basically an OS already anyway.
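| To illustrate point 1: I believe the header trick in question
| is along the lines of stale-while-revalidate, which serves the
| cached bundle instantly and refreshes it in the background.
| Roughly, in an Express-style handler (values and path are made
| up):
|
|     // Serve the old bundle immediately; the browser revalidates
|     // in the background, so updates feel instant to the user.
|     app.get("/app.js", (req, res) => {
|       res.set("Cache-Control",
|               "public, max-age=3600, stale-while-revalidate=604800");
|       res.sendFile(bundlePath);  // bundlePath: wherever the built JS lives
|     });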
| Sesse__ wrote:
| > CSS is a very inefficient way to represent styling
| logic which is why web devs write dialects like sass that
| are more compact, but browsers don't adopt it.
|
| I don't know exactly which features you are referring to,
| but you may have noticed that CSS has adopted native
| nesting, very similarly to Sass, but few sites actually
| use it. Functions and mixins are similar
| compactness/convenience topics being worked on by the
| CSSWG.
|
| (Disclosure: I work on style in a browser team)
| mike_hearn wrote:
| I hadn't noticed and I guess this is part of the problem.
| Sorry this post turned into a bit of a rant but I wrote
| it now.
|
| When it was decided that HTML shouldn't be versioned
| anymore it became impossible for anyone who isn't a full
| time and very conscientious web dev to keep up. Versions
| are a signal, they say "pay attention please, here is a
| nice blog post telling you the most important things you
| need to know". If once a year there was a new version of
| HTML I could take the time to spend thirty minutes
| reading what's new and feel like I'm at least aware of
| what I should learn next. But I'm not a full time web
| dev, the web platform changes constantly, sometimes
| changes appear and then get rolled back, and everyone has
| long since plastered over the core with transpilers and
| other layers anyway. Additionally there doesn't seem to
| be any concept of deprecating stuff, so it all just piles
| up like a mound of high school homework that never
| shrinks.
|
| It's one of the reasons I've come to really dislike CSS
| and HTML in general (no offense to your work, it's not
| the browser implementations that are painful). Every time
| I try to work out how to get a particular effect it turns
| out that there's now five different alternatives, and
| because HTML isn't versioned and web pages / search
| results aren't strongly dated, it can be tough to even
| figure out what the modern way to do it is at all. Dev
| tools just make you even more confused because you start
| typing what you think you remember and now discover there
| are a dozen properties with very similar names, none of
| which seem to have any effect. Mistakes don't yield
| errors, it just silently does either nothing or the wrong
| thing. Everything turns into trial-and-error, plus fixing
| mobile always seems to break desktop or vice-versa for
| reasons that are hard to understand.
|
| Oh and then there's magic like Tailwind. Gah.
|
| I've been writing HTML since before CSS existed, but feel
| like CSS has become basically non-discoverable by this
| point. It's understandable why neither Jetpack Compose
| nor SwiftUI decided to adopt it, even whilst being
| heavily inspired by React. The CSS dialect in JavaFX I
| find much easier to understand than web CSS, partly
| because it's smaller and partly because it doesn't try to
| handle layout. The way it interacts with components is
| also more logical.
| Sesse__ wrote:
| You may be interested in the Baseline initiative, then.
| (https://web.dev/baseline/2024)
| mike_hearn wrote:
| That does look useful, thanks!
| nrabulinski wrote:
| Yeah, right. GitHub migrated from serving static sites to
| displaying everything dynamically and it's basically
| unusable nowadays. Unbelievably long load times,
| frustratingly unresponsive, and that's on my top spec m1
| MacBook Pro connected to a router with fiber connection.
|
| Let's not kid ourselves, no matter how many fancy
| features, splitting, optimizing, whatever you do, JS
| webapps may be an upgrade for developers, but they're a huge
| downgrade for users in all respects.
| FridgeSeal wrote:
| Every time I click a link in GitHub, and watch their
| _stupid_ SPA "my internal loading bar is better than
| yours" routine, I despair.
|
| It's _never_ faster than simply reloading the page. I
| don't know what they were thinking, but they shouldn't
| have.
| skydhash wrote:
| I have an instance of Forgejo and it's so snappy. Even
| though I'm the only user, but the server is only 2GB,
| 2vcores with other services present.
|
| On the other side, Gitlab doesn't work with JS disabled.
| rcxdude wrote:
| This makes perfect sense in theory and yet it's the
| opposite of my experience in practice. I don't know how,
| but SPA websites are pretty much always much more laggy
| than just plain HTML, even if there are a lot of page
| loads.
| NohatCoder wrote:
| Having written a fair amount of SPA and similar I can
| confirm that it is actually possible to just write some
| JavaScript that does fairly complicated jobs without the
| whole thing ballooning into the MB space. I should say
| that I could write a fairly feature-rich chat-app in say
| 500 kB of JS, then minified and compressed it would be
| more like 50 kB on the wire.
|
| How my "colleagues" manage to get to 20 MB is a bit of
| a mystery.
| RunSet wrote:
| > How my "colleagues" manage to get to 20 MB is a bit of
| a mystery.
|
| More often than not (and wittingly or not) it is
| effectively by using javascript to build a browser-
| inside-the-browser, Russian doll style, for the purposes
| of tracking users' behavior and undermining privacy.
|
| Modern "javascript frameworks" do this all by default
| with just a few clicks.
| nicoburns wrote:
| It often is that way, but it's not for technical reasons.
| They're just poorly written. A lot of apps are written by
| inexperienced teams under time pressure and that's what
| you're seeing. Such teams are unlikely to choose plain
| server-side rendering because it's not the trendy thing
| to do. But SPAs absolutely can be done well. For simple
| apps (HN is a good example) you won't get too much
| benefit, but for more highly interactive apps it's a much
| better experience than going via the server every time
| (setting filters on a shopping website would be a good
| example).
| chefandy wrote:
| Yep. In SPAs with good architecture, you only need to
| load the page once, which is obviously weighed down by
| the libraries, but largely is as heavy or light as you
| make it. Everything else should be super minimal API
| calls. It's especially useful in data-focused apps that
| require a lot of small interactions. Imagine implementing
| something like spreadsheet functionality using forms and
| requests and no JavaScript, as others are suggesting all
| sites should be: productivity would be terrible not only
| because you'd need to reload the page for trivial actions
| that should trade a bit of JSON back and forth, but also
| because users would throw their devices out the window
| before they got any work done. You can also queue and
| batch changes in a situation like that so the requests
| are not only comparatively tiny, you can use fewer
| requests. That said, most sites definitely should not be
| SPAs. Use the right tool for the job
| nicoburns wrote:
| > which is obviously weighed down by the libraries, but
| largely is as heavy or light as you make it
|
| One thing which surprised me at a recent job was that
| even what I consider to be a large bundle size (2MB)
| didn't have much of an effect on page load time. I was
| going to look into bundle splitting (because that
| included things like a charting library that was only
| used in a small subsection of the app). But in the end I
| didn't bother because I got page loads fast (~600ms)
| without it.
|
| What did make a huge difference was cutting down the
| number of HTTP requests that the app made on load (and
| making sure that they weren't serialised). Our app was
| originally doing auth by communicating with Firebase Auth
| directly from the client, and that was terrible for
| performance because that request was quite slow (most of a
| second!) and blocked everything else. I created an all-
| in-one auth endpoint that would check the user's auth and
| send back initial user and app configuration data in one
| ~50ms request and suddenly the app was fast.
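| Schematically, the serialisation fix was just this
| (fetchUser/fetchConfig are stand-ins for our real calls):
|
|     // Before: each await blocks the next, so latencies add up.
|     //   const user   = await fetchUser();
|     //   const config = await fetchConfig();
|     // After: both requests go out at once, one latency total.
|     const [user, config] = await Promise.all([fetchUser(), fetchConfig()]);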
| bobbob1921 wrote:
| My experience agrees with this comment - I'm not sure why
| web browsers seem to frequently get hung up on only some
| HTTP requests at times, unrelated to the actual network
| conditions. I.e., in the browser the HTTP request is timing
| out or sitting in a blocked state, and hasn't even reached
| the network layer when this occurs. (Not sure if I should
| be pointing the finger here at the browser or the
| underlying OS.) When testing slow / stalled loading issues,
| the browser itself is frequently one of the culprits.
| However, the issue I am referring to further reinforces the
| article and the sentiment in this HN thread: cut down on
| the number of requests and the bloat, and this issue too
| can be avoided.
| chefandy wrote:
| If the request itself hasn't reached the network layer
| but is having a networky-feeling hang, I'd look into DNS.
| It's network dependent but handled by the system so it
| wouldn't show up in your web app requests. I'm sure
| there's a way to profile this directly but unless I had
| to do it all the time I'd probably just fire up
| wireshark.
| chefandy wrote:
| In many cases, like satellite Internet access or spotty
| mobile service, for sure. But if you have low bandwidth
| but fast response times, that 2MB is murder and the big
| pile o' requests is NBD. If you have slow response times
| but good throughput, the 2MB is NBD but the requests are
| murder.
|
| An extreme and outdated example, but back when cable
| modems first became available, online FPS players were
| astonished to see how much better the ping times were for
| many dial up players. If you were downloading a floppy
| disk of information, the cable modem user would obviously
| blow them away, but their round trip time sucked!
|
| Like if you're on a totally reliable but low throughput
| LTE connection, the requests are NBD but the download is
| terrible. If you're on spotty 5g service, it's probably
| the opposite. If you're on, like, a heavily deprioritized
| MVNO with a slower device, they both super suck.
|
| It's not like optimization is free though, which is why
| it's important to have a solid UX research phase to get
| data on who is going to use it, and what their use case
| is.
| FridgeSeal wrote:
| Getting at least "n"kb of html with content in it that
| you can look at in the interim is better than getting the
| same amount of framework code.
|
| SPA's also have a terrible habit of not behaving well
| after being left alone for a while. Nothing like coming
| back to a blank page and having it try to redownload the
| world to show you 3kb of text, because we stopped running
| the VM a week ago.
| withinboredom wrote:
| Here's something JS apps get wrong: in JS, the first link
| you click navigates you, full stop. In the browser, clicking
| a second link cancels the first one and navigates to the
| second one.
|
| GitHub annoys the fuck out of me with this.
| nicoburns wrote:
| In my experience page weight isn't usually the biggest
| issue. On unreliable connections you'll often get decent
| bandwidth when you can get through. It's applications that
| expect to be able to make multiple HTTP requests sequentially
| and don't deal well with some succeeding and some failing (or
| just network failures in general) that are the most
| problematic.
|
| If I can retry a failed a network request that's fine. If I
| have to restart the entire flow when I get a failure that's
| unusable.
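| A retry wrapper is all of a few lines (a sketch; the backoff
| numbers are arbitrary):
|
|     // Retry an idempotent request a few times with backoff,
|     // instead of failing the whole flow on one dropped packet.
|     async function fetchWithRetry(url, tries = 3) {
|       for (let i = 0; i < tries; i++) {
|         try {
|           const res = await fetch(url);
|           if (res.ok) return res;
|         } catch (e) { /* network error: fall through and retry */ }
|         if (i < tries - 1)
|           await new Promise(r => setTimeout(r, 1000 * 2 ** i));
|       }
|       throw new Error("gave up after " + tries + " attempts");
|     }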
| miki123211 wrote:
| No JS can actually increase roundtrips in some cases, and
| that's a problem if you're latency-bound and not
| necessarily speed-bound.
|
| Imagine a Reddit or HN style UI with upvote and downvote
| buttons on each comment. If you have no JS, you have to
| reload the page every time one of the buttons is clicked.
| This takes a lot of time and a lot of packets.
|
| If you have an offline-first SPA, you can queue the upvotes
| up and send them to the server when possible, with no
| impact on the UI. If you do this well, you can even make
| them survive prolonged internet dropouts (think being on a
| subway). Just save all incomplete voting actions to local
| storage, and then try re-submitting them when you get
| internet access.
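| A sketch of that queue (storage key and endpoint invented for
| illustration):
|
|     // Record the vote locally and update the UI at once; flush
|     // the queue whenever connectivity comes back.
|     function queueVote(commentId, dir) {
|       const q = JSON.parse(localStorage.getItem("voteQueue") || "[]");
|       q.push({ commentId, dir });
|       localStorage.setItem("voteQueue", JSON.stringify(q));
|       flushVotes();  // harmless if we're offline right now
|     }
|     async function flushVotes() {
|       const q = JSON.parse(localStorage.getItem("voteQueue") || "[]");
|       while (q.length) {
|         try {
|           await fetch("/vote", { method: "POST",
|                                  body: JSON.stringify(q[0]) });
|           q.shift();  // drop a vote only once the server has it
|           localStorage.setItem("voteQueue", JSON.stringify(q));
|         } catch (e) {
|           break;      // still offline; the "online" event retries
|         }
|       }
|     }
|     window.addEventListener("online", flushVotes);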
| chiefalchemist wrote:
| It's not always the application itself per se. It's the
| various / numerous marketing, analytics or (sometimes) ad-
| serving scripts. These third party vendors aren't often
| performance minded. They could be. They should be.
| FridgeSeal wrote:
| And the insistence on pushing everything into JS instead of
| just serving the content. So you've got to wait for the
| skeleton to dl, then the JS, which'll take its sweet time,
| just to then (usually blindly) make half a dozen
| _more_ requests back out, to grab JSON, which it'll then
| convert into html and eventually show you. Eventually.
| chiefalchemist wrote:
| Yup. There's definitely too much unnecessary complexity
| in tech and too much over-design in presentation.
| Applications, I understand. Interactions and experience
| can get complicated and nuanced. But serving plain ol'
| content? To a small screen? Why has that been made into
| rocket science?
| LtWorf wrote:
| Well grabbing json isn't that bad.
|
| I made a CLI for ultimateguitar
| (https://packages.debian.org/sid/ultimateultimateguitar)
| that works by grabbing the json :D
| chiefalchemist wrote:
| It's not so much good vs not-so-bad vs bad. It's more
| necessary vs unnecessary. There's also "just because you
| can, doesn't mean you should."
| matthews2 wrote:
| HTTP/2 push is super dead:
| https://evertpot.com/http-2-push-is-dead/
| iforgotpassword wrote:
| Tried multiple VPNs in China and finally rolled my own
| obfuscation layer for Wireshark. A quick search revealed there
| are multiple similar projects on GitHub, but I guess the
| problem is once they get some visibility, they don't work that
| well anymore. I'm still getting between 1 and 10mbit/s (mostly
| depending on time of day) and pretty much no connectivity
| issues.
| LorenzoGood wrote:
| Wireguard?
| iforgotpassword wrote:
| Haha yes, thanks. I used Wireshark extensively the past
| days to debug a weird http/2 issue so I guess that messed
| me up a bit ;)
| LorenzoGood wrote:
| I do that too when looking stuff up.
| vmfunction wrote:
| > A lot of this resonates. I'm not in Antarctica, I'm in Beijing,
| but still struggle with the internet.
|
| Not just that: with outer space travel, we all need to build
| for very slow internet and long latency. Devs do need to
| time-travel back to 2005.
| andrepd wrote:
| I'm sure this is not what you meant but made me lol anyways:
| sv techbros would sooner plan for "outer space internet" than
| give a shit about the billions of people with bad internet
| and/or a phone older than 5 years.
| lukan wrote:
| "I feel like some devs need to time-travel back to 2005 or
| something and develop for that era in order to learn how to
| build things nimbly."
|
| No need to invent time travel, just let them have a working
| retreat somewhere with only bad mobile connection for a few
| days.
| mkroman wrote:
| Just put them on a train during work hours! We have really
| good coverage here but there's congestion and frequent random
| dropouts, and a lot of apps just don't plan for that at all.
| qingcharles wrote:
| Amen to this. And give them a mobile cell plan with 1GB of
| data per month.
|
| I've seen some web sites with 250MB payloads on the home page
| due to ads and pre-loading videos.
|
| I work with parolees who get free government cell phones and
| then burn through the 3GB/mo of data within three days. Then
| they can't apply for jobs, get bus times, rent a bike, top up
| their subway card, get directions.
| jmbwell wrote:
| "But all the cheap front-end talent is in thick client
| frameworks, telemetry indicates most revenue conversions
| are from users on 5G, our MVP works for 80% of our target
| user base, and all we need to do is make back our VC's
| investment plus enough to cash out on our IPO exit
| strategy, plus other reasons not to care" -- self-
| identified serial entrepreneur, probably
| lukan wrote:
| Having an adblocker (Firefox mobile works with uBlock
| Origin) and completely deactivating the loading of images and
| videos can get you quite far on a limited connection.
| qingcharles wrote:
| You're 100% right. uBlock Origin can reduce page weight
| by an astronomical amount.
| fsckboy wrote:
| uMatrix (unsupported but still works) reduces page weight
| and compute even more
| beefnugs wrote:
| Yeah and then give them thousands upon thousands of paying
| customers with these constraints worth caring about
| rpastuszak wrote:
| I lived in Shoreditch for 7 years and most of my flats had
| almost 3G internet speeds. The last one had windows that
| incidentally acted like a faraday cage.
|
| I always test my projects with throttled bandwidth, largely
| because (just like with a11y) following good practices results
| in better UX for all users, not just those with poor
| connectivity.
|
| Edit: Another often missed opportunity is building SPAs as
| offline-first.
| CM30 wrote:
| Oh, London is notorious for having... questionable internet
| speeds in certain areas. It's good if you live in a new build
| flat/work in a recently constructed office building or you
| own your own home in a place OpenReach have already gotten to,
| but if you live in an apartment building/work in an office
| building more than 5 or so years old?
|
| Yeah, there's a decent chance you'll be stuck with crappy
| internet as a result. I still remember quite a few of my
| employers getting frustrated that fibre internet wasn't
| available for the building they were renting office space in,
| despite them running a tech company that really needed a good
| internet connection.
| zerkten wrote:
| >> Another often missed opportunity is building SPAs as
| offline-first.
|
| You are going to get so many blank stares at many shops
| building web apps when suggesting things like this. This kind
| of consideration doesn't even enter into the minds of many
| developers in 2024. Few of the available resources in 2024
| address it that well for developers coming up in the
| industry.
|
| Back in the early-2000s, I recall these kinds of things being
| an active discussion point even with work placement students.
| Now that focus seems to have shifted to developer experience
| with less consideration on the user. Should developer
| experience ever weigh higher than user experience?
| salawat wrote:
| >Should developer experience ever weigh higher than user
| experience?
|
| Developer experience is user experience. However, in a
| normative sense, I operate such that Developer suffering is
| preferable to user suffering to get any arbitrary task
| done.
| analyte123 wrote:
| SPAs and "engineering for slow internet" usually don't belong
| together. The giant bundles usually guarantee slow first
| paint, and the incremental rendering/loading usually
| guarantees a lot of network chatter that randomly breaks the
| page when one of the requests times out. Most web
| applications are fundamentally _online_. For these, consider
| what inspires more confidence when you're in a train on a
| hotspot: an old school HTML forms page (like HN), or a page
| with a lot of React grey placeholders and loading spinners
| scattered throughout? I guess my point is that while you
| _can_ take a lot of careful time and work to make an SPA work
| offline-first, as a pattern it tends to encourage the bloat
| and flakiness that makes things bad on slow internet.
| konstantinua00 wrote:
| > a11y
|
| The biggest Soviet I've heard about this abbreviation is to
| not use it, since users do Nazi what it stands for.
|
| But I'm thankful that you wish it on your greatest enemies -
| us
| Nevolihs wrote:
| Tbh, developers just need to test their site with existing
| tools or just try leaving the office. My cellular data
| reception in Germany in a major city sucks in a lot of spots. I
| experience sites not loading or breaking every single day.
| LtWorf wrote:
| developers shouldn't be given those ultra performant
| machines. They can have a performant build server :D
| joseda-hg wrote:
| I live in a well connected city, but my work only pays for
| other continent based Virtual Machines so most of my projects
| end up "fast" but latency bound, it's been an interesting
| exercise of minimizing pointless roundtrips in a technology
| that expects you to use them for everything.
| jrochkind1 wrote:
| Chrome dev tools offer a "slow 3G" and a "fast 3G"
| throttling preset.
|
| With fresh cache on "slow 3G", my site _works_, but has 5-8
| second page loads. Would you consider that usable/sufficient,
| or pretty awful?
| p3rls wrote:
| Eh, I'm a few miles from NYC and have the misfortune of being a
| comcast/xfinity customer and my packetloss to my webserver is
| sometimes so bad it takes a full minute to load pages.
|
| I take that time to clean a little, make a coffee, you know
| sometimes you gotta take a break and breathe. Life has gotten
| too fast and too busy and we all need a few reminders to slow
| down and enjoy the view. Thanks xfinity!
| cricketlover wrote:
| Great post. I was asked this question in an interview which I
| completely bombed, where the interviewer wanted me to think of
| flaky networks while designing an image upload system. I spoke
| about things like chunking, but didn't cover timeouts, variable
| chunk size and also just sizing up the network conditions and
| then adjusting those parameters.
|
| Not to mention having a good UX and explaining to the customer
| what's going on, helping with session resumption. I regret it.
| Couldn't make it through :(
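| For the record, the shape of the answer I fumbled looks
| something like this (a sketch; the endpoint, the UI helper and
| all the numbers are invented):
|
|     // Upload in chunks, retry each chunk independently, and adapt
|     // the chunk size to how the network is behaving.
|     async function upload(file) {
|       let chunkSize = 256 * 1024;  // start at 256 KB
|       let offset = 0;              // or ask the server what it has, to resume
|       while (offset < file.size) {
|         const chunk = file.slice(offset, offset + chunkSize);
|         const t0 = Date.now();
|         try {
|           const res = await fetch("/upload?offset=" + offset,
|                                   { method: "PUT", body: chunk });
|           if (!res.ok) throw new Error("chunk rejected");
|           offset += chunk.size;
|           // quick ack: grow the chunks; slow ack: shrink them
|           chunkSize = (Date.now() - t0 < 2000)
|             ? chunkSize * 2
|             : Math.max(chunkSize / 2, 16 * 1024);
|           showProgress(offset / file.size);  // hypothetical UI helper
|         } catch (e) {
|           chunkSize = Math.max(chunkSize / 2, 16 * 1024);  // back off, retry
|         }
|       }
|     }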
| teleforce wrote:
| This kind of scenario screams for local-first applications
| and solutions, and that's the reason the Internet was created
| in the first place [1][2]. People have been duped by
| Salesforce's misleading "no software" advert slogan, which
| goes against the very foundation and spirit of the Internet.
| For most of its life, starting back in 1969, Mbps speeds were
| the anomaly, not the norm, and its first killer application,
| email messaging (arguably still the best Internet
| application), is local-first [3]. Ironically, the culprit
| application the author was lamenting in the article is a
| messaging app.
|
| [1] Local-first software: You own your data, in spite of the
| cloud:
|
| https://www.inkandswitch.com/local-first/
|
| [2] Local-first Software:
|
| https://localfirstweb.dev/
|
| [3] Leonard Kleinrock: Mr. Internet:
|
| https://www.latimes.com/opinion/la-oe-morrison-use24-2009oct...
| walterbell wrote:
| IETF draft proposal to extend HTTP for efficient state
| synchronization, which could improve UX on slow networks:
| https://news.ycombinator.com/item?id=40480016
|
|   The Braid Protocol allows multiple synchronization
|   algorithms to interoperate over a common network protocol,
|   which any synchronizer's network messages can be translated
|   into. The current Braid specification extends HTTP with two
|   dimensions of synchronization:
|     Level 0: Today's HTTP
|     Level 1: Subscriptions with Push Updates
|     Level 2: P2P Consistency (Patches, Versions, Merges)
|   Even though today's synchronizers use different protocols,
|   their network messages convey the same types of
|   information: versions in time, locations in space, and
|   patches to regions of space across spans of time. The
|   composition of any set of patches forms a mathematical
|   structure called a braid: the forks, mergers, and
|   re-orderings of space over time.
|
| Hope springs eternal!
| klabb3 wrote:
| Grump take: More complex technology will not fix a business-
| social problem. In fact, you have to go out of your way to make
| things this shitty. It's not hard to build things with few
| round trips and less bloat, it's much easier. The bloat is
| there for completely different reasons.
|
| Sometimes the bloat is unnoticeable on juicy machines and fast
| internet close to the DC. You can simulate that easily, but it
| requires the company to care. Generally, ad-tech and friends
| cares very little about small cohorts of users. In fact, the
| only reason they care about end users at all is because they
| generate revenue for their actual customers, ie the
| advertisers.
| pnt12 wrote:
| > Generally, ad-tech and friends cares very little about
| small cohorts of users.
|
| Sure, and it will keep being that way. But if this gets
| improved at the transport layer, seems like a win.
|
| As an analogy, if buses are late because roads are bumpy and
| drivers are lousy, fixing the bumpy road may help, even if
| drivers don't change their behavior.
| marcosdumay wrote:
| > fixing the bumpy road may help
|
| It really wouldn't. Lousy drivers are a way thinner
| bottleneck than the roads.
|
| But it will improve the services where the drivers are
| good.
|
| If the protocol is actually any good (its goals by
| themselves already make me suspicious it won't be), the
| well-designed web-apps out there can become even better
| designed. But it absolutely won't improve the situation
| people are complaining about.
| klabb3 wrote:
| > But if this gets improved at the transport layer, seems
| like a win.
|
| What do you mean? TCP and HTTP are already designed for slow
| links with packet loss; it's old, reliable tech from before
| modern connectivity. You just have to not pull in thousands
| of modules in the npm dep tree or add 50 microservices'
| worth of bloatware, ads and client-side "telemetry". You set
| your
| cache-control headers and etags, and for large downloads
| you'll want range requests. Perhaps some lightweight client
| side retry logic in case of PWAs. In extreme cases like
| Antarctica maybe you'd tune some tcp kernel params on the
| client to reduce RTTs under packet loss. There is nothing
| major missing from the standard decades old toolbox.
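| Concretely, the toolbox is little more than this (schematic;
| names like contentHash and bytesAlreadyGot are placeholders):
|
|     // Server side: make caching and revalidation cheap.
|     res.set("Cache-Control", "public, max-age=300");
|     res.set("ETag", contentHash);  // revalidations become tiny 304s
|
|     // Client side: resume a big download instead of restarting it.
|     const rest = await fetch(bigFileUrl, {
|       headers: { Range: "bytes=" + bytesAlreadyGot + "-" },
|     });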
|
| Of course it's not optimal, the web isn't perfect for
| offline hybrid apps. But for standard things like reading
| the news, sending email, chatting, you'll be fine.
| meindnoch wrote:
| Yes, please! Even more layers of needlessly complex crap will
| definitely improve things!
| walterbell wrote:
| _> needlessly complex_
|
| The optional Braid extension can _reduce_ complexity for
| offline-first apps, e.g. relative to WebDAV:
| https://news.ycombinator.com/item?id=40482610
|
|   You might be surprised at just how elegantly HTTP extends
|   into a full-featured synchronization protocol. A key to
|   this elegance is the Merge-Type: this is the abstraction
|   that allows a single synchronization algorithm to merge
|   across multiple data types. As an application programmer,
|   you will specify both the data types of your variables
|   (e.g. int, string, bool) and also the merge-types (e.g.
|   "this merges as a bank account balance, or a LWW unique
|   ID, or a collaborative text field"). This is all the
|   application programmer needs to specify. The rest of the
|   synchronization algorithm gets automated by middleware
|   libraries that the programmer can just use and rely upon,
|   like his compiler, and web browser.
|
|   I'd encourage you to check out the Braid spec, and notice
|   how much we can do with how little. This is because HTTP
|   already has almost everything we need. Compare this with
|   the WebDAV spec, for instance, which tries to define
|   versioning on top of HTTP, and you'll see how monstrous
|   the result becomes. Example here:
|
| https://news.ycombinator.com/item?id=40481003
| YoshiRulz wrote:
| This sounds suspiciously like Matrix. Does it require buy-in
| from user agents, or will it benefit existing browsers once
| implemented?
| walterbell wrote:
| From https://braid.org/
|
|   Braid is backwards-compatible with today's web, works in
|   today's browsers, and is easy to add to existing web
|   applications. You can use Braid features in Chrome with
|   the Braid-Chrome extension.
|
| Demo of Statebus+Braid sync on existing browsers:
| https://stateb.us/#demos
| pech0rin wrote:
| Great post. One thing though: maybe the engineers were
| misguided, but it's possible they were trying to mitigate
| slowloris attacks, which are annoying to deal with and hard
| to separate from users who are just sending data at a really
| slow pace. Having had to mitigate these attacks before, we
| usually do a global timeout on the backend. Maybe different,
| but definitely a possibility.
| EdwardDiego wrote:
| I gave a ride to a fellow who'd just come off the ice while
| he was hitchhiking. He was saying that the blog author was
| somewhat resented by others, because his blog posts, as
| amazing as they are, tended to hog what limited bandwidth
| they already had while the images uploaded. But he was given
| priority because the administration realised the PR value of
| it.
|
| Which I thought ties into the discussion about slow internet
| nicely.
| Aachen wrote:
| I was wondering about the practicalities indeed. Not everyone
| knows when their OS or applications decided it is now a great
| time to update. You'll have a phone in your pocket that is
| unnecessarily using all the bandwidth it can get its hands on,
| or maybe you're using the phone but just don't realise that
| watching a 720p video, while barely functional, also means the
| person trying to load it after you cannot watch even 480p
| anymore (you might not notice because you've got buffer and
| they'll give up before their buffer is filled enough to start
| playing).
|
| It seems as though there should be accounting, so you at
| least know what % of traffic went to you in the last hour
| (plus a reference value of bandwidth_available divided by
| connected_users, so you know what your share would have been
| if everyone had equal need of it). If not that, then a
| system that deprioritises everyone unless they punch the
| button that says "yes, I'm aware what bandwidth I'm using in
| the next [X<=24] hour(s) and actually need it, thank you",
| which would set the QoS priority for their MAC/IP address
| back to normal.
| RedShift1 wrote:
| I would highly recommend not only testing on slow network
| connections, but also on slow computers, tablets and smartphones.
| At least in my case there was some low hanging fruit that
| immediately improved the experience on these slow devices which I
| would have never noticed had I not tested on slower machines.
| HL33tibCe7 wrote:
| Why do writers like this feel so entitled to engineering effort
| from companies? Maybe companies don't want to plow millions into
| microoptimising their sites so a handful of people in Antarctica
| can access them, when the vast majority of their clients can use
| their sites just fine.
| theodote wrote:
| The author puts a lot of effort into emphasizing that it's not
| just "a handful of people in Antarctica" facing such issues,
| but quite a noticeable percentage of global population with
| unstable or otherwise weird connectivity. The internet
| shouldn't be gatekept from people behind such limitations and
| reserved for the convenient "target audience" of companies,
| whoever that might be - especially when solutions to these
| problems are largely trivial (as presented in the article) and
| don't require that much "engineering effort" for companies of
| that scale, since they are already half-implemented, just not
| exposed to users.
|
| People should not be limited from employing already existing
| infrastructure to overcome their edge-case troubles just
| because that infrastructure is not exposed due to it being
| unnecessary to the "majority of the clients".
| croes wrote:
| You do realize it's not just Antarctica?
|
| There are lots of places in the world where the internet speed
| isn't great.
|
| I'm actually experiencing slow internet right now and I'm in
| Germany.
| 1oooqooq wrote:
| Your comment is self-defeating.
|
| For example, the audience of a site might be in southern
| Africa, with the same bad connectivity, but the EU/UN site
| developers are in the north, so they don't care, and the
| consequence is that program adoption fails without anyone
| blaming connectivity. Or you might be doing business with a
| coffee producer, and now your spiffy ERP is missing data
| because it's too much effort for them to update the orders,
| so your procurement team has to hire an intern to do it over
| the phone, costing you an extra 2k a month. Or, which is more
| likely for the crowd here, y'all are losing clients left and
| right because your lazy sysadmin blocked entire country IP
| ranges after seeing a single DoS wave from there.
| jabroni_salad wrote:
| If you do not want to write software that works well in the
| antarctic mission, just don't sell to them. Government
| contracts are pretty lucrative though.
| 1oooqooq wrote:
| It takes one American going to Antarctica for this site to
| note what is obvious to everyone in most of Africa and South
| America.
| yason wrote:
| I cringe whenever I think how blazing fast things could be today
| if only we hadn't bloated the web by 1000x.
|
| In the dial-up era things used to feel otherworldly fast merely
| by getting access to something like 10M ethernet.
| connections are way, way faster than physical connections in the
| 90's but web pages aren't few KB, they are few MB at minimum.
|
| It takes four seconds and 2.5 MB to load my local meteorological
| institute's weather page which changes not more often than maybe
| once an hour and could be cached and served as a static base page
| in a few dozen milliseconds (i.e. instantly). A modern connection
| that's plenty capable to support all my remote work and
| development over a VPN and interactive shells without any lag
| can't help me get modern web pages load any faster because of the
| amount of data and the required processing/execution of a million
| lines of javascript that's imported from a bunch of number of
| sources, with the appropriate handshake delays of new connections
| implied, for each page load.
|
| A weather page from 2004 served exactly the same information as a
| weather page from 2024, and that information is everything
| required to get a sufficient glimpse of today's weather. One web
| page could be fixed but there are billions of URIs that load
| poorly. The overall user experience hasn't improved much, if at
| all. Yes, you can stream 4K video without any problems which
| reveals how fast things actually are today but you won't see it
| when browsing common pages -- I'd actually like to say web pages
| have only gone slower despite the improvements in bandwidth and
| processing power.
|
| When many pages still had mobile versions it was occasionally a
| very welcome alternative. Either the mobile version was so crappy
| you wanted to use the desktop version on your phone, or it was so
| good you wanted to predominantly load the mobile version even on
| desktop.
|
| I'd love to see an information internet where things like weather
| data, news articles, forum posts, etc. would be downloadable as
| snippets of plaintext, presumably intended to be machine
| readable, and "web" would actually be a www site that builds a
| presentation and UI for loading and viewing these snippets. You
| could use whichever "web" you want but you would still ultimately
| see the same information. This would disconnect information
| sources from the presentation which I think is the reason web
| sites started considering "browser" a programmable platform, thus
| taking away user control and each site bloating their pages each
| individually, leaving no choice for the user but maybe some 3rd
| party monkeyscripts or forced CSS rules.
|
| If the end user could always choose the presentation, the user
| would be greatly empowered in comparison to the current state of
| affairs where web users are currently being tamed down to be mere
| receivers or consumers of information, not unlike passive TV
| viewers.
| xenodium wrote:
| Not yet officially launched, but I'm working on a no-bloat, no-
| tracking, no-JS... blogging platform, powered by a drag/drop
| markdown file: https://lmno.lol
|
| Blogs can be read from just about any device (or your favourite
| terminal). My blog, as an example: https://lmno.lol/alvaro
|
| Shared more details at
| https://indieweb.social/@xenodium/112265481282475542
|
| ps. If keen to join as an early adopter, email help at lmno.lol
| nicbou wrote:
| I travel a lot. Slow internet is pretty common. Also, right now
| my mobile data ran out and I'm capped at 8 kbps.
|
| Websites that are Just Text On A Page should load fast, but many
| don't. Hacker News is blazing fast, but Google's API docs never
| load.
|
| The worst problem is that most UIs fail to account for slow
| requests. Buttons feel broken. Things that really shouldn't need
| megabytes of data to load still take minutes to load or just
| fail. Google Maps' entire UI is broken.
|
| I wish that developers spent more time designing and testing for
| slow internet. Instead we get data hungry websites that only work
| great on fast company laptops with fast internet.
|
| ---
|
| On a related note, I run a website for a living, and moving to a
| static site generators was one of the best productivity moves
| I've made.
|
| Instead of the latency of a CMS permeating everything I do, I
| edit text files at blazing speed, even when fully offline. I just
| push changes once I'm back online. It's a game changer.
| daveoc64 wrote:
| >Websites that are Just Text On A Page should load fast, but
| many don't. Hacker News is blazing fast, but Google's API docs
| never load.
|
| Things aren't always that simple.
|
| I'm in the UK, and my ping time to news.ycombinator.com is
| 147ms - presumably because it's not using a CDN and is hosted
| in the USA.
|
| cloud.google.com on the other hand has an 8ms ping time.
|
| So yes, Hacker News is a simple, low-JS page - but there can be
| other factors that make it feel slow for users in some places.
| This is despite me being in a privileged situation, having an
| XGS-PON fibre connection providing symmetric 8Gbps speeds.
| Sesse__ wrote:
| HN loads quickly for me _despite_ the 147 ms. I guess
| partially because it doesn't need 20 roundtrips to send
| useful content to me.
|
| At some point, I wrote a webapp (with one specific, limited
| function, of course) and optimized it to the point where
| loading it required one 27 kB request. And then turned up the
| cwnd somewhat, so that it could load in a single RTT :-)
| Doesn't really matter if you're in Australia then, really.
| toast0 wrote:
| I have experience with having a webpage with a global
| audience, served from random US locations (east/west/texas,
| but no targeting) and pretty unbloated, to something served
| everywhere with geodns and twice the page weight... Load
| times were about the same before and after. If we could have
| kept the low bloat, I expect we would have seen a noticeable
| improvement in load times (but it wasn't important enough to
| fight over)
| Tade0 wrote:
| One additional benefit of static sites, which I learned the
| hard way, is that you're mostly immune to attacks.
|
| I have a domain that's currently marked as "dangerous" because
| I didn't use the latest version of Wordpress.
| qingcharles wrote:
| I had a client that I set up with a static site generator.
| Sadly the client changed their FTP password to something
| insecure and someone FTP'd in and added a tiny piece of code
| to every HTML file!
| kjkjadksj wrote:
| Google used to be good about slow apps. Using Gmail on the
| school computers back in the day, the site would detect that it
| was loading slowly and instead load a basic HTML version.
|
| Nowadays I download a 500MB Google Maps cache on my phone and
| it's like there is no point. Everything still has to fetch and
| pop in.
| habosa wrote:
| > Google's API docs never load
|
| I used to work on the team that served those docs. Due to some
| unfortunate technical decisions made in the name of making docs
| dynamic/interactive they are almost entirely uncached.
| Basically every request you send hits an AppEngine app which
| runs Python code to send you back the HTML.
|
| So even though it looks like it should be fast, it's not.
| lawgimenez wrote:
| From where I'm from (Southeast Asia), slow internet is common in
| provincial and remote areas. It's like the OP's experience in
| the South Pole, but slower.
|
| That's why I always cringe at these fancy-looking UI cross-
| platform apps since I know they will never work in a remote
| environment. Also, that is why offline support is very important.
| I only use Apple Notes and Things 3, both work tremendously in
| such remote settings.
|
| Imagine your notes or to-do list (ehem Basecamp) not loading
| since it needs internet connection
| bombcar wrote:
| What's sad is that the app-style setup on phones SHOULD be
| perfect for this - you download the app when you DO have a good
| connection, and then when you're out on the slow/intermittent
| connection the ONLY thing the app is sending is the new data
| needed.
|
| Instead almost all apps are just a bad web browser that goes to
| one webpage.
| 29athrowaway wrote:
| A must read
|
| https://www.gamedeveloper.com/programming/1500-archers-on-a-...
| anthk wrote:
| Mosh and NNCP will help a lot, but you need a good sysadmin to
| set up NNCP as the mail MUA/MTA backend to spool everything
| efficiently. NNCP is an expert-level skill, but your data will
| be _sent_ over _very_ unreliable channels:
|
| https://nncp.mirrors.quux.org/Use-cases.html
|
| Also, relying on proprietary OSes is not recommended. Apple and
| iOS are disasters to work with in remote, isolated places. No wonder
| no one uses Apple in Europe for any serious work except for iOS
| development. Most science and engineering setups will use
| anything else as a backend.
|
| Most offline distros, like Ubuntu or Trisquel, have methods of
| downloading the software packages for offline installs.
|
| On chats, Slack and Zoom are disasters. Any SIP or Jabber client
| with VOIP support will be far more reliable as it can use several
| different protocols which can use far less bandwidth (OPUS for
| audio) without having to download tons of JS to use it. Even if
| you cached your web app, it will still download tons of crap in
| the background.
|
| And, again, distros like Debian have a full offline DVD/BR pack
| which ironically can be better if you got that by mail. Or you
| can just use the downloaded/stored ISO files with
|
| apt-cdrom add -m /path/to/your/file.iso
|
| This way everything from Debian could be installed without even
| having an Internet connection.
| greenish_shores wrote:
| Well, statistically average end-user internet connection in
| Europe is much faster than in the US. Maybe outside some places
| like most of western Germany, but these are an exception.
| Europe has really good bandwidth speeds, overall.
|
| I absolutely agree with the rest, though, including the part
| saying any "serious" software will have such features (and
| better support in general), and I second the examples you gave.
| ajsnigrutin wrote:
| Slow internet and also slow devices!
|
| If you're targeting the general population (so a chat service,
| banking app, utility app, ...), you should be targeting the
| majority of users, not just the ones with new flagships, so all
| the testing should be done on the cheapest smartphone you could
| buy in a supermarket two years ago (because, well, that's what
| "grandmas" use). Then downgrade the connection to 3G, or maybe
| even EDGE speeds (this can be simulated on network devices; a
| sketch follows below), and the app/service should still work.
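|
| On Linux, tc/netem can fake such a link (a minimal sketch, run
| as root; "eth0" and the numbers are placeholders for something
| EDGE-like):
|
|   tc qdisc add dev eth0 root netem delay 300ms 50ms \
|       loss 1% rate 200kbit
|   # ...test the app, then remove the shaping...
|   tc qdisc del dev eth0 root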
|
| Somehow it seems that devs only get the best new flagships,
| optimize the software for those, and forget about the rest...
| I understand that for a 3D shooter or something, but e.g. a
| banking app should work on older devices too!
| Karrot_Kream wrote:
| So I've hacked a lot on networking things over the years and have
| spent time getting my own "slow internet" cases working. Nothing
| as interesting as McMurdo by far but I've chatted and watched
| YouTube videos on international flights, trains through the
| middle of nowhere, crappy rural hotels, and through tunnels.
|
| If you have access/the power (since these tend to be power
| hungry) to a general-purpose computing device and are willing to
| roll your own, my suggestion is to use NNCP [1]. NNCP can take
| data, chunk it, then send it. It also comes with a sync
| protocol that uses Noise (though I can't remember if this
| enables 0-RTT) over TCP (no TLS needed, so only 1.5 RTTs spent
| establishing the connection) and sends chunks, retrying failed
| chunks along the way.
|
| NNCP supports feeding data as stdin to a remote program. I wrote
| a YouTube downloader, a Slack bot, a Telegram bot, and a Discord
| bot that reads incoming data and interacts with the appropriate
| services. On the local machine I have a local Matrix (Dendrite)
| server and bot running which sends data to the appropriate remote
| service via NNCP. You'll still want to hope (or experiment to
| confirm) that the MTU/MSS along your path is as low as possible
| to support frequent TCP-level retries, but this setup has never
| really failed me wherever I go and lets me consume media and
| chat.
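|
| Concretely, the moving parts look roughly like this (a sketch;
| "remote" and the "ytdl" handler name come from my own config,
| yours will differ):
|
|   # queue a big file for the remote node
|   nncp-file video.mp4 remote:
|   # feed stdin to an exec: handler defined for the remote
|   echo 'https://youtube.com/watch?v=...' | nncp-exec remote ytdl
|   # exchange queued packets whenever the link is up
|   nncp-call remote
|   # unpack whatever arrived locally
|   nncp-toss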
|
| The most annoying thing on an international flight is that the
| NNCP endpoint isn't geographically distributed and depending on
| the route your packets end up taking to the endpoint, this could
| add a lot of latency and jitter. I try to locate my NNCP endpoint
| near my destination but based on the flight's WiFi the actual
| path may be terrible. NNCP now has Yggdrasil support which may
| ameliorate this (and help control MTU issues) but I've never
| tried Ygg under these conditions.
|
| [1]: http://www.nncpgo.org/
| alanpearce wrote:
| This sounds fascinating. Do you have some articles describing
| your setup?
| Karrot_Kream wrote:
| Hah no, but maybe I should. The reason I haven't is that most
| of my work is just glue code. I use yt-dlp to do YouTube
| downloads and make use of the Discord, Slack, and Telegram APIs
| to access those services. I run NNCP and the bots in systemd
| units, though at this point I should probably bake all of
| these into a VM and just bring it up on whichever cloud
| instance I want to act as ingress. Cloud IPs stay static as
| long as the box itself stays up so you don't need to deal
| with DNS either. John Goerzen has a bunch of articles about
| using NNCP [1] that I do recommend interested folks look into
| but given the popularity of my post maybe I should write an
| article on my setup.
|
| FWIW I think it's fine that major services do not work under
| these conditions, though I wish messaging apps did. Both
| WhatsApp and Telegram IME are well tuned for poor network
| conditions and do take a lot of these issues into account (a
| former WA engineer comments in this thread and you can see
| their attention to detail.) Complaining about these things a
| lot is sort of like eating out at restaurants and complaining
| about how much sodium and fat go into the dishes: restaurants
| have to turn a profit and catering to niche dietary needs
| just isn't enough for them to survive. You can always cook at
| home and get the macros you want. But for you to "cook" your
| own software you need access to APIs, and I'm glad Telegram,
| Slack, and Discord make this fairly easy. For YouTube, yt-dlp
| does the heavy lifting, but I wish it were easier, at least for
| Premium subscribers, to access YouTube via an API.
|
| I find Slack to be the absolute worst offender networking-
| wise. I have no idea how, now that Slack is owned by
| Salesforce, the app experience can continue to be so crappy
| on network usage. It's obvious that management there does not
| prioritize the experience under non-ideal conditions in any
| way possible. Their app's usage of networks is almost
| shameful in how bad it is.
|
| [1]: https://www.complete.org/nncp/
| langsoul-com wrote:
| Too bad only US has starlink active. Other nations, like
| Australia, has nothing...
| daveoc64 wrote:
| Starlink is available in something like 40 countries, including
| Australia.
|
| Unless you're specifically talking about bases in Antarctica?
| hooby wrote:
| All of that sounds like torrent-based updaters/downloaders
| should be the absolute killer app for environments like that.
|
| Infinitely resumable, never loses progress, remains completely
| unfazed by timeouts, connection loss, etc. - plus the ability
| to share received update data between multiple devices
| peer-to-peer.
| AllegedAlec wrote:
| I currently develop applications which are used on machines with
| very spotty connections and speeds which are still measured in
| baud. We have to hand-write compression protocols to optimize
| for our use. Any updates/installs over the network are out of
| the question.
|
| It's a great lesson in the importance of edge computing, but it
| also provides some harsh truths about the current way we
| produce software. We _cannot_ afford to deliver a spotty
| product. Getting new updates out to all parties takes a
| prohibitively long time. This is hard for new people, or for
| outsiders giving courses, to grok, and it makes most modern
| devops practices useless to us.
| miyuru wrote:
| I wonder if the author tried BitTorrent.
|
| Despite its usage issues, it's built with reliability in mind
| and it's a great way to transfer large files.
| palata wrote:
| How do you use WhatsApp over BitTorrent? And how do you update
| your macOS over BitTorrent?
|
| The author clearly says that downloaders elaborate enough to
| deal with slow connections (e.g. "download in a browser") were
| fine. The problem is that modern apps don't let you download
| the file the way you want, they just expect you to have a fast
| internet connection.
| miyuru wrote:
| No, I just meant whether he tried to get a large file (a Linux
| ISO) with BitTorrent, which should be reliable in theory.
|
| BitTorrent has webseed support, which could use Apple's direct
| CDN URLs in a torrent file to download from. archive.org still
| uses this technique, and AWS S3 used to do this when it had
| torrent support.
|
| There's a website that does just that: it creates a torrent
| file from any direct web URL.
|
| https://www.urlhash.com/
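|
| If you have a copy of the file somewhere, mktorrent can do the
| same thing locally (a sketch; the tracker and URLs are
| placeholders):
|
|   mktorrent -w https://cdn.example.com/big.iso \
|       -a udp://tracker.example.org:6969/announce \
|       -o big.iso.torrent big.iso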
| dano wrote:
| Back in 1999, while at MP3.com, it was common for people
| outside of engineering to complain about speed when they were
| at home. In the office we had 1G symmetric (that was a lot 25
| yrs ago!) between the office and the primary DC. I tried to
| explain that the large graphics wanted by some, and the heavy
| videos, didn't work great for dial-up or slow cable modem
| connections. Surely the servers were misconfigured!
| xacky wrote:
| Try using the internet on an "exchange only line". Technically
| it's broadband but its speeds are still dialup tier. I know
| several streets in my city that still have these connections.
| xacky wrote:
| Also, ever since the 3G shutdown in the UK, phones often fall
| back to GPRS and EDGE connections (2G), as 2G is not scheduled
| to shut down in the UK until 2033. I know several apps that are
| too slow to work in such conditions, as they are developed by
| people who use the latest 5G links in urban locations instead
| of testing in rural and suburban areas with lots of trees.
| ape4 wrote:
| Perhaps they could benefit from Usenet - with its store-and-
| forward attributes
| onenukecourse wrote:
| The list didn't include the one we're all most likely to
| encounter - crappy hotel connections.
| amelius wrote:
| I have a question about this. I have multiple instances of curl
| running in different terminals, all using a single slow internet
| connection.
|
| How can I give priority to one of these instances?
| meindnoch wrote:
| Create multiple virtual interfaces, apply different traffic
| shaping to them, and then use the --interface option of cURL.
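|
| Something in this direction (a rough sketch, needs root;
| names/addresses made up -- here the "virtual interface" is a
| veth pair into a network namespace, so the shaping applies to
| real traffic and the low-priority curls just run inside it):
|
|   ip netns add slow
|   ip link add veth0 type veth peer name veth1
|   ip link set veth1 netns slow
|   ip addr add 10.99.0.1/24 dev veth0
|   ip link set veth0 up
|   ip netns exec slow ip addr add 10.99.0.2/24 dev veth1
|   ip netns exec slow ip link set veth1 up
|   ip netns exec slow ip route add default via 10.99.0.1
|   sysctl -w net.ipv4.ip_forward=1
|   iptables -t nat -A POSTROUTING -s 10.99.0.0/24 -j MASQUERADE
|   # cap everything entering the namespace at 100 kbit/s
|   tc qdisc add dev veth0 root tbf rate 100kbit burst 10kb \
|       latency 400ms
|   # run the low-priority transfers inside the namespace
|   ip netns exec slow curl -O https://example.com/big.iso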
| amelius wrote:
| Thanks, but I don't have root access to my machine.
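|
| Hmm, though it looks like curl's own --limit-rate needs no
| root: cap the low-priority instances and the important one gets
| whatever is left of the pipe.
|
|   # hold the background fetches to ~10 KB/s each
|   curl --limit-rate 10K -O https://example.com/other.iso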
| nashashmi wrote:
| It takes a "special" skill level to develop web applications
| in JS for low-bandwidth connections. It takes time because
| frameworks and libraries are not built for this. There are very
| few libraries and frameworks in JS that are optimized for
| low-bandwidth connections. This requires programming
| applications from scratch.
|
| I went through such a process. It took me two weeks, versus two
| hours using jQuery.
| austin-cheney wrote:
| 70%+ of the web is putting text on screen and responding to user
| interactions, 25%+ is spyware and advertising, and the last 5%
| are cool applications. How complicated should that really be?
|
| This is a good example of why I gave up a career as a JavaScript
| developer after 15 years. I got tired of fighting stupid, but
| even stupid woefully unqualified people need to make 6 figures
| spinning their wheels to justify their existence.
| password4321 wrote:
| It doesn't take much to slow down RDP over TCP (especially when
| port forwarding through SSH).
|
| I did find mention of increasing the cache [1] and lowering the
| refresh rate to 4 fps [2] (avoiding unnecessary animations),
| but I still feel the need for a server-side QUIC proxy that is
| less pushy based on network conditions. There is a red team
| project that has the protocol parsed out in Python [3] instead
| of all the ActiveX control clients.
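|
| On the client side, FreeRDP at least exposes the low-bandwidth
| knobs directly (a sketch; flag names from FreeRDP 2.x, the host
| is a placeholder -- check /help on your build):
|
|   xfreerdp /v:host.example.com /network:modem /bpp:16 \
|       -themes -wallpaper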
|
| [1] https://superuser.com/questions/13487/how-to-increase-perfor...
|
| [2] https://learn.microsoft.com/en-us/troubleshoot/windows-serve...
|
| [3] https://github.com/GoSecure/pyrdp
| matlin wrote:
| It's funny how similar the problems that affect a workstation
| in Antarctica are to those of designing a robust mobile app.
|
| I personally think all apps benefit from being less reliant on a
| stable internet connection and that's why there's a growing
| local-first movement and why I'm working on Triplit[1].
|
| [1] www.triplit.dev
| luuurker wrote:
| Those of us working on apps, websites, etc., need to remember
| that there are lots of people out there who are not on the
| fast Wi-Fi or fibre connections we have.
|
| Here in the UK, some networks started shutting down 3G. Some
| have 2G as a low-energy fallback, but we're supposed to use
| 4G/5G now. The problem is that 4G is not available everywhere
| yet; some areas until recently only had good 3G signal. So I've
| been dropping to 2G/EDGE more often than I'd like, and a lot of
| stuff just stops working. A lot of apps are simply not tested
| in slow, high-latency, high-packet-loss scenarios.
| jimmaswell wrote:
| Shutting down 3G was a mistake. Besides turning so many devices
| into e-waste, it was a good backup when 4G was congested.
| HPsquared wrote:
| The lower-bandwidth connections get completely saturated by
| modern phones with modern data allowances. Back in the day I
| had 500MB a month on 3G, for instance. I can use that in a
| few minutes these days.
| kjkjadksj wrote:
| That's been true since the iPhone 3G with unlimited data
| plans, though.
| HPsquared wrote:
| Modern phones and apps use a lot more though. YouTube
| 1080p60 or even 4K, for example.
| luuurker wrote:
| 3G devices should still work over 2G. It's much slower, but it
| works and should do so until well into the 2030s in the UK.
|
| The problem with 3G as I understand it is that it uses more
| power and is less efficient than 4G/5G. They're starting to
| re-deploy the 3G bands as 4G/5G, so the other Gs will
| eventually benefit from this shutdown.
| qingcharles wrote:
| Here in the USA a great number of networks will drop back to 2G
| when their data plan runs out. And most poor people are on
| really low data limits, so they spend most of the month on 2G.
|
| Try using Google Maps to get around on 2G :(
| the__alchemist wrote:
| Thank you! I am on the US East Coast, and consistently find
| websites and internet-connected applications too slow. At home
| they are probably fine, but on mobile internet? Coffee shops,
| etc.? Traveling? No!
|
| No excuses! It is easier than ever to build fast, interactive
| websites, now that modern, native Javascript includes so many
| niceties.
|
| Using JS dependencies is a minefield. At work, where I give less
| of a fuck, a dev recently brought in Material UI and Plotly. My
| god.
| gmuslera wrote:
| It is not just bandwidth or latency, and it is not just
| Antarctica. Not everywhere in the world do you have the best
| connectivity. Even with decent connectivity, you may have
| environmental interference, shared use, or simply be far from
| the wifi router. The browser may be running on a
| not-so-powerful CPU, with other things chewing up the processor
| or the available memory, so heavy JS sites may suffer or not
| work at all. You don't know what is on the other side; putting
| high requirements there may make your solution unfit for a lot
| of situations.
|
| Things should be improving (sometimes fast, sometimes slowly)
| in that direction, but good connectivity still isn't guaranteed
| everywhere, or at least not in every place where your
| application is intended or needed to run. And there may even be
| setbacks along that road.
| kjkjadksj wrote:
| I run into this daily on my phone. Where I live it's hilly, the
| network is usually saturated, my speeds are usually crap, and
| some sites more complicated than HN sometimes cannot load at
| all without timing out.
| cletus wrote:
| So I have some experience with this because I wrote the
| non-Flash Speedtest for Google Fiber. I also have experience
| with this by virtue of being from Australia. Let me explain.
|
| So Google Fiber needed a pure JS Speedtest for installers to
| verify connections. Installers were issued with Chromebooks,
| which don't support Flash, and the Ookla Speedtest at the time
| used Flash. There are actually good reasons for this.
|
| It turns out figuring out the maximum capacity of a network link
| is a nontrivial problem. You can crash the browser with too much
| traffic (or just slow down your reported result). You can easily
| under-report speed by not sending enough traffic. You have to
| weigh packet sizes with throughput. You need to stop browsers
| trying to be helpful by caching things (by ignoring caching
| headers). There's a long list.
|
| So I did get a pure JS Speedtest that could actually run up to
| about ~8.5Gbps on a 10GbE link to a MacBook (external 10GbE
| controller over TB3).
|
| You learn just how super-sensitive throughput is to latency,
| due to TCP throttling. This is a known and longstanding
| problem, which is why Google invested in newer congestion
| control schemes like BBR [1]. Anyway, adding 100ms of latency
| to a 1GbE connection would drop the measured throughput from
| ~920-930Mbps to a fraction of that. It's been a few years so I
| don't remember the exact numbers, but even with adjustments I
| recall the drop-off being like 50-90%.
|
| The author here talks about satellite Internet to Antarctica
| that isn't always available. That is indeed a cool application,
| but you don't need to go to this extreme. You have this
| throughput problem _even in Australia_, because pure distance
| pretty much gives you 100ms minimum latency in some parts and
| there's literally nothing you can do about it.
|
| It's actually amazing how much breaks or just plain sucks on
| that kind of latency. Networked applications are clearly not
| designed for this and have never been tested on it. This is a
| general problem with apps: some have never been tested in
| non-perfect Internet conditions. Just the other day I was using
| one of the Citi Bike apps; it would hang trying to do some TCP
| query, and every now and again the user would get "Connection
| timed out" pop-ups.
|
| That should _never_ happen. This is the lazy dev's way of just
| giving up, of catching an exception and fatalling. I wish more
| people would actually test their experience with 100ms of
| latency or just random 2% packet loss. Standard TCP congestion
| control simply doesn't handle packet loss in a way that's
| desirable or appropriate for modern network conditions.
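|
| For intuition, the classic Mathis et al. back-of-envelope
| bound is throughput <= (MSS/RTT) * (1/sqrt(loss)). Plugging in
| a 100ms RTT and 2% loss (awk just does the arithmetic here):
|
|   awk 'BEGIN { mss=1460; rtt=0.1; loss=0.02;
|     printf "%.2f Mbit/s\n", (mss*8/rtt)/sqrt(loss)/1e6 }'
|   # -> 0.83 Mbit/s, no matter how fat the link is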
|
| [1]: https://cloud.google.com/blog/products/networking/tcp-bbr-
| co...
| juangacovas wrote:
| Come to Spain. We don't have everything, but our fiber optics
| and internet plans are top notch ;P
| phaedrus wrote:
| One of my first professional software projects, as an intern,
| was a tool for simulating this type of latency. I modeled it as
| a set of pipe objects that you could chain together with
| command-line arguments. There was one that would add a fixed
| delay, another that would introduce random dropped packets, a
| tee component in case you wanted to send traffic to another
| port as well, etc.
| infinet wrote:
| I had a similar problem on a ship where many users shared a 2M
| VSAT Internet connection. A few tricks made the Internet less
| painful:
|
| - Block Windows Update by answering DNS queries for Microsoft
| update endpoints with NXDOMAIN (see the sketch after this
| list).
|
| - Use a captive portal to limit user session duration, so that
| unattended devices won't consume bandwidth.
|
| - With FreeBSD dummynet, pfSense can share bandwidth equally
| among users. It can also share bandwidth by weight among
| groups. It helps.
|
| - Inside the Arctic Circle, the geosynchronous satellites are
| very low on the horizon and were blocked frequently when the
| ship turned. I was able to read the ship's gyro and the
| available satellites from the VSAT controller and generate a
| plot showing the satellite blockage. It was so popular that
| everyone used it to forecast when the next satellite would come
| online.
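|
| The DNS trick was just a bit of dnsmasq config along these
| lines (a sketch; the real list of update hosts was longer):
|
|   # never forward these domains upstream; names not found in
|   # /etc/hosts come back as NXDOMAIN
|   local=/windowsupdate.com/
|   local=/update.microsoft.com/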
| gbalduzzi wrote:
| I agree with the overall take by OP, but I find this point quite
| problematic:
|
| > If you have the ability to measure whether bytes are flowing,
| and they are, leave them alone, no matter how slow. Perhaps show
| some UI indicating what is happening.
|
| Allowing this enables an easy DDoS attack: an attacker can
| simply keep thousands of connections open.
| klabb3 wrote:
| Closing after 10-60s of complete inactivity, not using JS
| bloatware, and allowing range/ETag requests should go a long
| way, though. The issue is people setting fixed timeouts per
| request, which isn't appropriate for large transfers.
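|
| curl's stock flags already express this (a sketch; the point is
| an _inactivity_ cutoff plus range-based resume rather than a
| fixed total timeout):
|
|   curl -C - -O --retry 20 --retry-delay 5 \
|       --speed-limit 1 --speed-time 60 \
|       https://example.com/big.iso
|   # -C -: resume via Range; --speed-*: give up only after 60s
|   # below 1 byte/s, i.e. true inactivity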
| jrhey wrote:
| Engineering for slow CPUs next. No matter how fast our machines
| get, it's just never enough for the memory-, CPU-, and
| battery-hungry essential apps and operating systems we use
| nowadays.
| mayormcmatt wrote:
| This topic resonates with me, because I'm currently building a
| horrible marketing static page with images and videos that top
| 150MB, prior to optimization. It causes me psychic pain to
| think about pushing that over the wire to people who might have
| data caps. Not my call, though...
| beeandapenguin wrote:
| This is why we need more incremental rendering [1] (or
| "streaming"). This pattern has become somewhat of a lost art in
| the era of SPAs -- it's been possible since HTTP/1.1 via
| chunked transfer encoding, which allows servers to start
| sending a response without knowing the total length.
|
| With this technique, the server can break a page load down into
| smaller chunks of UI and progressively stream them to the
| client as they become available. No more waiting for the entire
| page to load in, especially in poor network conditions like the
| author experienced from Antarctica.
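|
| The wire format is simple enough to fake with netcat (a toy
| sketch; the hex prefixes are chunk byte counts, and nc flag
| syntax varies between netcat flavors):
|
|   { printf 'HTTP/1.1 200 OK\r\n'
|     printf 'Content-Type: text/html\r\n'
|     printf 'Transfer-Encoding: chunked\r\n\r\n'
|     printf '14\r\n<h1>Top of page</h1>\r\n'  # 0x14 = 20 bytes
|     sleep 3                                  # slow backend
|     printf '10\r\n<p>Slow part</p>\r\n'      # 0x10 = 16 bytes
|     printf '0\r\n\r\n'                       # end of chunks
|   } | nc -l 8080
|
| Point curl -N at localhost:8080 and the first chunk shows up
| seconds before the rest arrives.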
|
| [1]: https://www.patterns.dev/react/streaming-ssr
| jasoncartwright wrote:
| I remember doing this with ASPv3 pages back in the day on a
| content site. It made it easy to dump the HTML that had already
| been completed before continuing to generate the heavier, but
| much less important, comments section below.
| nirav72 wrote:
| > From my berthing room at the South Pole, it was about 750
| milliseconds
|
| I'm currently on a moving cruise ship in the Mediterranean with
| a Starlink connection. A latency of 300-500 ms seems to be
| normal, although bandwidth is tolerable at 2-4 Mbps during the
| day with hundreds of passengers using it. At night it gets
| better. But the latency can still be frustrating.
| palata wrote:
| One problem is that developers have the best hardware and
| Internet because "it's their job", so they are completely biased.
| A bit like rich people tend to not understand what it means to be
| poor.
|
| The other problem is that nobody in the software industry gives
| a damn. Everyone wants to make shiny apps with the latest shiny
| tech. Try mentioning optimizing for slow hardware/Internet and
| look at the faces of your colleagues, behind their brand-new
| M3s.
|
| I worked in a company with some remote colleagues in Africa.
| There were projects that they could literally not build, because
| it would require downloading tens of GB of docker crap multiple
| times a week for no apparent reason. The solution was to not have
| those colleagues work on those projects. Nobody even considered
| that maybe there was something to fix somewhere.
| y-c-o-m-b wrote:
| Some of these web apps are from very profitable or big
| companies, and that drives me insane because they have more
| than enough funding to do things right.
|
| Take Home Depot for example. Loading their website in a mobile
| browser is soooooooooo slow. The rendering is atrocious, with
| elements jumping all over the place. You click on one thing and
| it ends up activating a completely different element, then you
| have to wait for whatever you just clicked to load and jump all
| over the place again. Very frustrating! Inside their stores
| it's even worse! I asked one of their workers for help locating
| an item one day, and they pulled up their in-store app. That
| too was slower than molasses and janky, so we ended up standing
| there for several minutes just chatting while waiting for it to
| load.
| AdamH12113 wrote:
| It's interesting that these are exactly the sort of conditions
| that the internet protocols were designed for. A typical suite
| of internet software from the 90s would handle them easily.
|
| One key difference is that client software was installed locally,
| which (in modern terms) decouples UI from content. As the article
| points out, the actual data you're dealing with is often measured
| in bytes. An email reader or AOL Instant Messenger would only
| have to deal with that data (plus headers and basic login)
| instead of having to download an entire web app. And since the
| protocols didn't change often, there was no need to update
| software every few weeks (or months, or even years).
|
| Another key difference, which is less relevant today, is that
| more data came from servers on the local network. Email and
| Usenet were both designed to do bulk data transfer between
| servers and then let users download their individual data off of
| their local server. As I recall, email servers can spend several
| days making delivery attempts before giving up.
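|
| Stock Postfix still shows this (real parameter names; the
| values below are the shipped defaults):
|
|   postconf -d maximal_queue_lifetime queue_run_delay
|   # maximal_queue_lifetime = 5d   (retry for 5 days, then
|   #                                bounce)
|   # queue_run_delay = 300s        (rescan the deferred queue
|   #                                every 5 minutes)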
| FerretFred wrote:
| A fascinating article; I need to revisit this when I have more
| time. All I'd say for now is that there's way too much emphasis
| on GUI. Also, check out some of the websites on
| https://1mb.club - it's amazing what can be achieved in less
| than 1MB of HTML ...
| MagicMoonlight wrote:
| As I was reading this I realised that everything here had already
| been solved - by torrents.
|
| Everything is split into chunks. Downloads and uploads happen
| whenever connections can happen. If someone local has a copy,
| they can seed it to you without you needing an external
| connection. You can have a cache server that downloads important
| stuff for everyone.
| warpech wrote:
| I assume it was considered, but I don't see it mentioned: Would
| it be a terrible idea to use a cloud computer and Remote
| Desktop/VNC to it? Your slow internet only needs to stream the
| compressed pixels to your thin client.
| hi-v-rocknroll wrote:
| The presumption that "every user" has 20 ms latency, 200 Mbps
| bandwidth, and no data cap is fundamentally inconsiderate of
| other cases, such as great distances, local congestion, or
| accessibility issues.
|
| This problem will return with a vengeance once humans occupy the
| Moon and Mars.
|
| PSA: Please optimize your websites and web apps for caching and
| efficiency, and offer slow/graceful fallback versions instead
| of an 8K fullscreen video as your homepage for all users.
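|
| A quick self-audit (the URL is a placeholder; on static assets
| you want long-lived caching and compression):
|
|   curl -sI https://example.com/static/app.js \
|       | grep -iE 'cache-control|etag|content-encoding'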
| agarwa90 wrote:
| very well written post!
| SergeAx wrote:
| Telegram messenger is fantastic. It works over a GPRS (AKA
| 2.5G) connection. I love sailing, and the moment we see a
| nearby island and get a data connection, Telegram immediately
| starts working. WhatsApp tries, but actually works only over
| 3G.
| tirey wrote:
| https://github.com/tireymorris/hyperwave
|
| hyperwave is great for slow connections - in my testing, even a
| 2G throttled connection is still usable with load times in the 5s
| range.
| dosourcenotcode wrote:
| Definitely agree with the article that engineers should be more
| aware of scenarios where those interacting with the systems they
| build have slow internet.
|
| Another thing I think people should think about is scenarios with
| intermittent connectivity where there is literally no internet
| for periods ranging from minutes to days.
|
| Sadly in both these regards I believe we're utterly screwed.
|
| Even the Offline First and Local First movements, which you'd
| think would handle these issues in at least a semi-intelligent
| manner, don't actually practice what they preach.
|
| Look at Automerge, or frankly the vast majority of the other
| projects that came out of those movements. Logically you'd
| think they'd have offline documentation that allows people to
| study them in a local-first fashion. Sadly that's not the case.
| The hypocrisy is truly a marvel to behold. You'd think that if
| they can get hard stuff like CRDTs right, they'd get simple
| stuff right, like actually providing offline/local-first docs
| in a trivial-to-obtain way. Again, sadly not.
|
| The following two links are yet another example of a similar kind
| of hypocrisy:
| https://twitter.com/mitchellh/status/1781840288300097896
| https://github.com/hashicorp/vagrant/issues/1052#issuecommen...
|
| Again, at this point the jokes are frankly writing themselves.
| Like, bro, make it possible for people to follow your advice.
|
| Also, if you directly state or indirectly insinuate that your
| tool is ANY/ALL OF Local First, Open Source, or Free As In
| Freedom, you'd better have offline docs.
|
| If you don't have offline docs, your users and collaborators
| don't have Freedom 1. And if you can't exercise Freedom 1, you
| are severely hampered in your ability to exercise Freedoms 0,
| 2, or 3 for any nontrivial FOSS system.
|
| The problem has gotten so bad that I started the Freedom
| Respecting Technology movement, which I'm gonna plug here:
| https://makesourcenotcode.github.io/freedom_respecting_techn...
___________________________________________________________________
(page generated 2024-05-31 23:01 UTC)