[HN Gopher] A 14kb page can load much faster than a 15kb page (2...
       ___________________________________________________________________
        
       A 14kb page can load much faster than a 15kb page (2022)
        
       Author : truxs
       Score  : 418 points
       Date   : 2025-07-19 08:26 UTC (14 hours ago)
        
 (HTM) web link (endtimes.dev)
 (TXT) w3m dump (endtimes.dev)
        
       | palata wrote:
       | Fortunately, most websites include megabytes of bullshit, so it's
       | not remotely a concern for them :D.
        
         | Hamuko wrote:
         | I recently used an electric car charger where the charger is
         | controlled by a mobile app that's basically a thin wrapper over
          | a website. Unfortunately I only had a 0.25 Mb/s Internet plan
          | at the time, and I spent several minutes just staring at the
          | splash screen while it downloaded JavaScript and other assets.
          | Even when I got it to load, it hadn't managed to download all
          | the fonts. Truly an eye-opening experience.
        
           | fouronnes3 wrote:
           | Why can't we just pay with a payment card at electric
           | chargers? Drives me insane.
        
             | Hamuko wrote:
             | These chargers have an RFID tag too, but I'd forgotten it
             | in my jacket, so it was mobile app for me.
             | 
             | There are some chargers that take card payments though. My
             | local IKEA has some. There's also EU legislation to mandate
             | payment card support.
             | 
             | https://electrek.co/2023/07/11/europe-passes-two-big-laws-
             | to...
        
             | DuncanCoffee wrote:
              | It wasn't required by law, and the OCPP charging protocol,
              | used to manage charge sessions at a high level between the
              | charger and the service provider (not the vehicle), did not
              | include payment management. Everybody just found it easier
              | to manage payments using apps and credits. But I think
              | Europe is going to make it mandatory soon(ish).
        
       | zevv wrote:
       | And now try to load the same website over HTTPS
        
         | xrisk wrote:
          | Yeah, I think this computation doesn't work anymore once you
          | factor in the TLS handshake.
        
         | aziaziazi wrote:
         | From TFA:
         | 
         | > Also HTTPS requires two additional round trips before it can
         | do the first one -- which gets us up to 1836ms!
        
           | supermatt wrote:
           | This hasn't been the case since TLS1.3 (over 5 years ago)
           | which reduced it to 1-RTT - or 0-RTT when keys are known
           | (cached or preshared). Same with QUIC.
        
             | aziaziazi wrote:
             | Good to know, however "when the keys are know" refers to a
             | second visit (or request) of the site right ? That isn't
             | helpful for the first data paquets - at least that what I
             | understand from the site.
        
               | jeroenhd wrote:
                | Without cached data from a previous visit, 1-RTT mode
                | works even if you've never visited the site before
               | (https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/#1-rtt-
               | mode). It can fall back to 2-RTT if something funky
               | happens, but that shouldn't happen in most cases.
               | 
               | 0-RTT works after the first handshake, but enabling it
               | allows for some forms of replay attacks so that may not
               | be something you want to use for anything hosting an API
               | unless you've designed your API around it.
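                | 
                | To make the RTT arithmetic concrete, here's a minimal
                | sketch (Node's tls module; the host is just an example)
                | that times the TCP connect against the TLS 1.3
                | handshake:
                | 
                |     // time TCP connect vs. the TLS 1.3 handshake
                |     import * as net from "node:net";
                |     import * as tls from "node:tls";
                |     
                |     const host = "example.com";
                |     const t0 = Date.now();
                |     const tcp = net.connect(443, host, () => {
                |       const t1 = Date.now(); // TCP done: 1 RTT
                |       const sock = tls.connect(
                |         { socket: tcp, servername: host,
                |           minVersion: "TLSv1.3" },
                |         () => {
                |           // TLS 1.3 adds roughly 1 more RTT here
                |           console.log(`TCP ${t1 - t0}ms, ` +
                |             `TLS ${Date.now() - t1}ms, ` +
                |             sock.getProtocol());
                |           sock.end();
                |         });
                |     });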
        
         | mrweasel wrote:
         | I know some people who are experimenting with using shorter
         | certificates, i.e. shorter certificate chains, to reduce
         | traffic. If you're a large enough site, then you can save a ton
         | of traffic every day.
        
           | tech2 wrote:
           | Please though, for the love of dog, have your site serve a
           | complete chain and don't have the browser or software stack
           | do AIA chasing.
        
             | jeroenhd wrote:
             | With half of the web using Let's Encrypt certificates, I
             | think it's pretty safe to assume the intermediates are in
             | most users' caches. If you get charged out the ass for
             | network bandwidth (i.e. you use Amazon/GCP/Azure) then you
             | may be able to get away with shortened chains as long as
             | you use a common CA setup. It's a hell of a footgun and
             | will be a massive pain to debug, but it's possible as a
             | traffic shaving measure if you don't care about serving
             | clients that have just installed a new copy of their OS.
             | 
             | There are other ways you can try to optimise the
             | certificate chain, though. For instance, you can pick a CA
             | that uses ECC rather than RSA to make use of the much
             | shorter key sizes. Entrust has one, I believe. Even if the
             | root CA has an RSA key, they may still have ECC
             | intermediates you can use.
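              | 
              | For anyone wanting to measure this: a small sketch
              | (Node's tls module, placeholder host) that tallies the
              | raw bytes of each certificate the server presents, so you
              | can see what a shorter or ECC chain would save:
              | 
              |     import * as tls from "node:tls";
              |     
              |     const host = "example.com";
              |     const sock = tls.connect(
              |       { host, port: 443, servername: host }, () => {
              |         // walk the issuer chain; the root links to itself
              |         let cert = sock.getPeerCertificate(true);
              |         let total = 0;
              |         const seen = new Set<string>();
              |         while (cert && !seen.has(cert.fingerprint256)) {
              |           seen.add(cert.fingerprint256);
              |           total += cert.raw.length;
              |           console.log(cert.raw.length, "B", cert.subject.CN);
              |           cert = cert.issuerCertificate;
              |         }
              |         console.log("chain total:", total, "bytes");
              |         sock.end();
              |       });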
        
               | tech2 wrote:
               | The issue with the lack of intermediates in the cert
               | isn't browsers (they'll just deal with it). Sure, if they
               | aren't already in the cache then there's a small hit
               | first time. The problem is that if your SSL endpoint is
               | accessed by any programming language (for example, you
               | offer image URL to a B2B system to download so they can
               | perform image resizing for you, or somesuch) then there's
               | a chance the underlying platform doesn't automatically do
                | AIA chasing. Python is one such system I'm aware of, but
               | there are others that will be forced to work around this
               | for no net benefit.
        
             | mrweasel wrote:
              | That is a really good point. Google's certificate service
             | can issue a certificate signed directly by Google, but not
             | even Google themselves are using it. They use the one
             | that's cross signed by GlobalSign (I think).
             | 
             | But yes, ensure that you're serving the entire chain, but
             | keep the chain as short as possible.
        
       | moomoo11 wrote:
       | I'd care about this if I was selling in India or Africa.
       | 
       | If I'm selling to cash cows in America or Europe it's not an
       | issue at all.
       | 
        | As long as you have >10 Mbps download across 90% of users, I
        | think it's better to think about making money. Besides, if you
        | don't know that lazy loading exists in 2025, fire yourself lol.
        
         | jofzar wrote:
         | It really depends on who your clients are and where they are.
         | 
         | https://www.mcmaster.com/ was found last year to be doing some
         | real magic to make it load literally as fast as possible for
          | the crappiest computers possible.
        
           | kosolam wrote:
           | The site is very fast indeed
        
             | actionfromafar wrote:
             | I want to buy fasteners now.
        
               | kosolam wrote:
               | Fasterners, as fast as possible
        
           | A_D_E_P_T wrote:
           | Do you have any idea what they actually did? It would be
           | interesting to study. That site really is blazing fast.
        
             | gbuk2013 wrote:
              | Quick look: GSLB (via Akamai) for low latency, tricks like
              | using a CSS sprite to serve a single image in place of 20 or
              | so for fewer round trips, heavy use of caching, possibly
             | some service worker magic but I didn't dig that far. :)
             | 
             | Basically, looks like someone deliberately did many right
             | things without being lazy or cheap to create a performant
             | web site.
        
             | _nivlac_ wrote:
             | I am SO glad jofzar posted this - I remember this website
             | but couldn't recall the company name. Here's a good video
             | on how the site is so fast, from a frontend perspective:
             | 
             | https://youtu.be/-Ln-8QM8KhQ
        
             | theandrewbailey wrote:
             | I was intrigued that they request pages in the background
             | on mouse-over, then swap on click. I decided to do likewise
             | on my blog, since my pages are about a dozen kb of HTML,
             | and I aggressively cache things.
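              | 
              | The whole trick fits in a few lines. A simplified sketch
              | (not my exact code; same-origin links only, no scroll or
              | title handling):
              | 
              |     const cache = new Map<string, Promise<string>>();
              |     const link = (e: Event) =>
              |       e.target instanceof Element
              |         ? (e.target.closest("a[href^='/']") as
              |             HTMLAnchorElement | null)
              |         : null;
              |     
              |     document.addEventListener("mouseover", (e) => {
              |       const a = link(e); // fetch before the click lands
              |       if (a && !cache.has(a.href))
              |         cache.set(a.href, fetch(a.href).then((r) => r.text()));
              |     });
              |     
              |     document.addEventListener("click", async (e) => {
              |       const a = link(e);
              |       if (!a) return;
              |       const hit = cache.get(a.href);
              |       if (!hit) return;
              |       e.preventDefault(); // swap in the prefetched page
              |       document.documentElement.innerHTML = await hit;
              |       history.pushState(null, "", a.href);
              |     });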
        
         | xrisk wrote:
          | I think you'd be surprised to learn that Indian mobile speeds
          | are pretty fast, at 133 Mbps median. Ranked 26th in the world
         | (https://www.speedtest.net/global-index#mobile).
        
           | hyperbrainer wrote:
           | And in the last few years, access has grown tremendously, a
           | big part of which has been Jio's aggressive push with ultra-
           | cheap plans.
        
         | flohofwoe wrote:
          | I wouldn't be surprised if many '3rd world' countries have
          | better average internet speeds than some developed countries,
          | by leapfrogging the older 'good enough' tech that's still
          | dominant there. E.g. I was on a 16 MBit connection in Germany
          | for a long time simply because it was mostly good enough for
          | my internet consumption. One day my internet provider
          | 'forcefully' upgraded me to 50 MBit because they didn't
          | support 16 MBit anymore ;)
        
           | mrweasel wrote:
            | For the longest time I tried arguing with my ISP that I only
            | needed around 20Mbit. They did have a 50Mbit plan at the
            | time, but the price difference between 50, 100 and 250 meant
            | that you basically got ripped off for anything but the
            | 100Mbit. It's the same now: I can get 300Mbit, but the price
            | difference between 300 and 500 is too small to be viewed as
            | an actual saving; similarly, you can get 1000Mbit, but I
            | don't need it and the price difference is too high.
        
         | mrweasel wrote:
         | Hope you're not selling to the rural US then.
        
         | masklinn wrote:
         | There's plenty of opportunities to have slow internet (and
         | especially long roundtrips) in developed countries e.g.
         | 
         | - rural location
         | 
         | - roommate or sibling torrent-ing the shared connection into
         | the ground
         | 
         | - driving around on a road with spotty coverage
         | 
         | - places with poor cellular coverage (some building styles are
         | absolutely hell on cellular as well)
        
       | paales2 wrote:
        | Or maybe we shouldn't. A good experience doesn't have to load
        | under 50ms; it is fine for it to take a second. 5G is common and
        | people with slower connections accept longer waiting times.
       | Optimizing is good but fixating isn't.
        
       | 9dev wrote:
       | The overlap of people that don't know what TCP Slow Start is and
       | those that should care about their website loading a few
       | milliseconds faster is incredibly small. A startup should focus
       | on, well, starting up, not performance; a corporation large
       | enough to optimise speed on that level will have a team of
       | experienced SREs that know over which detail to obsess.
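        | 
        | For reference, the check itself is tiny. A sketch assuming the
        | numbers from TFA (initcwnd of 10 segments, ~1460-byte MSS):
        | 
        |     import { readFileSync } from "node:fs";
        |     import { gzipSync } from "node:zlib";
        |     
        |     const page = readFileSync(process.argv[2] ?? "index.html");
        |     const gz = gzipSync(page).length;
        |     const budget = 10 * 1460; // ~14.6kB in the first round trip
        |     console.log(`${gz} B gzipped;`, gz <= budget
        |       ? "fits the initial congestion window"
        |       : "needs an extra round trip");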
        
         | andrepd wrote:
         | > a corporation large enough will have a team of experienced
         | SREs that know over which detail to obsess.
         | 
         | Ahh, if only. Have you seen applications developed by large
         | corporations lately? :)
        
           | achenet wrote:
           | a corporation large enough to have a team of experienced SREs
           | that know which details to obsess over will also have enough
            | promotion-hungry POs and middle managers to tell the devs to
           | add 50MB of ads and trackers in the web page. Maybe another
           | 100MB for an LLM wrapper too.
           | 
           | :)
        
             | hinkley wrote:
             | Don't forget adding 25 individual Google Tag Managers to
             | every page.
        
         | elmigranto wrote:
         | Right. That's why all the software from, say, Microsoft works
         | flawlessly and at peak efficiency.
        
           | 9dev wrote:
           | That's not what I said. Only that the responsible engineers
           | know which tradeoffs they make, and are competent enough to
           | do so.
        
             | samrus wrote:
              | The decision to use React for the start menu wasn't out of
              | competency. The guy said on Twitter that that's what he
              | knew, so he used it [1]. Didn't think twice. Head empty, no
              | thoughts.
             | 
             | 1 https://x.com/philtrem22/status/1927161666732523596
        
               | ldjb wrote:
               | Please do share any evidence to the contrary, but it
               | seems that the Tweet is not serious and is not from
               | someone who worked on the Start Menu.
        
               | bool3max wrote:
               | No way people on HN are falling for bait Tweets. We're
               | cooked
        
               | mort96 wrote:
               | I found this:
               | https://www.youtube.com/watch?v=kMJNEFHj8b8&t=287s
               | 
               | I googled the names of the people holding the talk and
               | they're both employed by Microsoft as software engineers,
               | I don't see any reason to doubt what they're presenting.
               | Not the whole start menu is React Native, but parts are.
        
               | hinkley wrote:
               | Why is that somehow worse?
        
               | fsh wrote:
               | It is indeed an impressive feat of engineering to make
               | the start menu take several seconds to launch in the age
               | of 5 GHz many-core CPUs, unlimited RAM, and multi-GByte/s
               | SSDs. As an added bonus, I now have to re-boot every
               | couple of days or the search function stops working
               | completely.
        
               | the_real_cher wrote:
                | Fair warning, X has more trolls than 4chan.
        
               | Henchman21 wrote:
               | Please, it has more trolls than _Middle Earth_
        
               | 9dev wrote:
                | That tweet is fake. As repeatedly stated by Microsoft
                | engineers, the start menu is written in C#, of course;
                | the only part using React Native is a promotion widget
                | _within_ the start menu. While even that is a strange
                | move, all the rest is just FUD spread via social media.
        
               | hombre_fatal wrote:
               | "Hi it's the guy who did <thing everyone hates>" is a
               | Twitter meme.
        
               | hinkley wrote:
               | Orange Cat Programmer.
        
           | SXX wrote:
           | This. It's exactly why Microsoft use modern frameworks such
           | as React Native for their Start Menu used by billions of
           | people every day.
        
             | Nab443 wrote:
             | And probably the reason why I have to restart it at least
             | twice a week.
        
             | chamomeal wrote:
             | Wait... please please tell me this is a weirdly specific
             | joke
        
               | kevindamm wrote:
               | Only certain live portions of it, and calling it React is
               | a stretch but not entirely wrong:
               | 
               | https://news.ycombinator.com/item?id=44124688#:~:text=Jus
               | t%2...
               | 
                | The notion was popularized as an explanation for a CPU
                | core spiking whenever the start menu opens on Win11.
        
             | hinkley wrote:
             | And this is why SteamOS is absolutely kicking Windows' ass
             | on handhelds.
        
         | nasso_dev wrote:
         | I agree, it feels like it _should_ be how you describe it.
         | 
         | But if Evan Wallace didn't obsess over performance when
         | building Figma, it wouldn't be what it is today. Sometimes,
         | performance _is_ a feature.
        
         | austin-cheney wrote:
          | I don't see what the size of a corporation has to do with
          | performance or optimization. Almost never do I see larger
          | businesses doing anything to execute more quickly online.
        
           | zelphirkalt wrote:
            | Too many cooks spoil the broth. If you've got multiple people
            | pushing an agenda to use their favorite new JS framework,
            | disregarding simplicity in order to chase some imaginary goal
            | or hip thing to bolster their CV, it's not gonna end well.
        
         | anymouse123456 wrote:
         | This idea that performance is irrelevant gets under my skin.
         | It's how we ended up with Docker and Kubernetes and the
         | absolute slop stack that is destroying everything it touches.
         | 
         | Performance matters.
         | 
         | We've spent so many decades misinterpreting Knuth's quote about
         | optimization that we've managed to chew up 5-6 orders of
         | magnitude in hardware performance gains and still deliver slow,
         | bloated and defective software products.
         | 
         | Performance does in fact matter and all other things equal, a
         | fast product is more pleasurable than a slow one.
         | 
         | Thankfully some people like the folks at Figma took the risk
         | and proved the point.
         | 
         | Even if we're innovating on hard technical problems (which most
         | of us are not), performance still matters.
        
           | zelphirkalt wrote:
           | Performance matters, but at least initially only as far as it
           | doesn't complicate your code significantly. That's why a
           | simple static website often beats some hyper modern latest
           | framework optimization journey websites. You gotta maintain
           | that shit. And you are making sacrifices elsewhere, in the
           | areas of accessibility and possibly privacy and possibly
           | ethics.
           | 
           | So yeah, make sure not to lose performance unreasonably, but
           | also don't obsess with performance to the point of making
           | things unusable or way too complicated for what they do.
        
             | sgarland wrote:
             | > way too complicated for what they do
             | 
             | Notably, this is subjective. I've had devs tell me that
             | joins (in SQL) are too complicated, so they'd prefer to
             | just duplicate data everywhere. I get that skill is a
             | spectrum, but it's getting to the point where I feel like
             | we've passed the floor, and need to firmly state that there
             | are in fact some basic ideas that are required knowledge.
        
             | anymouse123456 wrote:
             | This kind of thinking is exactly the problem.
             | 
             | Yes, at the most absurd limits, some autists may
             | occasionally obsess and make things worse. We're so far
             | from that problem today, it would be a good one to have.
             | 
             | IME, making things fast almost always also makes them
             | simpler and easier to understand.
             | 
             | Building high-performance software often means building
             | less of it, which translates into simpler concepts, fewer
             | abstractions, and shorter times to execution.
             | 
             | It's not a trade-off, it's valuable all the way down.
             | 
             | Treating high performance as a feature and low performance
             | as a bug impacts everything we do and ignoring them for
             | decades is how you get the rivers of garbage we're swimming
             | in.
        
               | raekk wrote:
               | > It's not a trade-off, it's valuable all the way down.
               | 
               | This.
        
           | mr_toad wrote:
           | Containers were invented because VMs were too slow to cold
           | start and used too much memory. Their whole raison d'etre is
           | performance.
        
             | anonymars wrote:
             | Yeah, I think Electron would be the poster child
        
             | bobmcnamara wrote:
             | Can you live fork containers like you can VMs?
             | 
              | VM clone time is surprisingly quick once you stop copying
              | memory; after that it's mostly ejecting the NIC and
              | bringing up the new one.
        
               | mort96 wrote:
               | I can't say I've ever cared about live forking a
               | container (or VM, for that matter)
        
               | hinkley wrote:
               | Your cloud provider may be doing it for you. Ops informed
               | me one day that AWS was pushing out a critical security
               | update to their host OS. So of course I asked if that
               | meant I needed to redeploy our cluster, and they
               | responded no, and in fact they had already pushed it.
               | 
                | Our cluster keeps stats on when processes start, so we
                | can alert on crashes and because new processes (cold
                | JIT) can skew the response numbers and mark inflection
                | points when analyzing performance improvements or
                | regressions. There were no restarts that morning. So they
               | pulled the tablecloth out from under us. TIL.
        
               | mort96 wrote:
               | None of this is making live forking a container desirable
               | to me, I'm not a cloud hosting company (and if I was, I'd
               | be happy to provide a VPS as a VM rather than a
               | container)
        
               | hinkley wrote:
               | There's using a feature, having a vendor use it for you,
               | or denying its worth.
               | 
               | Anything else is dissonant.
        
               | mort96 wrote:
               | For the VM case, I'm sure I might have benefited from it,
               | if Digital Ocean have been able to patch something live
               | without restarting my VPS. Great. Nothing I need to care
               | about, so I have never cared about live forking a VM. It
               | hasn't come up in my use of VMs.
               | 
               | It's not a feature I miss in containers, is what I'm
               | saying.
        
               | marcosdumay wrote:
               | You mean creating a different container that is exactly
               | equal to the previous one?
               | 
               | It's absolutely possible, but I'm not sure there's any
               | tool out there with that command... because why would
               | you? You'll get about the same result as forking a
               | process inside the container.
        
               | 9dev wrote:
               | Why would you, if you can simply start replacement
               | containers in another location and reroute traffic there,
               | then dispose of the old ones?
        
             | anymouse123456 wrote:
             | That's another reason they're so infuriating. Containers
             | are intended to make things faster and easier. But the
             | allure of virtualization has made most work much, much
             | slower and much, much worse.
             | 
             | If you're running infra at Google, of course containers and
             | orchestration make sense.
             | 
             | If you're running apps/IT for an SMB or even small
             | enterprise, they are 100% waste, churn and destruction.
             | I've built for both btw.
             | 
             | The contexts in which they are appropriate and actually
             | improve anything at all are vanishingly small.
        
               | hinkley wrote:
               | Part of why I adopted containers fairly early was
               | inspired by the time we decided to make VMs for QA with
               | our software on it. They kept fucking up installs and
               | reporting ghost bugs that were caused by a bad install or
               | running an older version and claiming the bugs we fixed
               | weren't fixed.
               | 
               | Building disk images was a giant pain in the ass but less
               | disruptive to flow than having QA cry wolf a couple times
               | a week.
               | 
               | I could do the same with containers, and easier.
        
               | 9dev wrote:
                | I have wasted enough time caressing Linux servers to
                | accommodate different PHP versions that I know what
               | good containers can do. An application gets tested,
               | built, and bundled with all its system dependencies, in
               | the CI; then pushed to the registry, deployed to the
               | server. All automatic. Zero downtime. No manual software
               | installation on the server. No server update downtimes.
               | No subtle environment mismatches. No forgotten
               | dependencies.
               | 
               | I fail to see the churn and destruction. Done well, you
               | decouple the node from the application, even, and end up
               | with raw compute that you can run multiple apps on.
        
           | 01HNNWZ0MV43FF wrote:
           | Docker good actually
        
             | anymouse123456 wrote:
              | nah - we'll look back on Docker the same way many of us
              | are glaring at our own sins with OO these days.
        
               | hinkley wrote:
               | Docker is just making all the same promises we were made
               | in 1991 that never came to fruition. Preemptive
                | multitasking OSes with virtual memory were supposed to
               | solve all of our noisy neighbor problems.
        
           | sgarland wrote:
           | Agreed, though containers and K8s aren't themselves to blame
           | (though they make it easier to get worse results).
           | 
           | Debian Slim is < 30 MB. Alpine, if you can live with musl, is
           | 5 MB. The problem comes from people not understanding what
           | containers are, and how they're built; they then unknowingly
           | (or uncaringly) add in dozens of layers without any attempt
           | at reducing or flattening.
           | 
           | Similarly, K8s is of course just a container orchestration
           | platform, but since it's so easy to add to, people do so
           | without knowing what they're doing, and you wind up with 20
           | network hops to get out of the cluster.
        
           | hinkley wrote:
           | If you're implying that Docker is the slop, instead of an
           | answer to the slop, I haven't seen it.
        
         | jeroenhd wrote:
         | When your approach is "I don't care because I have more
         | important things to focus on", you never care. There's always
         | something you can do that's more important to a company than
         | optimising the page load to align with the TCP window size used
         | to access your server.
         | 
         | This is why almost all applications and websites are slow and
         | terrible these days.
        
           | keysdev wrote:
           | That and SPA
        
             | andix wrote:
             | SPAs are great for highly interactive pages. Something like
             | a mail client. It's fine if it takes 2-3 seconds extra when
             | opening the SPA, it's much more important to have instant
             | feedback when navigating.
             | 
             | SPAs are really bad for mostly static websites. News sites,
             | documentation, blogs.
        
           | sgarland wrote:
           | This. A million times this.
           | 
           | Performance isn't seen as sexy, for reasons I don't
           | understand. Devs will be agog about how McMaster-Carr manages
           | to make a usable and incredibly fast site, but they don't put
           | that same energy back into their own work.
           | 
           | People like responsive applications - you can't tell me
           | you've never seen a non-tech person frustratingly tapping
           | their screen repeatedly because something is slow.
        
           | marcosdumay wrote:
           | Well, half of a second is a small difference. So yeah, there
           | will probably be better things to work on up to the point
           | when you have people working exclusively on your site.
           | 
           | > This is why almost all applications and websites are slow
           | and terrible these days.
           | 
              | But no, there are far more things broken on the web than a
              | lack of this kind of optimization.
        
             | hinkley wrote:
             | > half a second is a small difference
             | 
             | I don't even know where to begin. Most of us are aiming for
             | under a half second total for response times. Are you
             | working on web applications at all?
        
               | marcosdumay wrote:
               | > Most of us are aiming for under a half second total for
               | response times.
               | 
                | I know people working on that exist. "Most of us" are
                | absolutely not; if there were that many, the web
                | wouldn't be like it is now.
               | 
               | Anyway, most people working towards instantaneous
               | response aren't optimizing the very-high latency case
               | where the article may eventually get a 0.5s slowdown. And
               | almost nobody gets to the extremely low-gain kinds of
               | optimizations there.
        
             | Mawr wrote:
             | "More than 10 years ago, Amazon found that every 100ms of
             | latency cost them 1% in sales. In 2006, Google found an
             | extra .5 seconds in search page generation time dropped
             | traffic by 20%."
        
           | whoisyc wrote:
           | > This is why almost all applications and websites are slow
           | and terrible these days.
           | 
           | The actual reason is almost always some business bullshit.
           | Advertising trackers, analytics etc. No amount of trying to
           | shave kilobytes off a response can save you if your boss
           | demands you integrate code from a hundred "data partners" and
           | auto play a marketing video.
           | 
           | Blaming bad web performance on programmers not going for the
           | last 1% of optimization is like blaming climate change on
           | Starbucks not using paper straws. More about virtue signaling
           | than addressing the actual problem.
        
         | exiguus wrote:
          | I think this is just an art project.
        
         | andersmurphy wrote:
          | It doesn't have to be a choice; it could just be the default.
          | My billion cells/checkboxes[1] demos both use datastar and so
          | are just over 10kb. It can make a big difference on mobile
          | networks and 3G. I did my own tests, and being over 14kb often
          | meant an extra 3s load time on bad connections. The nice thing
          | is I got this for free, because the datastar maintainer cares
          | about TCP slow start even though I might not.
         | 
         | - [1] https://checkboxes.andersmurphy.com
        
         | CyberDildonics wrote:
         | If you make something that, well, wastes my time because you
         | feel it is, well, not important, then, well, I don't want to
         | use it.
        
         | sgarland wrote:
         | Depending on the physical distance, it can be much more than a
         | few msec, as TFA discusses.
        
       | simgt wrote:
        | Aside from latency, reducing resource consumption to the
        | minimum required should always be a concern if we intend to have
        | a sustainable future. The environmental impact of our network is
        | not negligible. Given the snarky comments here, we clearly have a
        | long way to go.
        | 
        | EDIT: some replies missed my point. I am not claiming this
        | particular optimization is the holy grail, only that I'd have
        | liked the added benefit of reducing energy consumption to be
        | mentioned.
        
         | qayxc wrote:
         | It's not low-hanging fruit, though. While you try to optimise
         | to save a couple of mWh in power use, a single search engine
         | query uses 100x more and an LLM chat is another 100x of that.
         | In other words: there's bigger fish to fry. Plus caching, lazy
         | loading etc. mitigates most of this anyway.
        
           | vouaobrasil wrote:
           | Engineering-wise, it sometimes isn't. But it does send a
           | signal that can also become a trend in society to be more
           | respectful of our energy usage. Sometimes, it does make sense
           | to focus on the most visible aspect of energy usage, rather
           | than the most intensive. Just by making your website smaller
           | and being vocal about it, you could reach 100,000 people if
           | you get a lot of visitors, whereas Google isn't going to give
           | a darn about even trying to send a signal.
        
             | qayxc wrote:
             | I'd be 100% on board with you if you were able to show me a
             | single - just a single - regular website user who'd care
             | about energy usage of a first(!) site load.
             | 
              | I'm honestly just really annoyed about this "society and
              | environment" spin on advice that would otherwise have a
              | niche but perfectly valid reason behind it (TFA: slow
              | satellite network on the high seas).
             | 
             | This might sound harsh and I don't mean it personally, but
             | making your website smaller and "being vocal about it"
             | (whatever you mean by that) doesn't make an iota of
             | difference. It also only works if your site is basically
             | just text. If your website uses other resources (images,
             | videos, 3D models, audio, etc.), the impact of first load
             | is just noise anyway.
             | 
             | You can have a bigger impact by telling 100,000 people to
             | drive an hour less each month and if just 1% of your
             | hypothetical audience actually does that, you'd achieve
             | orders of magnitude more in terms of environmental and
             | societal impact.
        
               | vouaobrasil wrote:
               | Perhaps you are right. But I do remember one guy who had
               | a YouTube channel and he uploaded fairly low-quality
               | videos at a reduced framerate to achieve a high level of
               | compression, and he explicitly put in his video that he
               | did it to save energy.
               | 
                | Now, it is true that it didn't save much, because probably
                | many people were uploading 8K videos at the time, so a drop
               | in the bucket. But personally, I found it quite inspiring
               | and his decision was instrumental in my deciding to never
               | upload 4K. And in general, I will say that people like
               | that do inspire me and keep me going to be as minimal as
               | possible when I use energy in all domains.
               | 
               | For me at least, trying to optimize for using as little
               | energy as possible isn't an engineering problem. It's a
               | challenge to do it uniformly as much as possible, so it
               | can't be subdivided. And I do think every little bit
               | counts, and if I can spend time making my website
               | smaller, I'll do that in case one person gets inspired by
               | that. It's not like I'm a machine and my only goal is
               | time efficiency....
        
               | Mawr wrote:
               | Youtube's compression already butchers the quality of
               | anything 1080p and below. Uploading in 1440p or 4K is the
               | only way to get youtube to preserve at least some of the
               | bitrate. There's a 1080p extra bitrate option available
               | on some videos, but it's locked behind premium, so I'm
               | not sure how well it works.
               | 
               | Depending on the type of video this may not matter, but
               | it often does. For example, my FPS gaming and dashcam
               | footage gets utterly destroyed if uploaded to youtube at
               | 1080p. Youtube's 4K seems roughly equivalent to my high
               | bitrate 1080p recordings.
        
               | Mawr wrote:
               | Correct. It's even worse than that, they'll say they
               | optimized the energy usage of their website by making it
               | 1kb smaller and then fly overseas for holiday. How many
               | billions of page loads would it take to approximate the
               | environmental impact of a single intercontinental flight?
        
             | marcosdumay wrote:
             | So, literally virtue signaling?
             | 
             | And no, a million small sites won't "become a trend in
             | society".
        
               | vouaobrasil wrote:
               | You really don't know if it could become a trend or not.
               | Certainly trends happen in the opposite direction, such
               | as everyone using AI. I think every little difference you
               | can make is a step in the right direction, and is not
               | virtue signalling if you really apply yourself across all
               | domains of life. But perhaps it is futile, given that
               | there are so many defeatist individuals such as yourself
               | crowding the world.
        
             | whoisyc wrote:
             | Realistically "my website fits in 14kb" is a terrible
             | signal because it is invisible to 99.99% of the population.
             | How many HNers inspect the network usage when loading a
             | random stranger's website?
             | 
             | Plus, trying to signal your way to societal change can have
             | unintended downsides. It makes you feel you are doing
             | something when you are actually not making any real impact.
             | It attracts the kind of people who care more about
             | signaling the right signals than doing the right thing into
             | your camp.
        
           | timeon wrote:
            | Sure there are more resource-heavy places, but I think the
            | problem is the general approach. Neglect of performance and
            | our overall approach to resources brought us to these
            | resource-heavy tools. It seems dismissive when people point
            | to places where more cuts could be made and call it a day.
            | 
            | If we want to really fix the places with bigger impact we
            | need to change this approach in the first place.
        
             | qayxc wrote:
              | Sure thing, but it's not low-hanging fruit. The impact is
              | so minuscule that the effort required is too high when
              | compared to the benefit.
             | 
             | This is micro-optimisation for a valid use case (slow
             | connections in bandwidth-starved situations), but in the
             | real world, a single hi-res image, short video clip, or
             | audio sample would negate all your text-squeezing, HTTP
             | header optimisation games, and struggle for minimalism.
             | 
              | So for the vast majority of use cases it's simply
              | irrelevant. And no, your website is likely not going to get
              | 1,000,000 unique visitors per hour, so you'd have a hard
              | time even measuring the impact, whereas simply NOT ordering
              | pizza and having a home-made salad instead would have a
              | measurable impact orders of magnitude greater.
             | 
             | Estimating the overall impact of your actions and non-
             | actions is hard, but it's easier and more practical to
             | optimise your assets, remove bloat (no megabytes of JS
             | frameworks), and think about whether you really need that
             | annoying full-screen video background. THOSE are low-
             | hanging fruit with lots of impact. Trying to trim down a
             | functional site to <14kB is NOT.
        
           | quaintdev wrote:
            | LLM companies should show how much energy was consumed
            | processing a user's request. Maybe people would think twice
            | before generating AI slop.
        
           | simgt wrote:
           | Of course, but my point is that it's still a constraint we
            | should have in mind at every level. DuPont poisoning public
            | water with PFAS does not make you less of an arsehole if you
           | toss your old iPhone in a pond for the sake of convenience.
        
           | victorbjorklund wrote:
            | On the other hand - it's kind of like saying we don't need
            | to drive environmentally friendly cars because they are a
            | drop in the bucket compared to container ships etc.
        
         | vouaobrasil wrote:
          | Absolutely agree with that. I visited the BBC website the
          | other day and it loaded about 120MB of stuff into the cache
         | - for a small text article. Not only does it use a lot of extra
         | energy to transmit so much data, but it promotes a general
         | atmosphere of wastefulness.
         | 
         | I've tried to really cut down my website as well to make it
         | fairly minimal. And when I upload stuff to YouTube, I never use
         | 4K, only 1080P. I think 4K and 8K video should not even exist.
         | 
         | A lot of people talk about adding XYZ megawatts of solar to the
         | grid. But imagine how nice it could be if we regularly had
         | efforts to use LESS power.
         | 
         | I miss the days when websites were very small in the days of
         | 56K modems. I think there is some happy medium somewhere and
         | we've gone way past it.
        
           | raekk wrote:
           | Let's take it further: That atmosphere of wastefulness not
           | only concerns bandwidth and energy use but also architectural
           | decisions. There's technology that punches far above its
           | weight class in terms of efficiency and there's the opposite.
           | It seems like a collective form of learned helplessness, on
           | both sides, the vendors and users. IMHO, the only real reason
           | for slow, JavaScript-heavy sites is surveillance and
           | detailed, distributed profiling of users. The other would be
           | animated UI giving dopamine hits, but that could totally be
           | confined to entertainment and shouldn't be a cue for
           | "quality" software.
        
         | FlyingAvatar wrote:
         | The vast majority of internet bandwidth is people streaming
         | video. Shaving a few megs from a webpage load would be the
         | tiniest drop in the bucket.
         | 
         | I am all for efficiency, but optimizing everywhere is a recipe
         | for using up the resources to actually optimize where it
         | matters.
        
           | vouaobrasil wrote:
            | The problem is that a lot of people DO have their own
            | websites over which they have some control. So it's not
           | like a million people optimizing their own websites will have
           | any control over what Google does with YouTube for
           | instance...
        
             | jychang wrote:
             | A million people is a very strong political force.
             | 
             | A million determined voters can easily force laws to be
             | made which forces youtube to be more efficient.
             | 
              | I often think about how orthodox all humans are. We
              | never think about different paths outside of social norms.
             | 
             | - Modern western society has weakened support for mass
             | action to the point where it is literally an unfathomable
             | "black swan" perspective in public discourse.
             | 
             | - Spending a few million dollars on TV ads to get someone
             | elected is a lot cheaper than whatever Bill Gates spends on
             | NGOs, and for all the money he spent it seems like aid is
             | getting cut off.
             | 
              | - Hiring or acting as a hitman to kill someone to achieve
              | your goal is a lot cheaper than the other options above. It
              | seems like this concept, for better or worse, is not quite
              | in the public consciousness currently. The 1960s-1970s era
              | of assassinations has truly come and gone.
        
               | vouaobrasil wrote:
               | I sort of agree...but not really, because you'll never
               | get a situation where a million people can vote on a
               | specific law about making YT more efficient. One needs to
               | muster some sort of general political will to even get
               | that to be an issue, and that takes a lot more than a
               | million people.
               | 
               | Personally, if a referendum were held tomorrow to disband
               | Google, I would vote yes for that...but good luck getting
               | that referendum to be held.
        
           | atoav wrote:
            | Yes, but drops in the bucket count. If I take anything away
            | from your statement, it is that people should be selective
            | about where to use video for communication and where not.
        
           | OtherShrezzing wrote:
           | > but optimizing everywhere is a recipe for using up the
           | resources to actually optimize where it matters.
           | 
           | Is it? My front end engineer spending 90 minutes cutting
           | dependencies out of the site isn't going to deny YouTube the
           | opportunity to improve their streaming algorithms.
        
             | josephg wrote:
             | It might do the opposite. We need to teach engineers of all
             | stripes how to analyse and fix performance problems if
             | we're going to do anything about them.
        
             | molszanski wrote:
              | If you turn this into an open problem, without hypothetical
              | limits on what a frontend engineer can do, it would become
              | more interesting and more impactful in real life. That
              | said, the engineer is a human being who could use that time
              | in myriad other ways that would be more productive in
              | helping the environment.
        
             | simgt wrote:
             | That's exactly it, but I fully expected whataboutism under
             | my comment. If I had mentioned video streaming as a
             | disclaimer, I'd probably have gotten crypto or Shein as
             | counter "arguments".
             | 
             | Everyone needs to be aware that we are part of an
             | environment that has limited resources beyond "money" and
             | act accordingly, whatever the scale.
        
           | schiffern wrote:
           | In that spirit I have a userscript, ironically called Youtube
           | HD[0], that with one edit sets the resolution to 'medium' ie
           | 360p. On a laptop it's plenty for talking head content (the
           | softening is nice actually), and I only find myself switching
           | to 480p if there's small text on screen.
           | 
           | It's a small thing, but as you say internet video is
           | relatively heavy.
           | 
           | To reduce my AI footprint I use the udm=14 trick[1] to kill
           | AI in Google search. It generally gives better results too.
           | 
           | For general web browsing the best single tip is running
           | uBlock Origin. If you can master medium[2] or hard mode
           | (which _will_ require un-breaking /whitelisting sites) it
           | saves more bandwidth and has better privacy.[3]
           | 
           | To go all-out on bandwidth conservation, LocalCDN[4] and
           | CleanURLs[5] are good. "Set it and forget it," improves
           | privacy and load times, and saves a bit of energy.
           | 
           | Sorry this got long. Cheers
           | 
            | [0] https://greasyfork.org/en/scripts/23661-youtube-hd
           | 
           | [1] https://arstechnica.com/gadgets/2024/05/google-searchs-
           | udm14...
           | 
           | [2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ub
           | loc...
           | 
           | [3] https://github.com/gorhill/ublock/wiki/Blocking-mode
           | 
           | [4] https://www.localcdn.org/
           | 
           | [5] https://github.com/ClearURLs/Addon
        
             | andrepd wrote:
             | I've been using uBlock in advanced mode with 3rd party
             | frames and scripts blocked. I recommend it, but it is
             | indeed a pain to find the minimum set of things you need to
             | unblock to make a website work, involving lots of
             | refreshing.
             | 
             | Once you find it for a website you can just save it though
             | so you don't need to go through it again.
             | 
             | LocalCDN is indeed a nobrainer for privacy! Set and forget.
        
           | oriolid wrote:
           | > The vast majority of internet bandwidth is people streaming
           | video. Shaving a few megs from a webpage load would be the
           | tiniest drop in the bucket.
           | 
           | Is it really? I was surprised to see that surfing newspaper
           | websites or Facebook produces more traffic per time than
           | Netflix or Youtube. Of course there's a lot of embedded video
           | in ads and it could maybe count as streaming video.
        
             | danielbln wrote:
              | Care to share that article? I find that hard to believe.
        
               | oriolid wrote:
                | No article, sorry; it's just what the bandwidth display
                | on my home router shows. I could post some screenshots,
                | but I don't care to answer everyone who tries to debunk
                | them. The mobile version of Facebook is by the way much
                | better optimized than the full webpage. I guess desktop
                | browser users are a small minority.
        
               | Capricorn2481 wrote:
               | Well Facebook has video on it. Highly unlikely that a
               | static site is going to even approach watching a video.
        
               | hxorr wrote:
               | It may surprise you how heavy Facebook is these days
        
           | pyman wrote:
           | Talking about video streaming, I have a question for big tech
           | companies: Why? Why are we still talking about optimising
           | HTML, CSS and JS in 2025? This is tech from 35 years ago. Why
           | can't browsers adopt a system like video streaming, where you
           | "stream" a binary of your site? The server could publish a
           | link to the uncompressed source so anyone can inspect it,
           | keeping the spirit of the open web alive. Do you realise how
           | many years web developers have spent obsessing over this
           | document-based legacy system and how to improve its
           | performance? Not just years, their whole careers! How many
           | cool technologies were created in the last 35 years? I lost
           | count. Honestly, why are big tech companies still building on
           | top of a legacy system, forcing web developers to waste their
           | time on things like performance tweaks instead of focusing on
           | what actually matters: the product.
        
             | hnlmorg wrote:
             | That's already how it works.
             | 
             | The binary is a compressed artefact and the stream is a TLS
             | pipe. But the principle is the same.
             | 
             | In fact videos streams over the web are actually based on
             | how HTTP documents are chunked and retrieved, rather than
             | the other way around.
        
               | pyman wrote:
               | I see, I didn't know this
        
             | ahofmann wrote:
              | 1. How does that help to not waste resources? It needs
              | more energy and traffic.
              | 
              | 2. Everything in our world is dwarfs standing on the
              | shoulders of giants. Ripping everything up to create
              | something completely new is most of the time an idea that
              | sounds better than it really would be. Anyone who thinks
              | otherwise is mostly too young to see this pattern.
        
             | ozim wrote:
              | I see you mistake HTML/CSS for what they were 30 years
              | ago: "documents to be viewed".
              | 
              | HTML/CSS/JS is the only fully open stack - free as in
              | beer, not owned by a single entity, and standardized by
              | multinational standardization bodies - for building
              | application interfaces that is cross platform, and it does
              | that excellently. Especially with Electron you can build
              | native apps with HTML/CSS/JS.
              | 
              | There are actual web apps, not "websites", being built.
              | Web apps are not HTML with jQuery sprinkled around; there
              | are actually heavy apps.
        
               | 01HNNWZ0MV43FF wrote:
               | Practically it is owned by Google, or maybe Google +
               | Apple
        
               | pyman wrote:
               | I'm talking about big ideas. Bigger than WebAssembly. My
               | message was about the future of the www, the next-gen
               | web, not the past.
        
               | ozim wrote:
                | OK, now I think you don't understand all the
                | implications of the status quo.
                | 
                | Everyone writing about a "future view" or "next gen"
                | would have to prove to me that they really understand
                | the current state of things.
        
             | 01HNNWZ0MV43FF wrote:
             | > Why can't browsers adopt a system like video streaming,
             | where you "stream" a binary of your site?
             | 
             | I'll have to speculate what you mean
             | 
              | 1. If you mean drawing pixels directly instead of relying
              | on HTML, it's going to be slower (either because of
              | network lag or because of WASM overhead).
             | 
             | 2. If you mean streaming video to the browser and rendering
             | your site server-side, it will break features like resizing
             | the window or turning a phone sideways, and it will be
             | hideously expensive to host.
             | 
             | 3. It will break all accessibility features like Android's
             | built-in screen reader, because you aren't going to
             | maintain all the screen reader and braille stuff that
             | everyone might need server-side, and if you do, you're
             | going to break the workflow for someone who relies on a
             | custom tweak to it.
             | 
              | 4. If you are drawing pixels from scratch you also have to
              | re-implement stuff like selecting and copying text, which
              | is possible but not practical.
             | 
             | 5. A really good GUI toolkit like Qt or Chromium will take
             | 50-100 MB. Say you can trim your site's server-side toolkit
             | down to 10 MB somehow. If you are very very lucky, you can
             | share some of that in the browser's cache with other sites,
             | _if_ you are using the same exact version of the toolkit,
             | on the same CDN. Now you are locked into using a CDN. Now
             | your website costs 10 MB for everyone loading it with a
             | fresh cache.
             | 
             | You can definitely do this if your site _needs_ it. Like,
             | you can't build OpenStreetMap without JS, you can't build
             | chat apps without `fetch`, and there are certain things
             | where drawing every pixel yourself and running a custom
             | client-side GUI toolkit might make sense. But it's like 1%
             | of sites.
             | 
              | I hate HTML, but it's a local minimum. For animals, weight
              | is a type of strength; for software, popularity is a type
              | of strength. It is really hard to beat something that's
              | installed everywhere.
        
               | pyman wrote:
               | Thanks for explaining this in such detail
        
             | Naru41 wrote:
              | The ideal HTML I have in mind is a DOM tree represented
              | entirely in TLV binary, with a compiled .so file instead
              | of .js, and the unpacked data usable directly as C data
              | structures. Zero copy, no parsing; data validation is
              | unavoidable, but it would certainly be fast.
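              | 
              | A minimal sketch of the zero-parse idea in JS (the
              | one-byte tag and four-byte length layout is my own
              | assumption, not an established format):
              | 
              |   // Layout: [tag: u8][length: u32 LE][value: bytes]
              |   function* readTlv(buf) {
              |     const view = new DataView(buf);
              |     let off = 0;
              |     while (off < buf.byteLength) {
              |       const tag = view.getUint8(off);
              |       const len = view.getUint32(off + 1, true);
              |       yield { tag, value: new Uint8Array(buf, off + 5, len) };
              |       off += 5 + len;
              |     }
              |   }
              | 
              | No tokenizer, and no tree allocations beyond the node
              | views themselves; that is where the speed would come from.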
        
           | hnlmorg wrote:
           | It matters at web scale though.
           | 
            | Like how industrial manufacturers are the biggest carbon
            | emitters, and compared to them I'm just a drop in the ocean.
            | But that doesn't mean I don't also have a responsibility to
            | recycle, because the cumulative effect of everyone like me
            | recycling quickly becomes massive.
            | 
            | Similarly, if every web host did their bit with static
            | content, you'd still see a big reduction at a global scale.
            | 
            | And you're right, it shouldn't be the end of the story.
            | However, that doesn't mean it's a wasted effort or an
            | irrelevant optimisation.
        
           | jbreckmckye wrote:
           | I feel this way sometimes about recycling. I am very diligent
           | about it, washing out my cans and jars, separating my
           | plastics. And then I watch my neighbour fill our bin with
           | plastic bottles, last-season clothes and uneaten food.
        
             | extra88 wrote:
              | At least you and your neighbor are operating on the same
              | scale. Don't stop those individual choices, but more
              | members of the populace making those choices is not how
              | the problem gets fixed; businesses and whole industries
              | are the real culprits.
        
             | yawaramin wrote:
             | Recycling is mostly a scam. Most municipalities don't
             | bother separating out the plastics and papers that would be
             | recyclable, decontaminating them, etc. because it would be
             | too expensive. They just trash them.
        
           | ofalkaed wrote:
           | I feel better about limiting the size of my drop in the
           | bucket than I would feel about just saying my drop doesn't
           | matter even if it doesn't matter. I get my internet through
           | my phone's hotspot with its 15gig a month plan, I generally
            | don't use the entire 15 gigs. My phone and laptop are
            | pretty much the only high tech I have; the audio interface
            | is probably third in line and my oven (self-cleaning) is
            | probably fourth. The furnace stays at 50 all winter long
            | even when it
           | is -40 out and if it is above freezing the furnace is turned
           | off. Never had a car, walk and bike everywhere including
           | groceries and laundry, have only used motorized transport
           | maybe a dozen times in the past decade.
           | 
           | A nice side effect of these choices is that I only spend a
           | small part of my pay. Never had a credit card, never had
           | debt, just saved my money until I had enough that the
           | purchase was no big deal.
           | 
           | I don't really have an issue with people who say that their
           | drop does not matter so why should they worry, but I don't
           | understand it, seems like they just needlessly complicate
            | their life. Not too long ago my neighbor was bragging about
            | how effective all the money he spent on energy efficient
            | windows, insulation, etc. was, and how much money he saved
            | that winter. But his heating bill was still nearly three
            | times what mine was, despite him using a wood stove to
            | offset it, and despite my house being almost the same size,
            | barely insulated, and having 70 year old windows. I just
            | put on a sweater instead of turning up the heat.
        
         | sylware wrote:
          | A country where 10 million people play their favourite
          | GPU-greedy 3D game in the evening, with state-of-the-art 400W
          | GPUs, all at the same time...
        
         | presentation wrote:
         | Or we can just commit to building out solar infrastructure and
         | not worry about this rounding error anymore
        
         | hiAndrewQuinn wrote:
         | Do we? Let's compare some numbers.
         | 
          | Creating an average hamburger requires an input of 2-6 kWh of
          | energy, from start to finish. At 15¢ USD/kWh, this gives us an
          | upper limit of about 90¢ of electricity.
          | 
          | The average 14 kB web page takes about 0.000002 kWh to serve.
          | You would need to serve that web page about 1-3 million times
          | to create the same energy demands as a single hamburger. A 14
          | MB web page, which would be a pretty heavy JavaScript app
          | these days, would need about 1,000 to 3,000 serves.
         | 
         | I think those are pretty good ways to use the energy.
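          | 
          | Rough arithmetic behind those ratios, using the estimates
          | above (a sketch, not measurements):
          | 
          |   const burgerKwh = [2, 6];  // per burger, start to finish
          |   const pageKwh = 0.000002;  // one ~14 kB page served
          |   console.log(burgerKwh.map(k => k / pageKwh));
          |   // [1000000, 3000000] serves of a 14 kB page per burger
          |   console.log(burgerKwh.map(k => k / (pageKwh * 1000)));
          |   // [1000, 3000] serves of a 14 MB page per burger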
        
           | ajsnigrutin wrote:
            | Now open an average news site, with 100s of requests, tens
            | of ads, autoplaying video ads, tracking pixels, etc., using
            | gigabytes of RAM and a lot of CPU.
            | 
            | Then multiply that by the number of daily visitors.
            | 
            | Without "hamburgers" (food in general) we die; reducing the
            | size of useless content on websites doesn't really hurt
            | anyone.
        
             | hiAndrewQuinn wrote:
             | Now go to an average McDonalds, with hundreds of orders,
             | automatically added value meals, customer rewards, etc.
             | consuming thousands of cows and a lot of pastureland.
             | 
             | Then multiply that by the number of daily customers.
             | 
             | Without web pages (information in general), we return to
             | the Dark Ages. Reducing the number of hamburgers people eat
             | doesn't really hurt anyone.
        
               | ajsnigrutin wrote:
               | Sure, but you've got to eat something.
               | 
                | Now, if McDonalds padded the 5 kB of calories in a
                | cheeseburger with 10,000 kilobytes of calories in wasted
                | food, like news sites do, it would be a different story.
                | The ratio would be 200 kilos of wasted food for 100
                | grams of usable beef.
        
               | hombre_fatal wrote:
                | You don't need to eat burgers though. You can eat food
                | that consumes a small fraction of the energy, calorie,
                | land, and animal input of a burger. And we go to
                | McDonalds because it's a dopamine luxury.
               | 
               | It's just an inconvenient truth for people who only care
               | about the environmental impact of things that don't
               | require a behavior change on their part. And that reveals
               | an insincere, performative, scoldy aspect of their
               | position.
               | 
               | https://ourworldindata.org/land-use-diets
        
               | ajsnigrutin wrote:
                | Sure, but beef tastes good. I mean... there are better
                | ways to eat beef than mixed with soy at McDonalds, but
                | still...
                | 
                | What benefit does an individual get from downloading tens
                | of megabytes of useless data to get ~5 kB of useful data
                | in an article? It wastes download time, bandwidth, the
                | user's time (having to close the autoplaying ad),
                | power/battery, etc.
        
           | justmarc wrote:
            | Just wondering how you arrived at the energy calculation
            | for serving that 14k page?
            | 
            | For a user's access to a random web page anywhere, assuming
            | it's not on a CDN near the user, you're looking at ~10
            | routers/networks involved in the connection. Did you take
            | that into account?
        
           | swores wrote:
           | If Reddit serves 20 billion page views per month, at an
           | average of 5MB per page (these numbers are at least in the
           | vicinity of being right), then reducing the page size by 10%
           | would by your maths be worth 238,000 burgers, or a 50%
           | reduction worth almost 1.2million burgers per month. That's
           | hardly insignificant for a single (admittedly, very popular)
           | website!
           | 
            | (In addition to what justmarc said about accounting for the
            | whole network. Plus, between feeding them and the indirect
            | effects of their contribution to climate change, I suspect
            | you're being generous about the cost of a burger.)
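            | 
            | For anyone checking the maths (the same per-page energy
            | estimate as above; the Reddit figures are the rough ones
            | stated):
            | 
            |   const views = 20e9;                // page views/month
            |   const mbPerView = 5;
            |   const kwhPerMb = 0.000002 / 0.014; // scale 14 kB figure
            |   const burgerKwh = 6;
            |   const saved = views * mbPerView * 0.10 * kwhPerMb;
            |   console.log(Math.round(saved / burgerKwh)); // ~238095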
        
           | justmarc wrote:
            | Slightly veering off topic, but I honestly wonder: how many
            | burgers will I fry if I ask ChatGPT to make a fart app?
        
             | hombre_fatal wrote:
             | A tiny fraction of a burger.
        
         | spacephysics wrote:
          | This is one of those things that is high effort, low impact.
          | Similar to recycling in some cities/towns where it just gets
          | dumped in a landfill.
          | 
          | Instead we should be looking to nuclear power solutions for
          | our energy needs, and not waste time reducing website size if
          | it's purely a function of environmental impact.
        
         | zigzag312 wrote:
          | So, anyone serious about a sustainable future should stop
          | using Python and stop recommending it as an introductory
          | programming language? I remember one test that showed Python
          | using 75x more energy than C to perform the same task.
        
           | mnw21cam wrote:
            | I'm just investigating why the nightly backup of the work
            | server is taking so long. It turns out Python (as conda,
            | anaconda, miniconda, etc.) has dumped 22 million files
            | across the home directories, and it takes a while just to
            | list them, let alone work out which files have changed and
            | need archiving. Most of these are duplicates of each other,
            | and files that should really belong to the OS, like
            | bin/curl.
           | 
           | I myself have installed one single package, and it installed
           | 196,171 files in my home directory.
           | 
           | If that isn't gratuitous bloat, then I don't know what is.
        
             | sgarland wrote:
             | Conda is its own beast tbf. Not saying that Python
             | packaging is perfect, but I struggle to imagine a package
             | pulling in 200K files. What package is it?
        
         | noduerme wrote:
         | Yeah, the environmental impact of jackasses mining jackass
         | coin, or jackasses training LLMs is not insignificant. Are you
         | seriously telling me now that if my website is 256k or 1024k
         | I'm responsible for destroying the planet? Take it out on your
         | masters.
         | 
          | And no, reducing resource use to the minimum in the name of
          | sustainability does not _scale down_ the same way it scales
          | up. You're just pushing the idea that all human activity is
          | some sort of disease that's best disposed of. That's
          | essentially just wishing the worst on your own species for
          | being successful.
         | 
         | It's never clear to me whether people who push this line are
         | doing so because they're bitter and want to punish other
         | humans, or because they hate themselves. Either way, it evinces
         | a system of thought that has already relegated humankind to the
         | dustbin of history. If, in the long run, that's what happens,
         | you're right and everyone else is wrong. Congratulations. It
         | will make little difference in that case to you if the rest of
         | us move on for a few hundred years to colonize the planets and
         | revive the biosphere. Comfort yourself with the knowledge that
         | this will all end in 10 or 20 thousand years, and the world
         | will go back to being a hot hive of insects and reptiles. But
         | what glory we wrought in our time.
        
           | simgt wrote:
           | > the environmental impact of jackasses mining jackass coin,
           | or jackasses training LLMs is not insignificant
           | 
           | Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism
           | 
           | > You're just pushing the idea that all human activity is
           | some sort of disease that's best disposed of. That's
           | essentially just wishing the worst on your own species for
           | being successful.
           | 
            | Strawmanning. https://en.m.wikipedia.org/wiki/Straw_man
           | 
           | Every bloody mention of the environmental impact of our
           | activities gets at least a reply like yours that ticks one of
           | these boxes.
        
             | noduerme wrote:
             | _> the environmental impact of jackasses mining jackass
             | coin, or jackasses training LLMs is not insignificant_
             | 
              |  _(this was actually stated in agreement with the original
              | poster, whom you clearly misunderstood, so there's no
              | "what-about" involved here. They were condemning all kinds
              | of consumption, including the frivolous ones I mentioned)._
              | 
              | But I'm afraid you've missed both my small point and my
              | wider point.
             | 
             | My small point was to argue against the parent's comment
             | that
             | 
             | >>reducing ressources consumption to the minimum required
             | should always be a concern if we intend to have a
             | sustainable future
             | 
             | I disagree with this concept on the basis that nothing can
             | be accomplished on a large scale if the primary concern is
             | simply to reduce resource consumption to a minimum. _If you
             | care to disagree with that, then please address it._
             | 
             | The larger point was that this theory leads inexorably to
             | the idea that humans should just kill themselves or
             | disappear; and it almost always comes from people who
             | themselves want to kill themselves or disappear.
        
               | simgt wrote:
               | > if the primary concern is simply to reduce resource
               | consumption to a minimum
               | 
               | ..."required".
               | 
               | That allows you to fit pretty much everything in that
               | requirement. Which actually makes my initial point a bit
               | weak, as some would put "delivering 4K quality tiktok
               | videos" as a requirement.
               | 
                | The point is that energy consumption and broad
                | environmental impact have to be a constraint in how we
                | design our systems (and businesses).
                | 
                | I stand by my accusations of whataboutism and
                | strawmanning, though.
        
               | noduerme wrote:
                | Carelessly thrown-about accusations of whataboutism and
                | strawmanning are an excellent example of whataboutism
                | and strawmanning. I was making a specific point, directly
                | on topic, without either putting words in their mouth or
                | addressing an unrelated issue. I'll stand by my retort.
        
             | noduerme wrote:
             | >> Every bloody mention of the environmental impact of our
             | activities gets at least a reply like yours that ticks one
             | of these boxes.
             | 
             | That's a sweeping misunderstanding of what I wrote, so I'd
             | ask that you re-read what I said in response to the
             | specific quote.
        
             | Capricorn2481 wrote:
             | Wow, are low-effort comments like this really welcome here?
             | 
             | Why don't you read this comment and see if you have the
             | same energy for hamburger eaters that you do for people
             | with websites over 14kb. Because if you don't, it's obvious
             | you're looking to sweat people who actually care about
             | their environmental impact over absolutely nothing.
             | 
             | https://news.ycombinator.com/item?id=44614291
             | 
             | FYI, it's not Whataboutism to say there are more effective
             | things to focus on.
        
         | iinnPP wrote:
         | You'll find that people "stop caring" about just about anything
         | when it starts to impact them. Personally, I agree with your
         | statement.
         | 
         | Since a main argument is seemingly that AI is worse, let's
         | remember that AI is querying these huge pages as well.
         | 
          | Also, note that the 14kb size is less than 1% of the current
          | average mobile website payload.
        
         | lpapez wrote:
         | Being concerned about page sizes is 100% wasted effort.
         | 
         | Calculate how much electricity you personally consume in total
         | browsing the Internet for a year. Multiply that by 10 to be
         | safe.
         | 
         | Then compare that number to how much energy it takes to produce
         | a single hamburger.
         | 
         | Do the calculation yourself if you do not believe me.
         | 
         | On average, we developers can make a bigger difference by
         | choosing to eat salad one day instead of optimizing our
         | websites for a week.
        
           | Mawr wrote:
           | Or how much energy it took to even get to work by car that
           | day.
        
       | ksec wrote:
       | Missing 2021 in the title.
       | 
        | I know it is not the exact topic, but sometimes I think we
        | don't need the fastest response time but a consistent response
        | time. Like every single page within the site being fully
        | rendered in exactly 1s. Nothing more, nothing less.
        
         | sangeeth96 wrote:
          | I think the advice is still very relevant though. Plus, the
          | varying network conditions mentioned in the article would
          | ensure it's difficult, if not impossible, to guarantee a
          | consistent response time. As someone with spotty cellular
          | coverage, I can understand the pains of browsing when you're
          | stuck with that.
        
           | ksec wrote:
            | Yes. I don't know how it could be achieved other than
            | having JS render the whole thing and wait until a
            | designated time before showing it all. And that time could
            | depend on the network connection.
            | 
            | But this sort of goes against my no / minimal JS front end
            | rendering philosophy.
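            | 
            | A rough sketch of that idea (the "not-yet" class and the 1s
            | target are illustrative):
            | 
            |   // Hide content via CSS, then reveal at a fixed time
            |   // after navigation start, however fast the page loaded.
            |   const targetMs = 1000;
            |   window.addEventListener("load", () => {
            |     const elapsed = performance.now();
            |     setTimeout(() => {
            |       document.documentElement.classList.remove("not-yet");
            |     }, Math.max(0, targetMs - elapsed));
            |   });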
        
       | the_precipitate wrote:
        | And you do know that the .exe format is wasteful; a .com file
        | actually saves quite a few bytes if you can limit your
        | executable's size to below 0xFF00 bytes (man, I am old).
        
         | cout wrote:
         | And a.out format often saves disk space over elf, despite
         | duplicating code across executables.
        
       | crawshaw wrote:
       | If you want to have fun with this: the initial window (IW) is
       | determined by the sender. So you can configure your server to the
       | right number of packets for your website. It would look something
        | like:
        | 
        |   ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
       | 
       | A web search suggests CDNs are now at 30 packets for the initial
       | window, so you get 45kb there.
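        | 
        | Back-of-the-envelope for what a given initial window buys you
        | in the first round trip, assuming a typical 1460-byte MSS:
        | 
        |   const mss = 1460;
        |   for (const iw of [10, 20, 30]) {
        |     console.log(iw, (iw * mss / 1000).toFixed(1) + " kB");
        |   }
        |   // 10 -> 14.6 kB (the 14kb rule), 30 -> 43.8 kB (~45kb)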
        
         | londons_explore wrote:
            | be a bad citizen and just set it to 1000 packets... There
            | isn't really any downside, apart from potentially clogging
            | up someone on a dialup connection, and bufferbloat.
        
           | notpushkin wrote:
           | This sounds like a terrible idea, but can anybody pinpoint
           | why exactly?
        
             | buckle8017 wrote:
             | Doing that would basically disable the congestion control
             | at the start of the connection.
             | 
             | Which would be kinda annoying on a slow connection.
             | 
             | Either you'd have buffer issues or dropped packets.
        
             | jeroenhd wrote:
              | Anything non-standard will kill shitty middleboxes, so I
              | assume spamming packets faster than anticipated will have
              | corporate networks block you as a security threat of some
              | kind. Mobile carriers also do some weird proxying hacks
              | to "save bandwidth", especially on <4G, so you may also
              | break some mobile connections. I don't have any proof,
              | but shitty middleboxes have broken connections over much
              | less obvious protocol features.
             | 
             | But in practice, I think this should work most of the time
             | for most people. On slower connections, your connection
             | will probably crawl to a halt due to retransmission hell,
             | though. Unless you fill up the buffers on the ISP routers,
             | making every other connection for that visitor slow down or
             | get dropped, too.
        
               | r1ch wrote:
               | Loss-based TCP congestion control and especially slow
               | start are a relic from the 80s when the internet was a
               | few dialup links and collapsed due to retransmissions. If
               | an ISP's links can't handle a 50 KB burst of traffic then
               | they need to upgrade them. Expecting congestion should be
               | an exception, not the default.
               | 
               | Disabling slow start and using BBR congestion control
               | (which doesn't rely on packet loss as a congestion
               | signal) makes a world of difference for TCP throughput.
        
         | sangeeth96 wrote:
         | > A web search suggests CDNs are now at 30 packets for the
         | initial window, so you get 45kb there.
         | 
         | Any reference for this?
        
           | ryan-c wrote:
           | I'm not going to dig it up for you, but this is in line with
           | what I've read and observed. I set this to 20 packets on my
           | personal site.
        
           | darthShadow wrote:
           | * https://sirupsen.com/napkin/problem-15
           | 
           | * https://www.cdnplanet.com/blog/initcwnd-settings-major-
           | cdn-p...
        
         | nh2 wrote:
         | 13 years ago, 10 packets was considered "cheating":
         | 
         | https://news.ycombinator.com/item?id=3632765
         | 
         | https://web.archive.org/web/20120603070423/http://blog.benst...
        
           | crawshaw wrote:
            | We are in a strange world today because our MTU was decided
            | for 10mbps Ethernet (MTU/bandwidth on a hub controls
            | latency). It is strange because 10mbps is still common for
            | end-user network connections, while 10gbps is common for
            | servers, and a goodly number of consumers have 1gbps.
           | 
            | That range means the MTU varies from reasonable, where you
            | can argue that an IW of anything from 1-30 packets is good,
            | to ridiculously small, where the IW is similarly absurd.
           | 
           | We would probably be better off if consumers on >1gbps links
           | got higher MTUs, then an IW of 10-30 could be reasonable
           | everywhere. MTU inside cloud providers is higher (AWS uses
           | 9001), so it is very possible.
        
       | austin-cheney wrote:
       | It seems the better solution is to not use HTTP server software
       | that employs this slow start concept.
       | 
        | Using my own server software I was able to produce a complex
        | single page app that resembled an operating system's graphical
        | user interface, and to achieve full state restoration in as
        | little as 80ms from the page request on localhost, according to
        | the Chrome performance tab.
        
         | mzhaase wrote:
         | TCP settings are OS level. The web server does not touch them.
        
           | austin-cheney wrote:
           | The article says this is not a TCP layer technology, but
           | something employed by servers as a bandwidth estimating
           | algorithm.
           | 
           | You are correct in that TCP packets are processed within the
           | kernel of modern operating systems.
           | 
           | Edit for clarity:
           | 
           | This is a web server only algorithm. It is not associated
           | with any other kind of TCP traffic. It seems from the down
           | votes that some people found this challenging.
        
           | jeffbee wrote:
           | Yet another reason that QUIC is better.
        
       | firecall wrote:
       | Damn... I'm at 17.2KB for my home page! (not including
       | dependencies)
       | 
       | FWIW I optimised the heck out of my personal homepage and got
       | 100/100 for all Lighthouse scores. Which I had not previously
       | thought possible LOL
       | 
       | Built in Rails too!
       | 
       | It's absolutely worth optimising your site though. It just is
       | such a pleasing experience when a page loads without any
       | perceptible lag!
        
         | ghoshbishakh wrote:
            | Rails has nothing to do with the rendered page size though.
            | Congrats on the perfect Lighthouse score.
        
           | Alifatisk wrote:
              | Doesn't the Rails asset pipeline have an effect on the
              | page size, like if Propshaft is being used instead of
              | Sprockets? From what I remember, Propshaft intentionally
              | does not include minification or compression.
        
             | firecall wrote:
             | It's all Rails 8 + Turbo + Stimulus JS with Propshaft
             | handling the asset bundling / pipeline.
             | 
             | All the Tailwind building and so on is done using common JS
             | tools, which are mostly standard out of the box Rails 8
             | supplied scripts!
             | 
             | Sprockets used to do the SASS compilation and asset
             | bundling, but the Rails standard now is to facilitate your
             | own preferences around compilation of CSS/JS.
        
           | firecall wrote:
           | Indeed it does not :-)
           | 
            | It was more a quick promote-Rails comment, as Rails can get
            | dismissed as not something to build a fast website in :-)
        
         | apt-apt-apt-apt wrote:
         | Yeah, the fact that news.ycombinator.com loads instantly
         | pleases my brain so much I flick it open during downtime
         | automonkey-ly
        
           | Alifatisk wrote:
            | Lobsters, Dlang's forum and HN are among the few places I
            | know that load instantly, and I love it. This is how it
            | should be!
        
         | leptons wrote:
          | I did a lot of work optimizing the template code we use on
          | thousands of sites to get to 100/100/100/100 scores on
          | Lighthouse. We score perfect 100s on mobile too. It was a
          | wild adventure.
         | 
         | Our initial page load is far bigger than 17.2KB, it's about
         | 120KB of HTML, CSS, and JS. The big secret is eliminating all
         | extra HTTP requests, and only evaluating JS code that needs to
         | run for things "above the fold" (lazy-evaluating any script
         | that functions below the fold, as it scrolls into view). We
         | lazy-load everything we can, only when it's needed. Defer any
         | script that can be deferred. Load all JS and CSS in-line where
         | possible. Use 'facade' icons instead of loading the 3rd-party
         | chat widget at page load, etc. Delay loading tracking widgets
         | if possible. The system was already built on an SSR back-end,
         | so SSR is also a big plus here. We even score perfect 100s with
         | full-page hires video backgrounds playing at page load above-
         | the-fold, but to get there was a pretty big lift, and it only
         | works with Vimeo videos, as Youtube has become a giant pain in
         | the ass for that.
         | 
          | The Google Lighthouse results tell you everything you need to
          | know to get to 100 scores. It took a whole rewrite of our
          | codebase to get there; the old code was never going to be
          | possible to refactor. It took us a whole new way of looking
          | at the problem, using the Lighthouse results as our guide. We
          | went from our customers complaining about page speeds to
          | being far ahead of our competition in terms of page speed
          | scores. And for our clients, page speed does make a big
          | difference when it factors into SEO rankings (it's somewhat
          | debatable whether page speed affects SEO, but there's no
          | debating with an angry client who sees a bad page speed
          | score).
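          | 
          | A minimal sketch of the below-the-fold lazy evaluation
          | pattern (the data-lazy-src attribute and the margin are
          | illustrative, not our actual code):
          | 
          |   const io = new IntersectionObserver((entries) => {
          |     for (const entry of entries) {
          |       if (!entry.isIntersecting) continue;
          |       const s = document.createElement("script");
          |       s.src = entry.target.dataset.lazySrc; // widget bundle
          |       document.body.appendChild(s);
          |       io.unobserve(entry.target);
          |     }
          |   }, { rootMargin: "200px" }); // load a bit before visible
          |   document.querySelectorAll("[data-lazy-src]")
          |     .forEach((el) => io.observe(el));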
        
       | gammalost wrote:
       | If you care about reducing the amount of back and forth then just
       | use QUIC.
        
       | eviks wrote:
       | Has this theory been tested?
        
       | justmarc wrote:
        | Does anyone have examples of tiny, yet aesthetically pleasing
        | websites or pages?
        | 
        | I would love it if someone kept a list.
        
         | hackerman_fi wrote:
          | There is an example link in the article. Listing more
          | examples would serve no purpose apart from a web design
          | perspective.
        
           | justmarc wrote:
           | Well, exactly that, I'm looking for inspiration.
        
         | FlyingSnake wrote:
         | There's https://512kb.club/ which I follow to keep my website
         | lightweight
        
         | wonger_ wrote:
         | 10kbclub.com, archived: https://archive.li/olM9k
         | 
         | https://250kb.club/
         | 
         | Hopefully you'll find some of them aesthetically pleasing
        
       | adastra22 wrote:
       | The linked page is 35kB.
        
         | fantyoon wrote:
            | 35kB after it's uncompressed. On my end it sends 13.48kB.
        
           | adastra22 wrote:
           | Makes sense, thanks!
        
       | susam wrote:
        | I just checked my home page [1] and it has a compressed transfer
        | size of 7.0 kB.
        | 
        |   /            2.7 kB
        |   main.css     2.5 kB
        |   favicon.png  1.8 kB
        |   -------------------
        |   Total        7.0 kB
        | 
        | Not bad, I think! I generate the blog listing on the home page
        | (as well as the rest of my website) with my own static site
        | generator, written in Common Lisp [2]. On a limited number of
        | mathematical posts [3], I use KaTeX with client-side rendering.
        | On such pages, KaTeX adds a whopping 347.5 kB!
        | 
        |   katex.min.css              23.6 kB
        |   katex.min.js              277.0 kB
        |   auto-render.min.js          3.7 kB
        |   KaTeX_Main-Regular.woff2   26.5 kB
        |   KaTeX_Main-Italic.woff2    16.7 kB
        |   ----------------------------------
        |   Total Additional          347.5 kB
       | 
       | Perhaps I should consider KaTeX server-side rendering someday!
       | This has been a little passion project of mine since my
       | university dorm room days. All of the HTML content, the common
       | HTML template (for a consistent layout across pages), and the CSS
       | are entirely handwritten. Also, I tend to be conservative about
       | what I include on each page, which helps keep them small.
       | 
       | [1] https://susam.net/
       | 
       | [2] https://github.com/susam/susam.net/blob/main/site.lisp
       | 
       | [3] https://susam.net/tag/mathematics.html
        
         | welpo wrote:
         | > That said, I do use KaTeX with client-side rendering on a
         | limited number of pages that have mathematical content
         | 
         | You could try replacing KaTeX with MathML:
         | https://w3c.github.io/mathml-core/
        
           | BlackFly wrote:
            | KaTeX renders to MathML (either server side or client
            | side). Generally people want a slightly more fluent way of
            | describing an equation than is permitted by a soup of HTML
            | tags. The various TeX dialects (generally just referred to
            | as LaTeX) are the preferred way of doing that.
        
             | mr_toad wrote:
             | Server side rendering would cut out the 277kb library. The
             | additional MathML being sent to the client is probably
             | going to be a fraction of that.
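              | 
              | A minimal build-step sketch using KaTeX's Node API
              | (renderToString is its documented entry point; the input
              | string is just an example):
              | 
              |   const katex = require("katex");
              |   // Render TeX to markup once, at build time; ship no
              |   // KaTeX JS to the client at all.
              |   const html = katex.renderToString(
              |     "\\oint_C \\vec{F} \\cdot d\\vec{r}",
              |     { throwOnError: false }
              |   );
              |   // insert `html` into the static page; the client still
              |   // needs katex.min.css and fonts, not the 277 kB of JS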
        
           | mk12 wrote:
           | If you want to test out some examples from your website to
           | see how they'd look in KaTeX vs. browser MathML rendering, I
           | made a tool for that here: https://mk12.github.io/web-math-
           | demo/
        
             | em3rgent0rdr wrote:
              | Nice tool! It seems the "New Computer Modern" font is the
              | native MathML rendering that looks closest to standard
              | LaTeX rendering, I guess because LaTeX uses Computer
              | Modern by default. But I notice extra space around the
              | parentheses, which annoys me because LaTeX math allows you
              | to be so precise about how wide your spaces are (e.g. \,
              | \: \; \!). Is there a way to get the spaces around the
              | parentheses to be just as wide as in standard LaTeX math?
              | And the ^ hat above f(x) isn't nicely above just the top
              | part of the f.
        
           | susam wrote:
           | > You could try replacing KaTeX with MathML:
           | https://w3c.github.io/mathml-core/
           | 
           | I would love to use MathML, not directly, but automatically
           | generated from LaTeX, since I find LaTeX much easier to work
           | with than MathML. I mean, while I am writing a mathematical
           | post, I'd much rather write LaTeX (which is almost muscle
           | memory for me), than write MathML (which often tends to get
            | deeply nested and tedious to write). However, the last time
            | I checked, the rendering quality of MathML was quite uneven
            | across browsers, in terms of both aesthetics and accuracy.
           | 
           | For example, if you check the default demo at
           | https://mk12.github.io/web-math-demo/ you'd notice that the
           | contour integral sign has a much larger circle in the MathML
           | rendering (with most default browser fonts) which is quite
           | inconsistent with how contour integrals actually appear in
           | print.
           | 
           | Even if I decide to fix the above problem by loading custom
           | web fonts, there are numerous other edge cases (spacing
           | within subscripts, sizing within subscripts within
           | subscripts, etc.) that need fixing in MathML. At that point,
            | I might as well use full KaTeX. A viable alternative is to
            | have KaTeX or MathJax generate the HTML and CSS on the
            | server side and send that to the client, which is what I
            | meant by server-side rendering in my earlier comment.
        
             | AnotherGoodName wrote:
              | Math expressions are like regex to me nowadays. I ask the
              | LLM coding assistant to write them and it's very, very
              | good at it. I'll probably forget the syntax soon, but no
              | big deal.
              | 
              | "MathML for {very rough textual form of the equation}"
              | seems to give a 100% hit rate for me. Even when I want
              | some formatting change I can ask the LLM, and it pretty
              | much always has a solution (MathML can render symbols and
              | subscripts in numerous ways, but the syntax is deep).
              | It'll even add the CSS needed to change it up in some way
              | if asked.
        
         | VanTodi wrote:
          | Another idea would be to load the heavy library after the
          | initial page is done, though it's loaded and heavy
          | nonetheless. Or you could create SVGs for the formulas and
          | load them when they are in the viewport. Just my 2 cents.
        
         | djoldman wrote:
         | I never understood math / latex display via client side js.
         | 
         | Why can't this be precomputed into html and css?
        
           | mr_toad wrote:
           | It's a bit more work, usually you're going to have to install
           | Node, Babel and some other tooling, and spend some time
           | learning to use them if you're not already familiar with
           | them.
        
           | susam wrote:
           | > I never understood math / latex display via client side js.
           | Why can't this be precomputed into html and css?
           | 
           | It can be. But like I mentioned earlier, my personal website
           | is a hobby project I've been running since my university
           | days. It's built with Common Lisp (CL), which is part of the
           | fun for me. It's not just about the end result, but also
           | about enjoying the process.
           | 
            | While precomputing HTML and CSS is definitely a viable
            | approach, I've been reluctant to introduce Node or other
            | tooling outside the CL ecosystem into this project. I
            | wouldn't hesitate to add that extra tooling on any other
            | project, but here I do. I like to keep the stack simple
            | here, since this website is not just a utility; it is also
            | my small creative playground, and I want to enjoy whatever
            | I do here.
        
             | dfc wrote:
             | Is it safe to say the website is your passion project?
        
             | whism wrote:
              | Perhaps you could stand up a small service on another
              | host using headless Chrome or similar to render, and fall
              | back to client-side rendering if the service is down and
              | you don't already have the pre-rendered result stored
              | somewhere. I suggest this only because you mentioned not
              | wanting to pollute your current server environment, and I
              | enjoy seeing these kinds of optimizations done :^)
        
           | marcthe12 wrote:
            | Well, there is MathML, but it had poor support in Chrome
            | until recently. That is the web's native equation format.
        
       | smartmic wrote:
       | If I understood correctly, the rule is dependent on web server
       | features and/or configuration. In that case, an overview of web
       | servers which have or have not implemented the slow start
       | algorithm would be interesting.
        
       | mikl wrote:
       | How relevant is this now, if you have a modern server that
       | supports HTTP/3?
       | 
       | HTTP/3 uses UDP rather than TCP, so TCP slow start should not
       | apply at all.
        
         | hulitu wrote:
         | > How relevant is this now
         | 
         | Very relevant. A lot of websites need 5 to 30 seconds or more
         | to load.
        
           | throwaway019254 wrote:
           | I have a suspicion that the 30 second loading time is not
           | caused by TCP slow start.
        
           | ajross wrote:
            | Slow start is about a small integer number of RTTs that
            | the algorithm takes to ramp up to line speed. A 5-30 second
            | load time is an order of magnitude off, and almost
            | certainly due to simple asset size.
        
         | gbuk2013 wrote:
         | As per the article, QUIC (transport protocol underneath HTTP/3)
         | uses slow start as well.
         | https://datatracker.ietf.org/doc/id/draft-ietf-quic-recovery...
        
           | gsliepen wrote:
           | A lot of people don't realize that all these so-called issues
           | with TCP, like slow-start, Nagle, window sizes and congestion
            | algorithms, are not there because TCP was badly designed,
            | but rather because these are inherent problems you get when
            | you want to create any reliable stream protocol on top of
            | an unreliable datagram one. The advantage of QUIC is that
            | it can
           | multiplex multiple reliable streams while using only a single
           | congestion window, which is a bit more optimal than having
           | multiple TCP sockets.
           | 
           | One other advantage of QUIC is that you avoid some latency
           | from the three-way handshake that is used in almost any TCP
           | implementation. Although technically you can already send
           | data in the first SYN packet, the three-way handshake is
           | necessary to avoid confusion in some edge cases (like a
           | previous TCP connection using the same source and destination
           | ports).
        
             | gbuk2013 wrote:
             | They also tend to focus on bandwidth and underestimate the
             | impact of latency :)
             | 
              | Interesting to hear that QUIC does away with the 3WHS - it
              | always catches people by surprise that it takes at least
              | 4x the latency to get some data on a new TCP connection. :)
        
       | ilaksh wrote:
       | https://github.com/runvnc/tersenet
        
       | tgv wrote:
       | This could be another reason:
       | https://blog.cloudflare.com/russian-internet-users-are-unabl...
       | 
       | > ... analysis [by Cloudflare] suggests that the throttling [by
       | Russian ISPs] allows Internet users to load only the first 16 KB
       | of any web asset, rendering most web navigation impossible.
        
       | Alifatisk wrote:
        | I agree with the sentiment here. The thing is, I've noticed
        | that the newer generations use frameworks like Next.js by
        | default for building simple static websites. That's their
        | bare-bones starting point. The era of plain HTML + CSS (and
        | maybe a sprinkle of JS) feels like it's fading away, sadly.
        
         | jbreckmckye wrote:
         | I think that makes sense.
         | 
         | I have done the hyper optimised, inline resource, no blocking
         | script, hand minimised JS, 14kb website thing before and the
         | problem with doing it the "hard" way is it traps you in a
         | design and architecture.
         | 
         | When your requirements change all the minimalistic choices that
         | seemed so efficient and web-native start turning into technical
         | debt. Everyone fantasises about "no frameworks" until the
         | project is no longer a toy.
         | 
         | Whereas the isomorphic JS frameworks let you have your cake and
         | eat it: you can start with something that spits out compiled
         | pages and optimise it to get performant _enough_, but you can
         | fall back to thick client JavaScript if necessary.
        
         | fleebee wrote:
          | I think that realization comes late enough that the trend has
          | already shifted back a bit. Most frameworks I've dealt with
          | can emit statically generated sites, Next.js included. Astro
          | feels like it's designed for that purpose from the ground up.
        
         | austin-cheney wrote:
         | You have noticed that only just recently? This has been the
         | case since jQuery became popular before 2010.
        
           | chneu wrote:
           | Arguably it's been this way since web 2.0 became a thing in
           | like 2008?
        
         | zos_kia wrote:
          | Next.js bundles the code and aggressively minifies it,
          | because its base use case is deployment on lambdas or very
          | small servers. A static website using Next would be quite
          | optimal in terms of bundle size.
        
       | hackerman_fi wrote:
       | The article has IMO two flawed arguments:
       | 
        | 1. There is math for how long it takes to send even one packet
        | over a satellite connection (~1600ms). It's a weak argument
        | for the 14kb rule, since there is no comparison with a larger
        | website. 10 packets won't necessarily take 16 seconds.
        | 
        | 2. There is a mention that images on a webpage are included in
        | this 14kb rule. In what case are images inlined into a page's
        | initial load? If this is a special case and 99.9% of images
        | don't follow it, it should be mentioned at the very least.
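        | 
        | On point 1, a rough model of classic slow start (IW 10, cwnd
        | doubling every round trip, ignoring TLS and loss) shows why: a
        | 100x larger page costs only a few extra round trips, not 100x
        | the time.
        | 
        |   const mss = 1460, rttMs = 600; // illustrative satellite RTT
        |   function roundTrips(bytes, cwnd = 10) {
        |     let rtts = 0;
        |     while (bytes > 0) { bytes -= cwnd * mss; cwnd *= 2; rtts++; }
        |     return rtts;
        |   }
        |   for (const kb of [14, 140, 1400]) {
        |     const ms = roundTrips(kb * 1000) * rttMs;
        |     console.log(kb + " kB: " + ms + " ms");
        |   }
        |   // 14 kB: 1 RTT, 140 kB: 4 RTTs, 1400 kB: 7 RTTs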
        
         | hsbauauvhabzb wrote:
          | Also the assumption that my userbase uses high-latency
          | satellite connections, and is somehow unable to put up with
          | my website, when every other website in current existence is
          | multiple megabytes.
        
           | ricardobeat wrote:
            | There was no such assumption; that was just the first
            | example, after which he mentions that normal roundtrip
            | latencies are usually in the 100-300ms range.
           | 
           | Just because everything else is bad, doesn't invalidate the
           | idea that you should do better. Today's internet can feel
           | painfully slow even on a 1Gbps connection because of this;
           | websites were actually faster in the early 2000s, during the
           | transition to ADSL, as they still had to cater to dial-up
           | users and were very light as a result.
        
             | sgarland wrote:
             | > Just because everything else is bad, doesn't invalidate
             | the idea that you should do better.
             | 
             | I get this all the time at my job, when I recommend a team
             | do something differently in their schema or queries: "do we
             | have any examples of teams currently doing this?" No,
             | because no one has ever cared to try. I understand not
             | wanting to be guinea pigs, but you have a domain expert
             | asking you to do something, and telling you that they'll
             | back you up on the decision, and help you implement it.
             | What more do you want?!
        
         | throwup238 wrote:
         | _> In what case are images inlined to a page's initial load?_
         | 
          | Low-resolution thumbnails that are blurred via CSS filters,
          | over which the real images fade in once downloaded. Done
          | properly, it usually adds only a few hundred bytes per image
          | for above-the-fold images.
         | 
         | I don't know if many bloggers do that, though. I do on my blog
         | and it's probably a feature on most blogging platforms (like
         | Wordpress or Medium) but it's more of a commercial frontend
         | hyperoptimization that nudges conversions half a percentage
         | point or so.
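          | 
          | A sketch of that blur-up swap (the attribute and class names
          | are illustrative): the inlined few-hundred-byte thumbnail is
          | the src, and the real file fades in over it.
          | 
          |   // <img src="data:image/jpeg;base64,..."
          |   //      data-full="/hero.jpg" class="blur-up">
          |   document.querySelectorAll("img[data-full]").forEach((img) => {
          |     const full = new Image();
          |     full.onload = () => {
          |       img.src = full.src;              // swap once downloaded
          |       img.classList.remove("blur-up"); // CSS un-blurs it
          |     };
          |     full.src = img.dataset.full;
          |   });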
        
           | hinkley wrote:
           | Inlined svg as well. It's a mess.
        
       | youngtaff wrote:
        | It's not really relevant in 2025...
        | 
        | The HTTPS negotiation is going to consume the initial
        | roundtrips, which should start increasing the size of the
        | window.
        | 
        | Modern CDNs start with larger initial windows and also pace
        | packets onto the network to reduce the chance of congestion.
        | 
        | There's also a question as to how relevant the 14kb rule has
        | ever been... HTML renders progressively, so as long as there's
        | some meaningful content in the early packets, the overall size
        | is less important.
        
       | maxlin wrote:
       | The geostationary satellite example, while interesting, is kinda
       | obsolete in the age of Starlink
        
         | theandrewbailey wrote:
         | Starlink is only 1 option in the satellite internet market.
         | There are too many embedded systems and legacy infrastructure
         | that its not reasonable to assume that 'satellite internet'
         | means Starlink. Maybe in 20 years, but not today.
        
           | maxlin wrote:
            | That's like saying vacuum tubes are only one option in the
            | radio market.
            | 
            | The quality of connection is so much better, and as you can
            | get a Starlink Mini with a 50GB plan for very little money,
            | it's already at the point where just one worker could grab
            | his own and bring it onto the rig, to use in his free time
            | and to share.
            | 
            | Starlink terminals aren't "infrastructure". Campers often
            | toss one on their roof without even leaving the vehicle.
            | Easier than moving a chair. So, as I said, the
            | geostationary legacy system immediately becomes entirely
            | obsolete other than for redundancy, and is kinda irrelevant
            | for uses like browsing the web.
        
         | 3cats-in-a-coat wrote:
         | "Obsolete" suggests Starlink is clearly better and sustainable,
         | and that's a very bold statement to make at this point. I
         | suspect in few decades the stationary satellites will still be
         | around, while Starlink would've either evolved drastically or
         | gone away.
        
       | LAC-Tech wrote:
        | This looks like such an interesting article, but it's
        | completely ruined by the fact that every sentence is its own
        | paragraph.
        | 
        | I swear I am not just trying to be a dick here. If I didn't
        | think it had great content I wouldn't have commented. But I
        | feel like I'm reading a LinkedIn post. Please join some of
        | those sentences up into paragraphs!
        
       | GavinAnderegg wrote:
       | 14kB is a stretch goal, though trying to stick to the first 10
       | packets is a cool idea. A project I like that focuses on page
       | size is 512kb.club [1] which is like a golf score for your site's
       | page size. My site [2] came in just over 71k when I measured
       | before getting added (for all assets). This project also
       | introduced me to Cloudflare Radar [3] which includes a great tool
       | for site analysis/page sizing, but is mainly a general dashboard
       | for the internet.
       | 
       | [1] https://512kb.club/
       | 
       | [2] https://anderegg.ca/
       | 
       | [3] https://radar.cloudflare.com/
        
         | FlyingSnake wrote:
          | Second this. I also find 512kb a more realistic benchmark,
          | and I use it for my website.
          | 
          | The modern web crossed the Rubicon for 14kb websites a long
          | time ago.
        
         | mousethatroared wrote:
          | A question as a non-user:
          | 
          | What are you doing with the extra 500kB for me, the user?
          | 
          | More than 90% of the time I'm interested in text. For most
          | of the remainder, vector graphics would suffice.
          | 
          | 14 kB is a lot of text and graphics for a page. What is the
          | other 500 for?
        
           | nicce wrote:
            | If you want a fancy syntax highlighter for code blocks with
            | multiple languages on your website, that alone is about
            | that size, e.g. the regex rules and the regex engine.
        
             | masfuerte wrote:
             | As an end user I want a website that does the highlighting
             | once on the back end.
        
           | filleduchaos wrote:
           | Text, yes. Graphics? SVGs are not as small as people think
           | especially if they're any more complex than basic shapes, and
           | there are plenty of things that simply cannot be represented
           | as vector graphics anyway.
           | 
           | It's fair to prefer text-only pages, but the "and graphics"
           | is quite unrealistic in my opinion.
        
             | mousethatroared wrote:
              | By vector graphics I meant primitive graphics.
              | 
              | Outside of YouTube and... Twitter? I really don't need
              | fancy stuff. HN is literally the web ideal for me, and I'd
              | think for most users too, if given the option.
        
             | LarMachinarum wrote:
              | How much is gained by using SVG (as opposed to a raster
              | graphics format) varies a lot depending on the content.
              | For some files it can be an enormous gain (even with
              | complex shape paths, depending on a couple of details),
              | and for some files it can indeed be disappointing.
             | 
             | That being said, while raw SVG suffers in that respect from
             | the verbosity of the format (being XML-based and designed
             | to be human-readable and editable as text), it
             | would be unfair to compare, for the purpose of HTTP
             | transmission, the size of the raster format image (heavily
             | compressed) with the size of the SVG file (uncompressed) as
             | one would if it were for desktop use. SVG tends to lend
             | itself very well to compressed transmission, even with
             | high-performance compression algorithms like brotli (which
             | is supported by all relevant browsers and lots of HTTP
             | servers), and you can use pre-compressed files (e.g. for
             | nginx with the module ngx_brotli) so that the server
             | doesn't have to handle compression ad hoc.
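             | 
             | A rough sketch of the pre-compression step, assuming the
             | Python "brotli" package and a server configured to serve
             | ready-made .br files (e.g. ngx_brotli's brotli_static):
             | 
             |     # Compress SVGs once at deploy time at brotli's
             |     # maximum quality, so the server never compresses
             |     # on the fly. Assumes: pip install brotli
             |     import brotli
             |     from pathlib import Path
             | 
             |     for svg in Path("static").rglob("*.svg"):
             |         data = svg.read_bytes()
             |         out = brotli.compress(data, quality=11)
             |         svg.with_name(svg.name + ".br").write_bytes(out)
             |         print(svg, len(data), "->", len(out), "bytes")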
        
           | ssernikk wrote:
           | I use it for fonts. My website [0] consists of about 15kB of
           | compressed HTML + CSS and 200kB of fonts.
           | 
           | [0] https://wyczawski.dev/
        
             | mousethatroared wrote:
             | Why do I care about fonts? Honestly, if my browser had an
             | option to skip loading fonts and use my defaults to save
             | load time, I'd choose that 19 times out of 20.
        
         | Brajeshwar wrote:
         | 512kB is pretty achievable for personal websites. My next
         | target is to stay within 99kB (100kB as the ceiling), which
         | should be pretty doable over a few weekends. My website is
         | currently in the Orange band on 512kb.club.
        
       | zelphirkalt wrote:
       | My plain HTML alone is 10kB and it is mostly text. I don't think
       | this is achievable for most sites, even the ones limiting
       | themselves to only CSS and HTML, like mine.
        
         | 3cats-in-a-coat wrote:
         | The 14kB budget is about your "plain HTML". If the rest is in
         | cache, then TCP slow-start concerns are irrelevant to it.
        
           | silon42 wrote:
           | You must also be careful not to trigger conditional requests
           | ("If-Modified-Since" or ETag revalidation checks), since
           | those still cost a round trip even on a cache hit.
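           | 
           | A minimal sketch of the usual fix, using Python's stdlib
           | server purely for illustration (real sites would set the
           | same header in their CDN or web server config):
           | 
           |     # Fingerprinted assets (style.abc123.css) never
           |     # change, so the browser may cache them for a year
           |     # and skip the revalidation round trip entirely.
           |     from http.server import (SimpleHTTPRequestHandler,
           |                              ThreadingHTTPServer)
           | 
           |     IMMUTABLE = "public, max-age=31536000, immutable"
           | 
           |     class CachingHandler(SimpleHTTPRequestHandler):
           |         def end_headers(self):
           |             if self.path.startswith("/assets/"):
           |                 self.send_header("Cache-Control",
           |                                  IMMUTABLE)
           |             super().end_headers()
           | 
           |     ThreadingHTTPServer(("", 8000),
           |                         CachingHandler).serve_forever()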
        
           | MrJohz wrote:
           | Depending on who's visiting your site and how often, the rest
           | probably isn't in cache though. If your site is a product
           | landing page or a small blog or something else that people
           | are rarely going to repeatedly visit, then it's probably best
           | to assume that all your assets will need to be downloaded
           | most of the time.
        
             | 3cats-in-a-coat wrote:
             | While it'd be fun to try, I doubt you can produce any page
             | at all that totals 14kB with assets, even back at the dawn
             | of the web in the 90s, aside from the spartan, minimal
             | academic pages some people keep, where loading faster is
             | completely irrelevant anyway.
        
               | MrJohz wrote:
               | The homepage for my blog is apparently 9.95kB, which
               | includes all styles, some JS, and the content. There is
               | an additional 22kB font file that breaks the rule, but
               | when I first designed the site I used built-in browser
               | fonts only, and it looked fine. There are no images on
               | the homepage apart from a couple of inlined SVG icons in
               | the footer.
               | 
               | Looking at the posts themselves, they vary in size but
               | the content/styles/JS probably average around 14kB.
               | You've also got the font file, but again a more minimal
               | site could strip that. Finally, each post has a cover
               | image that makes up the bulk of the content size. I don't
               | think you're ever going to get that under 14kB, but
               | they're also very easy to load asynchronously, and with a
               | CSS-rendered blur hash placeholder, you could have an
               | initial page load that looks fairly good where everything
               | not in the initial 14kB can be loaded later without
               | causing FOUCs/page layout shifts/etc.
               | 
               | For a magazine site or a marketing site, the 14kB thing
               | is almost certainly impossible, but for blogs or simple
               | marketing pages where the content is more text-based or
               | where there are minimal above-the-fold images, 14kB is
               | pretty viable.
               | 
               | For reference, my blog is https://jonathan-frere.com/,
               | and you can see a version of it from before I added the
               | custom fonts here: https://34db2c38.blog-8a1.pages.dev/
               | I don't think either of these versions qualifies as a
               | "spartan minimal academic page".
        
       | nottorp wrote:
       | So how bad is it when you add https?
        
       | xg15 wrote:
       | > _Also HTTPS requires two additional round trips before it can
       | do the first one -- which gets us up to 1836ms!_
       | 
       | Doesn't this sort of undo the entire point of the article?
       | 
       | If the idea was to serve the entire web page in the first
       | round trip, wouldn't you have lost that the moment TLS is used?
       | Not only does the TLS handshake send lots of stuff (including
       | the certificate) that will likely get you over the 14kb boundary
       | before you even get the chance to send a byte of your actual
       | content - but the handshake also includes multiple
       | request/response exchanges between client and server, so it
       | would require additional round trips even if it stayed below
       | the 14kb boundary.
       | 
       | So the article's advice only holds for unencrypted plain-TCP
       | connections, which no one would want to use today anymore.
       | 
       | The advice might be useful again if you use QUIC/HTTP3, because
       | QUIC replaces TCP and folds the TLS 1.3 handshake into its own
       | transport handshake. But then you'd first have to look up how
       | congestion control and bandwidth estimation work in HTTP/3 and
       | whether 14kb is still the right threshold.
        
         | toast0 wrote:
         | Modern TLS adds one round trip, unless you have TCP fast open
         | or 0-RTT resumption, neither of which is likely in a browser
         | case, so call it 1 extra round trip. Modern TLS includes TLS
         | 1.3 as well as TLS 1.2 with TLS False Start (RFC 7918, August
         | 2016).
         | 
         | And TLS handshakes aren't that big, even with certificates...
         | Although you do want to use ECC certs if you can, the keys are
         | much smaller. The client handshake should fit in 1-2 packets,
         | the server handshake should fit in 2-3 packets. But more
         | importantly, the client request can only be sent after
         | receiving the whole server handshake, so the congestion window
         | will be refreshed. You could probably calculate how much larger
         | the congestion window is likely to be, and give yourself a
         | larger allowance, since TLS will have expanded your congestion
         | window.
         | 
         | Otoh, the important concept is that early throughput is
         | limited by latency and congestion control, and it takes many
         | round trips to hit connection limits.
         | 
         | One way to apply that: if you double your page weight at the
         | same time as you add many more service locations and traffic
         | steering, you can see page load times stay about the same.
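         | 
         | A back-of-the-envelope sketch of that concept (assuming the
         | common initial window of 10 segments and classic slow start
         | that doubles the window each round trip; real stacks vary):
         | 
         |     # Rough model of bytes deliverable per round trip
         |     # under slow start. 10 segments and a 1460-byte MSS
         |     # are common defaults, not universal.
         |     MSS = 1460
         |     cwnd = 10  # segments
         |     total = 0
         |     for rtt in range(1, 6):
         |         total += cwnd * MSS
         |         print(f"RTT {rtt}: window {cwnd * MSS // 1024}kB,"
         |               f" cumulative {total // 1024}kB")
         |         cwnd *= 2  # slow start roughly doubles per RTT
         | 
         | The first flight carries ~14kB, hence the article's threshold;
         | by the time a TLS handshake completes, a round trip or two has
         | passed and the window is already larger.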
        
       | tomhow wrote:
       | Discussed at the time:
       | 
       |  _A 14kb page can load much faster than a 15kb page_ -
       | https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343
       | comments)
        
       | mikae1 wrote:
       | _> Once you lose the autoplaying videos, the popups, the cookies,
       | the cookie consent banners, the social network buttons, the
       | tracking scripts, javascript and css frameworks, and all the
       | other junk nobody likes -- you're probably there._
       | 
       | How about a single image? I suppose a lot of people (visitors
       | _and_ webmasters) like to have an image or two on the page.
        
         | coolspot wrote:
         | As long as your page doesn't block on that image, the page
         | will still render faster if the rest of it fits in 14kB.
        
       | tonymet wrote:
       | Software developers should be more aware of the media layer. I
       | appreciate the author's point about 3G/5G reliability and
       | latency. Radio almost always retries, and with most HTTP your
       | packets need to arrive in order.
       | 
       | A single REST request is only truly a single packet if the
       | request and response each fit in < 1400 bytes. Any more than
       | that and your "single" request is now multiple packets in each
       | direction. Any one of them may need a retry, and they all need
       | to arrive in order for the UI to update.
       | 
       | For practical experiments, try Chrome DevTools in 3G mode with
       | some packet loss, and you can see even "small" optimizations
       | improving UI responsiveness dramatically.
       | 
       | This is one of the most compelling reasons to make APIs and UIs
       | as small as possible.
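       | 
       | To make that concrete, here's a rough sketch of why multi-packet
       | responses suffer on lossy links (loss rate and per-packet
       | payload are illustrative assumptions, not measurements):
       | 
       |     # A response spanning n packets only renders after all
       |     # n arrive, and TCP delivers them in order, so
       |     # per-packet loss compounds with size.
       |     import math
       | 
       |     MSS = 1400   # assumed usable bytes per packet
       |     LOSS = 0.02  # assumed 2% per-packet loss
       | 
       |     for size_kb in (1, 14, 100, 500):
       |         n = math.ceil(size_kb * 1024 / MSS)
       |         p = (1 - LOSS) ** n
       |         print(f"{size_kb:>4}kB -> {n:>3} packets, "
       |               f"{p:6.1%} clean delivery")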
        
       ___________________________________________________________________
       (page generated 2025-07-19 23:01 UTC)