[HN Gopher] Will serving real HTML content make a website faster?
___________________________________________________________________
Will serving real HTML content make a website faster?
Author : tkadlec
Score : 113 points
Date : 2022-09-21 17:21 UTC (5 hours ago)
(HTM) web link (blog.webpagetest.org)
(TXT) w3m dump (blog.webpagetest.org)
| the__alchemist wrote:
| This page is serving me a recursive stream of Captchas, as
| pudgetsystems reviews the security of my connection; not a great
| look for the topic alluded to in the headline.
| 1vuio0pswjnm7 wrote:
| There could be a companion article: "Will Consuming Only Real
| HTML Content Make A Website Faster? Let's Experiment"
|
| I have run this "experiment" for many years now by (a)
| controlling DNS so that only the domain in the "address bar" URL
| is resolved^1 and (b) making HTTP requests using a TCP client
| and/or an unpopular nongraphical web browser that only processes
| HTML and does not auto-load resources. No images, JS, CSS, etc.
|
| The answer to the question is yes. This "makes a website faster",
| or, more specifically, as someone else in the thread has stated,
| it does not make the website slow. It does not accommodate the
| practices of "web developers" that slow a website down.
|
| But most importantly, IMO, it makes "website speed", not to
| mention appearance, more consistent across websites. Good luck
| achieving any semblance of that with a popular graphical web
| browser.
|
| Most web pages submitted to HN can be read this way. I find it
| easier to consume information without the distractions enabled by
| "modern" web browsers.
|
| 1. This is the only URL the www user is informed about. In the
| short history of the www so far, auto-loading from other domains,
| whether through HTML, JavaScript or otherwise, has unfortunately
| been abused to the point where allowing it produces more risk
| than convenience. Sadly, instead of deprecating the
| practices that have been abused and make websites slow, the new
| HTTP standards proposed by an advertising company and supported
| by CDNs cater to this practice of "composite" websites comprised
| of resources from various third parties. It stands to reason that
| advertisers and therefore "tech" companies and their providers,
| e.g., CDNs, stand to benefit more from "composite" websites than
| www users do. IMHO the easiest way to "make websites faster" is
| to stop enabling "web developers" to do the things that make them
| slow.
| jokoon wrote:
| It's really funny: I asked on Stack Exchange why websites
| are slower than apps, my question was removed for being opinion-
| based, and I got an answer about hydration.
|
| In my view, the dom should be made obsolete, and there should be
| tighter restrictions, by making things immutable, or just
| completely redesigning how the dom works.
|
| I'm not an expert, but the dom smells very weird.
| smm11 wrote:
| I'd been thinking that 5G is a thing only because the IoT is a
| thing. It had nothing to do with phones, but the build-out is
| funded by phones. So when it all settles down, phones will be as
| slow as they were in the 3G era, at best, what with so much stuff
| clamoring for data.
|
| Plain Jane HTML is going to save us.
| kmeisthax wrote:
| I remember when single-page applications were all the rage. I was
| highly skeptical that they could beat just loading HTML, given
| that the performance benefits were all predicated upon amortizing
| the initial load cost over many page requests. It's a very risky
| bet given that a lot of sites don't have a lot of repeat traffic
| to begin with, unless you just so happen to be an application in
| the guise of a website.
|
| Apparently my skepticism has been validated.
| jen729w wrote:
| As usual, it depends.
|
| I just tested my own site, which I built using Gatsby -- a JS
| framework -- and https://astro.build, whose entire schtick is
| that they deliver as little JS to the page as possible.
|
| (Because I'm thinking of rebuilding my site using Astro. But
| that's not relevant here.)
|
| In the default test, my page loaded in 1.6s and Astro in 1.9s.
| In the 'not bad' ratings below the main figures, my site fared
| better.
|
| Now that my page is loaded, Gatsby does some neat pre-loading
| on hover of links. So clicking around my site is literally
| instantaneous. The same is not true of Astro, where every click
| is a classic HTTP request.
|
| _I am not judging Astro._ That's not the point of this post.
| I'm no Gatsby fanboy; I think it's horribly over-complicated.
| I'm just saying. It's complicated.
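The prefetch-on-hover behavior described above can be sketched roughly as follows. This is a minimal illustration, not Gatsby's actual implementation; all names here are made up for the example.

```typescript
// Minimal sketch of prefetch-on-hover, in the spirit of what Gatsby's Link
// component does. Names are illustrative, not Gatsby's actual API.

const prefetched = new Set<string>();

// Pure helper: only prefetch same-origin URLs, and each one at most once.
// Kept separate from the DOM wiring so it is easy to test.
function shouldPrefetch(href: string, origin: string): boolean {
  if (prefetched.has(href)) return false;
  return new URL(href, origin).origin === origin;
}

// Browser-only wiring: on hover, inject a <link rel="prefetch"> so the next
// page's HTML is usually already cached by the time the click lands.
// `doc` is typed loosely so the pure helper stays testable outside a DOM.
function wirePrefetch(doc: any, origin: string): void {
  doc.addEventListener("mouseover", (e: any) => {
    const href = e.target?.closest?.("a[href]")?.getAttribute("href");
    if (!href || !shouldPrefetch(href, origin)) return;
    prefetched.add(href);
    const link = doc.createElement("link");
    link.rel = "prefetch";
    link.href = href;
    doc.head.appendChild(link);
  });
}
```

As the sibling comment notes, hover does not exist on touch devices, so a real implementation would also prefetch on touchstart or when links scroll into view.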
| P5fRxh5kUvp2th wrote:
| I bet it could be rewritten to feel instantaneous as well.
|
| it's a common misconception that SSR implies no XHR at all.
| That hasn't been true since IE5 introduced the technique for
| Outlook.
| aaaaaaaaaaab wrote:
| >Now that my page is loaded, Gatsby does some neat pre-
| loading on hover of links. So clicking around my site is
| literally instantaneous.
|
| *Except with touch-based interfaces. Which are the majority
| of browsers today.
| [deleted]
| can16358p wrote:
| Shouldn't we have more devices and more connection types to have
| a more controlled experiment?
|
| It's always 4G, mobile Chrome, and (I assume) the same device.
|
| Very likely same carrier at the same place, so roughly same
| connection conditions in terms of latency DL/UL bandwidth and
| jitter. Also always the same device with same CPU/GPU. Perhaps a
| flagship new shiny phone with a superfast SoC which gives a
| headstart to faster JS execution? Or perhaps a very spotty barely
| 1-bar 4G connection. (Just assumptions, maybe both are false, but
| you get the idea)
|
| I'm a big fan of client-side generation using JS too, but I don't
| think this experiment covers many practical scenarios.
|
| If we see more connection types and more variety of devices with
| different CPUs then it'd be more convincing.
| wubsitesgood wrote:
| Some of the tests in the article are run on Desktop Chrome
| using a "cable" connection speed instead of 4G, which looks to
| have about a 6x faster round trip time than their 4G does.
| Those results are a little less impactful but still significant
| (many seconds faster still in some metrics).
|
| More testing environments would make the results more or less
| significant, as you'd expect.
|
| In ideal browsing conditions, the impact will be more minor,
| and in the spotty barely 1-bar connection you mention, the
| difference would be much more dramatic than the 4G examples in
| the post.
| Veliladon wrote:
| 4G on a Moto is basically the worst case scenario but also how
| half the world interacts with the internet at large. If you're
| going to pick one scenario that describes a lot of users,
| they're pretty much dead on.
| epolanski wrote:
| I think 4G doesn't tell the whole story, especially in
| several businesses that target users in specific conditions
| (e.g. tourism, where your users have poor unstable 4g) or
| specific markets withpoor avera8ge mobile connections.
| giantrobot wrote:
| Measuring the speed of a page rendered on an iPhone 14 on an
| mmWave 5G connection a foot from an antenna is not a worthwhile
| test. If it takes 5 seconds for Twitter to load a tweet (which
| it does on my iPhone 12 Pro on WiFi) is that somehow better? A
| tweet, famously limited to 140 characters, takes _5 seconds to
| load_?
|
| A news article or tweet takes way too long to load on my phone;
| it's just ludicrous that on a mid-range phone and connection it
| would take 45 seconds! A copy of Frankenstein[0] (~78k words)
| weighs in at 463KB. A random CNN article or tweet is not a damn
| copy of Frankenstein. There's no reason either should take more
| than a second to load and render.
|
| An HTML document with a bare minimum CSS to not be ugly has
| enough information to render and be useful to a user. It can do
| that with a single request to a server. At minimum the same
| page rendered with JavaScript needs two connections to a
| server. It's also got a higher minimum threshold for displaying
| something to the user because the JavaScript needs to be
| downloaded, parsed, interpreted/JIT, then requests for useful
| resources made. All to do things a browser will already do for
| free.
|
| There are full JavaScript _applications_ that can't be built
| with just HTML and CSS. Of course those need to load and run
| the JavaScript. But a tweet or news article is not an
| application. Neither needs to load the equivalent of a copy of
| Doom to display a dozen paragraphs of text or just 140
| _characters of text_. The modern web's obsession with
| JavaScript everywhere is asinine.
|
| [0] https://www.gutenberg.org/ebooks/84
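The single-request model described above can be sketched in a few lines: the server turns data it already has into a complete, minimally styled HTML document, so one response is enough for the browser to paint something useful. The data shape and markup here are illustrative assumptions, not any site's actual code.

```typescript
// Sketch of server-side rendering a tweet-sized page in one response.
// All names and the markup are illustrative.

interface Tweet {
  author: string;
  text: string; // 280 chars nowadays, famously 140 historically
}

// Escape the handful of characters that are unsafe in HTML text/attributes.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderTweetPage(t: Tweet): string {
  // Bare-minimum CSS inlined: no extra request, nothing blocks rendering.
  return `<!doctype html>
<html>
<head>
<meta charset="utf-8">
<style>body{font:16px/1.5 sans-serif;max-width:40em;margin:2em auto}</style>
<title>${escapeHtml(t.author)}</title>
</head>
<body>
<article>
<h1>${escapeHtml(t.author)}</h1>
<p>${escapeHtml(t.text)}</p>
</article>
</body>
</html>`;
}

const page = renderTweetPage({ author: "mary", text: "It's alive!" });
// The whole document is well under a kilobyte -- nowhere near the hundreds
// of KB of script a typical client-rendered tweet ships before any content.
console.log(page.length);
```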
| vxNsr wrote:
| This basically just tells us what we already know: SSR is
| usually faster for the client.
| epolanski wrote:
| I don't think it necessarily is. You do get better LCP and FCP,
| but other metrics suffer (time to interactive and TTFB are
| primary examples).
|
| It's a compromise, and hydration is a huge performance hit. (I
| work on performance for an SSR e-commerce site.)
| Cyberdog wrote:
| "Time to interactive" and "time to first byte" are pointless
| numbers if the purpose of your site is to display content
| (Reddit, Twitter, pretty much everything else). If I
| (resentfully) click on a Reddit link on a SERP, I'm going
| there to read the content, not to flip open menus or
| whatever.
|
| "Time to human satisfaction" should be a number that front-
| end developers measure and aim to improve. Just rendering the
| content server-side and showing it to the user first, then
| adding on the bells and whistles after that, is how you do
| that.
| lmm wrote:
| > "Time to human satisfaction" should be a number that
| front-end developers measure and aim to improve. Just
| rendering the content server-side and showing it to the
| user first, then adding on the bells and whistles after
| that, is how you do that.
|
| Not necessarily. If you "load" the page but it doesn't do
| what it should when I click on it, that can be much more
| frustrating to the human than taking a little longer to
| load but being fully functional when you do. The assumption
| that anything that isn't HTML is "bells and whistles" is
| pretty dubious (as is the converse assumption that
| everything in the HTML is valuable).
| megaman821 wrote:
| The big flaw in this test is that it assumes the time to get
| the relevant page data from the database and render it to HTML
| is zero. If Twitter had your feed ready to go from its cache
| this might be accurate, but realistically I would give the
| server a few seconds to do its work since the site is so
| personalized.
| makapuf wrote:
| OK, but you could maybe render a static HTML frame and fill
| placeholders with pre-rendered HTML as soon as it is available?
| (A bit like htmx can do.)
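The frame-and-placeholder idea above can be sketched like this. The `{{name}}` placeholder syntax and the function name are assumptions made for illustration; htmx's actual mechanism (swapping fragments into elements by attribute) differs in detail.

```typescript
// Sketch of serving a static HTML frame and splicing in pre-rendered HTML
// fragments as they become available. The {{name}} placeholder syntax is an
// illustrative assumption, not htmx's actual mechanism.

function fillFrame(frame: string, fragments: Record<string, string>): string {
  // Placeholders without a fragment yet are left in place, so the frame
  // can be filled incrementally as more fragments arrive.
  return frame.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in fragments ? fragments[name] : match
  );
}

const frame = "<main>{{feed}}</main><aside>{{trends}}</aside>";
// First fragment arrives: the feed renders while trends are still pending.
const partial = fillFrame(frame, { feed: "<ul><li>hello</li></ul>" });
// Second fragment arrives: the page is complete.
const complete = fillFrame(partial, { trends: "<ol><li>#html</li></ol>" });
```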
| wubsitesgood wrote:
| As the article says, the first example in the post does include
| the time that it takes to go out and fetch the static HTML that
| swaps in, and it added about a second to the server response of
| the experiment run that doesn't show up in the control run. For
| a big distributed site, a second may be more time than it would
| really take to put together a dynamic response.
|
| Even with that 1-second additional delay included though, the
| improvement in that first test is still large (over 8 seconds
| faster in those tests). If the experiment took a few seconds
| longer on the server, it still would be 5 or 6 seconds faster
| to render content than the control.
| megaman821 wrote:
| I agree that server rendered would be faster, just the
| article had presented the absolute best-case scenario for the
| server. That said there could be other tradeoffs at play.
| Maybe loading the JavaScript and requesting a small amount of
| JSON each page is faster loading the initial page and then
| scrolling 10 pages down is faster than the server rendering
| out each page and appending it to the end.
| thwarted wrote:
| A point of comparison should be to git.kernel.org, which loads
| and renders instantly (at least compared to all these other
| sites), contains a massive amount of _actual_ content per page,
| is highly cacheable on the server, and uses exactly zero
| javascript while remaining usable (for its use case at least,
| which is all links and little form interaction (only the search
| box)).
| klysm wrote:
| The UI is not that functional compared to other git front ends.
| NohatCoder wrote:
| Time for a hot take:
|
| You don't need to make your website fast, all you have to do is
| not make it slow in the first place.
|
| Partially or fully generating a web site client side can be
| plenty fast, the slowness tends to come from using some bloated
| framework to do so.
| bachmeier wrote:
| > You don't need to make your website fast, all you have to do
| is not make it slow in the first place.
|
| This requires testing. Which is, apparently, something most
| companies don't know how to do correctly.
| zozbot234 wrote:
| Newer frameworks like Svelte or SolidJS are a lot less bloated
| on the client. Though we're still far from minimizing the
| number of network roundtrips involved in a SPA update, so
| there's plenty of room for improvement still.
| nicoburns wrote:
| In my experience, it's not even the framework that's the
| problem. Client-side rendered React is plenty fast for example.
| Not _as_ fast as server-rendered (or even better, static) HTML,
| but fast enough (measured in a few hundreds of ms) that you
| won't notice the difference. It's generally things like loading
| lots of 3rd-party scripts, or not paying attention to how many
| network roundtrips are required on the critical loading path,
| that make it slow.
| morelisp wrote:
| > a few hundreds of ms
|
| Gotta be honest, I'm grimacing already. An order of magnitude
| too much.
| imachine1980_ wrote:
| Or loading everything instead of lazy-loading the images, or
| using a 3 MB image instead of a few-KB WebP.
| P5fRxh5kUvp2th wrote:
| While this is technically true, it's always been technically
| true, even when those in the religion of the SPA claimed it was
| clearly faster to use CSR over SSR.
|
| This is just realigning what many of us already knew, SSR is
| faster.
|
| If I may, the argument of "well... sure, but CSR can be plenty
| fast!" is redrawing the line after someone stepped over it.
| epolanski wrote:
| > You don't need to make your website fast
|
| You absolutely do. Even setting aside UX and accessibility
| benefits, there's a huge SEO impact, since Core Web Vitals
| affect rankings a lot.
|
| If you are in a niche business or have no competition, then
| this argument is weaker of course, and you're left with the
| previous two.
| the__alchemist wrote:
| The commenter you're replying to would likely agree - the
| comment's point isn't about the value of speed, it's that by
| default pages are fast, and it takes deliberate (but
| ubiquitous) deviations from the rendered HTML to slow it
| down.
| epolanski wrote:
| My bad then.
| pragmatic wrote:
| Also, it doesn't matter how fast the front end is if it
| immediately throws up a spinner while it waits on the API.
___________________________________________________________________
(page generated 2022-09-21 23:00 UTC)