[HN Gopher] The Future (and the Past) of the Web Is Server Side ...
___________________________________________________________________
The Future (and the Past) of the Web Is Server Side Rendering
Author : lambtron
Score : 174 points
Date : 2023-02-03 17:30 UTC (5 hours ago)
(HTM) web link (deno.com)
(TXT) w3m dump (deno.com)
| gwbas1c wrote:
| Does anyone know the stats about what's being served?
|
| For things like blogs, server-side HTML with a sprinkle of
| client-side Javascript (or WASM) makes a lot of sense.
|
| But for applications, where you're doing, you know, work and
| stuff, in-browser HTML makes a lot more sense.
|
| The thing is, as a developer, most of the _work_ is in
| applications. (It's not like we need to keep writing new blog
| engines all the time.) Thus, even though most actual usage of a
| browser might be server-side HTML, most of our development time
| will be spent in in-browser HTML.
| bool3max wrote:
| "Server-side rendering" is destined to rule the future purely
| because of _control_. In the future consumer devices will be
| simplified, much more streamlined, and completely locked down.
| They will be used for the single purpose of displaying streamed,
| pre-packaged, pre-laid-out content from servers.
| rafaelturk wrote:
| Can someone explain to me: Deno is becoming such a confusing
| framework. Initially a NodeJS alternative, now it seems to me
| that it's trying to compete with NextJS?
| deniz-a wrote:
| It's not trying to compete with Next, but advertising how
| Deno's similarity to the browser, and its ability to run on
| CDN-like networks (which I refuse to call "the e*ge"), can let
| you build a better version of Next's features yourself.
| [deleted]
| rado wrote:
| npm install common-sense
| mikece wrote:
| Error: package cannot be found or is incompatible with your
| system.
| qbasic_forever wrote:
| We regret to inform you the common-sense package has been
| compromised and it has been removed from the ecosystem.
| [deleted]
| bartmika wrote:
| npm ERR! 404 Registry returned 404 for GET on
| https://registry.npmjs.org/left-pad
| anonymousDan wrote:
| Is SSR still much better for SEO?
| blobster wrote:
| Yes. There's a separate queue for sites that need js rendering
| and it eats much more into your crawl budget. Best way to avoid
| it imo is to use something like Rendertron, which is made and
| recommended by Google.
| taftster wrote:
| _Rendertron is deprecated
|
| Please note that this project is deprecated. Dynamic
| rendering is not a recommended approach and there are better
| approaches to rendering on the web.
|
| Rendertron will not be actively maintained at this point._
|
| https://github.com/GoogleChrome/rendertron
| somehnguy wrote:
| https://github.com/GoogleChrome/rendertron appears to be
| deprecated and no longer recommended by Google. They are now
| recommending basically what this article is about.
| ushercakes wrote:
| Yeah, big time. It's faster, so crawlers give you better scores
| for page speed, which is important. Secondly, it automatically
| renders all of your content, vs if you dynamically load
| content, the crawler may just see a page with a "Loading"
| element and never actually view the content itself.
|
| Google argues that it is able to handle javascript-heavy
| client-side code in its crawlers, but the data seems to show
| otherwise.
| linkjuice4all wrote:
| Perhaps the best method is a mix of static or SSR content for
| the content-heavy stuff that you want indexed and SPAs for
| the truly dynamic experiences. This is easier said than done
| but there's a good chance your marketing team is separated
| from "product" anyway. Marketing can continue to use
| WordPress or some other CMS with a static export or SSR and
| product gets the full app experience stuff.
|
| It's mentioned in other threads that SSR is more expensive as
| you scale - so you might as well make the "outside" layer of
| your site lightweight and static/SSR for fast client loading
| and then give them the full SPA once they've clicked through
| your landing pages.
| flippinburgers wrote:
| The "modern" state of the web. I miss old school html with little
| to no javascript. It is all java in the browser all over again.
| Or flash. Same old same old. Very few websites need any of this
| stuff. It is just a bunch of junior devs wishing they worked for
| FB I guess ergo them guzzling react like there is no tomorrow.
| [deleted]
| ChrisMarshallNY wrote:
| Anyone remember There? It's still around[0], but I don't know how
| active it is.
|
| I'm pretty sure it relied on in-browser XSLT to do a lot of its
| magic.
|
| [0] https://there.com
| blank_fan_pill wrote:
| IME the big gains nearly always come from how data is surfaced
| and cached from the storage layer.
|
| You may get some nominal gains from sending less JS or having the
| server render the html, but IME the vast majority of apps have
| much bigger wins to be had further down the stack.
| rbanffy wrote:
| My first contact with HTTP and HTML forms was an immediate
| throwback to my mainframe experience. The browser was like a
| supermodern 3270 terminal, getting screens from the server,
| sending data back, getting another screen and so on.
|
| There were a number of products that allowed a web app to
| maintain a 3270 connection to the mainframe and render the
| terminal screens as an HTML form. Fascinating stuff.
| trollied wrote:
| "Server side rendering" is such a terrible term. The server isn't
| doing rendering, the browser is. The server is sending a complete
| well-formed DOM for the client to render. Well done, modern devs!
| A plain .html file does that.
|
| I really hope some of the heavy front-end frameworks die a death,
| some common sense prevails, and we get a lighter, faster loading,
| more responsive web. I can dream.
| [deleted]
| JayStavis wrote:
| Point certainly taken but I think that "rendering" is the
| overloaded term.
|
| Rendering basically means, to take data & logic and transform
| it into a view for another system (or person).
|
| _Graphical_ rendering is probably the needed operative word
| for this point? A bit of annoying semantics but I think
| rendering just means to provide a structured view for some
| state.
| wg0 wrote:
| Basically, by those standards, nginx is the most popular server
| side renderer ATM. It can beautifully render HTML and pretty
| much any file format. It can even render video files, and with
| Nginx Plus you get a bit more server side rendering for video
| files too.
|
| Apache used to be a good server side renderer too but those
| were the old days.
| revskill wrote:
| I guess you're a backend dev watching all the frontend
| frameworks shine with jealous eyes. Keep watching :)
|
| Those heavyweight frameworks exist for a reason; they're not
| born out of thin air. It's about your use case - you don't
| need it for someone else's use case.
| altdataseller wrote:
| Trust me. Nobody is jealous of that stuff :)
| revskill wrote:
| Trust me, everything exists for a reason, to solve some
| (maybe specific) problem. It's nonsensical to want it to die
| for "no reason".
| altdataseller wrote:
| Not arguing they aren't useful. Just that no backend dev
| is ever jealous of what's happening with front end
| frameworks.
| habibur wrote:
| Another interesting question: are frontend developers
| jealous of what the backend developers are doing, or are
| they not?
| stcroixx wrote:
| I sure was when I had to do front end work. Finally got
| out of anything front-end for good and it's probably been
| the single most pleasant change in my career ever. I
| didn't start out doing front end work though, so I could
| see while I was doing it how ridiculous it was compared
| to almost any other domain in software dev and only
| getting worse. A good portion of front end devs I meet
| have not done anything else so they don't have a point of
| reference.
| namaria wrote:
| Just not always a good reason. You're not breaking ground
| here by having stumbled upon the concept of causality.
| cristianpascu wrote:
| "A plain .html file" - This made me chuckle. :)
| voytec wrote:
| Server can render HTML code. Browser renders visual
| representation of HTML code.
| cyral wrote:
| > Well done, modern devs! A plain .html file does that
|
| And then if you want to take that rendered data and do anything
| interactive with it you have some js soup of
| parseInt(document.querySelector(".item > .item__quantity"))
| all over the place. HN has some weird hate for this new server
| side rendering, when it's really the smart thing to do and
| equivalent to what any app is doing: the "frame" of the app is
| downloaded once (and we can send the initial data with it), and
| then it can become interactive from there. e.g. if the data
| needs to be reloaded we can make a small JSON request instead
| of reloading the whole page and re-rendering it.
| marcosdumay wrote:
| There is nothing wrong with
| parseInt(document.querySelector(".item >
| .item__quantity")), except for it not being
| parseInt(document.getElementById("uniqueAutogeneratedId")).
|
| Developers just shouldn't write that kind of fragile code by
| hand. But there's nothing wrong at all with the code being
| there.
| bmikaili wrote:
| That is literally fragile code. You contradict yourself. I
| really want to see any of the people who hate on modern
| frameworks build any complex web app in a reasonable amount
| of time with the same level of stability as using e.g.
| SvelteKit.
| marcosdumay wrote:
| As a rule, every code that people actually run is
| fragile. A small change could break anything, and there
| are almost no safety checks.
|
| Things only work because it's not people that create it.
| revskill wrote:
| Two way binding is the future ?
| unity1001 wrote:
| > And then if you want to take that rendered data and do
| anything interactive with it you have some js soup of
| parseInt(document.querySelector(".item > .item__quantity"))
| all over the place
|
| And that's what most of the web needs - few use cases require
| having to manipulate every bit of the dom to send constant
| updates to the end-user. Social networks, financial sites,
| banks, betting sites etc. The rest do not need these heavy
| frameworks and the extensive dom manipulating capability. The
| last thing you want in an ecommerce checkout process is to
| distract the user by manipulating the dom to give him
| 'updates'. So nobody does anything like updating the user
| with info like 'latest prices', 'your friend just bought
| this' etc right in the middle of the checkout process. Same
| goes for blogs, most of publishing.
| cyral wrote:
| I love nothing more than clicking "remove from cart" and
| having the whole page refresh and lose the info that was
| already typed
| unity1001 wrote:
| That can be sorted by a single jQuery or JS function. 2-3
| functions in that cart page handle everything without any
| complication whatsoever.
| tiagod wrote:
| I don't understand how jQuery and direct DOM manipulation
| is in any way better than something like Svelte for a
| modern Web app, especially something like a store.
| 10000truths wrote:
| > And then if you want to take that rendered data and do
| anything interactive with it you have some js soup of
| parseInt(document.querySelector(".item > .item__quantity"))
| all over the place.
|
| Nothing stops a dev from providing both a server-side render
| and an API endpoint, for those that don't want the JS soup.
| In fact, such a design is not uncommon, and it's fairly
| straightforward to write a backend interface that both the
| server-side rendered endpoint handler and the API endpoint
| handler can use.
|
| > HN has some weird hate for this new server side rendering,
| when it's really the smart thing to do and equivalent to what
| any app is doing: the "frame" of the app is downloaded once
| (and we can send the initial data with it), and then it can
| become interactive from there. e.g. if the data needs to be
| reloaded we can make a small JSON request instead of
| reloading the whole page and re-rendering it.
|
| The "smart" thing to do depends on what your requirements
| are. For minimal latency, server-side rendering tends to fare
| much better, as it requires only one round trip to fetch all
| the necessary information to render the page contents.
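The shared-backend pattern described above can be sketched roughly as follows; `getCart`, the handler names, and the data shape are hypothetical stand-ins for illustration, not a real framework API:

```javascript
// Shared data-access function; a stand-in for a real database call.
async function getCart(userId) {
  return [{ sku: "A-1", qty: 2 }];
}

// JSON API handler: returns the raw data for client-side rendering.
async function apiCartHandler(userId) {
  return JSON.stringify(await getCart(userId));
}

// Server-side rendered handler: same backend interface, rendered to HTML.
async function htmlCartHandler(userId) {
  const items = await getCart(userId);
  const rows = items.map((i) => `<li>${i.sku} x${i.qty}</li>`).join("");
  return `<ul>${rows}</ul>`;
}

apiCartHandler("u1").then(console.log);  // [{"sku":"A-1","qty":2}]
htmlCartHandler("u1").then(console.log); // <ul><li>A-1 x2</li></ul>
```

Both routes stay in sync automatically because neither duplicates the data-access logic.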
| umvi wrote:
| > Well done, modern devs! A plain .html file does that.
|
| Yes but a plain html file is static, so that's not going to
| work unless your site is purely static (i.e. a blog).
| rightbyte wrote:
| You can do both POST and GET. That is really all you need to
| make anything work, unless you are doing spyware or graphical
| applications such as maps and whatnot.
|
| Static pages are easier on Mother Earth too.
| umvi wrote:
| Yeah, but the point of "server-side rendering" is that you
| can just fill in the dynamic values server-side and serve
| plain html instead of needing a bunch of javascript and dom
| manipulation.
| shams93 wrote:
| Yeah, people demand this overkill without understanding what
| they are demanding. Everyone seems to use react so we must
| also use react; then the site no longer works for mobile, so
| then you also need react native. All when you can use vanilla
| js to do the small bits needed for a PWA from one simple
| codebase.
| nawgz wrote:
| > then the site no longer works for mobile
|
| What a comical misunderstanding, only to use it to demand
| people stop using frameworks. You seem to have said it best
|
| > people demand ... without understanding what they are
| demanding
| [deleted]
| [deleted]
| dylan604 wrote:
| I'm of two minds. I want to agree with you about a well-formed
| DOM for the browser to render. That's great. Now, do we have
| to go all the way back to flat files where the whole page has
| to refresh to update one silly field or selection? No, we
| don't have to go full cave man for that. We can still use the
| front end to make changes after the initial load. We don't
| need an app to be running in each user's browser for a large
| majority of places where this is happening.
| forgotmypw17 wrote:
| Yes, the best of both worlds.
|
| Fast-loading, complete, cacheable, archivable pages.
|
| And DOM changes for updating them without reloading the
| entire page.
| robertoandred wrote:
| How can a plain html file pull content from a database?
| xrd wrote:
| I've been using svelte for years and love it.
|
| I've been using sveltekit for years and still struggle with it.
|
| With sveltekit, I'm never really sure when to use prerender. I'm
| never sure how and where my code will run if I switch to another
| adapter.
|
| With pure svelte, my most ergonomic way of working is using a
| database like pocketbase or hasura 100% client side with my
| JavaScript, so the server is a static web server. It's got real
| time subscriptions, graphql so my client code resembles the shape
| of my server side data, and a great authentication story that
| isn't confusing middleware.
|
| I'm sure SSR is better for performance, but it always seems to
| require the use of tricky code that never works like I expect it
| to.
|
| Am I missing something?
| another_story wrote:
| In SvelteKit it's SSR on first load and then client-side
| rendering as the components on that page change, as far as I
| know. How SSR is done, not where it's done, depends on the
| adapter. It's always server-first unless you specifically opt
| out.
| EugeneOZ wrote:
| No. Because HTML is not the future of the Web.
| beej71 wrote:
| "But all that code is necessary to make our sites work the way we
| want."
|
| Yes, but, on the other hand, is it?
| amadeuspagel wrote:
| You can do HTML templating directly in JS using tagged template
| literals[1], and a library to deal with problems like XSS
| attacks[2].
|
| [1]: https://developer.mozilla.org/en-
| US/docs/Web/JavaScript/Refe...
|
| [2]: https://workers.tools/html/
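A minimal sketch of such a tag function, assuming a hand-rolled escaper rather than the workers.tools library linked above:

```javascript
// Escape the five HTML-significant characters in an interpolated value.
const escapeHtml = (value) =>
  String(value).replace(/[&<>"']/g, (ch) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[ch])
  );

// Tag function: static template parts pass through untouched;
// interpolated values are escaped before concatenation.
function html(strings, ...values) {
  return strings.reduce(
    (out, part, i) =>
      out + part + (i < values.length ? escapeHtml(values[i]) : ""),
    ""
  );
}

const userInput = '<script>alert("xss")</script>';
console.log(html`<p>Hello, ${userInput}!</p>`);
// → <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;!</p>
```

Only the interpolated values are escaped, so the template's own markup comes through intact.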
| mikece wrote:
| And the future beyond that will be client-side rendering. In the
| beginning everything was rendered on the mainframe; then CICS
| allowed partial screen updates and even dynamic green screen
| design. Then the early web where everything was server which made
| the job of web indexing much easier. Then we moved back to rich
| client apps -- applets, flash, eventually SPAs -- with no way for
| search engines to easily index things. A best of all worlds
| scenario is a rich UI that only needs to make API calls to update
| the display, keeping performance fast and content flicker-free
| (and the server-side API could have an agreed upon standard for
| being indexed -- or submitting updates for indexing -- to search
| engines).
|
| There is no truly perfect scheme, only ways in which we think we
| can improve on the status quo by swinging the pendulum back and
| forth.
| [deleted]
| OliverJones wrote:
| The client-server wheel of life just keeps turning, and
| turning, and turning. It's an eternal human truth: each
| generation yearns to improve on the previous generation's
| efforts.
|
| This server-client zeal to improve has been tremendously
| productive of good ideas over the last few decades. It will
| continue. Hopefully saving power and CO2 can be the focus of
| the next couple of turns of the great wheel.
| makmanalp wrote:
| Don't know why this comment was downvoted - it's the truest
| take here I can spot. The fact is that the factors that make
| one versus the other more preferable (the state and quality
| of frontend / backend tooling and environments, compute power
| and rendering capabilities of servers vs clients, round trip
| time cost vs responsiveness requirements etc etc) are
| continually changing over time and that's what's causing the
| back and forth swing, but lessons from previous iterations
| are generally learned.
|
| I wouldn't be shocked if we sooner or later saw language-
| level support (think of something like Elm, improved) for
| writing "just" code and then marking up which parts execute
| where, with the communication and state-synchronization cruft
| and the compilation down to the native language just handled.
| oakwhiz wrote:
| Extracting money and creating walled gardens, while removing
| user choice, is also a focus of this server side cycle.
| ErikAugust wrote:
| "eventually SPAs -- with no way for search engines to easily
| index things."
|
| It's funny Google can't index a SPA, given the tie to Angular
| (2500 apps in use in-house). Wouldn't be so hard to build
| something that could.
| mikece wrote:
| I would have thought they could spin up headless Chrome
| instances to simply pull down, render, and then index
| websites. Apparently this is too resource intensive for them?
| I'm sure the idea has come up (there's no way I thought of
| this and they didn't).
| xemoka wrote:
| You'd think right? There must be other reasons then... how
| does Google benefit from not building better SPA crawling
| infrastructure? It's certainly gotten _better_ over the last
| few years, but still seems lacking.
| npretto wrote:
| It does indeed render and index them:
| https://developers.google.com/search/docs/crawling-
| indexing/...
| csixty4 wrote:
| Googlebot has been able to index SPAs since 2019. They use a
| Headless Chrome instance and allow a number of seconds for
| things to render after each interaction.
| spiffytech wrote:
| With the caveat that server-generated HTML is indexed
| immediately, while pages that need client-side rendering
| get put into a render queue that takes Google a while to
| get to (days?).
| super256 wrote:
| That's why you write down your use case for every
| project. Have a news site which needs to be indexed by
| Google immediately? SSR. Have some Jira or whatever? CSR.
|
| Most CSR applications are behind a login wall anyway.
| Thinking of the core applications of services like
| WhatsApp, Discord, Gmail, Dropbox, Google Docs etc.
|
| Bottom line on whether SSR is really "the future": "it
| depends".
| capableweb wrote:
| Hence you don't build documents with SPAs, they are meant
| for applications. And usually you don't care about
| indexing the inside of applications, only the landing
| pages and such, which are documents (should not be a part
| of the SPA).
|
| A blog built as a SPA? Sucks. A blog built as a
| collection of documents? Awesome.
| [deleted]
| jhp123 wrote:
| > Performance is higher with the server because the HTML is
| already generated and ready to be displayed when the page is
| loaded.
|
| but the page is loaded later because you have to wait for the
| server to perform this work. There is no reduction in total work,
| probably an absolute increase because some logic is duplicated.
| If there is a speed improvement it is because the server has more
| clock cycles available than the client, but this is not always
| true.
|
| > Complexity is lower because the server does most of the work of
| generating the HTML so can often be implemented with a simpler
| and smaller codebase.
|
| Huh? It takes less code to build a string in a datacenter than it
| does in a browser?
| macspoofing wrote:
| >but the page is loaded later because you have to wait for the
| server to perform this work.
|
| Client-side rendering isn't immune to this. The server APIs
| they hit have to render the response in JSON after hitting the
| same kinds of backend resources (e.g. DB).
| clcaev wrote:
| > There is no reduction in total work, probably an absolute
| increase because some logic is duplicated
|
| The server is either building a JSON (or some other message
| format) response, or, it could just build the relevant HTML
| fragment. In many cases, there is no real increase in actual
| work on the server.
|
| Conversely, the client side doesn't need to parse JSON and
| convert it to a DOM fragment.
|
| There's solid reasons for both approaches, depending upon the
| context.
| [deleted]
| [deleted]
| IanCal wrote:
| > but the page is loaded later because you have to wait for the
| server to perform this work. There is no reduction in total
| work
|
| Removing or shortening round trips absolutely removes work.
| Sending you a page, letting you parse the JavaScript, execute
| it to find out the calls to make, sending that to the API, the
| API decoding it and pulling from the database, rendering the
| JSON and returning that, you parsing the JSON, executing the
| JavaScript and modifying the DOM
|
| Vs
|
| Pulling from the database and rendering the HTML, sending it
| to you to render
|
| Seems like the latter has less total work.
| jhp123 wrote:
| yes, reducing round trips is very important for web
| performance. It can be done via a server-side architecture
| where external resources are sent immediately as prefetch
| headers, then the page is generated and sent after database
| calls etc are made. Or via a client-side architecture where
| API calls needed for initial render are either sent via
| prefetch headers, or included inline in the HTML response.
|
| If you don't need page interactivity then a pure server-side
| approach works best because you do not need to send, parse,
| or execute any page logic. For highly interactive pages you
| tend to need all the logic to rerender each component on the
| frontend anyway, so client-side rendering makes sense as a
| simpler approach without significant performance costs.
| Isomorphic approaches are more complex and brittle, they tend
| to hurt time to full page interactivity because of duplicated
| work, but can be needed for SEO. Reducing overall page weight
| and complexity and lazy-loading where possible, and getting
| rid of the damn tracking pixels and assorted third-party
| gunk, are often more effective directions for optimization
| than worrying about where HTML is generated.
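The inline-data variant described above can be sketched as follows; `renderPage`, the `window.__INITIAL_DATA__` global, and the payload shape are illustrative assumptions, not a standard API:

```javascript
// Build the full HTML response with the initial payload inlined,
// so the first render needs no follow-up API round trip.
function renderPage(initialData) {
  // Escape "<" so user data cannot close the script tag early.
  const json = JSON.stringify(initialData).replace(/</g, "\\u003c");
  return [
    "<!doctype html>",
    '<div id="app"></div>',
    // The client reads this global instead of fetching on load:
    `<script>window.__INITIAL_DATA__ = ${json};</script>`,
    '<script src="/app.js"></script>',
  ].join("\n");
}

const page = renderPage({ cart: [{ sku: "A-1", qty: 2 }] });
console.log(page.includes("window.__INITIAL_DATA__")); // true
```

The client-side code can hydrate from the global immediately and only hit the API for subsequent updates.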
| HideousKojima wrote:
| >but the page is loaded later because you have to wait for the
| server to perform this work.
|
| What is caching?
| nicoburns wrote:
| Caching also works for client side rendering of course (you
| can usually cache the entire client side app so that the
| browser doesn't have to hit the network at all to start
| running client side code).
| runako wrote:
| > you can usually cache the entire client side app so that
| the browser doesn't have to hit the network at all to start
| running client side code
|
| This is also true for Web apps that do not have meaningful
| amounts of client-side code.
|
| > Caching also works for client side rendering
|
| There are obviously a lot of differences in how caching
| works, but client-side caching is generally strictly worse
| than doing so on the server. Using the e-commerce example
| in TFA, every browser has to maintain their own cache of
| the product information, which may include cache-busting
| things like prices, promotional blurbs, etc.
|
| The server can maintain a single cache for all users, and
| can pre-warm the cache before any users ever see the page.
| Adding fragment caching, which allows parts of a page to be
| cached, a server-side caching strategy will typically
| result in less total work being done across the user
| population, as well as less work done at request time for
| each visitor.
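The server-side fragment cache described above can be sketched as a single in-memory map with a TTL, shared by all requests; `cachedFragment` and `renderProduct` are hypothetical names for illustration:

```javascript
// One cache shared by all requests, unlike per-browser caches.
const cache = new Map();

function cachedFragment(key, ttlMs, render) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.html; // cache hit
  const html = render(); // cache miss: render once, store for everyone
  cache.set(key, { html, expires: Date.now() + ttlMs });
  return html;
}

// Hypothetical fragment renderer, counting how often it actually runs.
let renders = 0;
const renderProduct = () => {
  renders += 1;
  return "<li>Widget - $9.99</li>";
};

cachedFragment("product:42", 60000, renderProduct);
cachedFragment("product:42", 60000, renderProduct);
console.log(renders); // 1: the second request hit the shared cache
```

Pre-warming is the same call made before any user requests the page, and a short TTL handles cache-busting fields like prices.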
| nicoburns wrote:
| As with SSR vs CSR in general, I think which is best
| depends on how much interactivity there is on the page.
| And also how much can be done entirely on the client side
| (it is possible to cache data client side too and make
| the app work entirely offline).
|
| As an extreme example, something like
| https://www.photopea.com/ would be a nightmare to use if
| it was server-side rendered. Or consider something like
| Google Maps. For things like ecommerce that are mainly
| focussed on presenting information I agree that client
| side rendering doesn't make a whole lot of sense. But
| that isn't the whole web.
| runako wrote:
| > how much interactivity there is on the page
|
| Yes, and also how much interactivity is better served by
| a thick browser-based client than by a round-trip to the
| datacenter. In practice, many Web applications we
| encounter daily have relatively low interactivity (where
| something like Google Maps or the Spotify Web player
| score as "high"). And then they are implemented using
| thick frameworks that are frequently slower than a round-
| trip to a server for re-rendering the entire page was
| even as far back as 10 or 20 years ago.
|
| Your extreme examples, plus applications like Figma, are
| absolutely places where I would expect to see thick
| client-side Javascript. However, most Web applications
| that we encounter frequently are more like e-commerce,
| blogs, recipe websites, brochureware sites, landing pages
| and the like that absolutely are primarily about
| presenting information. Using thick browser clients is a
| sub-optimization for most of those Web uses.
| nicoburns wrote:
| > However, most Web applications that we encounter
| frequently are more like e-commerce, blogs, recipe
| websites, brochureware sites, landing pages and the like
| that absolutely are primarily about presenting
| information. Using thick browser clients is a sub-
| optimization for most of those Web uses.
|
| I mean sure (although I'd probably make a distinction in
| the terminology and call those websites as opposed to web
| applications). I don't see many of those kind of websites
| using client side rendering though. I think the grey area
| is sites like Gmail which do have quite a bit of
| interactivity but would also be workable with SSR.
| Personally I think they're generally better using CSR. If
| done badly as the current gmail is then it makes things
| slow, but if done well (like the older gmail!) then it's
| faster.
| runako wrote:
| > no reduction in total work
|
| This does not 100% track with observed client-side performance.
| Another poster mentioned caching, which obviously reduces total
| work. I would also add shifting the work via pre-computation as
| another commonplace way to improve performance.
|
| > It takes less code to build a string in a datacenter than it
| does in a browser?
|
| The string build in a datacenter might be happening in a
| warmed-up JIT of some language, on a machine with enough
| capacity to do this effectively. By contrast, the browser is
| possibly the slowest CPU under the most outside constraints
| (throttling due to power, low RAM, multitasking, etc.). It is
| generally going to be better to do the work in the datacenter
| if possible.
| seydor wrote:
| I think it's a bit ridiculous to call it "server side rendering".
| It is called HTTP
| [deleted]
| _visgean wrote:
| Depends on what exactly it is. If you for example take a react
| app that was doing rendering on user side and change it so that
| it is "pre-rendered" on the server it makes sense to call it
| server side rendering..
| superkuh wrote:
| It is ridiculous. It's pretty much newspeak. Like calling
| installing applications "sideloading" when you're not using
| some megacorp's walled garden. Also, I'd say "HTML" not "HTTP".
| What's HTTP(/3) these days is not what HTTP(1.1) was in the
| past.
| rightbyte wrote:
| Ye ... I get flashbacks from coding jsp-pages in Java with
| FancyBeans.
| Existenceblinks wrote:
| It's called sending HTML from server.
| seydor wrote:
| aka a protocol for transferring hypertext from server
| Existenceblinks wrote:
| I get what you are saying; basically all the MIME types of
| the body fall under the grand scheme of server side rendering.
| Fine.
| runako wrote:
| In theory, the "modern" frontend frameworks could be useful for a
| subset of applications. In practice, they are wildly overused,
| largely (IMHO) because front-end developers have forgotten how to
| build without them.
|
| If I gave this as an example, people would say I'm being unfair
| to the front-end folks. But since Deno posted it, I think it's
| fair to say that it's overkill to use a front-end framework
| like React (mentioned as a comparator in TFA) to implement
| add-to-cart functionality on an e-commerce site. And that for
| users with slow
| browsers, slow/spotty Internet, etc., an architecture that uses a
| heavy front-end framework produces a worse overall experience
| than what most e-commerce sites were able to do in 1999.
|
| Edit: IMHO all of this is an artifact of mobile taking a front
| seat to the Web. So we end up with less-than-optimal Web
| experiences due to overuse of front-end JS everywhere; otherwise
| shops would have to build separate backends for mobile and Web.
| This, because an optimal Web backend tends to produce display-
| ready HTML instead of JSON for a browser-based client application
| to prepare for display. Directly reusing a mobile backend for Web
| browsers is suboptimal for most sites.
| revskill wrote:
| There's always a "what if".
|
| What if you don't just stop at "adding an add-to-cart button"?
| runako wrote:
| Correct. The design tradeoff is dependent on knowing how much
| of a Lisp interpreter you need to build. For most sites, the
| answer is "none" and it's not worth degrading user
| experiences just in case your e-commerce site ends up needing
| the ability to also serve as a designer for Minecraft levels.
|
| (Even if it does, there is no requirement to ship the heavy
| JS needed for the Minecraft editor to all the e-commerce
| product description pages.)
| teaearlgraycold wrote:
| IMO the big value add from React and friends is _all_ of your
| rendering logic is in the same language and the same code base.
| I do not want to go back to templated HTML from Ruby
| /Java/PHP/whatever combined with ad hoc JS to handle whatever
| parts need to be dynamic. If you know your UI can be almost
| completely static (like with HN) then the trade-off from the
| old way is acceptable. But if you don't know where your site's
| going to go because you're a startup then it's hard to buy into
| old school SSR. NextJS, when done right, can be an acceptable
| 3rd option.
| [deleted]
| horsawlarway wrote:
| > And that for users with slow browsers, slow/spotty Internet,
| etc., an architecture that uses a heavy front-end framework
| produces a worse overall experience than what most e-commerce
| sites were able to do in 1999.
|
| I think this is heavily dependent on company focus (and to some
| extent - the data requirements of the experience)
|
| Basically - I think you can create a much stronger, more
| compelling experience on a site for a person with a bad/slow
| connection with judicious use of service workers and a solid
| front-end framework.
|
| But on the flip side... Making that experience isn't trivial,
| requires up front planning, and most companies won't do it.
| devjab wrote:
| > In practice, they are wildly overused, largely (IMHO) because
| front-end developers have forgotten how to build without them.
|
| I've been a "back-end" developer who sometimes does "front-end"
| stuff for a long time - both with web tech going back to classic
| ASP, WebForms, and those Java beans for JSF or whatever it was
| called, and with various GUI tools for C#, Java, and Python. I
| think one of the reasons people use the "front-end" tools you're
| talking about in 2023 is that all those other tools really
| sucked.
|
| I guess NextJS can also do server-side rendering, but even when
| you just use it for React (with TypeScript and organisation-
| wide linting and formatting rules that can't be circumvented)
| it's just sooooo much easier than what came before it.
|
| Really, can you think of a nice non-web application? Maybe it's
| because I've mostly worked in Enterprise organisations, but my
| oh my am I happy that I didn't have to work with any of the
| things people who aren't in digitalisation have to put up with.
| I think Excel is about the only non-web-based application that
| I've ever seen score well when they've been ranked. So there is
| also that to keep in mind.
| 411111111111111 wrote:
| I started my webdev time with Django and Flask, switched to
| Spring Boot at the next job with various templating languages
| depending on the artefact, and some Laravel sprinkled in.
|
| Finally, the employer decreed that moving forward all frontends
| had to be done in Angular (version 6 or 7 at that time) and I
| have to say... I don't understand the point you're trying to
| make.
|
| The frontend stacks aren't particularly more complex than the
| equivalent application done with HTML templates and varying
| ways to update the DOM.
|
| Personally I'd say they're easier, which is why UX also started
| to demand that state changes be animated, that requests be
| automatically retried, and that every potential error scenario
| be handled - things that were never even attempted with pure
| backend websites.
|
| Nowadays I prefer using TypeScript for anything HTML-related,
| and would not use backend templates unless the website is not
| going to be interactive.
| runako wrote:
| > I don't understand the point you're trying to make.
|
| > The frontend stacks aren't particularly more complex
|
| I'm not making a point about programmer experience at all.
| I'm saying that for most uses of most sites, the fact that
| Angular (or similar) is running in the user's browser is
| making the user experience worse. Performance is worse,
| accessibility can be worse, and so forth. And (again, for
| most uses of most sites) there is no benefit to the end user.
|
| Consider the blogs, brochureware sites, landing pages, and
| e-commerce product pages that absolutely don't need something
| like Angular, yet today include it nonetheless. Most Web apps
| are much closer to those than to Google Earth, Facebook, or
| Spotify's Web player.
| bigmattystyles wrote:
| If you don't do server-side rendering, don't you (almost)
| automatically get a set of nice REST endpoints that return
| JSON/XML/etc.? I get that the abstraction might be nice for
| security, but at least for corporate intranet applications, a
| nicely structured, secured (e.g. OData) web API you query for
| client-side rendering has the added benefit that it can be
| invoked programmatically over REST by other authorized parties.
| Obviously you want the standard DDoS and security protections,
| but this fact alone has turned me off server-side rendering
| alone. Isn't it also nice from a computation-cost standpoint to
| let the client do the rendering? I suppose UX could suffer, and
| for external-facing apps that is likely of the utmost
| importance. Happy to be educated if I'm unaware of something
| else.
| ttymck wrote:
| What difference does it make, with respect to security, whether
| the server returns html or json that needs to be formatted into
| html?
|
| The computation for rendering (in every case I've seen, and,
| I'd speculate, in 80% of cases ever) is trivial compared to the
| actual retrieval of the data to be rendered.
| zelphirkalt wrote:
| > Isn't it also nice from a computation cost standpoint to let
| the client do the rendering?
|
| Aside from all other implications, letting each client render
| the same stuff is a massive waste of energy and compute.
| guhidalg wrote:
| No? Each client may be receiving the same document, but based
| on their device, viewport, preferences, etc., the rendered
| result may be different.
|
| Either way, the Joules/page figure is likely to be such an
| astronomically small number compared to the constant cost of
| simply having a server at all, IMO.
| mattnewton wrote:
| If it was truly the same computation then it could be a
| static site generator, but with typical server side rendering
| you are still doing a new render per user no?
| TrispusAttucks wrote:
| Depends, but most server-side frameworks cache render outputs
| at varying granularity, so you can cache components that are
| static.
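The fragment caching described above can be sketched in a few lines of TypeScript (a hypothetical `renderCached` helper, not any particular framework's API): render a static component once per cache key and serve the stored HTML afterwards.

```typescript
// Hypothetical fragment cache: render a static component once,
// then serve the stored HTML on every subsequent request.
const fragmentCache = new Map<string, string>();

function renderCached(key: string, render: () => string): string {
  const hit = fragmentCache.get(key);
  if (hit !== undefined) return hit; // cache hit: skip rendering
  const html = render();
  fragmentCache.set(key, html);
  return html;
}

// The header never changes, so it renders exactly once.
let renders = 0;
function header(): string {
  renders++;
  return "<header><h1>Shop</h1></header>";
}

const first = renderCached("header", header);
const second = renderCached("header", header);
```

In a real framework the cache key would also encode things like locale or user role, and entries would be invalidated when the underlying data changes.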
| the_gastropod wrote:
| As an example, Ruby on Rails makes this relatively trivial
| using respond_to
| https://apidock.com/rails/ActionController/MimeResponds/resp...
| TrispusAttucks wrote:
| Maybe I misunderstand, but it's pretty trivial to render
| different server-side output based on context, and it's common
| in most frameworks.
|
| Just pass a request header asking for JSON content; the server
| then returns the data in JSON format as opposed to HTML.
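The header-based approach can be sketched like this in plain TypeScript (the `respond` helper and data shapes are made up for illustration, not any framework's API): one data source, and the Accept header picks the representation.

```typescript
// One data source, two representations: inspect the Accept header
// and return either JSON for API clients or HTML for browsers.
type Comment = { author: string; body: string };

function renderHtml(c: Comment): string {
  return `<article><b>${c.author}</b><p>${c.body}</p></article>`;
}

function respond(
  accept: string,
  c: Comment,
): { contentType: string; body: string } {
  if (accept.includes("application/json")) {
    return { contentType: "application/json", body: JSON.stringify(c) };
  }
  return { contentType: "text/html", body: renderHtml(c) };
}
```

A real server would read `accept` from the incoming request's headers and set the `Content-Type` response header from the returned value.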
| brap wrote:
| >ctrl+f "State"
|
| >ctrl+f "Effect"
|
| >0 results
|
| I only skimmed through the post, but seems like it's ignoring the
| main reasons why CSR is needed?
| Glench wrote:
| Shoutout to Sveltekit which does SSR and client-side navigation
| by default! https://kit.svelte.dev/
| advisedwang wrote:
| This is why I'm really excited about htmx [1]. No need to write
| isomorphic javascript at all. You can still use server side
| templates but have interactive web pages.
|
| [1] https://htmx.org/
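The basic shape of the htmx approach can be sketched as two server-rendered strings (route names here are hypothetical; in a real app these would be two routes in whatever server framework you use): the full page carries `hx-` attributes, and a second endpoint returns just the fragment that htmx swaps into the DOM.

```typescript
// Full page: a button with htmx attributes. Clicking it makes
// htmx GET /fragments/news and swap the response into #news.
function renderPage(): string {
  return `<html><body>
  <button hx-get="/fragments/news" hx-target="#news">Load news</button>
  <div id="news"></div>
  <script src="https://unpkg.com/htmx.org"></script>
</body></html>`;
}

// Fragment endpoint: returns a chunk of HTML, not JSON -- the
// server template does the rendering, htmx does the swap.
function renderNewsFragment(items: string[]): string {
  return `<ul>${items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
}
```

No hand-written client-side JavaScript is needed; the `hx-get`/`hx-target` attributes declare the interaction.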
| silver-arrow wrote:
| It really is so terrific. After using it for over a year, I
| agree with the creators of htmx when they say that this is how
| web development would have been if HTML as hypermedia had been
| continually improved all these years.
|
| When you start using htmx, you raise your eyebrows and think -
| hmmm, this could be something interesting. When you use it for
| many months, you then open your eyes very wide and think - this
| is something special! In hindsight it's so damn obvious; why
| didn't it happen much earlier?!
| Existenceblinks wrote:
| It's a banger for docs-type sites. All HTML partial pieces are
| pre-generated and put on a CDN - sort of like how search
| utilizes an index.
| [deleted]
| seti0Cha wrote:
| I just started playing around with it after fumbling around
| with Vue for a bit. I really like that there is so much less
| magic involved, no getting lost in a twisty maze of proxies. A
| real breath of fresh air. But then I haven't done real frontend
| development since JSPs were hot, so I'm not sure my liking it
| is a good thing.
| ChewFarceSkunk wrote:
| [dead]
| szastamasta wrote:
| I think the biggest issue with page size is not due to client-
| side rendering, but rather to bundling and the idea that you
| need to download the same minified Lodash on each and every
| page. Why we can't just use public CDNs is beyond my
| understanding.
|
| I really like client-side apps. They are so much more
| responsive. The only problem is with bundle sizes.
| IanCal wrote:
| > Why can't we just use public CDNs is beyond my understanding.
|
| Privacy issues, IIRC. I think the setup can be used to
| introduce tracking/reveal information about your history.
| afandian wrote:
| It could have worked with a trusted, open broker. I'm sure
| there could be a compatible sustainability model. I feel like
| a lot of potential trust was broken by the likes of Facebook
| and Google.
| IanCal wrote:
| I'm not sure. I think the issue was not the provider but
| that if you visited pages it was possible for that page to
| gain information about your history based on whether
| resources were cached.
| pictur wrote:
| It is tragicomic that the slogan "We should do things on the
| server instead of the browser" is becoming popular again as
| browser technologies evolve.
| dragonelite wrote:
| And the cycle starts anew...
| gfodor wrote:
| If you're interested in server side rendered multiplayer 3D
| worlds, my project webspaces[1] lets you render HTML and get a 3D
| World.
|
| [1] https://webspaces.space
| majestic5762 wrote:
| Very cool!
| w4eg324g wrote:
| Idk, I'm not much into web development, but isn't SSR much more
| expensive? I just move all the processing/calculation to my
| server side instead of the clients'. This means that for a
| business with many clients, I have to pay for the work that the
| clients themselves could have done instead...
| sharemywin wrote:
| Not to mention that if you send the whole app down in the form
| of a SPA and they only use one or two pages, that's a lot of
| overhead.
| ttymck wrote:
| How much more expensive is it? 10%? 150%?
|
| How much is the additional compute cost? $10 a month? $1000 a
| month?
|
| How much more productive are your developers? Does it offset
| the cost?
| ketralnis wrote:
| Is it though? Is composing a response into a GraphQL or JSON or
| XML format that much more expensive than into an HTML format?
| Is {"comment": {"body": "lol"}}
|
| notably more expensive than <Comment body="lol"/>?
| cafebean wrote:
| Web page design will have to fundamentally change to
| accommodate surgically updating web pages, for the large
| overhead to disappear.
| TrispusAttucks wrote:
| It already exists for several popular frameworks/languages.
|
| One example is [1] Laravel livewire.
|
| [1] https://laravel-livewire.com/
| w4eg324g wrote:
| I assume a modern webpage has some logic to it which, besides
| the rendering, also needs to be processed, and if you apply
| that at a scale of billions x years, I guess yes. But as I
| said, I'm not an expert in the field, nor have I any numbers.
| It's just what I thought.
|
| Edit:
|
| I'm thinking of having to serve a state only once and having
| each action processed on the client side, instead of making a
| call to the backend for each action, which then has to return
| a fully rendered page.
|
| Maybe I have a misconception going on here!
| cmoski wrote:
| The server doesn't [have to] return the entire page on a
| change. Return a small chunk of HTML or a small chunk of JSON.
| There will be a small cost to do the HTML on the server, but
| there is also a cost to do the HTML on the client: sending
| them 50000000000kb of JavaScript initially.
| theappsecguy wrote:
| Compute power is absurdly cheap these days, especially for
| something as simple as SSR of web pages
| tlarkworthy wrote:
| It's obviously nonsense. The lowest-latency cache and state
| storage is client-side. You can piss around with multi-region
| SSR to minimize latency, but that's just placing a lot of
| regional caches _near_ your users. The nearest place is their
| actual browser -> offline-first is the future.
| [deleted]
| timw4mail wrote:
| Client-side storage sucks, especially if you want to visit the
| same site from multiple devices. Without more code to sync this
| data, it's not great.
| tlarkworthy wrote:
| Being dependent on a reliable internet connection sucks,
| especially when travelling. SSR just won't work for mobile.
| With offline-first, the client is the lowest-latency server
| possible. Yes, you should sync too. Offline first, not offline
| only.
| [deleted]
| MentallyRetired wrote:
| It depends on what you're building. Choose the best tool for the
| job. Every time. Don't just default to your favorite.
| pookha wrote:
| WASM-side rendering. :)
| The_Colonel wrote:
| Getting shivers, sounds like reincarnation of Flash websites.
| thorncorona wrote:
| Still haven't replaced half the functionality of flash
| websites. So many flash games are gone forever.
| silver-arrow wrote:
| I much prefer the htmx way of server-side rendering parts of
| the page dynamically. It's also totally server-side agnostic,
| so we can use what we prefer - Clojure in our case.
|
| https://htmx.org/
| BizarreByte wrote:
| HTMX has to be my favourite web-related thing. I never really
| got React and always found it a bear to set up and use, but
| HTMX and server-side rendering? Easy and extremely productive
| for a non-frontend guy like me.
|
| I really hope it or something like it becomes popular long
| term.
| satyrnein wrote:
| Can the same server side code render that fragment, regardless
| of whether it's part of the initial page load or a subsequent
| update? You need an additional route for the Ajax call, right?
| Just curious how this gets structured.
| silver-arrow wrote:
| Yes it can. Right, your server-side routing would have those
| routes set-up. Naming them can get interesting :)
| [deleted]
| robertoandred wrote:
| So any change in the UI means sending a request to the server,
| waiting for it to render, and waiting for that response with
| the new markup?
| silver-arrow wrote:
| Yes indeed! The core aspect, however, is that your server is
| returning fragments of html that htmx places in the DOM
| extremely quickly. They have pretty good examples on their
| site illustrating some "modern" UI patterns.
|
| As an example, you may have an HTML table on your page into
| which you want to insert a new row on, let's say, a button
| click. You place some attributes that htmx understands on your
| button, and it will fetch the TR HTML chunk from the server.
| You can imagine replacing all the rows for a paging click,
| etc.
|
| Again, check out the examples for cool stuff.
| robertoandred wrote:
| Sounds pretty slow. At least a second before the UI
| responds to any action?
| BizarreByte wrote:
| It's much faster than that in practice, but of course it
| comes down to how well your backend is written. I've been
| using HTMX lately and I can blink and miss the UI
| updates.
|
| I wish I had numbers, but in my experience it's far
| better than you'd expect. Basically take the length of a
| REST call you'd have to make anyway and add a few
| milliseconds for the rendering.
|
| It won't be the right choice in all cases, but it's a
| great option in many.
| silver-arrow wrote:
| You for sure wouldn't want to build a spreadsheet type
| app with htmx because of that aspect. Many other types of
| web apps can benefit from the simplification of the
| architecture, however. And like my paging example, many
| times you need to go to the server anyway. But sure, like
| anything, I wouldn't use htmx for every situation.
| silver-arrow wrote:
| True, BizarreByte. I love how htmx lets you easily add
| animation indicators on actions, because many times it's too
| fast, with the small chunks coming back and getting placed in
| the DOM so quickly.
| dsego wrote:
| I built a tiny deno+htmx experiment, you can check it out at
| https://ssr-playground.deno.dev, it's all server-side
| rendered html with htmx attributes sprinkled around.
| mehphp wrote:
| Yes, this is how Elixir's Phoenix works. The caveat is that it
| only returns exactly what needs to be changed, so it's a small
| diff.
|
| It's not suitable for everything, but it works really well.
| I'm not advocating switching to it right now, but it's looking
| very promising.
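The "only send what changed" idea can be illustrated with a toy sketch (this is not Phoenix's actual wire protocol; all names are made up): a template is split into static parts and dynamic slots, the server diffs the slot values against the previous render, and only changed slots cross the wire.

```typescript
// Toy diff-based update: static template parts are cached on the
// client; only changed dynamic slot values are sent on update.
const statics = ["<p>Temp: ", " at ", "</p>"];

function renderFull(slots: string[]): string {
  return statics[0] + slots[0] + statics[1] + slots[1] + statics[2];
}

// Server side: compare new slot values against the previous ones.
function diffSlots(prev: string[], next: string[]): Record<number, string> {
  const patch: Record<number, string> = {};
  next.forEach((v, i) => {
    if (v !== prev[i]) patch[i] = v;
  });
  return patch;
}

// Client side: splice the patch into the cached slot values.
function applyPatch(slots: string[], patch: Record<number, string>): string[] {
  const out = [...slots];
  for (const [i, v] of Object.entries(patch)) out[Number(i)] = v;
  return out;
}
```

If only the temperature changes, the patch carries one short string rather than the whole page.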
| the_gastropod wrote:
| Versus the overwhelmingly common SPA counterexample, where any
| change in the UI means sending a request to the server,
| waiting for it to return your JSON response, parsing that JSON
| response, building HTML out of it, and updating the DOM.
| robertoandred wrote:
| You don't need to wait for a response if you're sending
| data, or reorganizing data, or doing something that doesn't
| rely on data.
| the_gastropod wrote:
| Yep. DOM manipulation can be done to server-rendered
| views to do that kind of thing, just fine. No SPA js
| framework required.
| robertoandred wrote:
| So now you've got two templates? How do you keep
| modifications in sync?
| greg_tb wrote:
| Completely agree.
|
| For me the biggest advantage is eliminating the need to learn,
| debug, and maintain components on an additional frontend
| framework (Angular/React/Vue).
|
| I just built a rough toy project [0] that was my first time
| with FastAPI and HTMX and it was fun and fast.
|
| [0] https://www.truebuy.com/ (like rotten tomatoes for product
| reviews but just for TVs right now)
| [deleted]
| recursive wrote:
| I love the concept, but for my case, I'd need more granular
| filters. In particular, I only buy TVs that have analog audio
| outputs so I can hook them up to any speakers. That's a
| minority of TVs these days, but there are still a few around.
| Finding the good ones would be useful to me.
| 0xdeadbeefbabe wrote:
| It's consumer reports but faster :)
| greg_tb wrote:
| Thanks. Speed and quality content is what I've been focused
| on. I was tired of Google search spam and Amazon review rot,
| so I built this to try to expose good/trusted/authentic
| content.
| lbriner wrote:
| The thing that went wrong with front-end frameworks, IMHO, is
| that instead of delivering what was promised - updating UI
| elements with NO NEED to contact the server at all, posting
| back only when something needed persisting - they became an
| excuse for every action on the front-end to call an API or
| three. So we've ended up with over-complicated apps that,
| instead of not relying on the backend, rely on it more than
| ever.
|
| Any little glitch, slowdown, or unavailability affects you not
| only once at page load but potentially with every single
| interaction. To make it worse, a lot of backend interactions
| are not made interactively or synchronously, where the user
| might expect to wait a little while; they are made in the
| background, causing all manner of edge cases that make apps
| anywhere from very slow to virtually unusable.
|
| I guess it's that old adage that people will make use of whatever
| you offer them, even if they go too far.
| [deleted]
| brundolf wrote:
| I love Deno, I hope it succeeds, but I'm disappointed to see them
| so confidently publishing a broad assertion like this that's very
| weakly argued, and heavily biased towards promoting their own
| position in the stack
|
| > Compatibility is higher with server-side rendering because,
| again, the HTML is generated on the server, so it is not
| dependent on the end browser.
|
| Excuse my bluntness, but this is complete nonsense. Browser
| incompatibility in 2023 is mostly limited, in my experience, to
| 1) dark corners of CSS behavior, and 2) newer, high-power
| features like WebRTC. #1 is going to be the same regardless of
| where your HTML is rendered, and if you're using #2, server-side
| rendering probably isn't an option for what you're trying to do
| anyway. I can confidently say browser compatibility has roughly
| _zero_ effect on core app logic or HTML generation today.
|
| > Complexity is lower because the server does most of the work of
| generating the HTML so can often be implemented with a simpler
| and smaller codebase.
|
| This, again, is totally hand-wavy and mostly nonsensical. It's
| entirely dependent on what kind of app, what kind of
| features/logic it has, etc. Server-rendering certain apps can
| definitely be simpler than client-rendering them! And the
| opposite can just as easily be true.
|
| > Performance is higher with the server because the HTML is
| already generated and ready to be displayed when the page is
| loaded.
|
| This is only partly true, and it's really the only partly-valid
| point. Modern statically-rendered front-ends will show you the
| initial content very quickly, and then will update quickly as you
| navigate, but there is a JS loading + hydration delay between
| seeing the landing page content and being able to interact with
| it at the beginning. You certainly don't need "a desktop...with a
| wired internet connection" for that part of the experience to be
| good, but I'm sure it's less than ideal for people with limited
| bandwidth. It's something that can be optimized and minimized in
| various ways (splitting code to make the landing page bundle
| smaller, reducing the number of nested components that need to be
| hydrated, etc), but it's a recurring challenge for sure.
|
| The tech being demonstrated here is interesting, but I wish
| they'd let it stand on its own instead of trying to make sweeping
| statements about the next "tock" of the web trend. As the senior
| dev trope goes, the answer to nearly everything is "it depends".
| It shows immaturity or bias to proclaim that the future is a
| single thing.
| [deleted]
| adeon wrote:
| +1 agreed.
|
| There are some pretty crappy bloated client-side apps but when
| it's done well and it is appropriate for the app in question,
| it's amazing.
|
| I've been playing with novelai.net's text generation, and I
| think their app is mostly client-side. It's one of the most
| responsive and fast UIs I've seen.
|
| Also, the article has this sentence: "Performant frameworks
| that care about user experience will send exactly what's needed
| to the client, and nothing more. " Ironically, a mostly client-
| side app that's only loaded once, cached, and is careful about
| when to request something from the server, might be more
| bandwidth friendly than a mostly server-side app.
| JohnFen wrote:
| If web sites have to be so dynamic, I much prefer that the
| computation involved is done on their machines rather than on
| mine. I simply don't trust random web sites enough to let them
| run code on my machines.
| rektide wrote:
| What is it you don't trust? This Fear, Uncertainty & Doubt
| clashes heavily with the excellent security sandbox the web
| browser is. What is the harm you are afraid of? What are you
| supposing the risk is / what's in jeopardy here?
| rightbyte wrote:
| JS allows for fingerprinting. I only run JS on an opt-in
| basis, on sites like my bank and some pages I trust. You don't
| miss much, really.
|
| https://amiunique.org/fp
| JohnFen wrote:
| Relying on sandboxes seems unwise to me. They're a useful
| backstop, but shouldn't be the primary defense. The primary
| defense is to minimize the exposure to risk in the first
| place.
|
| As to what harm I'm avoiding, it's mostly around tracking --
| which is something that browsers have a very difficult time
| preventing, especially if sites are allowed to run code in
| them.
| MrOwnPut wrote:
| So let's say a resume generator website, or a document
| converter, etc.
|
| You trust uploading your personal information to a server
| to generate the pdf/image/whatever vs doing it in solely in
| the browser?
|
| Doing more on the server would lead to more tracking, not
| less.
| JohnFen wrote:
| Well, I wouldn't use such a website anyway (especially a
| document converter -- that is better done using a real
| application), regardless of where the processing was
| done, unless I was very certain that the website was
| trustworthy. For one thing, even if the website purports
| to not move my data to their servers, how do I know
| they're being truthful without going to extremes such as
| sniffing traffic?
|
| There have been plenty of sites that have lied about such
| things.
| MrOwnPut wrote:
| You can swap out my examples for anything really.
|
| The point is the more work the server does, the more data
| you have to send them to do that work.
|
| As far as trusting it's client-side only, opening the
| network tab in devtools would suffice.
|
| If you think they broke the sandbox (Google would pay
| millions for that!), yes sniffing would be the next step.
|
| At least you have a sandbox on web, you usually don't
| have that for native apps.
|
| But that's all better than willingly sending data to
| another entity's server and trusting them to not
| abuse/leak it.
| JohnFen wrote:
| With a couple of necessary exceptions, I don't use
| websites to store or process personal data, so that's not
| really the use case I have in mind.
|
| What I have for native applications that I don't for the
| web is the ability to firewall off the native
| applications.
| tiagod wrote:
| > What I have for native applications that I don't for
| the web is the ability to firewall off the native
| applications.
|
| There you're placing trust in the firewall's sandbox. Are you
| sure the application can't communicate with the outside at
| all? DNS exfiltration, for example?
| gavinray wrote:
| The article seems to contradict itself:
|
| The first example shows the server rendering a handlebars
| template and then sending that as a response to the client --
| it's then stated that this "isn't true SSR"
|
| Then the same thing is done without a template language, using
| strings instead, and this is some different kind of SSR
| altogether and the "true SSR".
|
| Which also seems to insinuate that only JS/TS are capable of
| SSR?
|
| > Server-side rendering! Well, kinda. While it is rendered on
| > the server, this is non-interactive. This client.js file is
| > available to both the server and the client -- it is the
| > isomorphic JavaScript we need for true SSR. We're using the
| > render function within the server to render the HTML
| > initially, but then we're also using render within the client
| > to render updates.
| [deleted]
| brundolf wrote:
| I'm not sure it's a "contradiction" so much as a weird
| bending/re-branding of the term "server-side rendering". One of
| many issues with the article
| underbluewaters wrote:
| Mindshare will go towards rendering javascript components on the
| server since that's another complex problem that's fun to solve.
| That's good! We shouldn't have to give up the productivity gains
| of tools like React to improve time-to-interactive and other
| performance stats.
|
| That said... I'm not going to pretend it's an urgent need and
| will wait for these tools to mature.
| ArcaneMoose wrote:
| The issue I have with SSR is that it shifts processing onto
| the server. That means I have to pay more as the host, instead
| of relying on the user's browser to handle the compute "for
| free".
| superkuh wrote:
| I'm probably not in your target demographic but when a website
| pushes computation to me for simple things like displaying text
| and images I close the tab.
| ranger_danger wrote:
| So basically you only view plain text files in a browser?
| That sounds a little extreme to me.
| forgotpwd16 wrote:
| >when a website pushes computation to me for simple things
| like displaying text and images I close the tab
|
| How will you even know without looking at the source or
| blocking JS across the web? Sure, if they have fancy
| animations across all elements from the moment you open the
| page, it should be obvious. But what about something like
| https://rhodey.org/? It opens instantaneously on my ancient
| laptop connected to a terrible internet line. Check the
| source: only a single empty div in the body. Everything is
| rendered with JS.
| superkuh wrote:
| I block JS across the web by default. For some sites I'll
| learn the minimum set of domains to allow for temporary
| whitelisting, but most aren't worth the effort.
| ehutch79 wrote:
| And congrats on getting fired for refusing to use the
| company's internal tools. Not all web sites are brochureware.
| Sometimes the target demographic is a limited number of
| internal employees who open the app once and keep it open.
|
| Never mind that I don't know how you would display images
| server-side. Your client needs to decode that image and render
| it to the screen at some point.
| onion2k wrote:
| If a website is just showing text and images it shouldn't
| really be dynamically rendering anything anywhere. Write the
| content to static files during deployment and serve them.
| wizofaus wrote:
| Surely most of the compute cycles for turning a web page into
| pixels happen on the client anyway? I'm not convinced the
| server necessarily has to do massively more work to return HTML
| over JSON (though it would obviously depend on how the HTML-
| generation was coded. If you're trying to use the client-style
| page rendering techniques on the server, issuing API calls over
| the network and interpreting Javascript code, then you have a
| point).
|
| Edit: my "most" claim is probably too strong on reflection:
| while there's still a lot of work to do to convert an in-memory
| DOM into pixels, it's likely to be highly optimized code (some
| of it handled at the GPU level) that uses minimal compute
| cycles. And while the V8 engine may be similarly optimised, it
| still has to interpret and execute arbitrary JS code supplied
| to it, plus handle all the necessary sandboxing. It'd be
| interesting to get a breakdown of what compute cycles are used
| converting a typical SPA into pixels, and of course a
| comparison with how much time is spent waiting for data to come
| across the network.
| trashburger wrote:
| The problem with this idea is that the user's browser's compute
| is not "free". Offloading the computing means the users have a
| worse experience, which will affect your userbase and page
| rankings.
| ArcaneMoose wrote:
| That sort of depends. If the compute needs to happen
| regardless and now you add a layer of shipping data to and
| from the server, that could add even more latency and make
| for a worse experience.
| Havoc wrote:
| >Offloading the computing means the users have a worse
| experience
|
| Does it though? Loading a webpage barely registers in CPU
| usage, etc., on a reasonably modern device.
| kitsunesoba wrote:
| Depends on the complexity of the page. A lot of sites with
| heavier JS and SPAs eat a lot of memory which can cause
| problems for users of the many laptops out there with 4/8GB
| of RAM, as well as many smartphone users who have 2GB of
| RAM or less. In the case of the latter, visiting a heavy
| website can be enough to prompt the OS to kill some other app
| to make memory available, which means one's site is in direct
| competition with other things the user may need more than the
| site.
| Dalewyn wrote:
| That's a problem with the dingus that wrote all that
| JavaShit, not a problem with the end user's computer.
|
| We have 20+ core consumer-grade CPUs and double- to
| triple-digit gigabytes of RAM, and the internet at large
| (read: Web 2.0) still runs like it's the 1970s.
| jasonlotito wrote:
| The argument against this is the cost of a user's bandwidth.
| If I have all the computation done on the server side, then I
| have to wait for the round trip every single time just to
| download the results. In this case, the browser's compute is
| more free, as the cost to send a remote request is more than
| likely higher.
|
| Like most things, there is no simple right answer, and it
| depends on what you are doing. But blindly assuming the
| experience will be worse using CSR is as silly as assuming
| SSR will always be worse as well.
| dbbk wrote:
| So just cache it and then have it be processed once every
| minute or whatever.
| BackBlast wrote:
| > Performance is higher with the server because the HTML is
| already generated and ready to be displayed when the page is
| loaded.
|
| If you can reasonably cache the response, SSR wins, no question.
| For a dynamic first-page render, "it depends" -- either an SPA or
| SSR can win. For second and subsequent page renders, a well-built
| SPA wins.
|
| "It depends..." Server CPU cores are slower than consumer cores
| of similar eras. They run in energy-efficient bands because of
| data center power concerns. They are long-lived and therefore
| often old. They often run on segmented, shared infrastructure.
| And if the server is under load -- seldom an issue on the
| client's system -- you have that to deal with as well.
| Your latency for generating said page can easily be multi-second.
| As I've experienced on many a dynamic site.
|
| Using the client's system as a rendering system can reduce your
| overall cloud compute requirements allowing you to scale more
| easily and cheaply. The user's system can be made more responsive
| by not shipping an additional markup page for each navigation
| and by minimizing latency through avoiding network calls where
| reasonable.
|
| On dynamic pages, do you compress on the fly? This can increase
| latency for the response. If not, page weight suffers compared to
| static compressed assets such as a JS ball that can be highly
| compressed well ahead of time at brotli -11. I never run brotli
| -11 for in-flight compression; there it's brotli -0 or gzip -1.
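A quick sketch of the tradeoff described above, using Python's stdlib `gzip` as a stand-in (brotli isn't in the stdlib): maximum compression is much slower, so it suits ahead-of-time asset builds, while a low level suits on-the-fly responses.

```python
import gzip

# Stand-in for a JS bundle: repetitive text compresses well.
payload = b"function render(){}\n" * 5000

fast = gzip.compress(payload, compresslevel=1)  # cheap, fine on-the-fly
best = gzip.compress(payload, compresslevel=9)  # slow, do at build time

# Higher effort never produces a larger result here.
assert len(best) <= len(fast) <= len(payload)
```

The same shape holds for brotli: `-0`/`-1` for dynamic responses, `-11` only for static assets compressed ahead of time.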
|
| This is for well built systems. Crap SPAs will be crap, just as
| crap SSR will similarly be crap. I think crap SPAs smell worse to
| most - so there's that.
|
| > Compatibility is higher with server-side rendering because,
| again, the HTML is generated on the server, so it is not
| dependent on the end browser.
|
| If you use features the end client doesn't support, regardless of
| where you generate the markup, then it won't work. Both servers
| and clients can be very feature aware. caniuse is your friend.
| This is not a rule you can generalize.
|
| > Complexity is lower because the server does most of the work of
| generating the HTML so can often be implemented with a simpler
| and smaller codebase.
|
| Meh. Debatable. What's hard is mixing the two. Where is your
| state and how do you manage it?
|
| If you're primarily a backend engineer the backend will feel more
| natural. If you're primarily a front end engineer the SPA will
| feel more natural.
| Animats wrote:
| I'm always amused to hear web types speak of grinding HTML, CSS,
| and JavaScript down to somewhat simpler HTML, CSS, and JavaScript
| as "rendering". Rendering, to graphics people, is when you make
| pixels.
| pvg wrote:
| It's consistent with the use of 'render' or 'paint' to describe
| what a UI component does to, well, render itself. For most UI
| systems this has involved higher level APIs than directly
| pushing pixels for a long time.
| [deleted]
| [deleted]
| computing wrote:
| "The future of the Web is what suits our business model" /s
|
| But in all seriousness, the web has websites, it has apps, it has
| games. Pick a tool that's appropriate for the job and forget
| about what is the past/present/future.
| spiffytech wrote:
| The rise of metaframeworks is interesting because it brings
| nuance to this. The line between site and app can be blurry.
|
| For example, my app has a main screen that needs to be client
| rendered. It also has a user settings screen that could be
| implemented as a traditional server rendered page with no
| JavaScript, except it's a lot more practical to build
| everything inside the same project and technology. Apps and
| their marketing pages are often put on different subdomains for
| the same reason.
|
| Metaframeworks that blend rendering modes help users get a
| lighter page load where appropriate, with less developer
| effort.
| arcanemachiner wrote:
| Pardon my ignorance, but what do you mean by metaframework?
|
| I just learned about Astro the other day. It allows you to
| blend components from SPA frameworks together. Is that what
| you mean?
| spiffytech wrote:
| 'Metaframework' is a term for frameworks that wrap React
| or Vue or similar. Next.js, Nuxt, Gatsby, etc. I think
| Astro is considered a metaframework too.
|
| They're sometimes called stuff like "a React framework",
| depending on whether the speaker considers React a library
| or a framework.
| computing wrote:
| Agreed.
|
| https://example.com/app might be a SPA and
| https://example.com/profile might be server rendered and
| https://example.com/blog might be a static site. And that's
| great!
|
| Pick the right tool for the job :)
| [deleted]
| andrewstuart wrote:
| I really don't like server side rendering.
|
| I like my react apps to be static files served from a plain HTML
| server.
| dbbk wrote:
| Depends what you're building. If it's a dashboard app gated
| behind a user login, sure, have it be a static HTML file. SEO
| is irrelevant and you wouldn't be server rendering anything
| anyway.
|
| If it's a public site and you want people to find it (ie SEO)
| you really should be server rendering and caching on a CDN.
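As a sketch of that "server render and cache on a CDN" pattern (handler and values are hypothetical, fetch-style `Response` as in Deno or Node 18+): the server emits HTML once, and `s-maxage` lets shared caches like CDNs reuse it.

```javascript
// Hypothetical handler: server-render HTML and mark it CDN-cacheable.
// "s-maxage" applies only to shared caches (CDNs); browsers would use
// "max-age". "stale-while-revalidate" lets the CDN serve a stale copy
// while it refetches in the background.
function handleRequest() {
  const html = "<!doctype html><h1>Server-rendered page</h1>";
  return new Response(html, {
    headers: {
      "content-type": "text/html; charset=utf-8",
      "cache-control": "public, s-maxage=60, stale-while-revalidate=300",
    },
  });
}
```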
| [deleted]
| timw4mail wrote:
| Server-side rendering is so much simpler on platforms that
| were designed for it.
|
| As a user, I despise seeing a white screen with a spinner.
| FpUser wrote:
| The future is not to stick to a single religion but to apply
| one's brains when architecting a solution, as it all depends on
| multiple factors and there are no silver bullets in this
| universe.
| recursivedoubts wrote:
| I may be misunderstanding this, but isomorphic SSR sounds an
| awful lot like the Java Server Faces concept of a server side DOM
| that is streamed to the client. JSF was largely dropped by java
| developers because it ended up scaling poorly, which makes sense
| since it violates one of the main constraints that Roy Fielding
| proposed for the web's REST-ful architecture: statelessness.
|
| An alternative approach is to retain the statelessness of the
| first option they outline (I don't understand why it isn't "true"
| SSR): use normal, server-rendered HTML, but improve the
| experience by using htmx (which I made) so that you don't need to
| do a full page refresh.
|
| This keeps things simple and stateless, so no server side
| replication of UI state, but improves the user experience.
| Compared with what I understand of the isomorphic solution, this
| approach appears much simpler. And, since your server side
| doesn't need to be
| isomorphic, you can use whatever language you'd like to produce
| the HTML.
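A sketch of what that htmx style can look like (the endpoint and element ids are hypothetical): the server returns plain HTML fragments, and attributes on the markup swap them into the page without a full reload.

```html
<!-- Ordinary server-rendered page; htmx loaded from a CDN. -->
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- Clicking issues GET /contacts/2 (an HTML fragment rendered by
     any server language) and swaps the result into #contact-detail. -->
<button hx-get="/contacts/2"
        hx-target="#contact-detail"
        hx-swap="innerHTML">
  Show contact
</button>
<div id="contact-detail"></div>
```

The server stays stateless and language-agnostic: it only has to produce HTML.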
| deniz-a wrote:
| In the sample code, there is no "streaming" going on -- the
| server simply uses the client code as a template to generate
| HTML and sends it as a normal HTTP response. In pseudocode:
|
|       import "client.js" as client
|       on request:
|           document = new ServerDOM()
|           client.render(document, data)
|           respond with document.toHtmlString()
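That pattern can be fleshed out as a minimal runnable sketch (all names hypothetical -- `ServerDOM` here is a toy stand-in, not Deno's or any framework's actual implementation):

```javascript
// Toy stand-in for a server-side DOM implementation.
class ServerDOM {
  constructor() { this.children = []; }
  appendChild(markup) { this.children.push(markup); }
  toHtmlString() { return this.children.join(""); }
}

// "Client" code: renders data into whatever document it is given,
// so the same function could run in a browser against the real DOM.
function render(document, data) {
  document.appendChild(`<h1>${data.title}</h1>`);
  document.appendChild(`<p>${data.body}</p>`);
}

// Server request handler: run the client renderer against the stub
// DOM and ship the serialized HTML as a normal HTTP response body.
function handleRequest(data) {
  const document = new ServerDOM();
  render(document, data);
  return document.toHtmlString();
}

handleRequest({ title: "Hello", body: "Rendered on the server." });
// → "<h1>Hello</h1><p>Rendered on the server.</p>"
```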
|
| > I don't understand why it isn't "true" SSR
|
| This article seems
| to be using the term SSR exclusively in the frontend framework
| sense, where client code is run on the server. It's not how I
| use the term but it is a common usage.
|
| Another possible reason that the htmx approach isn't discussed:
| the any-server-you-want nature of htmx is terrible for selling
| Deno Deploy :]
| recursivedoubts wrote:
| Ah, I thought there was some sort of diff-and-send going on
| to the client.
|
| I do know there are folks using htmx and deno (we have a
| channel on our discord) so I don't want to come across as
| oppositional! Rather, I just want to say that "normal" SSR
| (just creating HTML) can also be used in a richer manner than
| the plain HTML example given.
| kaba0 wrote:
| Slightly off topic, but I found JSF the most productive out of
| any framework. It has some not so nice edge cases, but when you
| are "in the green" and you don't need to scale to infinity
| (which, let's be honest, is the case more often than not) it
| really is insanely fast to develop with. For internal admin
| pages I would hardly use anything else.
| quechimba wrote:
| I think JSF was ahead of its time.
|
| I'm working on a server side framework and someone told me it
| reminded them of Java Server Faces. I think the approach works
| really well and latency is low enough when you can deploy apps
| all over the world. Also, they didn't have HTTP/2 or WebSockets
| back then... What I'm doing is basically a clone of Preact, but
| in server-side Ruby, streaming DOM patches to the browser...
| pcmaffey wrote:
| Client-side rendering needs to rebrand as local-first. Then the
| cycle will start anew.
| [deleted]
| EVa5I7bHFq9mnYK wrote:
| Next step: server rendered PNGs. Browser not required.
| [deleted]
| ravenstine wrote:
| The future of the web is most web developers losing their jobs
| for failing to be good stewards of their platform.
| [deleted]
| Existenceblinks wrote:
| There are two orthogonal things in the current trend. This SSR
| buzz is not actually selling server-side rendering; it's selling
| 'one language to rule them all' (to which they give the dumb name
| "isomorphic").
|
| Therefore, they are not solving all the problems of client-server
| + best-UX constraints. Basically, the problems we've had all this
| time come from:
|
|       1) There's a long physical distance between client and
|       server.
|       2) Resources and their authorization have to be on the
|       server.
|       3) There's the need for fast interaction, so some copy of
|       the data and optimistic logic need to be on the client.
|
| The "isomorphic" reusable code doesn't solve the [latency +
| chatty + consistent data] vs. [fast interaction + bloated client
| + inconsistent data] trade-off. At this point I don't know why
| they think that is innovation.
| [deleted]
___________________________________________________________________
(page generated 2023-02-03 23:00 UTC)