[HN Gopher] 300ms Faster: Reducing Wikipedia's total blocking time
___________________________________________________________________
300ms Faster: Reducing Wikipedia's total blocking time
Author : janpio
Score : 380 points
Date : 2023-05-29 12:49 UTC (10 hours ago)
(HTM) web link (www.nray.dev)
(TXT) w3m dump (www.nray.dev)
| exabrial wrote:
| The better question is: Why use Javascript at all for a static
| website?
| janpio wrote:
| The article mentions the functionality that the Javascript is
| used for: Expanding and collapsing sections, adding the media
| viewer on thumbnails shown in the page. Both seem like
| reasonable use cases for interactivity that is (better)
| achieved using Javascript.
| l5870uoo9y wrote:
| And not to forget preview popups when hovering a link
| (though it would be even better if they implemented preview
| modals like https://gwern.net).
| gwern wrote:
| The sections could be collapsed by CSS by default without
| render-blocking JS, and adding a media viewer to random media
| bits and pieces can definitely wait until later. So time-to-
| interactivity is unnecessarily hampered here.
| exabrial wrote:
| None of those things actually enhance the website. The only
| time people "use" them is by accident.
| shepherdjerred wrote:
| Is Wikipedia really a significant offender here?
| muspimerol wrote:
| Hard disagree, I use the hover-to-preview-article feature
| all the time. Sometimes I just want to peek at a page to
| see what it is instead of opening it.
| janpio wrote:
| I don't think that is true.
|
| On Wikipedia's mobile view, collapsed sections are super
| useful (as the table of contents is not visible via the
| sidebar) and media viewer makes it possible to view details
| of an image/thumbnail without navigating away from the
| page.
| hutzlibu wrote:
| "Both seem like reasonable use cases for interactivity that
| is (better) achieved using Javascript."
|
| Only if you want fancy animations. I sometimes do, but I
| think wikipedia can do without (and they don't) and use
| <details>
|
| https://developer.mozilla.org/en-
| US/docs/Web/HTML/Element/de...
|
| And media viewer I would naturally do with js as well, but I
| am certain you can also do it easily with CSS. (Many ways
| come to mind)
| extra88 wrote:
| I _love_ details/summary and want them to succeed but
| current implementations have some issues. A big one is that
| VoiceOver for iOS currently doesn't convey what they are or
| their state, something that's very straightforward and
| reliable when making a custom disclosure widget.
|
| Good article about some issues (with a link at the top to
| his previous article about them).
| https://www.scottohara.me/blog/2022/09/12/details-
| summary.ht...
| Someone1234 wrote:
| Not supported on IE11[0], which Wikipedia supports[1].
|
| [0] https://caniuse.com/?search=details
|
| [1] https://www.mediawiki.org/wiki/Compatibility#Browser_su
| pport...
| rickstanley wrote:
| CSS has come a long way; I would expect these things to be
| easy to achieve with just CSS and HTML, for instance: media
| viewer -> dialog (I remember this being an HTML thing),
| collapsing sections -> details/summary (?)
|
| But I guess it's not there yet.
| ajkjk wrote:
| There is always going to be a user interaction that is
| sufficiently complex as to require JS. Arbitrarily limiting
| to just CSS severely limits what you can do.
| onion2k wrote:
| It's not arbitrary though. It's a choice to save
| bandwidth shipping unnecessary javascript rather than
| making it easier to develop the website. At the scale of
| Wikipedia that isn't unreasonable.
| TheAngush wrote:
| If you replace JavaScript with CSS you aren't saving
| bandwidth; you're trading it.
|
| In many scenarios I'd argue CSS would require more
| bandwidth. It can get quite verbose.
| ajkjk wrote:
| It's not unnecessary if the feature is something you
| want?
|
| There's this pattern on HN: people value a feature as
| having 0 utility and then become annoyed that someone has
| paid time/performance/money for them. Well duh, if you
| discount the value of something to 0, it will always be a
| bad idea. But you're never going to understand why people
| are paying for it if you write off their values.
|
| At my last job there were countless pieces of UX to make
| things smoother, more responsive, better controlled by
| keyboard or voice reader, etc.. that required JS. It was
| not possible to make our site as good as possible with
| CSS, and it certainly wasn't worth the tradeoffs of
| loading a bit faster (not that it couldn't have had its
| loading time improved--just, cutting JS was a
| nonstarter).
| onion2k wrote:
| _It's not unnecessary if the feature is something you
| want?_
|
| The js is unnecessary if you can achieve the same result
| with plain css.
| ajkjk wrote:
| And you often can't.
| uoaei wrote:
| Fairly certain that's literally the point of simplifying
| interfaces. Do what you need with what you have. Don't
| try to shove a racehorse into a VW Beetle.
| blowski wrote:
| As the old saying goes "simple as possible but no
| simpler". There are likely some uses of JavaScript that
| make the UI simpler.
| znpy wrote:
| My guess is that it's going to be a tradeoff between
| functionality and compatibility.
|
| Wikipedia is consulted every day by a lot of people, i
| guess that a large number of those people are running older
| browsers.
| starkparker wrote:
| Mediawiki itself is built to support 10-year old phones
| (which is why the Moto G makes an appearance in the post
| - it's the official low-end Android benchmark) and older
| desktop operating systems.
| https://www.mediawiki.org/wiki/Compatibility#Browsers
| tommy_axle wrote:
| Makes sense since the Moto G is also what Lighthouse and
| a lot of tools driven by it use. So PageSpeed Insights
| (pagespeed.web.dev), the Lighthouse developer tab in Chrome
| and even external services like
| https://totalwebtool.com all generally evaluate mobile
| performance using it to simulate a slower experience.
| pier25 wrote:
| > _media viewer -> dialog_
|
| <dialog> has only been available in Safari for about a
| year.
|
| https://caniuse.com/dialog
|
| Wikipedia is one of the most popular sites on the internet.
| It needs to be as compatible as possible so that means
| using JS.
| manquer wrote:
| You should be able to detect the user agent and determine
| capability and send the polyfills for backward
| compatibility?
| pier25 wrote:
| Sure, there are many different ways to tackle this.
| Mystery-Machine wrote:
| "Better interactivity" is subjective. You could argue that
| faster is better. Expanding and collapsing sections can be
| achieved faster and with zero JavaScript (which also makes it
| work on browsers with disabled JS) with a hidden
| input[type="checkbox"]. As for the media viewer, it could be
| a great exercise to try and make it in a similar manner with
| input[type="checkbox"] for opening/closing media viewer and
| input[type="radio"] for switching to prev/next image. This
| one probably requires browsers to support the `:has` CSS
| selector.
|
| Also, if you want to further speed up your site, just like
| you said, the fastest way to speed up the site is to delete
| JavaScript, get rid of jQuery.
| 6510 wrote:
| https://developer.mozilla.org/en-
| US/docs/Web/HTML/Element/de...
|
| <details> is completely supported.
| toast0 wrote:
| Oooh, that's lovely. I have a page with ~50 lines of
| javascript to do this, and it looks like I can make it
| zero lines of javascript instead.
| 6510 wrote:
| One could use js to push the state into the url.
| toast0 wrote:
| It's just opening and closing detail panes (personal
| recipe site), there's no reason to put the open/closed
| state in the url.
|
| But if I can do it with all html, that's better than a
| function to add / remove a css class and document.onclick
| = a function to find which recipe you clicked on and
| adjust the css class.
| 6510 wrote:
| Probably more useful for a nested menu. The default thing
| almost works out of the box. You need some css to set a
| proper cursor and style the clickable text properly.
| janpio wrote:
| You are not wrong.
|
| I assume the author of the blog post just wanted to
| optimize the current situation, not completely change how
| these features work (which would most probably be a much
| more elaborate change).
|
| Same for dropping jQuery - that will probably be a few
| weeks or months of work in a codebase the size of
| Wikipedia/Mediawiki.
| kome wrote:
| good question that most developers cannot answer.
| madeofpalk wrote:
| At least, the ones that don't read the article cannot answer.
| whaleofatw2022 wrote:
| To dynamically display data to guilt you into donations?
| EMM_386 wrote:
| > The better question is: Why use Javascript at all for a
| static website?
|
| I've been developing websites professionally since 1996.
| HTML/CSS/JS and SQL.
|
| I am still amazed there is the crowd out there who is "anti-
| JavaScript". They run NoScript to only allow it in certain
| places, etc.
|
| It's 2023, a small amount of JavaScript on a page isn't going
| to hurt anyone and will (hopefully) improve the UX.
|
| For the record, the last site I deployed as a personal project
| had 0 JavaScript. It was all statically generated HTML on the
| server in C#/Sqlite that was pushed to Github Pages. So I get
| it, it's not necessary.
|
| For _my_ little personal site. I'm also the senior lead on
| an enterprise Angular project.
|
| JavaScript is fine, it's not going anywhere.
|
| And yes, there are _way_ too many React (specifically)
| websites that don't even need a framework at all, but it's
| become the go-to. That annoys me too. But some JavaScript in
| 2023 is fine.
| haburka wrote:
| Wikipedia probably wants to support every user of the internet,
| which means even very old browsers. You can't rely on
| relatively new CSS features when supporting browsers that are
| 10 years out of date.
|
| The fact they're still using jQuery, probably for similar
| compatibility reasons, is good evidence of that.
|
| Now there are ways to use polyfills that only load when
| necessary, but just about everything is very difficult at
| Wikipedia's scale. We can't solve their problems from an
| armchair.
| lumb63 wrote:
| I continue to wonder how much computing power and human time
| is wasted on grotesquely suboptimal code.
| floatboth wrote:
| Seeing "Sizzle" in that profile made me feel shocked. Uhhh
| Wikipedia actually still uses an ancient build of jQuery that
| does not take advantage of native querySelector at all?!?
| MatmaRex wrote:
| No, jQuery falls back to Sizzle when a selector is not
| supported by native querySelector, either due to using jQuery-
| specific extensions, or due to lacking browser support.
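The fallback described above can be sketched like this; both engine arguments are illustrative stand-ins, not jQuery's actual internals:

```javascript
// Sketch of the dispatch described: try the fast native selector
// engine first, and fall back to Sizzle only when the native engine
// rejects the selector (e.g. jQuery-only extensions like ":visible"
// or ":eq(0)"). nativeQuery and sizzleSelect are hypothetical
// stand-ins for querySelectorAll and the real Sizzle engine.
function select(selector, nativeQuery, sizzleSelect) {
  try {
    return nativeQuery(selector); // fast path: native engine
  } catch (e) {
    return sizzleSelect(selector); // slow path: Sizzle
  }
}

// Stand-in "native" engine that only knows standard selectors:
const nativeQuery = (sel) => {
  if (sel.includes(":visible")) throw new SyntaxError("unknown selector");
  return ["native:" + sel];
};
const sizzleSelect = (sel) => ["sizzle:" + sel];

select("div.note", nativeQuery, sizzleSelect);    // → ["native:div.note"]
select("div:visible", nativeQuery, sizzleSelect); // → ["sizzle:div:visible"]
```

So Sizzle showing up in a profile means some selectors on the page are hitting the slow path, not that the native engine is unused.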
| Reason077 wrote:
| I've noticed that Wikipedia is, nowadays, extremely fast if
| I'm _not_ logged in. Just about any article loads near-
| instantaneously; it might even be faster than Hacker News!
|
| But if I'm logged in it's _much_ slower - there's perhaps a
| second or so of lag on every page view. Presumably this is
| because there's a cache or fast-path for pre-rendered pages,
| which can't be used when logged in?
| bawolff wrote:
| Logged out users get served by varnish cache on a server geo-
| located near you.
|
| Logged in users still get cached pages, but the cache covers
| only part of the page, it's not as fast a cache, the servers
| are not geolocated (they are in the USA only, at the main
| data center), and depending on user prefs you may be more
| likely to get a cache miss even on the partial cache.
| flangola7 wrote:
| What's a cache _miss_?
| Rapzid wrote:
| The cache takes a swing at the request. If it hits it
| returns, if it misses it gets caught by the origin.
| jeltz wrote:
| A well established term which is trivial to google. It is
| when you do a cache lookup and do not find anything.
| Paul-Craft wrote:
| Let's be nice to the lucky 10000 who get to learn what a
| cache miss is today.
|
| https://xkcd.com/1053/
| PKop wrote:
| The resource you are requesting is not in the cache, thus
| requiring a full load from whatever the source is.
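To make the hit/miss distinction above concrete, here is a minimal sketch; the names are illustrative, not Wikipedia's actual code:

```javascript
// Minimal sketch of cache hit vs. miss: the first request for a key
// misses and pays the expensive origin render; repeat requests hit
// the cache and skip the origin entirely.
const cache = new Map();
let originRenders = 0;

function renderAtOrigin(path) {
  originRenders += 1; // expensive path: full page render at the origin
  return "<html>" + path + "</html>";
}

function cachedGet(path) {
  if (cache.has(path)) return cache.get(path); // cache hit
  const page = renderAtOrigin(path);           // cache miss
  cache.set(path, page);
  return page;
}

cachedGet("/wiki/Foo"); // miss: origin renders the page
cachedGet("/wiki/Foo"); // hit: served straight from the cache
// originRenders is now 1, not 2
```

The hit rate (share of requests answered without touching the origin) is the figure the Grafana dashboard linked elsewhere in this thread reports.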
| tuukkah wrote:
| I wonder if you could fetch the logged-out page first and
| then re-render client-side what's missing for a logged-in
| user.
|
| Also, Firefox container tabs might be a nice solution:
| normally browse in a container where I'm logged out, then to
| edit, change the tab to my Wikipedia container where I'm
| logged in.
| undefinedzero wrote:
| This is what client side apps do (e.g. React). They come
| with their own set of challenges which when not solved
| correctly turn into downsides. Probably still easier for
| Wikipedia to do than fixing their backend.
| adrr wrote:
| That's what we did at a former company. You cache the main
| page on the edges of a CDN. When logged in, you get a JSON
| blob that is also cached on the edges, using the oauth/jwt
| header as a dimension for the cache key. The JSON blob has
| the user's information (username, name, etc.) and it's just
| filled in.
| Makes the pages extremely performant since everything is
| cached on edges and protects your origin servers from load.
| You could also shove stuff into web storage but private
| mode for some browsers won't persist the data.
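The fill-in step described above might look like the sketch below; the `{{placeholder}}` syntax and helper name are assumptions for illustration, not from the comment:

```javascript
// Sketch of personalizing an edge-cached page: the anonymous HTML is
// cached identically for everyone, and a small per-user JSON blob
// fills in the personal bits client-side. The {{placeholder}} syntax
// is a hypothetical convention chosen for this example.
function personalize(cachedHtml, userBlob) {
  return cachedHtml.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    Object.prototype.hasOwnProperty.call(userBlob, key)
      ? String(userBlob[key])
      : match // leave unknown placeholders untouched
  );
}

personalize("<span>Hello, {{username}}!</span>", { username: "alice" });
// → "<span>Hello, alice!</span>"
```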
| robin_reala wrote:
| This is fairly typical, and applies to HN too - you'll see dang
| suggesting that people log out when there's breaking news that
| gets a lot of engagement here.
| remram wrote:
| The difference is that HN stays very fast. Being logged in is
| more costly for the backend, but doesn't degrade your
| experience as a user. On Wikipedia you really suffer for
| being logged in.
| electroly wrote:
| When HN is slammed, you just get the "sorry, HN is having
| problems" message. This is when dang suggests logging out.
| On many occasions that turns not being able to see the site
| _at all_ into the site working quickly.
| znpy wrote:
| If you're not logged you're getting the pre-rendered, cached
| page.
|
| If you're logged, a number of things have to be recomputed
| before the page can be rendered.
|
| I run mediawiki at home; the difference is even more stark
| (I have a small home server).
| jonatron wrote:
| Wikipedia/Wikimedia are quite open about their infrastructure.
| You can even see the Varnish Frontend Hitrate on their Grafana
| server: https://grafana.wikimedia.org/d/wiU3SdEWk/cache-host-
| drilldo... (currently 87.6%, min 69.9%, max 92.2%)
| starkparker wrote:
| Wikipedia uses extensive caching when logged out, and caches
| much less when logged in, to facilitate editing and user
| account functionality.
| https://wikitech.wikimedia.org/wiki/Caching_overview
| kibwen wrote:
| Surely there must be a better way to do this, not just for
| Wikipedia but for all websites with optional logins. It seems
| like the only difference in the vast majority of cases is to
| change a single link in the corner of the page from "login" to
| "the name of my account", which is a silly reason to miss the
| cache.
| swyx wrote:
| this is literally the goal of 20 year old AJAX patterns.
| progressively enhanced html with javascript on top.
| absolutely a solvable problem. perhaps wikipedia could look
| into using Astro or SvelteKit.
| jesprenj wrote:
| The link hover popups were incredibly slow on my older laptop
| in Pale Moon. I just disabled javascript on wikipedia because
| it's written quite poorly performance-wise, in my opinion.
| winrid wrote:
| heh, event delegation was like my first interview question in
| software. Amazing that I've been using Wikipedia all this
| time with this issue :P
| robin_reala wrote:
| Incidentally, if you're interested in the performance cost of
| jQuery, GOV.UK posted last year an analysis of the performance
| gain seen when they removed jQuery in favour of vanilla JS:
| https://insidegovuk.blog.gov.uk/2022/08/15/the-impact-of-rem...
| zapt02 wrote:
| I would like to see a similar blog post, but for React.
| mtmail wrote:
| HN discussion of that blog post
| https://news.ycombinator.com/item?id=32480755
| pachico wrote:
| > Remember, the fastest way to speed up a site is to remove
| JavaScript.
|
| I'm not entirely sure this can be so axiomatic...
| KerrAvon wrote:
| Pretty sure the cases where this isn't true are the scenarios
| that are impossible to create without JavaScript, like infinite
| scrolling. Do you have counter-examples?
| gwern wrote:
| There's a demo somewhere of infinite-scrolling without JS.
| The hack is you just never close/finish the
| request/connection so it keeps sending new content!
| pachico wrote:
| Server side issues? Images being too big? Maybe slow CSS, if
| there's such thing?
| bawolff wrote:
| > Maybe slow CSS, if there's such thing?
|
| That can definitely be a thing. However JS is usually a
| much bigger issue.
| 6510 wrote:
| My motto is: It isn't fast until you have to disable gnu zip
| dcj4 wrote:
| Just disable javascript like any sane person. Wikipedia, like
| most websites, has no use case for javascript whatsoever.
| pfg_ wrote:
| Citation previews are nice, much more convenient than clicking
| the citation and then trying to find the back button. Link
| previews on desktop are useful too.
| aiff308 wrote:
| [flagged]
| cutler wrote:
| Can someone please do the same for Hacker News? For a text-based
| site it's dog slow.
| lkbm wrote:
| Now this is interesting... I just tested and it was 106ms
| for the main document, and ~60ms for the rest (including
| favicon).
|
| But then I tested in Incognito, and it was 350ms for the main
| document.
|
| Disabling cache doesn't make a difference. I tried logging in
| on Incognito and boom, fast again. Somehow the backend is 3x
| the speed when logged in. I'm guessing anon users have to go
| through some extra bot checking or something.
| zodzedzi wrote:
| Unrelated, but this style he draws the timelines in: is it hand-
| written/hand-drawn, or software-generated?
|
| It certainly has the overall feel and appeal of being done by
| hand, but I'm not sure.
|
| If it's software, does anyone know which software, what's the
| name of this style, etc?
| giraffe-flavor wrote:
| https://towardsdatascience.com/make-the-cutest-chart-in-pyth...
| e63f67dd-065b wrote:
| 300ms is on a low-end android phone, which is nice, but I wonder
| what's the performance impact on something more modern, like a 2
| year-old iPhone.
| silvestrov wrote:
| On high end phones it is more of a "battery impact" than a
| "delay impact".
| lucgagan wrote:
| Unrelated, but I love how fast your website loads. I looked at
| the source code, and it looks like it uses Astro.
|
| I debated whether to go with Astro or Next.js for my new blog,
| but because I've not had experience with the new React server
| components, I decided to try Next.js. I've had a goal of making
| it have the least bloat possible, so it is kinda interesting to
| compare these side by side now.
|
| I picked two random posts that have a similar amount/type of
| content:
|
| * https://www.nray.dev/blog/using-media-queries-in-javascript/
| [total requests 13; JavaScript 2.3 kB; Lighthouse mobile
| performance 100]
|
| * https://ray.run/blog/api-testing-using-playwright [total
| requests 19; JavaScript 172 kB; Lighthouse mobile performance
| 100]
|
| It looks like despite predominantly using only RSCs, Next.js is
| loading the entire React bundle, which is a bummer. I suspect it
| is because I have one 'client side' component.
|
| One other thing I noticed, you don't have /robots.txt or
| /rss.xml. Some of us still use RSS. Would greatly appreciate
| being able to follow your blog!
| Xeoncross wrote:
| I wish any company on the web that spends 6-7 figures a year on
| their website would hire either of you to make a website that
| actually loads before I click away.
| lucgagan wrote:
| I worked for many of those and... it is more complicated.
|
| When you are a small team, it is easy to create a performant
| site because you work in a tight circle with shared
| conventions, rules, etc. and it is always the same people
| that work on the project.
|
| When you are a large organization (think Facebook blog),
| there will be hundreds of engineers/designers/copywriters/...
| that are exposed to the codebase over the course of many
| years. Each will need to make a 'quick change' that over time
| compounds into the monstrosities that you are referring to.
|
| The best you can do is add automated processes into CI/CD
| that prevent shipping anything that does not meet certain
| criteria. This might hold for a while... but as soon as
| something borks the shipping velocity and some higher-up
| starts asking for names, those checks will be removed
| "temporarily" to unblock whatever is burning.
|
| As an engineer who takes pride in developing accessible and
| performant software, this killed the drive/joy for me of
| working for large orgs.
| changethe wrote:
| the fact that you can use whatever component framework you want
| and mix and match is pretty awesome. been hooked on astro ever
| since trying it out once :)
|
| another low-hanging fruit for optimisation that nray.dev is not
| using, is the new inline-stylesheets option:
| https://docs.astro.build/en/reference/configuration-referenc...
|
| using the 'auto' option usually reduces it to a single global
| sheet and page-specific inlined css. especially if you are
| writing a lot of short style blocks in your components, this is
| very very handy to reduce the amount of served css files.
| Rauchg wrote:
| Do note that while Astro is a fine choice, your blog has much
| snappier page transitions with speculative prefetching and pre-
| rendering, while maintaining an equally good Lighthouse score.
|
| As projects scale this matters more. For example, hard page
| transitions will accrue the cost of re-initializing the entire
| application's scripts for every click.
| undefinedzero wrote:
| The initial request size is not a great way to compare frontend
| frameworks. Next goes into a lot of effort to optimize
| navigation around your site, and gives you full usage of React
| in your frontend so you can show amazing things like charts,
| forms etc.
|
| If the JS files are set to load asynchronously, the initial
| load should be almost equally as fast, and React should load in
| the background. Afterwards, additional navigation should be
| near instant.
| austinpena wrote:
| Depends how pages are generated. If they are using
| getStaticProps this is not true because the HTML needs to be
| "hydrated" with the React runtime which requires javascript.
| Astro will prebuild the static pages.
|
| Look for a tag like this: <script id="__NEXT_DATA__"
| type="application/json">
| undefinedzero wrote:
| The page will render perfectly fine before it is hydrated
| when using getStaticProps. To verify, simply run
| Lighthouse in Chrome on a page with cache busting on and
| look at the screenshot timeline. It will render in a
| fraction of a second, long before React gets loaded.
| aiff308 wrote:
| [flagged]
| est wrote:
| Google's homepage, just a simple text box with a search
| button, is over 1MB.
| Twirrim wrote:
| Decades ago they relentlessly optimised that page, shaving it
| down smaller and smaller. Then suddenly they stopped caring,
| round about the time they gave people customised landing pages
| with extra junk littering the page. It must have made a
| palpable difference to their connectivity and serving costs
| when they did that. That we're back down to a "paltry" 1.9MB
| (from a quick test I just did) is remarkable.. and yet so much
| more than it used to be.
| tlug wrote:
| It's unbelievable. I used to work for Google back in 2005,
| and they had a custom of welcoming all Nooglers (new
| Googlers) at the Friday general meeting, where Larry/Sergey
| were present and people could ask them any questions.
|
| So one guy stood up and asked why our main page is not
| compliant with W3C guidelines (doesn't pass HTML validator
| test). L&S answered that they need to shave off any
| unnecessary byte, so that it loads faster. That's why they
| didn't care about closing tags etc.
|
| How the world and perception has changed since that time...
| It's just sad that one needs a huge JS framework just to
| build a simple website these days.
|
| Oh and the size of the homepage was in the order of 20 KB
| then, IIRC.
| wolrah wrote:
| If the author or anyone else with the ability to update
| Wikipedia's code itself is reading this, I have a suggestion for
| some low-hanging fruit that wastes a lot more than 300ms for all
| sorts of users, not just those on low-end mobile devices.
|
| Wikipedia, like many web sites, makes it really easy for mobile
| users to get redirected to a mobile-specific version running on a
| mobile-specific domain.
|
| The problem is that this mobile-specific version is not good for
| browsing on a full computer. It's not even great on a tablet. But
| there's no easy way to switch back when I've been sent a mobile-
| specific link other than editing the link by hand. Mobile links
| end up everywhere thanks to phone users just copy/pasting
| whatever they have, and desktop users suffer as a result.
|
| Please, anyone who develops web sites, stop doing the mobile-
| specific URL nonsense. Make your pages adaptable to just work
| properly on all target devices.
|
| If you insist on doing things this way with mobile-specific URLs,
| at least make it equally easy for desktop users to get off the
| mobile URLs as you make it for phone users to get on them.
| orbisvicis wrote:
| Since the redesign the expandable categories of the mobile
| Wikipedia website expand no more than once on Android Firefox
| beta, so I constantly must reload the page to read new
| categories.
|
| Fixing this might be a good way to reduce bandwidth.
| namtab00 wrote:
| I'd add shareable links, with some "copy link" ui for headings
| in articles, please!
| yorwba wrote:
| Scroll down to the bottom, click "Desktop". Not easy to notice
| that this is an option, not too hard to use once you've found
| out.
| lloydatkinson wrote:
| And yet practically very little effort for Wikipedia to serve
| the correct version
| jvm___ wrote:
| But how do you determine the users intent?
|
| They asked for the mobile version - but they're on desktop.
|
| Do you serve what the asked for? Or what you think they
| should want?
|
| What if they actually want the mobile version - if you
| always send desktop to desktop, then they can't get
| mobile...
| remram wrote:
| On mobile, they asked for the desktop version, and got
| redirected to mobile. Does intent not matter there? If
| you are going to redirect automatically, then redirect
| back automatically.
| smallnix wrote:
| You do what's best for the majority of users.
| marginalia_nu wrote:
| This is the opposite of accessibility.
| hutzlibu wrote:
| Not necessarily. You can optimize for the majority, but
| still keep things functional for everyone else.
| jakear wrote:
| Not sure why the downvotes, this is a reasonable
| question. (As evidenced by all the other answers...)
|
| The simple answer is that you don't encode device
| specific intent in the URI, you put in in style sheets
| where it belongs.
| bornfreddy wrote:
| Not a downvoter, but I can understand them. GP's comment
| assumes that there need to be different versions of the
| page, while HTML allows degradable experience. Just have
| the same page and make it work properly everywhere.
| jlund-molfese wrote:
| You're getting downvoted, but this is a totally
| legitimate comment. A sizable minority of people prefer
| the mobile WP interface and find it cleaner _even on a
| non-mobile device_.
|
| m.wikipedia isn't the default, so it's different from
| regular wikipedia.com redirecting a mobile user to the
| mobile version.
|
| It's easier for them to go to m.wikipedia.com than to
| change their user agent, especially if they aren't that
| technical.
| munk-a wrote:
| Modern browsers all have buttons to toggle user agent
| preferences in their dev tools. This is a highly
| technical request and the solution really only needs to
| be accessible to highly technical users... it's much
| _much_ more likely that less technical users with strong
| style preferences will use one of the dozens of pre-
| configured alternative CSS extensions or just write up
| their own stylesheet.
|
| I love user customization but this is extremely niche.
| cubefox wrote:
| There are plenty of websites where you can switch between
| mobile and desktop websites without using multiple URLs.
| jlund-molfese wrote:
| The point is that the user can go to `m.wikipedia.com`
| and not do anything else. They don't have to hunt around
| for a footer option, they don't have to do multiple page
| loads (which can cost some people money!), they don't
| have to keep some cookies around or use the same device
| for their preference to be maintained.
|
| Note also that WP isn't alone in their choice. Facebook,
| another website with a large non-Western user base, also
| maintains the same behavior. Go to
| https://m.facebook.com, you won't be redirected to
| https://facebook.com .
|
| There are tradeoffs either way, and no matter what WP
| does, they'll be making some users unhappy. Not all of
| these users fit the same profile, either. Wikipedia has a
| global user base, so what's best for most American sites
| isn't necessarily what's best for Wikipedia. Ultimately,
| it's not the case that one option is right, and the other
| is wrong.
| cubefox wrote:
| I think the right option is to use whatever is best for
| most users. Most users (by far) will be fine with being
| served the mobile/desktop website automatically,
| depending on whether they use a mobile/desktop browser.
| fbdab103 wrote:
| m.wikipedia also requires less network bandwidth which
| may be exactly what you want in some situations.
|
| - Desktop: 388kB
| (https://en.wikipedia.org/wiki/Rickrolling)
|
| - Mobile: 221kB
| (https://en.m.wikipedia.org/wiki/Rickrolling)
| crazygringo wrote:
| No there is no asking for mobile version in the link, the
| links are the same for both.
|
| You rely on saving the user display preference as a
| cookie that persists across sessions.
|
| They declare their preference on a device a single time
| on the site and that's remembered. None of this has
| anything to do with which URL is being used.
|
| (Not to mention that following a link on a page like
| "en.m.wikipedia.org" isn't even _user_ intent, it's
| _author_ intent at most. But usually not even that,
| because the author just copied the link without even
| thinking whether it had an "m." in it or not.)
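The cookie-based preference described above can be sketched as follows; the `display` cookie name and both helpers are hypothetical, chosen for illustration:

```javascript
// Sketch of a cookie-stored display preference: every visitor shares
// one URL, and a persistent cookie records which variant they chose.
// The "display" cookie name is an assumption, not from the thread.
function parseCookies(header) {
  const out = {};
  for (const pair of header.split(";")) {
    const i = pair.indexOf("=");
    if (i > -1) out[pair.slice(0, i).trim()] = pair.slice(i + 1).trim();
  }
  return out;
}

function pickVariant(cookieHeader) {
  // Default to desktop when no preference has been stored yet.
  return parseCookies(cookieHeader).display === "mobile"
    ? "mobile"
    : "desktop";
}

pickVariant("session=abc; display=mobile"); // → "mobile"
pickVariant("session=abc");                 // → "desktop"
```

In practice the server would vary its cache on this cookie (or handle it at the edge), so the shared-URL pages stay cacheable for users with no stored preference.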
| munk-a wrote:
| You send them what their browser is asking for - if the
| user wants a different screen format they're tech savvy
| enough that they can manually edit their user agent
| request headers. It's actually quite easy and most
| browsers (if you pop open dev tools) will have an option
| to switch the render area to a variety of mobile devices
| and doing so will cause the request headers to be
| adjusted appropriately.
| ufo wrote:
| Yeah, but that takes longer than 300ms ;)
| strictfp wrote:
| Hot take: the mobile site also isn't good for cellphones
| TexanFeller wrote:
| Agree! I despise almost all mobile sites, even from my phone.
| "responsive" sites invariably suck too. Give me the full
| desktop site every time.
| munk-a wrote:
| Are you so sure you want that 300px left panel on your
| 480px screen?
| whym wrote:
| There has been an RFC about it (for years).
|
| "Remove .m. subdomain, serve mobile and desktop variants
| through the same URL" https://phabricator.wikimedia.org/T214998
| throw0101b wrote:
| The problem is when your auto-detection gets it wrong and
| you keep getting served the mobile version even if you are
| not on mobile (looking at you, www.arista.com).
| dspillett wrote:
| This is a good use of a session cookie. Or even a stored
| cookie.
|
| It would be considered a non-tracking, essential
| site-function cookie too, so you wouldn't need to beg for
| permission (contrary to what people who want us to be
| against privacy legislation will claim), and the site is
| probably already asking for permission anyway for other
| reasons, so even that point is moot.
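The cookie approach dspillett describes could look roughly like this (the cookie name and values are made up for illustration, not any site's actual scheme): write a long-lived, same-origin cookie the first time the user picks a view, and read it on later requests.

```javascript
// Build a cookie string for the user's view preference.
// Max-Age of one year makes it persist across sessions; SameSite=Lax
// keeps it same-origin. No identifier is stored, so it has no
// tracking value.
function viewPrefCookie(pref) {
  if (pref !== "desktop" && pref !== "mobile") {
    throw new Error("unknown view preference: " + pref);
  }
  return `viewPref=${pref}; Max-Age=31536000; Path=/; SameSite=Lax`;
}

// In the browser this would be set via document.cookie (illustrative):
// document.cookie = viewPrefCookie("desktop");
```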
| saagarjha wrote:
| Unfortunately "non-tracking cookies" are no longer a
| thing in most browsers.
| dspillett wrote:
| I was meaning non-tracking essential cookies, as defined
| by privacy legislation that requires permission for
| things that are not essential for site features.
|
| Or are you suggesting mainstream browsers are blocking
| same-origin session-level cookies by default now? I'm not
| aware of any. And if you have a browser that is blocking
| such things, the worst that will happen is the current
| behaviour (repeated mis-guesses because the preference
| isn't stored) continues.
| saagarjha wrote:
| Safari drops first-party cookies (and all other storage)
| on sites that have not seen interaction in 7 days.
| jonatron wrote:
| https://davidwalsh.name/event-delegate _By David Walsh on March
| 7, 2011_
|
| _One of the hot methodologies in the JavaScript world is event
| delegation, and for good reason._
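The pattern Walsh describes can be sketched in a few lines (names here are illustrative, not Wikipedia's actual code): instead of attaching a listener to every thumbnail, attach one listener to a shared ancestor and, on each click, walk up from `event.target` until a node matches.

```javascript
// Walk from the event target up to (and including) the listener root,
// returning the first node the predicate accepts, or null.
function findDelegateTarget(target, root, matches) {
  for (let node = target; node; node = node.parentNode) {
    if (matches(node)) return node;
    if (node === root) break;
  }
  return null;
}

// Browser wiring (illustrative): one listener serves every thumbnail,
// including thumbnails added to the DOM later.
// document.body.addEventListener("click", (e) => {
//   const thumb = findDelegateTarget(e.target, document.body,
//     (n) => n.classList && n.classList.contains("thumbimage"));
//   if (thumb) openMediaViewer(thumb); // hypothetical handler
// });
```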
| cubefox wrote:
| I'm generally not picky with page load times (some cookie nags
| are far worse), but sometimes even my patience gets tested.
|
| The worst offender is substack combined with many comments. I
| tried opening a Scott Alexander blog post in mobile Chrome on my
| aging mid range phone:
|
| https://astralcodexten.substack.com/p/mr-tries-the-safe-unce...
|
| It's not that it's rendering slowly; it apparently doesn't
| finish rendering at all. At first a few paragraphs render,
| then everything disappears. I basically can't read articles
| on this blog.
|
| This website says the above has a total blocking time of 15.7
| seconds:
|
| https://gtmetrix.com/reports/astralcodexten.substack.com/3y8...
|
| That's on more modern hardware I assume.
|
| I also tried https://pagespeed.web.dev/ but unfortunately it
| timed out before it finished. Great.
| post-it wrote:
| > https://astralcodexten.substack.com/p/mr-tries-the-safe-
| unce...
|
| This doesn't load fully in Safari on an M1 Pro either. I scroll
| down and the content just ends. It hijacks the scroll bar too.
| soperj wrote:
| Interestingly, I don't have an issue on Firefox (on a
| ThinkPad): it renders the content immediately, then a
| second or two later it tells me there's a script slowing
| down the page. I pressed stop and it rendered all of the
| comments fine. Whatever that script is doing, it's not
| useful for actually rendering content.
| refulgentis wrote:
| Fine on M2 Pro: forced obsolescence! :P
|
| EDIT: _not_ fine on M2 Pro: Chrome works, Safari doesn't.
|
| As long as I'm here...so much ink has been spilled on Safari
| and it's a bitter debate.
|
| I really, really, really wanted to make Safari work for a web
| app and had to give up after months of effort.
|
| So many workarounds and oddities that simply aren't present
| in other browsers. After 15 years orbiting Apple dev...I've
| finally acquiesced to the viewpoint that carefully filing &
| maintaining a backlog of bug reports, checking for fixes,
| and providing a broken experience isn't worth it to me or
| users.
| toast0 wrote:
| > As long as I'm here...so much ink has been spilled on
| Safari and it's a bitter debate.
|
| Kind of silly to debate over such things. On desktop, the
| built in browser's only job is to let you download a better
| browser.
| tux3 wrote:
| I believe it may be actually rendering extremely slowly.
|
| I use Firefox mobile; if I scroll too fast on Substack I
| sometimes have to let it sit 10+ seconds before the blank
| area renders.
| pfg_ wrote:
| > it apparently doesn't finish rendering at all
|
| What's happening here is the browser did a first paint, and
| then the javascript started eating up all the CPU. You can
| scroll because of asynchronous pan/zoom, but when you get to
| the edge of the rendered region, it asks to render more and it
| never gets rendered because the page is stuck executing
| javascript. Recording: https://share.firefox.dev/3C2DE9C
| cubefox wrote:
| I wrote them an email now.
|
| (Annoyingly, they don't show any general contact option on
| their website. Luckily, in the Google Play Store they have
| to supply a support email address. Got it from there.)
| samsquire wrote:
| From the sounds of the article, that 300 milliseconds
| wasn't much addition, math, or number crunching, but
| essentially bookkeeping and querying.
|
| A lot of that crunching is whittling through DOM
| traversals.
|
| I've heard the rule of thumb that there are 5-9 machine
| instructions between each JMP or CALL, which you can find
| if you search "instructions per branch cpu".
|
| If I had a database of 4,000 links and attached a separate
| event-handler function to each one, that would be slow. But
| if I could invert the problem and test whether the clicked
| object is inside an active query result set, that could be
| fast. It could be automatic.
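samsquire's inversion could be sketched like this (hypothetical names, not a real library): build a Set of the 4,000 links from one query up front, then register a single click handler that checks whether the click landed inside any member of the set.

```javascript
// One click handler for the whole document: walk up from the clicked
// node and check membership in the precomputed result set, instead of
// registering one handler per link.
function makeClickRouter(activeLinks, handler) {
  return function onClick(event) {
    for (let node = event.target; node; node = node.parentNode) {
      if (activeLinks.has(node)) {
        handler(node, event);
        return;
      }
    }
  };
}

// Browser usage (illustrative):
// const links = new Set(document.querySelectorAll("a.interactive"));
// document.addEventListener("click", makeClickRouter(links, openLink));
```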
| flaburgan wrote:
| But is event delegation such a good idea? Sure, you're not
| paying the cost of attaching an event listener to every
| thumbnail. But then you're going to execute some code for
| every single click on the page, even if it's totally
| unrelated. Sounds inefficient to me.
| tuukkah wrote:
| If the user clicks on something where nothing should happen, it
| does not matter if nothing happens 1ms slower.
|
| OTOH if something else should happen, that event handler can
| call event.stopPropagation(). This stops the event from
| reaching the delegated event handler, so no inefficiency there.
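The interplay tuukkah describes can be modeled in a few lines (a toy model of bubbling, not the real DOM): dispatch walks from the target up through its ancestors, and a stopPropagation() call on an inner node keeps the event from ever reaching a delegated handler on the root.

```javascript
// Toy bubbling model: call each node's onClick from the target upward,
// stopping early if any handler calls stopPropagation().
function dispatch(targetNode) {
  const event = {
    stopped: false,
    stopPropagation() { this.stopped = true; },
  };
  const fired = [];
  for (let node = targetNode; node; node = node.parentNode) {
    if (node.onClick) {
      node.onClick(event);
      fired.push(node.name);
    }
    if (event.stopped) break;
  }
  return fired; // names of nodes whose handlers actually ran
}
```

With a delegated handler on the root and a stopPropagation() call in an inner button's handler, only the button's handler runs; remove the call and both run.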
___________________________________________________________________
(page generated 2023-05-29 23:00 UTC)