[HN Gopher] Removing HTTP/2 Server Push from Chrome
___________________________________________________________________
Removing HTTP/2 Server Push from Chrome
Author : msoad
Score : 135 points
Date : 2022-08-19 16:17 UTC (6 hours ago)
(HTM) web link (developer.chrome.com)
(TXT) w3m dump (developer.chrome.com)
| treve wrote:
| The reason I'm a little disappointed about this is that I was
| really hoping for a future where we improve sending
| collections of items from APIs.
|
| One issue with the typical API formats is that all the data for
| each resource is shipped with the collection, which means that
| client caches are unaware of the 'transcluded'/'embedded'
| resources.
|
| Server push, if implemented well, could have allowed us to just
| push every collection member down the wire. Push lets you solve
| the N+1 issue because all members can be generated together.
|
| I did some work trying to standardize a simple version of this:
| https://datatracker.ietf.org/doc/html/draft-pot-prefer-push-...
|
| Also, at some point there were talks about a browser request
| header that would send the browser's cache state as a bloom
| filter to the server, which would give the server a pretty
| good idea of what not to send.
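The cache-digest idea never shipped, but a minimal sketch of how it might work looks something like this. All names, sizes, and hash choices below are illustrative, not from any spec:

```javascript
// Illustrative cache digest: the client hashes its cached URLs into a
// small bloom filter (which it could send as a request header); the
// server tests candidate pushes against the filter before sending.
const BITS = 256; // filter size in bits; a real protocol would negotiate this

function hashes(url) {
  // Two cheap FNV-style string hashes used as bloom filter probe positions.
  let h1 = 2166136261, h2 = 5381;
  for (const c of url) {
    h1 = Math.imul(h1 ^ c.charCodeAt(0), 16777619) >>> 0;
    h2 = ((h2 * 33) ^ c.charCodeAt(0)) >>> 0;
  }
  return [h1 % BITS, h2 % BITS];
}

function addToFilter(filter, url) {
  for (const bit of hashes(url)) filter[bit >> 3] |= 1 << (bit & 7);
}

function mightHave(filter, url) {
  // True if all probe bits are set: "probably cached" (false positives
  // possible, false negatives impossible).
  return hashes(url).every(bit => (filter[bit >> 3] & (1 << (bit & 7))) !== 0);
}

// Client side: summarize the cache before making a request.
const filter = new Uint8Array(BITS / 8);
addToFilter(filter, '/app.css');
addToFilter(filter, '/app.js');

// Server side: skip pushing anything for which mightHave() is true.
// A false positive only costs one extra round trip for that resource.
```

The asymmetry is the point: the filter can wrongly claim a resource is cached (rare, and recoverable with a normal request), but never wrongly claim it is missing, so the server never re-pushes what the client has.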
|
| When the push removal was originally announced, I wrote a bit
| more about this: https://evertpot.com/http-2-push-is-dead/
|
| Anyway, I'm disappointed this happened. I get that people had
| trouble putting push to good use, but in my opinion this just
| needed more time to brew. Once Chrome announced the removal, all
| of this died.
| nine_k wrote:
| I'm afraid per-item responses are not very realistic, even with
| HTTP/2 and efficient message formats. The items then become
| comparable in size to, or smaller than, the client identity
| token sent with each request, whose size is bounded below by
| cryptography.
|
| Caching is also notoriously hard to get right. Apps (including
| web apps) usually prefer to depend on their own caching: it
| saves them more in development and support expenses.
| sidcool wrote:
| Are Websockets a viable alternative to HTTP2 server push?
| detaro wrote:
| Websockets serve entirely different purposes.
| luhn wrote:
| You might be confusing Server Push with Server-Sent Events.
| https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...
|
| Server Push just deals with eagerly loading assets like CSS,
| JS, and images.
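To make the contrast concrete: Server-Sent Events is just a long-lived response in a simple text format that the browser exposes via EventSource. A sketch, with the event name and payload invented for illustration:

```javascript
// The SSE wire format: each message is a group of "event:"/"data:" lines
// terminated by a blank line, written down a kept-open HTTP response.
function sseFrame(event, data) {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Server side (e.g. inside a Node 'http' request handler):
//   res.writeHead(200, { 'Content-Type': 'text/event-stream' });
//   res.write(sseFrame('chat', { from: 'alice', text: 'hi' }));
//
// Browser side:
//   const es = new EventSource('/events');
//   es.addEventListener('chat', e => console.log(JSON.parse(e.data)));
```

Nothing here needs HTTP/2 at all, which is why SSE is unaffected by the Server Push removal.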
| unlog wrote:
| What we need is somehow the ability to serve the html/js and json
| at the same time without having to wait for the client to get the
| html and then do the fetch request for that json. This will save
| a full round trip for loading content while keeping things
| simple.
| cyral wrote:
| If you mean avoiding the dreaded loading spinner when loading a
| single page app, you can add the JSON data in some script
| within the HTML. This is what server-side rendering frameworks
| like Next.js do. In its simplest form, just include something
| like:
|
|         <script>
|           window.initialData = {
|             isLoggedIn: true,
|             email: 'example@example.com',
|             todos: [ ... ]
|           }
|         </script>
| yread wrote:
| I just generate the json when loading the page and embed it as a
| script with a nonce. Of course I lose the caching of the
| "static" page but it's worth it.
| GordonS wrote:
| Hmm, that's an interesting idea - perhaps the browser first
| downloads some kind of "manifest" file, which lists the
| required URIs, then the browser can request them all at once?
| jefftk wrote:
| I think you're asking for https://developer.mozilla.org/en-
| US/docs/Web/HTTP/Status/103
|
| Mozilla has it as "worth prototyping"
| https://mozilla.github.io/standards-positions/#http-early-hi...
| and their tracker entry is
| https://bugzilla.mozilla.org/show_bug.cgi?id=1407355
| johnny_canuck wrote:
| This sort of sounds like what 103 Early Hints aims to resolve
|
| https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/103
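On the wire, an Early Hints exchange looks roughly like this: an informational 103 response carrying Link headers arrives before the final response, so the browser can start the downloads while the server is still rendering (paths here are illustrative):

```http
HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style
Link: </app.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html

<!doctype html>
...
```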
| riskable wrote:
| An alternative is to have a minimal HTML that has basically
| nothing except a <script> tag that connects to a WebSocket then
| have all content delivered over that. I've done testing with
| this in the past and it definitely outperforms HTTP/1.1 but
| maybe not HTTP/2 or HTTP/3 (never tested against those--this
| was several years ago).
|
| I even wrote a web server with bundled framework (as an
| academic exercise) whose whole purpose was to work like this.
| It was basically made for delivering any given bundles of
| assets over a WebSocket and had some cool features like the
| ability to auto-encode, auto-compress, or translate from one
| file type to another (say, turning a PNG into a JPEG or
| minifying JS/CSS) as a simple matter of modifying the
| configuration.
| dmw_ng wrote:
| Put the HTML/CSS inside the JS, make that whole bundle
| cacheable, and make your "HTML" response basically be one
| <script> tag starting the JS and another containing the JSON
| response.
|
| Folk have been using this technique for a decade or more
| toast0 wrote:
| If you're going to serve all the js all the time, put it
| inline.
| pca006132 wrote:
| It is not faster in cases where the js can be cached or served
| through a CDN that is faster than your own server.
| kastagg wrote:
| Is it slower?
| bullen wrote:
| We already have Server Sent Events over HTTP/1.1 so yes this was
| completely unnecessary.
|
| The TCP head-of-line problem makes HTTP/2 a solution ONLY for
| people that abuse the bandwidth in the first place (read: big
| centralized corporations that have large bills of wasted
| electricity), making it a non-solution across the board for the
| distributed (hosting HTTP from home) humanity.
|
| The reason for "eternal growth" mindset is job security:
|
| "What are all these people going to do now that we have the final
| HTTP/1.1 working perfectly across the globe on every machine?"
| Deprecate and "improve"...
|
| In my mind the second version (not always named 2) is always the
| final version; see Half-Life and StarCraft for other stories that
| tell you a similar progression: you make 1.0 as well as you could,
| then take a pause and realize you need some final additions so
| 1.1 completes the product/protocol. (Both HL and SC were
| rewritten in a large manner after they were "done".)
|
| 2.0 is often a waste of resources, see HL2+ and StarCraft 2;
| where the electricity prices are going now and for eternity you
| won't be able to play them anyway!
|
| Complexity has a price, HTTP/1.1 is simple!
| detaro wrote:
| Server Sent Events and Server Push don't really have anything
| to do with each other... (SSE is still a thing with HTTP2/3
| too)
| [deleted]
| rubenv wrote:
| Server Sent Events and H2 Server Push serve completely
| different purposes, one is about application messages, the
| other about loading resources.
|
| But it's obviously much easier to criticize if you don't
| actually dive deeper than the name of things.
| stagas wrote:
| I posted this at the Google group, though it is pending approval.
| So this is a shorter version of it from memory:
|
| I have a dev http/2 server that is serving my app locally,
| amounting to 271 module dependencies. It takes about 400-500ms to
| load everything and start up the app. The implementation is
| trivial and without errors, every dependency can locate its
| assets because it sits at its real path, sourcemaps etc.
|
| Switching to Early hints/Link: it becomes 4-5s and even with all
| of the modules cached, it doesn't get less than 2-3s. So this
| could be an implementation difficulty and might be improved, but
| it is still a non-trivial overhead.
|
| Now the only viable solution becomes bundling, even with esbuild
| the improvement over http/2 push is marginal. But now there is an
| extra build step that before was handled at the HTTP layer by
| "push". The files are bundled into a single file, so they no
| longer can access their assets relative to their paths, since
| they have changed paths. Workers etc. have the same problem:
| import.meta.url no longer works and so they can't be located. Not
| without various steps of transformations and assets handling, and
| there'll always be edge cases where those are going to fail.
| Certainly not composable across domains/projects since by
| bundling they become hardwired to their environment.
|
| Push doesn't have these problems. So, it's a step backwards IMO
| to remove it. Without "push", the only viable solution becomes to
| use 3rd party bundling tools for any non-trivial work that
| exceeds a handful of dependencies, where with "push" you could
| deploy the exact structure of the code as it was. Bundling makes
| source code and the web less open and less accessible and with
| this change there is no other solution. Early hints are fine when
| dependencies number less than a dozen, but it becomes much worse
| as they grow, leaving you with no choice but build steps/bundle
| steps, instead of simply uploading the directory of your source
| code to the http/2 push server, leaving the structure intact.
| a-dub wrote:
| it was a cool idea, but i don't remember even seeing it
| implemented in spdy, which is where i believe it came from.
| NightMKoder wrote:
| They mention this in the doc, but it seems like the future
| solution to preloading resources will involve HTTP Status 403
| [1]. That seems like a great alternative to server push.
|
| Currently it seems like the best option is to use Transfer-
| Encoding: chunked and make sure you flush out an HTML header that
| includes the preload directives before you spend any time
| rendering the web page. This is a pain - especially if something
| in the rendering fails and you now have to figure out how to
| deliver...something to the client. It also limits other headers
| significantly - e.g. cookies.
|
| [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/103
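The early-flush approach described above can be sketched as follows. `renderBody` and the preload targets are hypothetical; the point is only that the head with preload hints goes out before any slow rendering happens:

```javascript
// Sketch: flush a <head> containing preload hints immediately, then do
// the slow server-side rendering. With a chunked/streamed response the
// browser can start fetching /app.css and /app.js during the render.
const EARLY_HEAD =
  '<!doctype html><html><head>' +
  '<link rel="preload" href="/app.css" as="style">' +
  '<link rel="preload" href="/app.js" as="script">' +
  '</head><body>';

async function handle(res, renderBody) {
  res.write(EARLY_HEAD);           // first chunk: sent before rendering starts
  try {
    res.write(await renderBody()); // the slow part: DB queries, templating, ...
    res.write('</body></html>');
  } catch (err) {
    // The status line and headers are already gone, so an error can only
    // be delivered in-band -- exactly the pain point described above.
    res.write('<p>Something went wrong.</p></body></html>');
  }
  res.end();
}
```

This also illustrates the header limitation: everything that must live in a header (cookies, status code) has to be decided before the first `res.write`.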
| chrisshroba wrote:
| In case anyone is confused, I believe the parent poster means
| 103, not 403.
| orangepanda wrote:
| > solution to preloading resources will involve HTTP Status 403
|
| "Access forbidden, here have this css file instead"
|
| Or did you mean HTTP Status 103?
| nousermane wrote:
| Are they going to kill Cache API [0] as well? Because if not, one
| can still (kinda) "server-push" stuff through that.
|
| [0] https://developer.mozilla.org/en-US/docs/Web/API/Cache
| judah wrote:
| Certainly not. Cache API is a useful API that can enable faster
| web apps by skipping network calls altogether. It also enables
| offline-capable web apps. I regularly use this API to speed up
| my web apps -- both personal and professional. I recently wrote
| about using Cache API to make a web app work entirely offline:
| [0].
|
| Also, I think you may be mistaken about being "(kinda) server-
| push" - Cache API is entirely client-driven: the server can't
| push items into that cache; the client (usually a service
| worker) must choose to do so. I don't know what you mean when
| you say it's like server push.
|
| [0]:
| https://debuggerdotbreak.judahgabriel.com/2022/07/01/offline...
| nousermane wrote:
| Cache API being "useful" and "entirely client-driven" are both
| great points, but how does that contradict my statement
| about the same API being usable for server-push (i.e. force-
| feeding the client some objects it didn't ask for), may I ask?
|
| Remember, the "client" (assuming the user didn't disable
| javascript) is a dumb machine that executes whatever (code
| downloaded from) the server tells it to, within some (pretty
| generous) limits. Imagine index HTML containing this:
|         <script type="module">
|           const cache = await caches.open('push');
|           cache.put('/resource.json', new Response('{"foo": "bar"}'));
|           cache.put('/resource.gif', new Response(atob('R0lGODlh...')));
|         </script>
|
| That, of course, assumes that the rest of the code would use
| cache.match() instead of the fetch API or XHR. Or, more
| realistically, a wrapper that tries both.
| judah wrote:
| I don't deny that Cache API was usable from HTTP/2 push.
| I'm responding to your question asking whether Chrome will
| obsolete the Cache API because it was usable from HTTP/2
| server push.
|
| My answer is no, of course not, because Cache API is
| unrelated to HTTP/2 push. It's useful for storing resources
| regardless of whether they're HTTP/2 push'd from the server
| or fetched from the client. Indeed, the primary use of
| Cache API is storing resources fetched from a client's
| service worker.
| ThalesX wrote:
| Huh, I just moved my NSFW scraper to serve me content through
| HTTP2 server push. Guess it's back to the drawing board.
|
| [le: curious about the downvotes (-3), for anyone willing to shed
| some light]
|
| (le2: turns out I'm using Server-Sent Events and _not_ HTTP2
| server push; sry for the noise)
| infensus wrote:
| You can use HTTP3... unless they find out it also has a design
| flaw and it's time to move to the next thing
| ocdtrekkie wrote:
| This is one of the reasons HTTP/1.1 will still be reliable
| and workable decades from now, and every HTTP/2 and HTTP/3
| client will be long gone. Google has just taken their
| ridiculous deprecation policies and applied them to web
| standards bodies.
| coolspot wrote:
| Can you elaborate, for my friend, please.
| ThalesX wrote:
| Not all that complicated, I have some places on the interwebz
| where my kind of NSFW content is posted so I scrape those and
| maintain an SQLite database of direct urls.
|
| Over that database I have a tiny Koa server that checks if a
| URL is still active, processes the image in memory, and serves
| it through HTTP2 over to a Preact app that handles displaying
| them.
|
| I do this so I can share this with some people, and allow
| them to share _some_ stuff without exposing myself to
| liability for hosting.
| cmeacham98 wrote:
| > I do this so I can share this with some people, and allow
| them to share some stuff without exposing myself to
| liability for hosting.
|
| I don't know what country you live in, but here where I
| live (in the US) you definitely aren't avoiding any legal
| definition of hosting by only relaying.
| detaro wrote:
| I'm fairly certain you are confusing Server Push with
| something else, because as far as I can tell Server Push is
| not useful for doing what you describe.
| ThalesX wrote:
| Clients connect to the server, the connection is kept open, and
| the server pushes the base64 data to all connected clients.
| detaro wrote:
| that's not Server Push, but Server-Sent Events probably
| ThalesX wrote:
| Thanks for this! You are right. I'll update my posts and
| perhaps this also answers my question of why I am
| downvoted! I'm using SSE indeed.
| rektide wrote:
| As I commented yesterday[1], accelerating page loading was only
| one use case for http Push frames. Alas, it's the only use case
| Chrome ever acknowledged or cared about or has ever seemingly
| recognized (in spite of it under-the-hood powering Push API).
|
| Push also could have been useful to replace the mess of non-web,
| non-resourceful ad-hoc protocols people have endlessly kept
| reinventing atop WebSockets/WebTransport/SSE/long-polling to push
| things like new chat messages to the browser.
|
| The author cites Jake Archibald's struggle using Push for page
| loading. But Jake also was interested in expanding past this
| incredibly limited use case, and threw together a mini-proposal
| to let Fetch hear of incoming Pushed resources[2]. Chrome, though,
| has largely never acknowledged this desire, even though it's
| been asked for for ~7 years now.
|
| So the web is still request & response & there's still no natural
| obvious way for the browser to hear of new resources that are
| occurring. Chrome didn't listen to developers, never let us
| actually use the feature, and now they're killing it, and we're
| stuck where we were two decades ago, scrambling to figure out
| what to do to get some bidirectionality beyond request-response.
|
| [1] https://news.ycombinator.com/item?id=32514159
|
| [2] https://github.com/whatwg/fetch/issues/607
| chrismorgan wrote:
| > _in spite of it under-the-hood powering Push API_
|
| The Push API <https://w3c.github.io/push-api/> is a completely
| different thing with absolutely nothing in common with HTTP/2
| PUSH frames (Server Push), just like Server Push is nothing to
| do with Server-Sent Events. PUSH frames would be unsuitable for
| the Push API, which is all about _not_ having to keep a
| connection alive.
|
| (The rest of your comment is still valid, though I reckon you
| overstate the practical usefulness/superiority of Server Push
| somewhat--sure, the ability to push multiple streams instead of
| just one _is_ useful in some cases, helping avoid head-of-line
| blocking or subprotocol complexity, but in practice I suggest
| that most scenarios can't actually use that power _at all_.)
| rektide wrote:
| The only standardized implementation of Push API is Web Push
| Protocol, which uses HTTP Push frames to push new message
| resources.
|
| https://datatracker.ietf.org/doc/html/draft-ietf-webpush-
| pro...
|
| You've listed some fine advantages of Push frames, but the
| one most near & dear to me is that it is supported by HTTP
| itself & delivers http resources. This is an incomparable
| advantage versus everything else: slotting nicely into the
| existing ecosystem.
|
| With QUIC-based semi-connectionless HTTP3, the advantages
| would only have been further accelerated, no longer even
| needing a TCP connection to be maintained.
| chrismorgan wrote:
| Better link, https://datatracker.ietf.org/doc/html/rfc8030.
|
| I see now that I misunderstood what you were speaking of by
| "under-the-hood". I was thinking about the developer-facing
| parts of the Push API, but you actually meant the bit in
| the browser's control, opaque to the developer. I retract
| my quibble.
| tehbeard wrote:
| That's not a standardized implementation it's a draft spec
| that's years old and expired.
| EarthLaunch wrote:
| HTTP/2 Server Push being impractical matches my experience
| getting my game client[0] to load quickly (2.5-3.5 seconds via
| Starlink, for cold vs. warm cache). What slowed it down:
|
| 1. Fetching many resources simultaneously, 162 resources just for
| terrain data, a couple hundred more for meshes. I am currently
| serving with Express, so the solution here was to put Cloudflare
| in front of it as a reverse proxy with a short cache TTL.
| Cloudflare then serves them with HTTP/3. Also, requesting all of
| these simultaneously rather than sequentially. The header
| overhead isn't a problem with HTTP/3 since that uses a hash or
| something rather than re-sending all the same headers in each
| request.
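The simultaneous-vs-sequential point above can be sketched with a small helper; the URLs and the `fetchFn` parameter are illustrative:

```javascript
// Sketch: request all resources concurrently instead of awaiting each in
// turn. Over HTTP/2+ the requests share one connection, so total time is
// roughly one round trip plus the slowest download, not the sum of all.
async function fetchAll(urls, fetchFn = fetch) {
  const responses = await Promise.all(urls.map(u => fetchFn(u)));
  return Promise.all(responses.map(r => r.arrayBuffer()));
}

// Usage (hypothetical terrain zone files):
//   const zones = await fetchAll(['/terrain/0.bin', '/terrain/1.bin']);
```

Sequential awaiting (`for (const u of urls) await fetchFn(u)`) would instead pay one full round trip per resource, which is exactly the cost that grows painful at 162+ resources.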
|
| 2. Resource byte size. Cloudflare does Brotli but from my
| testing, I would need to combine terrain zones into a single
| binary blob for this to reduce byte size by half. (Compressing
| things that are a few hundred bytes achieves very little.) But
| combining these would mean that any change to the data of one
| zone would require re-compressing all zones. (Mesh compression
| with Draco[1] helped though.)
|
| 3. ROUNDTRIP. Especially with connection latency. This is where
| Server Push comes in. I actually experimented with it. But it
| basically only saves one roundtrip (~50-500ms depending on
| connection); the server establishes a connection then can push.
| Without server push, the server establishes a connection, then
| waits for the client to request resources.
|
| The client requesting resources has a few advantages. 1) Server
| doesn't need to figure out what data the client needs, the client
| can do those calculations, and the server can simply verify
| (which is a lighter SQL query than discovery). 2) Client can not-
| request resources which it already has (in cache or storage)! 3)
| Resources can be cached by Cloudflare at the edge.
|
| 0: earth.suncapped.com
|
| 1: github.com/google/draco
| ryantownsend wrote:
| You've got a fairly hefty JS file on the critical rendering
| path (350kb), so one option to help with your roundtrip issue
| (3) is to preload the other files it uses, so they aren't
| dependent on the JS downloading and executing before they are
| discovered. See:
| https://web.dev/preload-critical-assets/
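Concretely, preloading here means listing the files in the HTML up front so the browser fetches them in parallel with the bundle (all filenames below are made up):

```html
<!-- The browser starts these downloads immediately, instead of waiting
     for the bundle to download, parse, and execute before discovering
     them. as="fetch" preloads need crossorigin to match later fetch()es. -->
<link rel="preload" href="/terrain/index.bin" as="fetch" crossorigin>
<link rel="preload" href="/meshes/player.drc" as="fetch" crossorigin>
<script src="/bundle.min.js"></script>
```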
| EarthLaunch wrote:
| Thanks! That JS file is the whole minified application. The
| application is what knows what needs to be loaded - so I
| would have to move that logic out into the page. It would
| help but I am not sure it's worth the cost of maintaining
| that extra logic, in this case (being indie).
| jefftk wrote:
| 2020-11 announcement with technical details on why they're
| removing it: https://groups.google.com/a/chromium.org/g/blink-
| dev/c/K3rYL...
|
| No one, as far as we know, inside or outside Google, ever figured
| out how to use server push to consistently speed up loading
| pages.
| msoad wrote:
| Push is a bad design. The client should manage the resources on
| the client, not the server.
|
| Take resource media queries, for instance: how can the server
| know whether the user wants the dark mode or light mode CSS file?
| Koffiepoeder wrote:
| Use css variables for colors, then load the relevant set with
| a 'prefers-color-scheme' media query. No need to serve 2 CSS
| assets: the difference should be minimal.
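A sketch of that single-stylesheet approach, with made-up color values:

```css
/* One stylesheet serves both themes: only the variable values swap,
   so there is no server-side decision about which CSS file to send. */
:root {
  --bg: #ffffff;
  --fg: #111111;
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg: #111111;
    --fg: #eeeeee;
  }
}

body {
  background: var(--bg);
  color: var(--fg);
}
```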
| jefftk wrote:
| A stronger version of msoad's point would be deciding
| whether to push a dark or light image, based on the user's
| theme.
| msoad wrote:
| You can solve this using the <picture> element, as GitHub does:
| https://github.blog/changelog/2022-08-15-specify-theme-conte...
| mikewhy wrote:
| That sounds exactly like "The client should manage the
| resources on the client not the server"
| nousermane wrote:
| It _is_ bad design, but more importantly, server push is a
| (subpar) solution to a problem we shouldn't have in the
| first place.
|
| How exactly modern web ended up in a situation where, first
| time browser is displaying a single page (with maybe 3
| pictures and 2 paragraphs of text), it has to download 300
| resources from 12 different servers, including 2 megabytes of
| minified javascript?
| jupp0r wrote:
| > How exactly modern web ended up in a situation where
| [...]
|
| > it has to download 300 resources from 12 different
| servers,
|
| > including 2 megabytes of minified javascript?
|
| This is "best practice" that is used widely to work around
| HTTP 1.1 Head of Line Blocking [1]. HTTP/2 and to a greater
| degree HTTP/3 (as it also alleviates TCP head of line
| blocking) are fixing the underlying problem making the
| former best practice an anti pattern (but only for those
| users able to use modern HTTP implementations).
|
| [1] https://en.wikipedia.org/wiki/Head-of-
| line_blocking#In_HTTP
| divbzero wrote:
| "12 different servers" and "minified JavaScript" were
| indeed workarounds for head-of-line blocking, but I think
| the GP's main point is that "300 resources" and "2
| megabytes" should not be required for displaying a web
| page.
| nine_k wrote:
| They are not required to display a web page. They are
| mostly required to make money off that web page, and, to
| a smaller degree, to observe how users interact with it
| and thus improve it (also aligns with making more money).
|
| Fortunately, Reader Mode is available in a few user-
| friendly browsers.
| [deleted]
| vbezhenar wrote:
| How many files does a typical desktop program touch to
| display 3 pictures and 2 paragraphs of text? How many
| megabytes does its installation occupy? My bet is that the
| numbers will be comparable. That's the way people write software.
| Standards and tools should adapt. It makes no sense to
| complain about it as nothing changes. Every popular JS
| build tool helps with good practices by building a single
| minified bundle. Popular JS frameworks are a tiny portion of
| those 2 megabytes. Developers just don't care and you can't
| do anything about it. You can build better protocols and
| better tools to make those applications work faster. And
| developers will leverage those to make their apps even
| slower. Who cares about 2 MB with 5G?
| mtoohig wrote:
| As much as I agree in general, the last part affects me
| and many others. I only get 2.5Mb/s and I work for the
| WISP.
|
| The wife's family living away from the main city have
| intermittent connection and slower speeds. We have only
| one submarine cable and poor infrastructure further out.
| So heavy, bloated sites eat up people's pre-paid internet
| packages quickly or have trouble loading altogether.
| Prepaid is around $1.25 for 500MB or $10 for 5.5GB.
|
| I think lower bundle sizes should be a goal, or lower
| bandwidth websites should be available, such as an 'm.'
| domain, IMO.
| aaaaaaaaaaab wrote:
| _Yawn..._
|
| Strawman argument.
|
| I just want to read a damn article. Why does it need to
| download hundreds of resources from dozens of origins?
| Reading a PDF article on my desktop touches exactly one
| file, the PDF. Ok, maybe some dylibs from the OS related
| to PDF rendering. But surprise! Browsers already have
| everything built-in for HTML rendering! No need to
| download an HTML rendering engine, because it's already
| there: it's the browser!
| vbezhenar wrote:
| Basically because the site owner wants to monetise your
| visit. Those who don't want to monetise you usually don't
| put loads of trackers, ads and other stuff, which is the
| main reason for those hundreds of requests.
| staticassertion wrote:
| Why is multiple files worse than one? PDF is an insanely
| complex format, I don't think the argument of "I can just
| read a PDF file" is strong.
| okasaki wrote:
| Even if so, the javascript doesn't replace the OS
| "bloat", it adds to it.
| nousermane wrote:
| > How many files does a typical desktop program touch to
| display 3 pictures
|
| Not sure about typical, but here is a rough estimate for
| the rock bottom of it: the "feh" image viewer on Ubuntu/X11:
|
|         $ strace -e openat -o >(wc -l) feh -F one.jpg two.jpg three.jpg
|         105
| the8472 wrote:
| 63 for me, and most of those are libraries which should
| already be in memory. Which is nanoseconds away from the
| CPU while servers are dozens to hundreds milliseconds
| away.
| [deleted]
| jchw wrote:
| That's not the whole story of course: there's an X server
| that feh is communicating with over a domain socket and
| usually shared memory, and X implements significant
| functionality on its end. Then there's drivers, and the
| stuff between these things.
| nousermane wrote:
| Sure. And bloated websites aren't dropping from the race
| here, either. They are communicating, over HTTP (and
| maybe WS/SSE), with a fair number of servers that also
| implement significant functionality. Then there are
| microservices, databases, the network between those...
| doliveira wrote:
| > Who cares about 2 MB with 5G
|
| Statements like this remind me we live on different
| planets
| dwheeler wrote:
| > How exactly [has the] modern web ended up in a situation
| where, first time browser is displaying a single page (with
| maybe 3 pictures and 2 paragraphs of text), it has to
| download 300 resources from 12 different servers, including
| 2 megabytes of minified javascript?
|
| Ads.
| edude03 wrote:
| Also bundlers without treeshaking - I personally once
| delivered 70MB of compressed JS to clients in production
| skyde wrote:
| When working on the Bing web crawler, we tried to take
| screenshots of webpages to use as thumbnails.
|
| What you describe was a huge pain: most pages forced us to
| download 15MB of data and load it in memory just to display
| 2 small images and some text.
|
| A format like PDF would have been much better, because we
| could have read just enough bytes to render the visible
| part of the document.
|
| But instead we had to download and execute 30 javascript
| files.
| riedel wrote:
| Push would IMHO allow nice pub/sub if combined with SSE.
| Particularly for short-lived objects, one could actually
| considerably reduce overhead. However, it has been built with
| the design goal of being an optional feature (as there is no
| client-side API): I guess it fulfilled at least that goal.
| jefftk wrote:
| I agree on removing it, since it turned out not to work well
| despite lots of people trying.
|
| On your particular question, there's the client hint Sec-CH-
| Prefers-Color-Scheme: https://web.dev/user-preference-media-
| features-headers/
| msoad wrote:
| > The Sec-CH-Prefers-Color-Scheme client hint header is
| supported on Chromium 93. Other vendors' feedback, namely
| WebKit's and Mozilla's, is pending.
|
| I don't see this header being sent in Chrome 104!
| panopticon wrote:
| Works on Chrome 104 for me: https://sec-ch-prefers-color-
| scheme.glitch.me/
|
| Note that the server has to request that header using
| accept-ch.
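The server side of that can be sketched with a plain Node-style handler; the handler shape and stylesheet names are illustrative:

```javascript
// Sketch: the server opts in to the hint via Accept-CH. On requests where
// the browser then sends Sec-CH-Prefers-Color-Scheme, pick a stylesheet.
function handler(req, res) {
  res.setHeader('Accept-CH', 'Sec-CH-Prefers-Color-Scheme');
  res.setHeader('Vary', 'Sec-CH-Prefers-Color-Scheme'); // theme-dependent response
  const scheme = req.headers['sec-ch-prefers-color-scheme']; // 'dark' / 'light' / undefined
  const css = scheme === 'dark' ? '/dark.css' : '/light.css';
  res.setHeader('Content-Type', 'text/html');
  res.end(`<link rel="stylesheet" href="${css}">`);
}
```

Note the hint only arrives on requests made after the browser has seen Accept-CH (or on the first request if Critical-CH triggers a retry), which is consistent with the curl output above showing the opt-in headers.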
| msoad wrote:
| curl --head https://sec-ch-prefers-color-
| scheme.glitch.me/ HTTP/2 200 date:
| Fri, 19 Aug 2022 18:55:43 GMT content-type:
| text/html; charset=utf-8 content-length: 936
| x-powered-by: Express accept-ch: Sec-CH-Prefers-
| Color-Scheme vary: Sec-CH-Prefers-Color-Scheme
| critical-ch: Sec-CH-Prefers-Color-Scheme etag:
| W/"3a8-iEB3drxZIB7EZYHQL534qwomDuI"
| fzfaa wrote:
| > How the server can know if user wants dark mode or light
| mode CSS file?
|
| Cookies? Custom header?
| therein wrote:
| Yeah, that's the thing. So very many years ago when we started
| using HTTP/2 at LinkedIn, we simply just couldn't find a use
| for server push. It was a fascinating meeting: we got this
| technology, and no matter how much we tried, we couldn't
| come up with a use for it. I still remember how awkward that
| meeting was to this day.
| skyde wrote:
| How is this different from rel="preload" in HTML to preload
| (css javascript ... ) Ex: https://developer.mozilla.org/en-
| US/docs/Web/HTML/Link_types...
|
| The webserver could just parse the HTML and send http push
| for each dependency! What was the problem with doing this?
|
| To me this seems very similar to how "open document format"
| stores html with all its dependencies in a ZIP package. So
| with HTTP/2 push you just push the whole package and the
| client tells the server if it already has some of the files.
| coder543 wrote:
| > The webserver could just parse the HTML and send http
| push for each dependency! What was the problem with doing
| this?
|
| Well... the client knows which resources are already in its
| cache. The server does not. You just suggested that the
| server should always send resources that the client almost
| certainly already has cached, which is wasteful for
| everyone involved.
|
| > So with HTTP/2 push you just push the whole package and
| the client tell the server if it already have some of the
| files.
|
| That doesn't make sense. Once the files have been pushed,
| the bandwidth and time has already been wasted. The client
| doesn't get to tell the server anything in that scenario.
| skyde wrote:
| a push is an HTTP2 stream, and the stream header contains
| everything the client needs to know to tell if it already
| has that file.
|
| If it does already have that file, the client simply closes
| the stream; it doesn't need to download the file or send a
| request to the server to say it doesn't need the file.
| coder543 wrote:
| Even then, the average case would still be worse because
| the server would be sending all these unnecessary stream
| headers and forcing the client to send so many
| unnecessary RST_STREAM frames back. Each PUSH_PROMISE is
| supposed to contain the full set of headers for each
| resource.
|
| I guess I hadn't realized that clients could RST_STREAM
| on these pushes, but it doesn't change the outcome here.
|
| What you describe isn't a win for anyone except a client
| with a cold cache, and then they start losing immediately
| after that. That's why it isn't done. That's why HTTP/2
| Push is going away.
| Matthias247 wrote:
| Even if a client does a RST_STREAM, the origin server
| might already have done a lot of additional work (e.g.
| request the pushed file from an upstream if it's a
| proxy/CDN), and probably stuffed
| min(TCP_SEND_BUFFER_SIZE, STREAM_FLOW_CONTROL_WINDOW) of
| data into the stream. Which then also means all of that
| work might get billed if the server is a managed service.
| It's really quite some waste of resources compared to the
| client sending an additional request (which might even be
| a conditional request).
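The conditional request mentioned above is the cheap alternative to pushing blindly: the client revalidates its cached copy with If-None-Match, and a match costs only a tiny 304 response. A minimal sketch of the server-side logic, with made-up ETag values:

```python
# Sketch of the conditional-GET flow: the client sends the ETag of
# its cached copy; the server sends a full body only if it changed.
# Function name and ETag values are illustrative.

def handle_get(server_etag, body, if_none_match):
    """Return (status, body) for a GET with an optional
    If-None-Match validator."""
    if if_none_match == server_etag:
        return 304, b""          # Not Modified: client reuses its cache
    return 200, body             # cache stale or cold: send full body

# Warm cache: client presents the current ETag, gets an empty 304.
print(handle_get('"v1"', b"body { color: red }", '"v1"'))   # (304, b'')
# Cold cache: no validator, full response.
print(handle_get('"v1"', b"body { color: red }", None)[0])  # 200
```

Compared with a cancelled push, the worst case here is one extra round trip, and no bytes of body are ever sent to a client that didn't need them.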
| 0x457 wrote:
| I guess the client could tell the server "I already have
| that file in cache", but it's still weird, and might
| require another round-trip (the server asking whether the
| client needs this file).
| nousermane wrote:
| Wasn't the originally intended use for server push
| something like this?
|
| 1) Load page through javascript-enabled browser (headless
| chrome, etc), and record resources accessed from the same
| server;
|
| 2) Save this list somewhere, where server can read it, keyed by
| URL;
|
| 3) When user requests same URL, push resources from the list.
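The three steps above amount to a manifest keyed by page URL. A minimal sketch, where `record_page_load` would be fed by a headless-browser crawl; the function names are hypothetical, not a real API:

```python
# Step 1-2: a headless-browser crawl records which same-origin
# resources each page pulled in, saved keyed by URL.
# Step 3: on a later request for that URL, push the recorded list.

push_manifest = {}  # page URL -> list of dependency URLs

def record_page_load(url, resources_fetched):
    """Save the dependency list observed for this page."""
    push_manifest[url] = list(resources_fetched)

def resources_to_push(url):
    """Look up what to push when this page is requested again."""
    return push_manifest.get(url, [])

record_page_load("/index.html", ["/app.css", "/app.js", "/logo.png"])
print(resources_to_push("/index.html"))
# ['/app.css', '/app.js', '/logo.png']
```

As the replies below note, the hard part was never building this list; it was that the server cannot know which entries the client already has cached.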
| jefftk wrote:
| That's the idea that led to implementing server push. Lots
| of people tried it, and it turns out not to work very well.
| There's too much variation in what people will already have
| in their caches, and the server is not great at predicting
| the order in which clients will need the resources.
| [deleted]
| vbezhenar wrote:
| Client can send bloom filter with first request.
| habibur wrote:
| How? The client needs to know what the resources are to
| create a bloom filter. It doesn't know that before 1st
| request.
| vbezhenar wrote:
| The first request carries an empty filter, so the server
| will push everything mentioned in the HTML. For subsequent
| requests the browser fills the filter with known hashes,
| and the server pushes only changed resources (and false
| positives).
| skyde wrote:
| Your browser can have a bloom filter for each subdomain.
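The cache digest idea sketched in this sub-thread can be illustrated with a minimal bloom filter. Sizes and hash choices below are illustrative; the real (since-abandoned) Cache-Digest draft for HTTP/2 used a different encoding:

```python
import hashlib

# Minimal bloom-filter cache digest. A "no" answer is definitive
# (the resource is not cached); a "yes" may be a false positive,
# which only costs one unnecessary push.

class CacheDigest:
    def __init__(self, size_bits=256, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, url):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, url):
        for p in self._positions(url):
            self.bits |= 1 << p

    def may_contain(self, url):
        return all(self.bits >> p & 1 for p in self._positions(url))

digest = CacheDigest()
digest.add("/app.css")
# Server side: only push resources the filter definitely lacks.
print(digest.may_contain("/app.css"))   # True
print(digest.may_contain("/fresh.js"))  # almost certainly False
```

The browser would send `digest.bits` (a few dozen bytes) with the first request, letting the server skip pushes for anything the filter claims to contain.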
| saurik wrote:
| I'm just still shocked that this wasn't obvious to more
| people before: it isn't like you even need to have
| implemented this feature to do the instrumentation required
| to know whether this would work, and it frankly had always
| seemed like a long shot.
| mmis1000 wrote:
| And it turns out the way this was implemented didn't
| really work well. There just isn't a universal format in
| which you can generate the list and upload it to a
| server/CDN/whatever. No one ever figured out how to
| actually use this list to generate an HTTP/2 server push
| from a common platform/CDN.
|
| The alternatives (service workers, <link> prefetch/preload
| tags), on the other hand, are much easier to handle,
| although they cost one extra round trip, because they are
| just text files that you upload to the server.
| 0x457 wrote:
| I think it was something much simpler: oh, you're
| navigating to `index.html` without a session? You're going
| to need these CSS, JS and PNG files as well.
|
| And the idea is that the client would already have those
| files by the time the browser parses `<head>`.
| Except... the browser cache exists, so who cares.
| hackbinary wrote:
| It would be great if Google stopped making nasty URLs with
| the nasty "Scroll To Text Fragment" feature.
|
| It is the biggest tech retrograde since MS started putting
| UUIDs into their URLs, which they largely stopped except
| for SharePoint documents.
| jksmith wrote:
| This is forcing a solution to fit the problem.
| chrsig wrote:
| this really strikes me as evidence that something is wrong with
| the http standardization process. we should know if something's a
| good idea before immortalizing it in a standard.
|
| It may be removed from chrome, but it's still in the standard.
| isodev wrote:
| I would say what's wrong is that Chrome decided to remove a
| standard feature, and due to their market share, nobody can
| oppose it. There was a time when a single company controlled
| the web. That was Microsoft. Today, it's Google.
| BrainVirus wrote:
| _> this really strikes me as evidence that something is wrong
| with the http standardization process._
|
| The overall decision-making around how the web works is insane
| and getting more insane by the year. We have fairly trivial and
| absolutely universal problems unsolved for decades, while
| browsers get crammed full of features that aren't used by 99.9%
| of websites.
|
| Worse, some problems are finally solved in such a half-assed
| way that it's almost worse than having no solution at all.
| (Input type="date", meter element, dialog element etc.)
|
| This is not at all what I imagined the field would look like
| when I entered it nearly two decades ago.
| pixl97 wrote:
| >We have fairly trivial and absolutely universal problems
| unsolved for decades, while browsers get crammed full of
| features that aren't used by 99.9% of websites.
|
| Why is this surprising? Universal problems are hard to
| solve; otherwise they wouldn't be universal problems in
| the first place. Add to that the fact that many groups
| will each have their own opinion on what the best solution
| is, and any proposal will conflict with some of them.
| BrainVirus wrote:
| _> Universal problems are hard to solve otherwise it
| wouldn't be a universal problem in the first place._
|
| This is completely backwards. Universal problems on the web
| are routinely solved by everyone designing a standard
| website. They often have fairly standard solutions. Browser
| vendors routinely fail to generalize the experience of run-
| of-the-mill web developers. It's not about engineering.
| It's about misaligned incentives and operating from a bad
| frame of reference.
| modeless wrote:
| > We have fairly trivial and absolutely universal problems
| unsolved for decades
|
| This describes most areas of applied computer science,
| honestly. Language design, build systems, operating systems,
| etc. When standards and/or widely used systems are involved,
| nothing is trivial. And the web is the most widely used and
| most standards-based system out there.
| tehbeard wrote:
| You want to explain your examples?
|
| Because what I see is:
|
| - A working element that only got half assed/ignored by
| one provider (Apple, but who's surprised?), which meant
| polyfills for years to fix it.
|
| - A niche element; there are plenty of those?
|
| - A relatively new element that solves a lot of issues
| with Z layering, accessibility etc., and is a good basis
| for other components/libraries to use for their
| styled/enhanced modal dialogs.
| gunapologist99 wrote:
| Agreed, it should have received much more testing.
|
| However, in this case it being in the HTTP/2 standard is moot,
| since it's just a (premature) optimization and thus its removal
| results in a reasonable fallback, which is simply not using
| that particular optimization, and also because HTTP/3 is so far
| along already.
|
| Not to get too tangential, but this seems similar to fetch
| removing support for response bodies for POST/PUT requests; a
| login POST, for example, might return information about the
| user's login or the state of the server, or even a per-tab/per-
| page token that isn't a cookie header, but fetch simply refuses
| to support it (even though XMLHttpRequest still does). Fetch
| removing this means that additional round-trips are needed,
| even over a multiplexed connection, and certain app designs are
| simply not possible. The solution seems to be to buck the trend
| and just use XHR instead of fetch, especially since fetch
| doesn't seem to be available on older Android Chrome (possibly
| 5-15% marketshare even now.)
| Thiez wrote:
| > this seems similar to fetch removing support for
| response bodies for POST/PUT requests
|
| Wait what? I have never heard of that, do you have a link
| where this removal is documented?
| chrsig wrote:
| > it's just a (premature) optimization and thus its removal
| results in a reasonable fallback, which is simply not using
| that particular optimization
|
| this is a salient point. if it's just an optimization and not
| implementing it (or removal) doesn't change behavior of sites
| that have adopted it, then it's benign.
| nousermane wrote:
| Well, IETF can always publish an HTTP/2.1 with that removed,
| no?
| chrsig wrote:
| an HTTP/2.1 doesn't stop HTTP/2 from existing, it just
| creates _another_ standard.
| samwillis wrote:
| HTML5 doesn't stop XHTML from existing, however it's
| collectively considered a bad idea and no one uses it.
|
| Sometimes specs are proved to be wrong, this is just one of
| those occasions.
| jraph wrote:
| What's wrong with XHTML 1.0 or 1.1? (assuming you are
| speaking about this)
|
| What significant feature in XHTML is not supported by
| HTML5?
|
| If you are only speaking about the syntax (so your
| statement includes XHTML5), I don't follow you either: I
| don't see what's wrong with the XML version of HTML5.
| 0x457 wrote:
| The issue is that if the page is invalid XHTML, it's
| supposed to not render anything at all, which is very
| undesirable.
|
| `<b><p></b></p>` wouldn't stop HTML from rendering, but it
| would stop XHTML. Plus the whole "XML parsers are very
| unsafe" issue.
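The draconian-parsing point can be demonstrated with Python's standard-library parsers: the same mis-nested markup is a fatal error for an XML (i.e. XHTML) parser but is silently accepted by a forgiving HTML tokenizer. This is a sketch of the behavioral difference, not of how browsers actually recover.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# Mis-nested tags: legal-ish HTML soup, fatally malformed XML.
broken = "<b><p></b></p>"

def xml_accepts(markup):
    """An XML parser rejects the whole document on mis-nesting,
    which for XHTML meant rendering nothing at all."""
    try:
        ET.fromstring(f"<body>{markup}</body>")
        return True
    except ET.ParseError:
        return False

def html_accepts(markup):
    """html.parser never raises on mis-nesting; it just tokenizes,
    mirroring HTML's error recovery."""
    HTMLParser().feed(markup)
    return True

print(xml_accepts(broken))   # False: XHTML would show an error page
print(html_accepts(broken))  # True: HTML error-recovers and renders
```

This is exactly the trade-off debated below: a hard failure surfaces broken markup immediately, at the cost of showing the user nothing.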
| Thiez wrote:
| That issue is considered a feature by many. Don't send
| out broken web pages. If you don't build your pages using
| string concatenation you have already eliminated most
| problems (it's just like SQL, in that respect).
|
| XML parsers are not that bad when you disable custom
| entities, which browsers could easily do.
| codedokode wrote:
| No. It is actually wrong that the browser by default
| doesn't report errors in HTML, CSS or JS - because of this
| nobody notices them, and you cannot understand why the
| button is not working. Instead, on any error the browser
| should show a big red bar above or below the page, so that
| the user immediately understands that the page is broken
| and it makes no sense to try to enter anything.
|
| Hiding errors is always a poor choice. Only a low-paid
| developer not interested in making a quality product would
| like the browser's behaviour.
| chrsig wrote:
| > however it's collectively considered a bad idea and no
| one uses it
|
| No one uses it going forward. I'd hate to venture a guess
| at how many existing sites there are that use xhtml.
| Browsers are still expected to parse and render them
| properly.
| samwillis wrote:
| True, although in this case no one is using HTTP/2 Push,
| so removing it does no harm and only makes browsers easier
| to maintain.
| vbezhenar wrote:
| Who said no one uses it? XHTML is awesome.
| geofft wrote:
| But the only real way to be sure if something is a good idea,
| especially in complex distributed systems such as the web
| platform, is to try it out. And when you try it out at scale,
| standardization is very important. The alternative is to have
| everyone using new ActiveXObject("Microsoft.XMLHTTP") because
| you have a non-standardized thing that's rapidly turning out to
| be a good idea.
|
| The purpose of a standardization process isn't to stamp
| approval on good ideas; it's to ensure interoperability. Server
| Push was a _plausible_ idea. It very much could have been a
| good idea. It turned out not to be, after we tried it in the
| real world for many years, but that's okay. And we know one
| thing for sure - the reason it didn't work wasn't that people
| were afraid to use a nonstandard X-Chrome-Server-Push
| extension.
| haroldp wrote:
| That kind of seems like the difference between top-down and
| bottom-up standards. MS added a weird extension to IE to
| support their proprietary webmail app, and... actually it was
| great so people wanted it pulled in to every browser as a
| standard. Many other features from Netscape's blink to IE's
| scrolling marquee died natural deaths.
|
| HTTP/2 was what? Something no one wanted, imposed by Google?
| Is that too strong?
|
| Is bottom-up maybe better in practice?
___________________________________________________________________
(page generated 2022-08-19 23:00 UTC)