[HN Gopher] Downgrade User Agent Client Hints to 'harmful'
       ___________________________________________________________________
        
       Downgrade User Agent Client Hints to 'harmful'
        
       Author : ronancremin
       Score  : 131 points
       Date   : 2021-07-13 10:40 UTC (12 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | jrochkind1 wrote:
       | I'm late to the ballgame, but what does "Sec-" mean as a HTTP
       | header prefix anyway? I am failing at googling.
        
         | banana_giraffe wrote:
         | It means the browser is in control of the header, and not some
         | script. From https://datatracker.ietf.org/doc/html/rfc8942 :
          | 
          |     Authors of new Client Hints are advised to carefully
          |     consider whether they need to be able to be added by
          |     client-side content (e.g., scripts) or whether the Client
          |     Hints need to be exclusively set by the user agent.  In the
          |     latter case, the Sec- prefix on the header field name has
          |     the effect of preventing scripts and other application
          |     content from setting them in user agents.  Using the "Sec-"
          |     prefix signals to servers that the user agent -- and not
          |     application content -- generated the values.  See [FETCH]
          |     for more information.
         | 
          | As near as I can tell, the bit they're talking about in the
          | Fetch standard is just this:
          | 
          |     These are forbidden so the user agent remains in full
          |     control over them.  Names starting with `Sec-` are reserved
          |     to allow new headers to be minted that are safe from APIs
          |     using fetch that allow control over headers by developers,
          |     such as XMLHttpRequest.
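
        For illustration, a minimal TypeScript sketch of that Fetch rule
        (the endpoint URL is a placeholder): a `Sec-` prefixed header set
        from script is silently dropped when the request is built, while
        an ordinary header is kept.

          // Sketch: `Sec-` prefixed request header names are forbidden
          // header names in the Fetch standard, so the browser drops them
          // silently when a script tries to set them.
          const req = new Request("https://example.com/api", {
            headers: {
              "X-Custom-Probe": "allowed",   // ordinary header: kept
              "Sec-CH-UA-Model": "spoofed",  // forbidden `Sec-` name: dropped
            },
          });
          console.log([...req.headers.keys()]); // ["x-custom-probe"]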
        
           | sdflhasjd wrote:
           | Does it stand for something? Why the letters 'Sec'?
        
             | banana_giraffe wrote:
             | I don't think I've ever seen it called out, but I always
             | assumed it's "Secure" in the sense it hasn't been modified
             | by a script.
             | 
             | But that's 100% a guess on my part.
        
               | herpderperator wrote:
                | Great, so now we have the HttpOnly flag for cookies,
                | which differs from the Secure flag for cookies, while the
                | "Sec" in the Sec- headers has the same meaning as
                | HttpOnly.
        
               | billyhoffman wrote:
               | And we have SameSite in Cookies, and Allow-Origin in
               | headers!
        
       | fnord77 wrote:
       | > Sec-CH-UA-Model provides a lot of identifying bits on Android
       | and leads...
       | 
       | intentional?
        
         | mort96 wrote:
         | Is there a typo or a pun or something I'm not seeing?
         | 
         | Knowing the exact make and model of an Android device is a lot
         | higher entropy than knowing the exact make and model of an
         | iPhone.
        
       | justshowpost wrote:
       | > UA Client Hints proposes that information derived from the User
       | Agent header field could only be sent to servers that
       | specifically request that information, specifically to reduce the
       | number of parties that can passively fingerprint users using that
        | information. We find the addition of new information about the
        | UA, OS, and device to be harmful as it increases the information
        | provided to sites for fingerprinting, without commensurate
        | improvements in functionality or accountability to
       | justify that. In addition to not including this information, we
       | would prefer freezing the User Agent string and only providing
       | limited information via the proposed NavigatorUAData interface JS
       | APIs. This would also allow us to audit the callers. At this
       | time, freezing the User Agent string without any client hints
       | (which is not this proposal) seems worth prototyping. We look
       | forward to learning from other vendors who implement the "GREASE-
       | like UA Strings" proposal and its effects on site compatibility.
       | 
       | https://mozilla.github.io/standards-positions/#ua-client-hin...
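
        For reference, the NavigatorUAData interface mentioned above is
        exposed in Chromium-based browsers as navigator.userAgentData. A
        minimal sketch of how a page would use it, assuming a browser that
        implements the draft (the function name is illustrative):

          // Low-entropy values are available synchronously; high-entropy
          // values must be requested, which is what makes callers
          // auditable.
          async function describeUserAgent(): Promise<void> {
            const uaData = (navigator as any).userAgentData;
            if (!uaData) {
              console.log("No NavigatorUAData; fall back to the UA string");
              return;
            }
            console.log(uaData.brands, uaData.mobile, uaData.platform);

            const details = await uaData.getHighEntropyValues([
              "model",
              "platformVersion",
            ]);
            console.log(details.model, details.platformVersion);
          }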
        
       | [deleted]
        
       | theandrewbailey wrote:
       | I would rather have all this information (along with whatever is
       | being inferred from them) be exposed through a Javascript API
       | instead of having browsers indiscriminately flood global networks
       | with potential PII.
       | 
       | Chrome came up with this? Figures. Stay evil, Google.
        
         | esprehn wrote:
         | Can you explain the attack vector where encrypted HTTPS network
         | traffic is vulnerable but a JS API isn't?
        
           | theandrewbailey wrote:
           | Your browser opens an encrypted connection to somewhere you
           | don't want it to (e.g. loads an image or iframe, JS not
           | required). How many connections and resources does a normal
           | web page load? 100? More? Almost nobody has time to audit all
           | of them. Not technically inclined? You're screwed.
           | 
           | My secondary concern is that there would be more traffic
           | going around the internet that isn't being used 99+% of the
           | time.
        
       | csmpltn wrote:
       | > "User Agents MUST return the empty string for model if
       | mobileness is false. User Agents MUST return the empty string for
       | model even if mobileness is true, except on platforms where the
       | model is typically exposed." (quoted from
       | https://wicg.github.io/ua-client-hints/#user-agent-model)
       | 
       | Honestly now - who drafts and approves these specs? Not only does
       | it make no sense whatsoever to encode such information this way -
       | it also results in unimaginable amounts of bandwidth going to
       | complete waste, on a planetary scale.
       | 
       | This is just plain incompetence. How did we let the technology
       | powering the web devolve into this burning pile of nonsense?
        
         | dmitriid wrote:
         | Drafts: Google
         | 
         | Approves: no one.
         | 
         | Chrome just releases them in stable versions with little to no
         | discussion, and the actual specs remain in draft stages.
         | 
         |  _Edit_ : grammar
        
       | dmitriid wrote:
       | > I'm not sure why you used such an old Chrome version to test
       | this.
       | 
       | That quote from the first comment on the issue is just a cherry
       | on top.
       | 
       | Chrome 88 was released in December 2020. 7 months ago.
        
         | oefrha wrote:
         | Because when you're implementing a new spec that is still in
         | "draft" status and constantly being updated, things could have
         | changed drastically in 7 months and 4 major versions?
        
         | ThePadawan wrote:
         | I'm going to cut them some slack since December 2020 feels both
         | 2 weeks and 4 years ago.
        
       | admax88q wrote:
       | Serving different content for the same URI based upon various
       | metadata fields in the request goes completely against the spirit
       | of a URI.
        
         | hypertele-Xii wrote:
            | No it doesn't? Ever heard of the Accept or Accept-Language
            | headers? Or cookies, for that matter? _Dynamic content?_
        
           | billyhoffman wrote:
            | Agreed, and thanks for bringing up the Accept header. The
            | author seems uninformed about HTTP's built-in Content
            | Negotiation. They write about servers using the User-Agent
            | header, specifically talking about WebP. Accept: image/webp
            | works just fine for the major CDNs regardless of the UA.
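
        As a rough sketch of that Accept-based negotiation, a hypothetical
        Node handler (paths, port, and payloads are placeholders) that
        picks WebP from the Accept header and sets Vary accordingly:

          import { createServer } from "node:http";

          // Sketch: choose an image encoding from the Accept header
          // instead of sniffing the User-Agent.
          createServer((req, res) => {
            const accepts = req.headers["accept"] ?? "";
            const useWebp = accepts.includes("image/webp");

            res.setHeader("Content-Type", useWebp ? "image/webp" : "image/jpeg");
            // Tell caches the response depends on Accept, not User-Agent.
            res.setHeader("Vary", "Accept");
            res.end(`would serve ${useWebp ? "photo.webp" : "photo.jpg"}`);
          }).listen(8080);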
        
         | ocdtrekkie wrote:
         | This is unfortunately the world of web apps, where a URI just
         | gets you to the app, and the content within is dynamic.
        
           | admax88q wrote:
           | Even with web apps, you can serve the same app from the same
           | URI. URI doesn't imply static content.
           | 
            | Serving a slightly different web app from the same URI based
            | upon other random metadata, on the other hand, makes caching
            | all the more complicated.
        
             | ocdtrekkie wrote:
             | I get that. I do think by and large, the user's agent (the
             | browser) should be making display and format decisions
             | based on itself, rather than the server serving different
             | content. Though I think the exception is mobile, where we
             | probably shouldn't serve the client endless garbage it
             | doesn't need.
             | 
             | I mostly think the replacement for user agent should be a
             | boolean of mobile or not mobile. And everything else should
             | be dynamically handled by the client.
        
               | admax88q wrote:
                | Honestly though, if it's enough content for mobile, it's
                | enough content for desktop as well.
                | 
                | The "garbage" we don't want to serve mobile is often also
                | garbage for desktop: autoplay videos, too many tracking
                | scripts, etc. If we force people to optimize their site
                | for mobile and desktop then maybe we'll actually get good
                | desktop sites.
        
               | ocdtrekkie wrote:
               | Eh, navigation layout should definitely be different for
               | mobile, and we shouldn't ship the desktop navigation to
               | phone browsers, and I still think it's reasonable to
               | offer phones smaller/more compressed image sizes and
               | stuff by default.
               | 
               | I agree tracking scripts and the like should be blocked
               | and removed across the board. But I think there's
               | probably a suitable amount of visible UI and content that
               | should be shipped differently or less to phones, because
               | of how they're interacted with.
        
       | Ajedi32 wrote:
       | > Moving stuff around (from User-Agent to Sec-CH-UA-*) doesn't
       | really solve much. That is, having to request this information
       | before getting it doesn't help if sites routinely request all of
       | it.
       | 
       | I think this is sort of ignoring the whole point of the proposal.
       | By making sites _request_ this information rather than simply
       | always sending it like the User-Agent header currently does,
       | browsers gain the ability to _deny_ excessively intrusive
       | requests when they occur.
       | 
       | That is to say, "sites routinely request all of it" is precisely
       | the problem this proposal is intended to solve.
       | 
        | There are some good points in this post about things which can
        | be improved with specific Sec-CH-UA headers, but the overall
        | position seems to be based on a misunderstanding of the purpose
        | of client hints.
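
        A rough sketch of that request-then-grant flow, assuming a
        hypothetical Node server and the hint names from the UA-CH draft:
        the server opts in via Accept-CH, and the browser may or may not
        attach the requested hints to later requests.

          import { createServer } from "node:http";

          createServer((req, res) => {
            // Ask for two high-entropy hints on subsequent requests.
            res.setHeader(
              "Accept-CH",
              "Sec-CH-UA-Model, Sec-CH-UA-Platform-Version"
            );

            // On later requests these headers are present only if the
            // browser chose to grant them; code must cope with absence.
            const model = req.headers["sec-ch-ua-model"];
            res.setHeader("Content-Type", "text/plain");
            res.end(model ? `model hint: ${model}` : "no model hint granted");
          }).listen(8080);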
        
         | marcosdumay wrote:
         | Well, if the browsers can just deny those requests, then they
         | can just drop the information entirely. (And they are dropping
         | them from the UA.)
         | 
          | Of the two non-harmful pieces, one is of interest to all
          | sites, and the other has a broken implementation in Chrome, so
          | sites will have to use an alternative mechanism anyway. If
          | there's any value in the idea, Google can propose them with a
          | set of information that brings value, instead of just
          | fingerprinting people.
        
           | Ajedi32 wrote:
           | I think the idea is that there are _some_ legitimate uses for
            | UA information that they don't want to eliminate entirely,
           | otherwise yeah they could just deprecate the User-Agent
           | header and be done with it.
        
             | marcosdumay wrote:
             | Yes, I got that from your post. It's just that for Google,
             | proposing it again with harmless content is very easy, but
             | for anybody else to filter the bad content once the Google
             | proposal gets accepted is almost impossible. (Although, if
             | I was working on Firefox, I would just copy the most common
             | data from Chrome, adjusting for those 2 fields that matter.
             | That would create problems, but it's the less problematic
             | choice.)
             | 
             | So, no, it should be rejected. Entirely and severely. It
             | doesn't mean that contextual headers are a bad practice,
             | it's just that this one proposal is bad.
        
             | ocdtrekkie wrote:
             | I think most of the legitimate uses could be solved in a
              | simple statement: Let sites know whether the device is
             | mobile or desktop, and then expect websites to send all of
             | the logic to handle the rest client-side, so the server
             | does not need to know.
             | 
              | I'd love to see browser metrics being absolutely devastated
              | as an analytics source: they're just used today as an
              | excuse to only support Chrome.
        
               | blowski wrote:
               | Risk-based authentication can use a change in user agent
               | as an increased risk factor.
        
               | Thiez wrote:
               | It could, but as someone who has spoofed user-agents in
               | the past (primarily to get Chrome-only websites to
               | cooperate) I would prefer if it wouldn't. If the baddies
               | can snoop my https traffic or directly copy the auth
               | cookies from my machine then _also_ copying my user-agent
                | isn't that big of a step for them. One might argue that
               | detecting changes in user agents could be part of some
               | kind of defense in depth strategy, but as a user I
               | imagine I'm already so boned in that scenario that I
               | doubt it would save me. So overall such a mechanism would
               | bring me more inconvenience than security.
        
               | blowski wrote:
                | That's the whole point of RBA, though. That two requests
                | have the same user agent doesn't tell me much, but two
                | different user agents from two different IPs may well be
                | risky (use case dependent, of course).
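
        A toy sketch of that kind of scoring, with entirely illustrative
        signals and thresholds (not any real product's rules):

          // Compare the current request's coarse fingerprint with what
          // was seen when the session was created.
          interface SessionContext {
            userAgent: string;
            ipPrefix: string; // e.g. first two octets, kept coarse
          }

          function riskScore(seen: SessionContext, now: SessionContext): number {
            let score = 0;
            if (seen.userAgent !== now.userAgent) score += 1;
            if (seen.ipPrefix !== now.ipPrefix) score += 2;
            return score; // e.g. >= 3 => ask for a second factor
          }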
        
               | ocdtrekkie wrote:
               | Unless someone is sitting at their desktop computer with
                | their phone connected to 4G...
               | 
               | Privacy initiatives will probably make some risk-based
               | authentication tricks break, but they probably weren't
               | robust methods anyways.
        
         | grishka wrote:
         | Having to request it is a terrible idea to begin with. If I
         | want to use different templates for mobile vs desktop, I need
         | to know, on the backend, whether the device is a mobile device,
         | and I need it on the very first request. Having to request
         | these headers explicitly is an unnecessary complication that
         | would slow down the first load.
         | 
          | However it _is_ nice that there's now a separate header
         | gives a yes or no answer on whether it's a mobile device.
        
           | hypertele-Xii wrote:
           | Why would you need different templates for mobile/desktop?
            | CSS is quite capable of responding to any screen orientation.
        
             | jenscow wrote:
             | You're not wrong. However, there are times when CSS isn't
             | enough. For example:
             | 
             | - The Mobile vs Desktop design differences are too great.
             | 
             | - The site was originally created without considering
             | mobile, and retrofitting mobile support is unfeasible.
        
               | hypertele-Xii wrote:
               | Can you expand on the design differences?
        
             | grishka wrote:
             | Yes it is. Except you can't use the same markup for both
             | because the input devices, and thus interaction paradigms,
             | are so radically different. Mice are precise and capable of
             | hovering over things, so it makes sense to pack everything
             | densely and add various tooltips and popup menus.
             | Touchscreens are imprecise and don't have anything
             | resembling hovering, so UI elements must be large, with
             | enough padding around them, and with menus appearing on
             | click.
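
        Those input differences can also be detected client-side with
        interaction media features, which is one way to switch between
        dense and touch-friendly markup without server-side sniffing. A
        small sketch (the class names are hypothetical):

          // Use pointer/hover media features instead of (or in addition
          // to) server-side device detection.
          const finePointer = window.matchMedia("(hover: hover) and (pointer: fine)");

          function applyInputMode(matches: boolean): void {
            document.documentElement.classList.toggle("dense-ui", matches);
            document.documentElement.classList.toggle("touch-ui", !matches);
          }

          applyInputMode(finePointer.matches);
          finePointer.addEventListener("change", (e) => applyInputMode(e.matches));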
        
         | 1vuio0pswjnm7 wrote:
         | "By making sites request this information rather than simply
         | sending it like the User-Agent header currently does..."
         | 
         | This is also true with respect to SNI which leaks the domain
         | name in clear text on the wire. The popular browsers send it
         | even when it is not required.
         | 
         | The forward proxy configuration I wrote distinguishes the sites
         | (CDNs) that actually need SNI and the proxy only sends it when
         | required. The majority of websites submitted to HN do not need
          | it. I also require TLSv1.3 and strip out unnecessary headers. It
         | all works flawlessly with very few exceptions.
         | 
          | We could argue that sending so much unnecessary information as
         | popular browsers do when technically it is not necessary _for
         | the user_ is user hostile. It is one-sided.  "Tech" companies
         | and others interested in online advertising have been using
         | this data to their advantage for decades.
        
           | billyhoffman wrote:
           | How would this work?
           | 
           | SNI is sent by the client in the initial part of the TLS
           | handshake. If you don't send it, the server sends the
           | wrong/bad cert. The client _could_ retry the handshake using
           | SNI to get the correct cert but:
           | 
           | - This adds an extra RTT, on the critical path of getting the
           | base HTML, hurting performance.
           | 
           | - A MITM could send back an invalid cert, causing the browser
           | to retry with SNI, leaking it anyway (since we aren't talking
           | about TLS 1.3 and an encrypted SNI).
           | 
           | I suppose the client could maintain a list of sites that
           | don't need SNI, like the HSTS preload list, but that seems
           | like a ton of overhead to avoid sending unneeded SNI,
           | especially when most DNS is unencrypted and would leak the
           | hostname just like SNI anyways.
        
             | 1vuio0pswjnm7 wrote:
             | "I suppose the client could maintain a list of sites that
             | don't need SNI."
             | 
             | That list would be much larger than the list of sites that
             | do require SNI.
             | 
             | Generally, I can determine whether SNI is required by IP
             | address, i.e., whether it belongs to a CDN that requires
             | SNI. Popular CDNs like AWS publish lists of their public
             | IPs. I use TLSv1.3 plus ESNI with Cloudflare but they are
             | currently the only CDN that supports it. Experimental but
             | works great, IME.
             | 
             | The proxy maintains the list not the browser. The proxy is
             | designed for this and can easily hold lists of 10s of 1000s
             | of domains in memory. That's more domains than I visit in
             | one day, week, month or year.
             | 
              | It is not a question of whether this is possible ("How
              | would this work?"). I have already implemented it. It
              | works. It is not difficult to set up.
              | 
              | Why this works for me but would be unlikely to work for
              | others:
             | 
              | I am not a heavy user of popular browsers; I "live on the
              | command line". Installing a custom root certificate with
              | appropriate SANs to suppress browser warnings is a nuisance
              | that would likely dissuade others, since they are heavy
              | users of those programs. However, I generally do not use
              | those browsers to retrieve content from the web.
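
        A very rough sketch of that per-IP decision (the CDN prefixes are
        placeholders; real CIDR matching and the TLS plumbing are
        omitted):

          // The proxy would include the server_name extension in the TLS
          // ClientHello only when needsSni() returns true.
          const SNI_REQUIRED_PREFIXES = ["203.0.113.", "198.51.100."];

          function needsSni(resolvedIp: string): boolean {
            return SNI_REQUIRED_PREFIXES.some((p) => resolvedIp.startsWith(p));
          }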
        
               | marcosdumay wrote:
               | I don't think you can ever determine that a site doesn't
               | need SNI using HTTP alone. All you can have is that it
               | doesn't or you don't know.
        
             | [deleted]
        
             | [deleted]
        
         | jefftk wrote:
         | Yes, I wish they would engage with how this fits into the rest
         | of the Privacy Sandbox proposal
         | (https://www.chromium.org/Home/chromium-privacy/privacy-
         | sandb...). My understanding is it's:
         | 
         | 1. Move entropy from "you get it by default" to "you have to
         | ask for it".
         | 
         | 2. Add new APIs that allow you to do things that previously
         | exposed a lot of entropy in a more private way.
         | 
         | 3. Add a budget for the total amount of entropy a site is
         | allowed to get for a user, preventing identifying users across
         | sites through fingerprinting.
         | 
          | Client hints are part of step #1. Not especially useful on
          | their own, but when later combined with #3, sites will have a
          | strong incentive to reduce what they ask for to just what they
          | need.
         | 
         | (Disclosure: I work on ads at Google, speaking only for myself)
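
        A toy model of the budget idea in #3, with invented bit costs and
        budget (not Chrome's actual numbers): each granted hint spends
        part of a per-site allowance, and requests beyond it are refused.

          const HINT_COST_BITS: Record<string, number> = {
            "sec-ch-ua-mobile": 1,
            "sec-ch-ua-platform": 2,
            "sec-ch-ua-model": 8,
          };

          function grantHints(requested: string[], budgetBits: number): string[] {
            const granted: string[] = [];
            let spent = 0;
            for (const hint of requested) {
              const cost = HINT_COST_BITS[hint] ?? Infinity;
              if (spent + cost > budgetBits) continue; // over budget: refuse
              spent += cost;
              granted.push(hint);
            }
            return granted;
          }

          // grantHints(["sec-ch-ua-model", "sec-ch-ua-mobile"], 4)
          //   => ["sec-ch-ua-mobile"]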
        
           | ocdtrekkie wrote:
           | I think pretty much all browsers and a lot of web platforms
           | made it clear in their response to FLoC that everyone except
           | Google (and Twitter, I guess?) considers Privacy Sandbox to
           | be harmful as a whole.
        
             | jefftk wrote:
             | Objections to FLoC are basically about what should be
             | included in #2. I don't understand why people would be
             | opposed to #1 or #3 though?
        
               | dathinab wrote:
                | IMHO #3 is fundamentally flawed, as I just can't imagine
                | browsers improving to a point where you couldn't cross-
                | reference such "fixed" entropy budgets to clearly
                | identify the user.
                | 
                | The only reasonable technical solution, IMHO, is to
                | reduce entropy as much as possible, even below any
                | arbitrarily set entropy limit.
                | 
                | Though in the end I think the right way is an outright
                | (law-based) ban on micro-targeting and on collecting
                | anything but strongly, transparently and decentrally
                | anonymized metrics.
                | 
                | Also, I don't see Google fully pulling through, e.g. one
                | area where Chrome is massively worse than Firefox wrt.
                | entropy is the canvas (at least last time I checked).
                | It's an area where there are known, reliable ways to
                | strongly hinder fingerprinting of the canvas. But I don't
                | see Google using them, as that would conflict with
                | Flutter Web rendering animations in the canvas (which
                | inherently has problems and is technically sub-par
                | compared to how the browser could render web animations,
                | and does in the case of Firefox).
        
               | jefftk wrote:
               | There are really only two ways this can go:
               | 
               | A. Browsers successfully reduce available entropy to
               | where users cannot reliably be tracked across sites.
               | 
               | B. Browsers fail at this, and widely available JavaScript
               | libraries allow cross-site tracking. If it's possible to
               | extract enough bits, they will be extracted.
               | 
               | The thing is, if you can't get all the way to (A) then in
               | removing bits you're just removing useful functionality
               | and adding work for browser developers and web
               | developers. Fighting fingerprinting is only worth it if
               | you have a serious chance of getting to (A).
               | 
               | If you think (A) is off the table then I agree a
               | regulatory solution is the best option. Even then, #1, as
               | exemplified by UACH, is still helpful because it makes
               | tracking more visible. If every piece of information you
               | collect requires active work, instead of just receiving
               | lots of bits by default, then it's much easier for
               | external organizations to identify excessive collection.
               | 
               | (Still speaking only for myself)
        
               | zenexer wrote:
               | Why not both (A) and a regulatory solution? I see no
               | reason to avoid the regulatory route.
        
               | jefftk wrote:
               | Legislation prohibiting fingerprinting would be great!
               | 
               | (Though potentially a bit tricky to craft and enforce)
        
               | ocdtrekkie wrote:
               | It's a fundamental disagreement on the very idea:
               | 
               | Google's position is that it's okay for a website to know
               | X amount of data about a user, you know, as long as it
               | doesn't, in total, cross the creepy line.
               | 
               | Everyone else's position is that if the data isn't
               | required to operate, you don't need it. If we accept that
               | the User Agent, as it is going to be frozen, is going to
               | be served anyways to avoid breaking the legacy web, very
               | little of this proposal adds value, and much of it adds
               | harm. It isn't practical to move to not serving the User
               | Agent, so any replacement for the data in it is pointless
                | at its very best. The frozen UA provides enough to
               | determine if someone is mobile, the only real need for UA
               | strings. And when most browsers are looking at reducing
               | the tools for websites to fingerprint, Google is
               | introducing new ones.
               | 
               | So Firefox's position on Privacy Sandbox as a whole is
               | pretty logical: If it's optional enough to be requested,
               | why offer it at all? The _entire premise_ of Privacy
               | Sandbox is that it wants sites to have access to some
               | amount of information about the user, and the position of
               | every non-Google-browser is that they want to give sites
               | as close to no data at all as possible.
               | 
               | This is the core of the problem with a single company
               | being legally permitted to operate a web browser and an
               | ad company. _Every single browser developer that doesn 't
               | own an Ads and Analytics suite_ is opposed to Privacy
               | Sandbox.
        
               | jefftk wrote:
                | _> Google's position is ... Everyone else's position
               | is..._
               | 
               | I don't think this categorization is accurate. For
               | example, Apple built
               | https://webkit.org/blog/8943/privacy-preserving-ad-click-
               | att...
               | 
                |  _> if the data isn't required to operate, you don't
               | need it_
               | 
               | This is simple, but it's also wrong. Some
               | counterexamples:
               | 
               | * Learning from implicit feedback: dictation software can
               | operate without learning what corrections people make, or
               | a search engine can operate without learning what links
               | people click on, but the overall quality will be lower.
               | Each individual piece of information isn't required, but
               | the feedback loop allows building a substantially better
               | product.
               | 
               | * Risk-based authentication: you have various ways to
               | identify a user, some of which are more hassle for them
               | than others. A login cookie is lowest friction, asking
               | for a password adds more friction, email / SMS / OTP
               | verification add even more. You don't want to ask all
               | users to go through the highest-friction approach on
               | every pageview, but you also don't want to let a
               | fraudster who gets access to someone's cookiejar/leaked
               | password/old device/etc impersonate the user. If you have
               | a small amount of information about the current user's
               | browsing environment, in a way that's hard for a
               | fraudster to imitate, you can offer much lower friction
               | for a given level of security.
               | 
               | * Incremental rollouts: when you make changes to software
               | that operates in complex environments it can be very
               | difficult to ensure that it operates correctly through
               | testing alone. Incremental rollouts, with telemetry to
               | verify that there are no regressions or that relevant
               | bugs have been fixed, produces better software. You're
               | writing as if your position is Firefox's but even they
               | collect telemetry by default:
               | https://support.mozilla.org/en-US/kb/telemetry-clientid
               | 
               |  _> the position of every non-Google-browser is that they
               | want to give sites as close to no data at all as possible
                | ... Every single browser developer that doesn't own an
               | Ads and Analytics suite is opposed to Privacy Sandbox._
               | 
               | I cited Apple's conversion tracking API above, but
               | another example of this general approach is Microsoft's
               | https://github.com/WICG/privacy-preserving-
               | ads/blob/main/Par... I don't know where you're getting
               | that they're trying for "close to no data at all", as
               | opposed to improving privacy and preventing cross-site
               | tracking?
               | 
               | (Still speaking only for myself)
        
               | barneygale wrote:
               | > Learning from implicit feedback: dictation software can
               | operate without learning what corrections people make, or
               | a search engine can operate without learning what links
               | people click on, but the overall quality will be lower.
               | Each individual piece of information isn't required, but
               | the feedback loop allows building a substantially better
               | product.
               | 
               | That sounds cool. How do I opt into it?
        
               | ocdtrekkie wrote:
               | I would highlight that both Microsoft and Apple (to a
               | lesser extent, mind you) also operate their own ad
               | platforms. Don't get me wrong, I'd be happy to see a
               | blanket ban on web browsers and ad companies being
               | related, and have it apply to all three. I'm an equally
               | opportunity antitrust breakup advocate. ;)
               | 
                | Regarding risk-based authentication, I see a lot of value
                | in it, but I think the cost may be too high, and the less
                | robust signals it uses are often a poor metric anyway. I
               | gave an example elsewhere that someone might be using a
               | wired PC and a wireless phone on two different carriers
               | with vastly different user agents at the same time, for
               | instance.
               | 
               | I think there's some merit in some _very_ rough Geo-IP
                | based RBA, but I'm not sure how many other strategies
               | for that I find effective. The fact that Outlook and
               | Gmail seem equally happy to let someone who's never
                | signed in from outside the United States get logged in
                | from Nigeria seems like low-hanging fruit in the risk-
                | based authentication space. ;)
        
               | jefftk wrote:
               | _> I would highlight that both Microsoft and Apple (to a
               | lesser extent, mind you) also operate their own ad
               | platforms._
               | 
               | Do you mean that before when you said "every single
               | browser developer that doesn't own an Ads and Analytics
               | suite" you meant to exclude nearly all the browser
               | vendors? Google, sure, but also Apple, and Microsoft. And
               | then Opera, UC Browser, Brave, DDG, ... I think maybe
               | everyone but Mozilla and Vivaldi has an ads product?
        
         | jsbdk wrote:
         | >By making sites request this information rather than simply
         | always sending it like the User-Agent header currently does,
         | browsers gain the ability to deny excessively intrusive
         | requests when they occur.
         | 
         | Browsers can just not send a UA header
        
           | tremon wrote:
           | I tried this. It breaks a surprisingly large number of sites
           | (or perhaps not-so-surprisingly), and good luck trying to
           | beat Google's captcha without a User-Agent header.
        
             | avian wrote:
             | Good luck trying to beat ReCaptcha if you're doing
             | _anything_ that puts you outside of the normal web browser
              | behavior as imagined by Google's Algorithm.
             | 
             | If User Agent Client Hints become the new normal, I'm sure
             | anyone excessively denying requests will be flagged in the
             | same way.
        
         | Svip wrote:
         | > browsers gain the ability to deny excessively intrusive
         | requests when they occur
         | 
          | But Set-Cookie kind of proves what happens to that kind of
          | feature. If sites first get used to being able to request it
          | and get it, then the browsers that deny anything will simply be
          | ignored. And then those browsers will start providing
          | everything, because they don't want to be left out in the cold.
          | 
          | That's what happened to User-Agent, that's what happened to
          | Set-Cookie, and I can't see why it won't happen to Sec-CH-UA-*,
          | which the post hints at several times. Set-Cookie was supposed
          | to have the browser ask the user to confirm whether they wanted
          | to set a cookie. Not many clients do that today.
         | 
         | To be honest, I feel the proposal is a bit naive if it thinks
         | that websites and all browsers will suddenly be on their best
         | behaviour.
        
           | kijin wrote:
           | Yes, this looks like DNT all over again. Just another header
           | that quickly becomes meaningless, wasting terabytes of
           | bandwidth all over the world for no good reason.
        
             | wolverine876 wrote:
             | DNT does nothing technically, but it has political power
             | and that's where privacy happens to a great degree. When
             | 70% of users say 'do not track me', it is hard to claim
             | that they don't care about privacy.
        
               | m45t3r wrote:
                | That is, until a big vendor (cough Microsoft cough)
                | decides to enable it by default; then it becomes
                | meaningless.
        
               | Analemma_ wrote:
               | It was meaningless from the beginning: DNT was always
               | nothing but an Evil Bit. You're getting mad at Microsoft
               | for pointing out that the emperor had no clothes.
        
               | Dylan16807 wrote:
               | There were people promising to implement it. That's a lot
               | better than nothing.
        
               | lupire wrote:
                | It was an Evil Bit because it didn't have the force of law
               | behind it. Now we have cookie laws.
        
               | wolverine876 wrote:
               | Yes, but it's not hard to ignore DNT on Microsoft user
               | agents, which are a small part of the population.
        
           | thaumasiotes wrote:
           | > Set-Cookie was supposed to have the browser ask the user to
           | confirm whether they wanted to set a cookie. Not many clients
           | doing that today.
           | 
           | No worries, that's why we have laws to make the website do in
           | the content what the browser no longer wants to do in the
           | viewer. ;D
        
             | notriddle wrote:
             | Having the browser explicitly prompt for cookies is neither
             | necessary nor sufficient to do what strong, consistently-
             | enforced privacy laws can do, because the browser can't
             | tell a tracking cookie (which needs a prompt) apart from a
             | settings cookie (which does not).
        
               | dathinab wrote:
               | And the law also only requires you to ask the user if
               | they want to be spied on.
               | 
               | It's not tightly bound to cookies in any way.
               | 
               | And vastly misunderstood.
               | 
               | There was a predecessor which was somehow tied to cookies
               | but even then you didn't need to ask for setting purely
               | functional cookies.
               | 
               | But somehow everyone ended up interpreting it as such.
               | 
               | Maybe because most sites don't have many purely
               | functional cookies or fingerprinting, as they always
               | track you for other purposes, too.
        
               | notriddle wrote:
               | I'm convinced that a lot of the really annoying cookie
               | prompts are the result of two things:
               | 
               | * paranoia, from small websites that are understandably
               | worried about massive fines that could actually put their
               | one-man-show into the poor house
               | 
               | * retaliation, from large websites that intentionally
               | want to turn public sentiment against privacy laws
        
               | [deleted]
        
               | ajsnigrutin wrote:
               | But browsers could disable third party cookies, and
               | autodelete first party cookies on page/tab close by
               | default.
               | 
               | There would be a "keep cookies for this site" button
               | somewhere near the address bar, and at each login, the
               | browser would also ask you if you want to save your
               | password and/or save cookies for that domain.
               | 
                | 99% of websites don't require persistent storage, and of
                | those that do, 99% are sites you're logged into that
                | already prompt the user, asking if they want to save the
                | password.
        
               | ipaddr wrote:
               | That's private browsing currently. Why not use a private
               | window?
        
               | marcosdumay wrote:
               | Because software is supposed to make our lives easier,
               | not to insist we keep making the same choices again and
               | again, and undo everything as soon as we make a mistake.
        
               | lupire wrote:
               | That would be an extension or fork of Set-Cookie.
        
               | notriddle wrote:
               | Of course a web server could report which cookies are for
               | tracking, and which are for authentication or
               | configuration, instead of doing it within the content.
               | 
               | But so what? The browser has no way to tell if it's
               | lying.
        
       ___________________________________________________________________
       (page generated 2021-07-13 23:01 UTC)