[HN Gopher] Ask HN: Intercepting HTTPS - How can we trust anything?
       ___________________________________________________________________
        
       Ask HN: Intercepting HTTPS - How can we trust anything?
        
       Proxies like Squid can do HTTPS interception, so I was wondering:
       what's the point of TLS anyway? What if a nation state is
       determined to intercept all traffic of its internet users, or
       even a major ISP - can't they get a trusted CA to collude with
       them in such a way that they can generate certificates on the
       fly, and hence replace the SSL certificate of every website that
       gets visited, decrypting and re-encrypting the traffic?
       Cryptographically speaking, is that possible? Wouldn't it be
       possible for certain states hostile to their citizens to pay off
       some trusted CA to get a wide-open arrangement of that sort? Now
       someone thinking they're talking to Gmail could first be talking
       to a data-collection island in the middle?  Similarly, other
       vectors of attack are IP routing and DNS. I do not understand
       the Noise protocol, but couldn't an ISP or a government act as a
       man in the middle between, let us say, a Signal user and its
       servers?  EDIT: Added IP and DNS aspects plus typos
        
       Author : wg0
       Score  : 56 points
       Date   : 2021-12-25 09:45 UTC (13 hours ago)
        
       | judge2020 wrote:
       | Certificate transparency effectively solves this because
       | certain browsers (currently only Safari and Chrome) require all
       | new certificates to be submitted to multiple certificate
       | transparency logs - if the browser encounters a certificate
       | that isn't, it'll show a warning page before establishing the
       | TLS session. More info[0].
       | 
       | This doesn't stop interception, but the first time some huge
       | company notices a certificate issuance they didn't authorize
       | and/or that should have been blocked by their CAA records,
       | it'll be a large event with disastrous consequences for the CA,
       | likely triggering immediate (<48 hours) removal from publicly
       | trusted CA lists.
       | 
       | Of course, if a country wants to intercept, they still can by
       | intercepting all traffic with their own (new) CA - Kazakhstan
       | tried this and asked all their citizens to install it if they
       | wanted internet[1], but it shows that these efforts won't go
       | unnoticed and browser developers might fight back against
       | government surveillance.
       | 
       | 0:
       | https://chromium.googlesource.com/chromium/src/+/refs/heads/...
       | 
       | 1: https://news.ycombinator.com/item?id=25324951
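       | 
       | As a rough illustration of what "submitted to CT" looks like
       | from the client side, here is a minimal Python sketch using
       | the third-party cryptography package to list the SCTs embedded
       | in a site's leaf certificate (the hostname is a placeholder;
       | SCTs can also be delivered via a TLS extension or OCSP
       | stapling, which this doesn't cover):
       | 
       |   # Sketch: print the SCTs embedded in a leaf certificate.
       |   import ssl
       |   from cryptography import x509
       |   from cryptography.x509.oid import ExtensionOID
       | 
       |   pem = ssl.get_server_certificate(("example.com", 443))
       |   cert = x509.load_pem_x509_certificate(pem.encode())
       |   try:
       |       scts = cert.extensions.get_extension_for_oid(
       |           ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
       |       ).value
       |   except x509.ExtensionNotFound:
       |       print("no embedded SCTs")
       |   else:
       |       for sct in scts:
       |           print(sct.log_id.hex(), sct.timestamp)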
        
         | georgyo wrote:
         | Does this actually work, and is it in use?
         | 
         | I worked on a transparent MITM proxy - a proxy that is
         | configured in the firewall, not at the application; i.e.
         | applications were unaware they were using a proxy.
         | 
         | Simply adding the proxy's CA certificate to the system trust
         | store was all that was needed to make all applications,
         | including browsers, trust the certificates we were
         | manufacturing.
         | 
         | There was no indication, besides inspecting the certificate,
         | that the certificate was ours. Definitely no calls to check
         | certificate transparency that we could see.
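         | 
         | For a concrete picture of the "manufacturing" step, a rough
         | Python sketch with the cryptography package (names and
         | validity are placeholders, and a real proxy would also add a
         | SubjectAlternativeName): once the CA below lands in a trust
         | store, the forged leaf validates like any other certificate.
         | 
         |   # Sketch: forge a leaf cert for an arbitrary hostname,
         |   # signed by a locally generated CA.
         |   import datetime
         |   from cryptography import x509
         |   from cryptography.x509.oid import NameOID
         |   from cryptography.hazmat.primitives import hashes
         |   from cryptography.hazmat.primitives.asymmetric import ec
         | 
         |   def make_cert(subject, issuer, key, signer, is_ca):
         |       now = datetime.datetime.utcnow()
         |       return (x509.CertificateBuilder()
         |           .subject_name(subject)
         |           .issuer_name(issuer)
         |           .public_key(key.public_key())
         |           .serial_number(x509.random_serial_number())
         |           .not_valid_before(now)
         |           .not_valid_after(now + datetime.timedelta(days=30))
         |           .add_extension(
         |               x509.BasicConstraints(ca=is_ca, path_length=None),
         |               critical=True)
         |           .sign(signer, hashes.SHA256()))
         | 
         |   ca_key = ec.generate_private_key(ec.SECP256R1())
         |   ca_name = x509.Name(
         |       [x509.NameAttribute(NameOID.COMMON_NAME, "Proxy CA")])
         |   ca_cert = make_cert(ca_name, ca_name, ca_key, ca_key, True)
         | 
         |   leaf_key = ec.generate_private_key(ec.SECP256R1())
         |   leaf_name = x509.Name([x509.NameAttribute(
         |       NameOID.COMMON_NAME, "mail.example.com")])
         |   leaf_cert = make_cert(
         |       leaf_name, ca_name, leaf_key, ca_key, False)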
        
           | hulitu wrote:
           | That's how it's done in corporate environments to monitor
           | traffic. HTTP or HTTPS, you have no privacy.
        
           | LinuxBender wrote:
           | Maybe on some browsers. I've been MITM'ing my connections
           | on Firefox for a very long time and have never run into
           | anything related to transparency logs. There must be some
           | setting I have disabled in the browser. The only way I can
           | tell that the cert is not legit is by looking at the
           | fingerprint, and the only sites I run into issues with are
           | the few still using HPKP. I've never used Chrome. Maybe
           | Chrome checks transparency logs and reports violations?
        
             | gchambert wrote:
             | It's not 100% related, but certificate pinning (HPKP) is
             | only enforced for CAs trusted by the browser. It is
             | ignored if the leaf certificate is signed by a
             | user-imported CA (or one deployed by enterprise
             | policies). Maybe the same applies to SCTs?
        
           | mavhc wrote:
           | The users can see the certificate in their browser though
        
           | benjojo12 wrote:
           | If the MITM CA is installed (maybe only via Group Policy)
           | then browsers relax their requirements for CAs. But some
           | domains (like Google) are strictly locked down to a
           | specific set of CAs in Chrome.
           | 
           | Since TLS interception is critical for some places to
           | carry out their duties (either auditing in banking, or
           | content filtering in schools), we will always have some
           | kind of legit use for MITM CA boxes.
           | 
           | It gets slightly worse if you look at how antivirus
           | software generally installs CAs to do malware content
           | filtering. Cloudflare publishes stats on what % of traffic
           | they think has been MITM'd: https://malcolm.cloudflare.com/
        
             | miki123211 wrote:
             | Why are we okay with content filtering in schools?
             | 
             | In Poland where I live, parental control basically doesn't
             | exist. Children routinely watch movies, play games and
             | visit websites that they're theoretically not supposed to
             | visit. Parents know about it and don't really see an issue
             | with it. I've seen several boarding school networks, none
             | of them had any filtering whatsoever. The only filters I've
             | seen were on computers in IT classrooms, but nobody uses
             | those for non class-related activities anyway. As far as I
             | know, we don't even have a law banning selling x-rated
             | content to minors.
        
               | ossusermivami wrote:
               | Different countries, different morals, different ways
               | of educating children; YMMV on the outcome!
        
         | miki123211 wrote:
         | It feels like Merkle trees[1] could help here.
         | 
         | We could split each transparency log into blocks, issuing a
         | new block every 24 hours. Every block header would contain a
         | hash of the root node of its Merkle tree, the hash of the
         | previous header, and a digital signature. Storing the 1000
         | latest headers for 5 transparency logs would take less than
         | a megabyte, so most browsers would be fine with this.
         | 
         | When sending a certificate, the server would also send the
         | header hash for the block the certificate appears in, as
         | well as the contents of any intermediate Merkle tree nodes.
         | 
         | In the rare case where we encounter a certificate issued in the
         | latest 24 hours, we would directly ask the transparency log for
         | a hash list of all certificates issued in a given 1 minute
         | period (taken from the issuance date in the certificate).
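         | 
         | A toy sketch of the inclusion-proof part of this idea in
         | Python (hashing simplified; real CT uses RFC 6962's Merkle
         | trees, which hash leaves and nodes with distinct prefixes):
         | 
         |   import hashlib
         | 
         |   def h(data: bytes) -> bytes:
         |       return hashlib.sha256(data).digest()
         | 
         |   def merkle_root(leaves):
         |       level = [h(x) for x in leaves]
         |       while len(level) > 1:
         |           if len(level) % 2:
         |               level.append(level[-1])  # duplicate last node
         |           level = [h(level[i] + level[i + 1])
         |                    for i in range(0, len(level), 2)]
         |       return level[0]
         | 
         |   def inclusion_proof(leaves, index):
         |       level, proof = [h(x) for x in leaves], []
         |       while len(level) > 1:
         |           if len(level) % 2:
         |               level.append(level[-1])
         |           sibling = index ^ 1
         |           proof.append((level[sibling], sibling < index))
         |           level = [h(level[i] + level[i + 1])
         |                    for i in range(0, len(level), 2)]
         |           index //= 2
         |       return proof
         | 
         |   def verify(leaf, proof, root):
         |       node = h(leaf)
         |       for sibling, left in proof:
         |           node = (h(sibling + node) if left
         |                   else h(node + sibling))
         |       return node == root
         | 
         |   certs = [b"cert-%d" % i for i in range(5)]
         |   root = merkle_root(certs)      # stored in block header
         |   proof = inclusion_proof(certs, 2)  # sent with the cert
         |   print(verify(certs[2], proof, root))  # True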
        
           | nayuki wrote:
           | Yes it would help, and you have described Bitcoin's
           | Simplified Payment Verification (SPV) protocol:
           | https://bitcoin.org/bitcoin.pdf#page=5
        
         | ivanr wrote:
         | There are some nuances to Certificate Transparency (CT) that
         | are worth highlighting:
         | 
         | - Technically, certificates are not required to be recorded in
         | CT logs. If not submitted, they're still perfectly valid, but
         | they won't be accepted by clients who insist on CT (e.g., most
         | browsers). They will work perfectly well in all other
         | situations.
         | 
         | - At the point of use, certificates must be accompanied by
         | proofs of submission to several CT logs. These proofs are
         | better known as Signed Certificate Timestamps, or SCTs. SCTs
         | are _promises_ by CT logs to publish, but there is no way to
         | know whether the certificates actually have been published.
         | You have to trust the CT logs. This is largely where we
         | currently are with CT.
         | 
         | The main improvement is that a certificate must now be
         | endorsed by a minimum of 3 parties for it to work in a
         | browser (CA + 2 CT log operators at present; 2 parties if a
         | CA also operates one of the logs). At this time, Google must
         | operate one of the CT logs [for the certificate to work in
         | Chrome].
         | 
         | The missing piece (currently in progress) is SCT auditing,
         | where a portion of observed SCTs are continuously checked for
         | presence in CT logs. I wrote a little about it here:
         | https://www.hardenize.com/blog/certificate-transparency-sct-...
        
       | speedgoose wrote:
       | The EFF had a project to collect certificates:
       | https://www.eff.org/observatory
       | 
       | It was able to spotlight some inconsistencies.
       | 
       | But yes, if you don't trust your government and its allies,
       | HTTPS is not going to help a lot.
        
       | beermonster wrote:
       | TLS protects against someone evil on the network to _some
       | extent_. It is possible to snoop inside TLS connections in a
       | variety of ways, for a variety of reasons.
       | 
       | If you care about your data you should client side encrypt it
       | before you send it.
        
       | exfil wrote:
       | Most nations have a global TLS interception solution in place,
       | and CAs sign those intercept certificates by law.
        
         | wg0 wrote:
         | It's just like encrypting EBS/block storage on AWS and other
         | clouds. You generate the keys, but those keys are held by
         | the cloud provider anyway, so for all practical purposes
         | it's pretty much not encrypted at all.
        
           | igetspam wrote:
           | But it's good enough for compliance work. :)
        
             | wg0 wrote:
             | Yeah, that's about it. It's just plain unencrypted
             | otherwise. :)
        
       | laumars wrote:
       | There's a rabbit hole of what ifs one can descend down regarding
       | this:
       | 
       | - what if a nation state has access to a CA
       | 
       | - what if a nation state creates a backdoor in TLS and the pull
       | request gets merged
       | 
       | - what if a nation state creates a backdoor in your OS
       | 
       | - what if a nation state creates a backdoor in your hardware
       | 
       | All are possible, but more likely they'll just use their
       | influence to grab your data directly from the service
       | providers themselves, e.g. Google / Microsoft / Meta / Apple
       | etc.
       | 
       | You could get around it by self-hosting and using PGP, but one
       | has to ask what your personal risk is. With greater security
       | come greater hurdles and inconvenience.
        
         | angio wrote:
         | Aren't Cloudflare and other CDNs an easy target for that?
         | They have to decrypt and then re-encrypt all traffic going
         | through them.
        
         | number6 wrote:
         | > All are possible
         | 
         | All happened already:
         | 
         | 1) Hacked Mongolia CA
         | 
         | 2) Dual_EC_DRBG
         | 
         | 3) Eternal Blue
         | 
         | 4) "NSA Interception In Action? Tor Developer's Computer Gets
         | Mysteriously Re-Routed To Virginia"
         | 
         | OK, technically not backdoors, and most of them are treated
         | as conspiracy myths.
         | 
         | Yeah there is a lot out there to go deep down the rabbit hole
        
           | wg0 wrote:
           | Consider this - all download URLs of top browsers get
           | intercepted, and a fixed binary response gets returned
           | which is actually a patched version of the browser (easy,
           | since most major browsers are open source) containing
           | additional CAs to be trusted.
           | 
           | From that point onwards, every HTTPS interaction is just
           | plain text.
        
             | landemva wrote:
             | 'containing additional CAs to be trusted.'
             | 
             | I continue to be surprised that most people seem OK with
             | keeping the junk CAs that are preloaded on phones.
        
       | cesarb wrote:
       | > can't they get a trusted CA to collude with them in such a
       | way that they can generate certificates on the fly, and hence
       | replace the SSL certificate of every website that gets
       | visited, decrypting and re-encrypting the traffic?
       | 
       | As others have commented, it's a huge risk for that CA; once
       | they're caught (and all that's needed as proof is the invalid
       | certificate, which they have just sent to the user's browser),
       | it's a death sentence for their CA business.
       | 
       | > Similarly, other vectors of attack are IP routing and DNS. I
       | do not understand the Noise protocol, but couldn't an ISP or a
       | government act as a man in the middle between, let us say, a
       | Signal user and its servers?
       | 
       | That used to be the case in the past, but nowadays most protocols
       | (including Signal's) use some kind of cryptographic
       | authentication, often the same TLS used by HTTPS. And since they
       | are using a custom client instead of a generic browser, they can
       | use some extra tricks like allowing only certificates signed by a
       | couple of specific CAs, or even allowing only a specific set of
       | trusted certificates, or using mutual certificate authentication
       | (which breaks MITM since the proxy cannot forge the client's
       | certificate to the server, even if the client trusts the forged
       | server certificate).
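       | 
       | A minimal sketch of that last pinning idea in Python (hostname
       | and pin are placeholders; real clients usually pin the SPKI
       | hash rather than the whole certificate, so the pin survives
       | certificate renewal with the same key):
       | 
       |   # Sketch: normal TLS validation plus an extra pin check.
       |   import hashlib, socket, ssl
       | 
       |   HOST = "chat.example.org"
       |   PINNED = "<hex sha-256 fingerprint learned out of band>"
       | 
       |   ctx = ssl.create_default_context()
       |   with socket.create_connection((HOST, 443)) as sock:
       |       with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
       |           der = tls.getpeercert(binary_form=True)
       |           if hashlib.sha256(der).hexdigest() != PINNED:
       |               raise ssl.SSLError("pinned fingerprint mismatch")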
        
       | jhoelzel wrote:
       | Well, that's an interesting thing with high availability, no?
       | 
       | If you do TLS termination at the load balancer, you can easily
       | use any certificate down the line that you would like, so this
       | is arguably a feature.
       | 
       | I think the general gist of "I am connected to a server that I
       | trust" is a moot point in general if you do not control the
       | network that you are on. All you can really make sure of is
       | that your outgoing packet contains the desired destination.
       | 
       | I liked what Mega and others have done: encrypting the data in
       | the browser before sending it across the wire. It's an
       | additional step, but it almost solves the problem.
       | 
       | The way I solve this for myself is to use a direct
       | WireGuard-connected jump host in the cloud and a router at
       | home that uses OpenWrt and at least can be checked completely.
        
         | wg0 wrote:
         | > All you can really make sure is that your outgoing packet
         | contains a desired destination.
         | 
         | Exactly. I think that's about it. Nothing else can be
         | guaranteed.
        
       | fulafel wrote:
       | Not sure "nation states" are that interested or capable on doing
       | this, but corporations actually routinely do this against
       | employees and in some places a major portion of people even find
       | it socially acceptable. (Nation state an interestingly often used
       | term in these discussions btw, by chance it doesn't match the US
       | or UK, which are prominent espionage actors on the world stage)
       | 
       | It involves subverting TLS by just the CA trick you mention, just
       | with a private CA preinstalled in company supplied equipment.
       | Worringly browser vendors seem to turn a blind eye to this,
       | because it's not illegal in some countries. (Government espionage
       | is also often legal but fortunately browser vendors haven't bent
       | over there yet at least publicly)
        
         | mysterydip wrote:
         | If you're on company-owned equipment during hours the
         | company is paying you to work, why wouldn't it be
         | acceptable? Without such capabilities, people could hide any
         | kind of traffic (from "I'll just check my social media while
         | I'm here" to "I'll just upload some company secrets to my
         | future employer") without recourse.
        
           | mrfusion wrote:
           | If I trusted my employees that little I wouldn't trust them
           | to work for me.
        
           | igetspam wrote:
           | Because working in an environment of trust is a way to
           | retain employees. I don't expect people to try and steal
           | secrets most of the time, but I do expect them to take a
           | few personal moments throughout the day. If I found out
           | that a company was monitoring my network traffic, I'd
           | terminate the relationship.
        
             | ozim wrote:
             | You are probably smart and aware of the dangers on the
             | internet. There are people who need help keeping their
             | work computers safe from malware and preventing bad
             | things from happening to them.
             | 
             | I think there is a false dichotomy that all monitoring
             | is bad.
             | 
             | Yes, if they come to you and nag about browsing Hacker
             | News because they found out you are slacking here and
             | there, that is totally overstepping.
             | 
             | If a company decrypts traffic and has automated scanners
             | to find and block malware and bad sites, I don't see an
             | issue with that.
        
           | foepys wrote:
           | German corporate culture functions without snooping around
           | in employees' emails and chats.
           | 
           | It's _strictly forbidden by law_ and only allowed in very
           | constrained circumstances, when the employer has a strong
           | suspicion ("Anfangsverdacht") that the employee is doing
           | things they aren't allowed to. Even then, the works
           | council has to be informed and a representative has to be
           | present to ensure no laws are broken and privacy is
           | preserved.
        
           | fulafel wrote:
           | Sounds like you are from one of those cultures I
           | referenced!
           | 
           | To explain it in plain words: it's not acceptable because
           | I have a human right to privacy.
           | 
           | Yes, privacy allows people to "hide" stuff. That's not an
           | overriding concern. Employment relationships, like the
           | rest of our society, are largely based on trust and
           | respect.
        
             | Hnrobert42 wrote:
             | You are indeed entitled to privacy on your own equipment.
             | Why should you be entitled to privacy on employer-owned
             | equipment?
        
               | oasisbob wrote:
               | For the same reason I'd expect privacy in an employer-
               | owned bathroom.
        
               | Hnrobert42 wrote:
               | Interesting. That is a fair point. What if your employer
               | provided you a laptop with two user accounts - one for
               | work and one for personal business?
        
               | fulafel wrote:
               | There's no reason why ownership of equipment would
               | have a bearing on my basic rights, just as it has no
               | effect on my enjoyment of other basic, legally
               | protected human rights.
        
               | Hnrobert42 wrote:
               | Let's say I am a farmer who owns a tractor. I provide it
               | to you to work my land. I request that you not use it on
               | your adjacent farm. You agree. Do I have the right to
               | watch where you drive it?
               | 
               | I am not trying to trap you. I am genuinely trying to
               | figure out where the lines are.
        
               | alecbz wrote:
               | I mean, one can have whatever view of rights they choose,
               | but many rights are often considered to be contextual.
               | 
               | E.g., in the US, you have a right to not have your
               | picture taken indoors, but outside in public spaces you
               | don't have this right.
               | 
               | You have a right to do more or less whatever you want
               | with your personal laptop, but your agreement with your
               | employer generally implies that you should only use a
               | work computer for work tasks.
        
               | okasaki wrote:
               | Are you entitled to privacy on employer-owned toilets?
        
               | aroundthfur wrote:
               | What exactly do you want to say with this? Are you
               | comparing prevention of stealing intellectual property
               | with taking a shit?
        
               | okasaki wrote:
               | No? I'm responding to the poster implying that we're not
               | entitled to privacy on employer-owned equipment.
        
               | igetspam wrote:
               | If you're worried about theft, hopefully you've disabled
               | Bluetooth and USB because I can move a hell of a lot of
               | data without a network device.
               | 
               | Also, if this is a legit concern, you may have a hiring
               | problem (or you work for .gov or on .gov contracts
               | exclusively.)
        
               | Hnrobert42 wrote:
               | It's not just employees who leak data from corporate
               | devices. Malware often exfiltrates data through
               | compromised, "benign" websites.
        
               | Hnrobert42 wrote:
               | Toilets are provided as a convenience by employers to
               | perform entirely personal/non-business functions.
               | 
               | A better analogy would be a desk phone. Does the
               | employer have the right to record phone calls if an
               | announcement is made to all parties?
        
               | [deleted]
        
             | blowfish721 wrote:
             | Simply don't use your company-owned computer for
             | personal stuff, period. I've worked in first-line
             | support and had my share of people calling in asking for
             | help erasing porn URLs from their browsing history. But
             | the bottom line is: do you really want to be the one
             | compromising the company you work for by visiting
             | non-work-related things? At least that's the biggest
             | reason for me to keep my work and private computers
             | separate.
             | 
             | Edit: I just want to add that I wouldn't, however, agree
             | with or work for a company that monitors my work
             | computer to make sure I'm always "working" (taking
             | screenshots, tracking mouse and keyboard, etc.).
        
             | ozim wrote:
             | Well, take another example: you have a human right to
             | wear what you want, for example a swimsuit, all day
             | long.
             | 
             | Somehow most people understand that a swimsuit is
             | appropriate at the beach or the swimming pool, and not
             | in the office.
        
       | IceWreck wrote:
       | > How can we trust anything at all?
       | 
       | Self-host your software at home, and use a VPN like WireGuard
       | to access it, which helps with your MITM problem.
        
         | wg0 wrote:
         | Self-hosting won't solve much, because even a self-hosted
         | setup would have to talk to the outside world.
        
           | IceWreck wrote:
           | Which is why I said to use WireGuard to talk to your
           | self-hosted services. HTTPS can be intercepted with a root
           | certificate; a VPN cannot.
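           | 
           | For reference, a minimal WireGuard peer config sketch
           | (keys, addresses and endpoint are placeholders). The trust
           | anchor is the peer's public key exchanged out of band, so
           | there is no CA that could be subverted:
           | 
           |   [Interface]
           |   PrivateKey = <client private key>
           |   Address = 10.0.0.2/32
           | 
           |   [Peer]
           |   PublicKey = <server public key, shared out of band>
           |   Endpoint = vpn.example.net:51820
           |   AllowedIPs = 10.0.0.0/24
           |   PersistentKeepalive = 25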
        
       | test932184 wrote:
       | Relevant: Russia wants to ban the use of secure protocols such as
       | TLS 1.3, DoH, DoT, ESNI
       | 
       | https://www.zdnet.com/article/russia-wants-to-ban-the-use-of...
        
         | wg0 wrote:
         | That's the whole point of this discussion - they don't need
         | to do anything like that, and it can be done transparently
         | without any legislation. Just think of the scenario where
         | you bribe certain CA staff and get a sub-CA of your own, or
         | something like that.
         | 
         | Now you sign certificates that look exactly like the
         | upstream website's, and you're going to appear pretty much
         | genuine.
        
       | upofadown wrote:
       | > I do not understand the Noise protocol, but couldn't an ISP
       | or a government act as a man in the middle between, let us
       | say, a Signal user and its servers?
       | 
       | Signal messenger uses the Signal Protocol.
       | 
       | Like almost every other end-to-end encrypted messaging
       | scheme, Signal uses cryptographic signatures (or the
       | equivalent, in the case of Signal) to ensure that you are
       | talking to who you think you are talking to. You end up with
       | a sort of identity number (a really long one). As long as you
       | can ensure that you have the right identity number for your
       | correspondent, you can be sure that you are connected to that
       | correspondent directly, with no interlopers. In person you
       | can use something like a QR code to do this. Otherwise you
       | can try comparing numbers over the phone.
       | 
       | There is a whole discussion of Signal "safety numbers" and how
       | they relate to a MITM attack here:
       | 
       | * https://sequoia-pgp.org/blog/2021/06/28/202106-hey-signal-gr...
       | 
       | From that I gather that in most cases it would be possible to
       | MITM a Signal connection, because most users are unable to
       | figure out how to verify their safety numbers.
        
       | tinus_hn wrote:
       | Most of these attacks are blocked by the use of certificates
       | (basic proxying) and certificate pinning.
       | 
       | It has been tried before; if you, for instance, generate
       | certificates for the Gmail domains, Google will quickly find
       | out, because their applications refuse to connect to their
       | sites using certificates that aren't signed by the right
       | authority.
       | 
       | https://security.googleblog.com/2011/08/update-on-attempted-...
       | 
       | And with certificate transparency it becomes ever more
       | obvious if you try to pull tricks.
        
       | stavros wrote:
       | > The proxies like Squid can do HTTPS intercepting
       | 
       | They can only do it if you let them. If you trust someone
       | malicious, nobody can save you, not TLS or anything else. What
       | would the alternative be?
        
       | outsomnia wrote:
       | Web PKI (or really, the web) is reasonably OK for peacetime
       | but is not going to survive very well on a "wartime footing".
       | There are some attempts with Certificate Transparency
       | 
       | https://en.wikipedia.org/wiki/Certificate_Transparency
       | 
       | to make it more visible if unreasonable but valid and trusted
       | certificates (e.g., a cert for google.com signed by a trusted
       | Russian CA) are seen.
       | 
       | Sites can tell browsers which CA they should expect for that
       | site
       | 
       | https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ex...
       | 
       | and if something else is seen, the header tells the browser
       | where to report it. This helps, but if the attacker controls
       | the victim's routing, it can suppress the reports.
       | 
       | Browsers ship trusting pretty much every country's CAs at the
       | moment, which is convenient. If CAs are found to be
       | mis-issuing certs, they get distrusted by the browsers, which
       | has happened several times already. But in wartime, an
       | attacker is not going to care about that if they can inflict
       | massive damage first.
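       | 
       | For reference, the Expect-CT header a site can send looks
       | roughly like this (the report-uri is a placeholder; adding an
       | "enforce" directive blocks rather than just reports):
       | 
       |   Expect-CT: max-age=86400, report-uri="https://example.com/ct"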
        
         | wg0 wrote:
         | That's interesting. But let's say I am a nation state
         | hell-bent on intercepting everything, or even a major ISP in
         | the region - can't I just remove the Expect-CT header from
         | all outgoing HTTPS responses?
        
       | btdmaster wrote:
       | There are some decentralised solutions to this issue, such as
       | Tor. With Tor, every domain generates its own public/private
       | key pair (which represents the domain itself), making it
       | impractical for actors to break all of them (in fact, breaking
       | even one of the ed25519 keys, which offer roughly 128-bit
       | security, would be a breakthrough in cryptography).
       | 
       | The primary issue with this is that centralised services have
       | a much greater network effect, so this cannot be relied upon
       | for everything.
        
       | miki123211 wrote:
       | Another aspect of TLS I'm personally worried about is the ability
       | of regulators to influence CAs.
       | 
       | It is getting increasingly difficult to use websites that
       | don't implement HTTPS. Some browsers will warn you if you try
       | to enter
       | a password on such websites, for example. Most users will not
       | know what to do with such warnings, and will probably close the
       | website upon seeing one. It's not impossible to imagine that, in
       | ten or so years, some browsers might disallow plain HTTP
       | entirely.
       | 
       | It's trivial for major governments (the US and the EU in
       | particular) to impose know-your-customer requirements on CAs, or
       | to force them to revoke the certificates of some unsavory
       | websites.
       | 
       | Replacing your default browser with one that doesn't care about
       | TLS support might not be trivial at that point, see Apple's
       | restrictions on third-party browsers and Microsoft's recent
       | tricks.
        
         | Capira wrote:
         | Indeed, a very real concern
        
       | y7 wrote:
       | Yes, if you have access to a root CA you can generally speaking
       | man-in-the-middle HTTPS traffic. One defense is HPKP [1], where a
       | certificate or root of trust is pinned so it cannot be
       | substituted. But it's a bit tricky to implement, because if you
       | make a mistake you can lock yourself out as server administrator.
       | I think some browsers also hard-pin some certificates, like
       | Chrome does for Google domains.
       | 
       | There's also Certificate Transparency [2] which maintains append-
       | only logs of all issued certificates. I'm not really sure how
       | widely it's implemented in browsers, or whether it can be
       | bypassed somehow.
       | 
       | 1. https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
       | 
       | 2. https://en.wikipedia.org/wiki/Certificate_Transparency
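       | 
       | For what it's worth, a rough sketch of how an HPKP-style pin
       | is computed (base64 of the SHA-256 hash of the certificate's
       | SubjectPublicKeyInfo), assuming Python's cryptography package;
       | the hostname is a placeholder:
       | 
       |   import base64, hashlib, ssl
       |   from cryptography import x509
       |   from cryptography.hazmat.primitives import serialization
       | 
       |   pem = ssl.get_server_certificate(("example.com", 443))
       |   cert = x509.load_pem_x509_certificate(pem.encode())
       |   spki = cert.public_key().public_bytes(
       |       serialization.Encoding.DER,
       |       serialization.PublicFormat.SubjectPublicKeyInfo)
       |   pin = base64.b64encode(hashlib.sha256(spki).digest())
       |   print('pin-sha256="%s"' % pin.decode())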
        
         | mgdm wrote:
         | I don't believe any modern browser supports HPKP any more, due
         | to how hard it was to set up and operate.
        
           | LinuxBender wrote:
           | They still honor it. I have to tell Squid which domains
           | not to MITM because some of Google's sub-domains, PayPal,
           | the EFF and a few others still use it.
        
             | toast0 wrote:
             | I suspect all of those pins are from preloading (arranged
             | by request with Chrome maintainers) and not HPKP. HPKP was
             | supposed to allow for similar security after first use,
             | without needing to interact with maintainers and wait for a
             | browser release, but because of the probability of shooting
             | your own foot, usage was extremely low and it was on the
             | path to removal, last I checked.
        
         | fulafel wrote:
         | Note that most browsers allow MITM in presence of HPKP in
         | certain circumstances to facilitate interception by schools and
         | corporate IT.
         | 
         | (Mentioned also in your linked WP article - "Most browsers
         | disable pinning for certificate chains with private root
         | certificates to enable various corporate content inspection
         | scanners")
        
       | GlitchMr wrote:
       | Browsers require certificates to be in Certificate
       | Transparency logs. Therefore, a valid certificate would need
       | to be submitted to CT logs, and a Certificate Authority found
       | to be generating certificates without permission would quickly
       | find itself removed from browsers' trusted CA lists.
        
       | cookiengineer wrote:
       | As a sidenote here:
       | 
       | All CA certificates are accepted via email, and are stored in
       | a Salesforce CRM that generates a CSV spreadsheet. [1]
       | 
       | And yes, this is a system that has been running since the
       | 1990s, and it is very likely running on a heavily outdated
       | UNIX machine.
       | 
       | So from a cybersecurity point of view, I wouldn't put much
       | faith in the security of the SSL cert chains of the root
       | store itself.
       | 
       | I don't know who maintains them, but I hope these services
       | are not accessible from the internet, though it seems the
       | database was at least scanned by Shodan, so yeah :-/
       | 
       | I really hope that distro maintainers in between validate what
       | they push out as ca-certificates packages everywhere.
       | 
       | [1] https://www.ccadb.org/
        
       | throwaway984393 wrote:
       | PKI isn't intended to be resistant to state actors. It's intended
       | to keep your credit card safe from MITM by a wily hacker on your
       | network.
        
       ___________________________________________________________________
       (page generated 2021-12-25 23:01 UTC)