[HN Gopher] Certificate Transparency in Firefox
       ___________________________________________________________________
        
       Certificate Transparency in Firefox
        
       Author : tracymiranda
       Score  : 143 points
       Date   : 2025-02-25 18:52 UTC (4 days ago)
        
 (HTM) web link (blog.transparency.dev)
 (TXT) w3m dump (blog.transparency.dev)
        
       | schoen wrote:
       | Congratulations! That's terrific news.
        
       | joelthelion wrote:
        | Can someone explain in a nutshell what CT is, and how it
        | helps security for the average user?
        
         | djaychela wrote:
         | There's a site for it here (linked 2 levels deep from the
         | original article):
         | 
         | https://certificate.transparency.dev/
        
         | perching_aix wrote:
         | CT is an append-only distributed log for certificate issuances.
         | People and client software can use it to check if a certificate
         | is being provided by a trusted CA, if it has been revoked, or
         | is being provided by multiple CAs (the latter possibly
         | indicating CA compromise). CA meaning Certificate Authority,
         | the organizations that issue certificates.
         | 
          | This provides a further layer of technological defense
          | against your web browser traffic being intercepted and
          | potentially tampered with.
         | 
         | In practice a regular person is unlikely to run into this,
         | because web PKI is mostly working as expected, so there's no
         | reason for the edge cases to happen en masse. This change is
         | covering one such edge case.
         | 
         | No idea how the typical corporate interception solutions (e.g.
         | Zscaler) circumvent it in other browsers where this check has
         | long been implemented.
        
           | q2dg wrote:
           | Will Mitmproxy stop working?
        
             | notpushkin wrote:
              | Chrome treats certificates added by the user as not requiring
             | CT: https://github.com/mitmproxy/mitmproxy/discussions/5720
        
               | zinekeller wrote:
               | And to wit, Firefox too:
               | 
               | > Setting this preference to 2 causes Firefox to enforce
                | CT for certificates _issued by roots in Mozilla's Root
               | CA Program_.
        
             | perching_aix wrote:
              | I believe so. You'll need to disable CT enforcement or
              | add your SPKI hash to the ignore list in the browser
              | settings temporarily to get it working. [0] I guess this is
              | also how corporations get around this issue? Still unsure.
              | 
              | [0] https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
        
               | mcpherrinm wrote:
               | No. CT is only required for public CAs. You only need
               | those browser policy settings if you're using a public CA
               | without CT.
        
               | perching_aix wrote:
                | I'd imagine this is why certs that chain to root
                | certificates manually added to the trust store will work
                | fine then [as stated by other comments]?
        
               | mcpherrinm wrote:
               | Right, any CA you add yourself that isn't part of what
               | Mozilla ships isn't considered a publicly trusted CA.
        
         | tgsovlerkhgsel wrote:
         | It's a public, tamper-proof log of all certificates issued.
         | 
         | When a CA issues a certificate, it sends a copy to at least two
         | different logs, gets a signed "receipt", and the receipt needs
         | to be included in the certificate or browsers won't accept it.
         | 
         | The log then publishes the certificate. This means that a CA
         | cannot issue a certificate (that browsers would accept) without
         | including it in the log. Even if a government compels a CA,
          | someone compromises it, or even steals the CA's key, they'd have
         | to either also do the same to two CT logs, or publish the
         | misissued certificate.
         | 
         | Operators of large web sites then can and should monitor the CT
         | logs to make sure that nobody issued a certificate for their
         | domains, and they can and will raise hell if they see that
         | happen. If e.g. a government compels a CA to issue a MitM
         | certificate, or a CA screws up and issues a fake cert, and this
          | cert is only used to attack a single user, it would likely
          | have gone undetected in the past (unless that user caught
          | it, nobody else would know about the bad certificate). Now,
         | this is no longer possible without letting the world know about
         | the existence of the bad cert.
         | 
         | There are also some interesting properties of the logs that
         | make it harder for a government to compel the log to hide a
         | certificate or to modify the log later. Essentially, you can
         | store a hash representing the content of the log at any time,
         | and then for any future state, the log can prove that the new
         | state contains all the old contents. The "receipts" mentioned
         | above (SCTs) are also a promise to include a certificate by a
         | certain time, so if a log issues an SCT then publishes a new
         | state more than a day later that doesn't include the
         | certificate, that state + the SCT are proof that the log is
         | bad.
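          | 
          | A toy sketch of that append-only property (Python;
          | illustrative only, with fake entries - the real RFC 6962
          | consistency-proof protocol proves the same thing without
          | re-reading the whole log):
          | 
          |     import hashlib
          | 
          |     def mth(entries):
          |         # RFC 6962 Merkle Tree Hash over a list of byte strings
          |         if not entries:
          |             return hashlib.sha256(b"").digest()
          |         if len(entries) == 1:
          |             return hashlib.sha256(b"\x00" + entries[0]).digest()
          |         k = 1  # largest power of two smaller than len(entries)
          |         while k * 2 < len(entries):
          |             k *= 2
          |         return hashlib.sha256(
          |             b"\x01" + mth(entries[:k]) + mth(entries[k:])).digest()
          | 
          |     old = [b"cert-1", b"cert-2", b"cert-3"]
          |     old_root = mth(old)                  # hash you store today
          |     new = old + [b"cert-4", b"cert-5"]   # the log may only grow
          |     # any later state must still contain the old contents
          |     assert mth(new[:len(old)]) == old_root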
        
           | brookst wrote:
           | Thanks for the great explanation of both tech design and real
           | world benefits!
        
         | cyberax wrote:
          | Basically, a git repository for all the issued certificates.
        
       | linwangg wrote:
       | Great move! Curious to see how this will impact lesser-known CAs.
       | Will this make it easier to detect misissued certs, or will
       | enforcement still depend on browser policies?
        
         | arccy wrote:
         | firefox is just catching up with what chrome implemented years
         | ago. unless you have a site visited only by firefox users,
         | ecosystem effect is likely to be minimal... though it does
         | protect firefox users in the time between detection and
         | remediation.
        
       | ozim wrote:
       | Wonder why they say you should monitor transparency logs instead
       | of setting up CAA records - malicious actors will most likely
       | disregard CAA anyway.
        
         | lambdaone wrote:
         | Let's not let the best be the enemy of the good. Malicious
         | actors who disregard CAA would first have to have gone through
         | the process of accreditation to be added to public trust
         | stores, and then would quickly get removed from those trust
         | stores as soon as the imposture was detected. So while creating
         | a malicious CA and then ignoring CAA records is entirely
         | possible for few-shot high-value attacks, it's not a scalable
         | approach, and it means CAA offers at least partial protection
         | against malicious actors forging certificates as a day-to-day
         | activity.
         | 
         | Transparency logs are of course better because they make it
         | much easier for rogue CAs to be caught rapidly, but it's not a
         | reason to abandon CAA until transparency log checking is
         | universal, not just in browsers, but across the whole PKI
         | ecosystem.
        
         | mcpherrinm wrote:
         | In any security setting, it's usually good to have both
         | controls and detection.
         | 
         | CAA records help prevent unexpected issuance, but what if your
         | DNS server is compromised? DNSSEC might help.
         | 
         | Certificate Transparency provides a detection mechanism.
         | 
         | Also, unlike CAA records which are enforced only by policy that
         | CAs must respect them, CT is technically enforced by browsers.
         | 
          | So they are complementary. A security-sensitive organization
         | should have both.
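          | 
          | For reference, CAA records are ordinary DNS records; a
          | minimal pair in zone-file form might look like this
          | (hypothetical domain, restricting issuance to one CA and
          | asking for violation reports):
          | 
          |     example.com.  IN  CAA  0 issue "letsencrypt.org"
          |     example.com.  IN  CAA  0 iodef "mailto:security@example.com"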
        
         | tgsovlerkhgsel wrote:
         | They're doing different things, and you should do both.
         | 
         | Setting CAA records primarily serves to reduce your attack
         | surface against vulnerable domain validation processes. If an
         | attacker wants to specifically attack your domain, and you use
         | CAA, the attacker now needs to find a vulnerability in _your_
          | CA's domain validation process instead of _any_ CA's validation
         | process. If it works, it _prevents_ an attacker from getting a
         | valid cert.
         | 
         | Monitoring CT logs only detects attacks after the fact, but
         | will catch cases where CAs wrongly issued certificates despite
         | CAA records, and if you monitor against a whitelist of your own
         | known certificates, it will catch cases where someone got
         | _your_ CA to issue them a certificate, either by tricking the
         | CA or compromising your infrastructure (most alerts you will
         | actually see will be someone at your company just trying to get
         | their job done without going through what you consider the
         | proper channels, although I think you can now restrict CAA to a
         | specific account for LetsEncrypt).
         | 
         | Since CT is _required_ now by browsers, an attacker that
         | compromises (or compels!) a CA in any way would still have to
         | log the cert or also compromise or compel at least two logs to
         | issue SCTs (signed promises to include the cert in the log)
         | without actually publishing the cert (this is unlikely to get
         | caught but if it was, there would be signed proof that the log
         | did wrong).
        
       | Eikon wrote:
       | Shameless plug: Check out my Certificate Transparency monitor at
       | https://www.merklemap.com
       | 
       | The scale is massive, I just crossed 100B rows in the main
       | database! :)
        
         | AznHisoka wrote:
          | Why does it only show a few subdomains for *.statuspage.io? I
          | would have expected at least 10K or so.
          | https://www.merklemap.com/search?query=*.statuspage.io&page=...
          | 
          | Is my query wrong or are you just showing less results
          | intentionally if you're not paying?
        
           | Eikon wrote:
            | > Why does it only show a few subdomains for *.statuspage.io? I
            | would have expected at least 10K or so.
            | https://www.merklemap.com/search?query=*.statuspage.io&page=...
           | 
           | Because they have a wildcard for *.statuspage.io, which they
           | are probably hosting their pages on.
           | 
           | > Is my query wrong or are you just showing less results
           | intentionally if you're not paying?
           | 
           | No, results are the same but not sorted.
        
         | gessha wrote:
          | This is really cool! It even discovered subdomains that lived
          | for only a few days on my site. If it's not a secret, how do
          | you discover those? Is it by listening to DNS record changes?
        
           | Eikon wrote:
           | By using... certificate transparency logs.
           | 
           | https://www.merklemap.com/documentation/how-it-works
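            | 
            | If you want to poke at a log yourself, the RFC 6962 HTTP
            | API is enough for a small sketch (Python; the log URL is
            | just a placeholder - real monitors iterate over the logs
            | in the browsers' published log lists):
            | 
            |     import json, urllib.request
            | 
            |     log = "https://ct.example.com/log"  # placeholder log URL
            |     sth = json.load(urllib.request.urlopen(log + "/ct/v1/get-sth"))
            |     print("tree size:", sth["tree_size"])
            | 
            |     ents = json.load(urllib.request.urlopen(
            |         log + "/ct/v1/get-entries?start=0&end=4"))
            |     print(len(ents["entries"]), "entries")  # leaf_input is base64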
        
         | tgsovlerkhgsel wrote:
         | Are you continuously monitoring consistency proofs? Or in other
         | words, would someone (you or someone else) actually notice if a
         | log changed its contents retroactively?
        
           | Eikon wrote:
           | Not yet, but that's definitely the short term plan!
        
         | Etheryte wrote:
         | I'm clearly not the target audience for this, so excuse me if
         | this is a dumb question: what is this tool used for? Who would
         | usually use it and for what purpose?
        
           | Eikon wrote:
           | Anyone setting up infrastructure, security researchers,
           | security teams and IT teams.
           | 
            | It's also very useful in the brand management field,
            | especially for detecting phishing websites.
        
         | immibis wrote:
         | I tried to do something like this one time and had a problem
         | just _finding the logs_. All information on the internet points
         | to the fact that certain logs exist, but not how to access
         | them. Are they not public access? Do you have a B2B
         | relationship with the companies like Cloudflare that run logs?
        
         | antonios wrote:
         | That's interesting, can you share more information about your
         | tech stack?
        
           | Eikon wrote:
           | Merklemap is running PostgreSQL as the primary database,
           | currently scaling at ~18TB on NVMe storage, and around 30TB
           | of actual certificates that are stored on s3.
           | 
           | The backend is implemented in Rust (handling web services,
           | search functionality, and data ingestion pipelines).
           | 
           | The frontend is built with Next.js.
        
       | megamorf wrote:
       | Doesn't this effectively render corporate CAs useless?
        
         | Eikon wrote:
         | No, it just makes any CA accountable for all the certs they
         | issue.
        
         | zinekeller wrote:
         | > Doesn't this effectively render corporate CAs useless?
         | 
         | All of the browsers ignore transparency for enterprise roots.
         | To determine which is which, the list of _actual_ public roots
          | is stored separately in the CA database, listed in
          | _chrome://certificate-manager/crscerts_ for Chrome and listed as a
         | "Builtin Object Token" in Firefox's Certificate Manager.
        
         | archi42 wrote:
          | Another comment mentioned [0]. Enterprises and people running a
         | private CA can set
         | "security.pki.certificate_transparency.disable_for_hosts" to
         | disable CT for certain domains (plus all their subdomains).
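          | 
          | For example, in user.js form (the exact value format here is
          | my assumption; see the wiki page at [0] below):
          | 
          |     user_pref("security.pki.certificate_transparency.disable_for_hosts", "internal.example.com");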
         | 
          | I just hope they automatically disable it for non-public TLDs,
         | both from IANA and RFC 6762.
         | 
         | [0]
         | https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
        
       | perching_aix wrote:
       | Would be cool if DANE/TLSA record checks were also implemented.
       | Not sure why browsers are not adopting it.
        
         | zinekeller wrote:
          | Impractical in the sense that there are still TLDs (_ccTLDs_
         | mind you, ICANN can't force anything for those countries) which
         | do not have any form of DNSSEC, which makes DANE and TLSA
         | useless for those TLDs.
        
           | perching_aix wrote:
           | Kind of disappointing if that is the actual stated reason by
           | the various browser vendors, all or nothing doesn't sound
           | like a good policy for this. Surely there is a middle ground
           | possible.
        
             | Eikon wrote:
             | Supporting DANE means you need to maintain both traditional
             | CA validation and DANE simultaneously.
             | 
             | This may be controversial, but I believe that with CT logs
             | already in place, DANE could potentially reduce security by
             | leaving you without an audit trail of certificates issued
             | to your hosts. If you actively monitor certificate issuance
             | to your hosts using CT, you are in a much better security
             | posture than what DANE would provide you with.
             | 
             | People praising DANE seem to be doing so as a political
             | statement ("I don't want a 3rd party") rather than making a
             | technical point.
        
               | perching_aix wrote:
               | Why not do both at the same time? I understand that a
                | TLSA record in and of itself would suffice technically,
               | but combined with the regular CA-based PKI, I figured the
               | robustness would increase.
        
               | Eikon wrote:
               | > Why not do both at the same time? I understand that a
                | TLSA record in and of itself would suffice technically,
               | but combined with the regular CA-based PKI, I figured the
               | robustness would increase.
               | 
               | That seems quite complicated while not increasing
               | security by much, or at all?
        
               | perching_aix wrote:
               | I don't necessarily see the complication. The benefit
               | would be that I, the domain owner, would be able to
               | communicate to clients what certificate they should be
               | expecting, and in turn, clients would be able to tell if
               | there's a mismatch. Sounds like a simple win to me.
               | 
                | According to my understanding, multiple CAs can issue
                | certificates covering the same domain just fine, so
                | such a certificate showing up in the CT logs is not on
                | its own a sign of CA compromise, just a clue. One could
                | then check CAA, but that is optional, and per the
                | standard clients are never supposed to check it, only
                | the CAs (which, again, are the ones assumed to be
                | compromised in this scenario). So there's a gap there.
                | To my knowledge this gap is currently bridged by people
                | auditing CT manually, and it's the gap that DANE would
                | fill in this setup in my thinking, automating it away
                | (or just straight up providing it, because I can
                | imagine that a lot of domain owners do not monitor CT
                | for their domains whatsoever).
        
         | Avamander wrote:
         | DNSSEC is not a good PKI, that's why.
         | 
         | There are basically no rules on how to properly operate it,
         | even if there were, there'd be no way to enforce them. There's
         | also almost zero chance a leaked key would ever be detected.
        
           | perching_aix wrote:
           | I'm not sure I follow, could you please elaborate a bit more?
           | I'm not really suggesting DNS to be exclusively used for PKI
           | over the current Web PKI system of public CAs either.
        
             | bawolff wrote:
             | That is kind of the value proposition for DANE though.
        
               | perching_aix wrote:
               | What prevents me from putting the hash of the public key
               | of my public CA certificate into the TLSA record?
               | Nothing. What prevents clients from checking both that
                | the public-CA-based certificate I'm showing is valid
                | and is present on CT, as well as that it hashes to the
                | same value I have placed into the TLSA record? Also
                | nothing.
               | 
               | Am I grossly misunderstanding something here? Feels like
               | I missed a meta.
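                | 
                | For concreteness, the kind of record I have in mind
                | would look like this in a zone file (hypothetical name
                | and hash; usage 1 = PKIX-EE, meaning the certificate
                | must also validate against the normal CA PKI):
                | 
                |     _443._tcp.www.example.com. IN TLSA 1 1 1 (
                |         8cb0fc6c527506a053f4f14c8464bebb
                |         d6dede2738d11468dd953d7d6a3021f1 )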
        
         | tialaramex wrote:
         | The reason browsers didn't implement DANE is because most
         | people's DNS servers are garbage, so if you do this the browser
         | doesn't work and "if you changed last you own the problem".
         | 
          | At the time, if you asked a typical DNS server (e.g. at an ISP
          | or built into a cheap home router) any type of question except
          | "A? some.web.site.example", you got either no answer or a
          | confused error. What do you mean, records other than A exist?
          | RFC what? These days _most_ of them can also answer AAAA?, but
          | good luck with the records needed by DNSSEC.
         | 
          | Today we could do better where people have any sort of DNS
          | privacy, whether that's over HTTPS, TLS, or QUIC; so long as
          | it's encrypted it's probably not garbage and it isn't being
          | intercepted _by_ garbage at your ISP.
         | 
          | Once the non-adoption due to rusted-in-place infrastructure
          | happened, you get (as you will probably see here on HN) people
          | who have some imagined principled reasons not to do DNSSEC;
          | remember always to ask them how their solution _fixed_ the
          | problem that they say DNSSEC hasn't fixed. The fact the
          | problem still isn't fixed tells you everything you need to
          | know.
        
           | lxgr wrote:
           | > if you asked a typical DNS server e.g. at an ISP or built
           | into a cheap home router - any type of question except "A?
           | some.web.site.example" you get either no answer or a confused
           | error.
           | 
           | Really? Because that would mean that anything using SRV
           | records wouldn't work on home routers, yet it's an integral
           | part of many protocols at this point.
           | 
           | There's some room between "my DNS resolver doesn't do DNSSEC"
           | and "I can only resolve A records".
        
             | tialaramex wrote:
              | Yes really. Like I said, even AAAA, though better than it
              | was, isn't as reliable as A; the "Happy Eyeballs" tactic
              | makes that tolerable. Maybe 90% of your customers have
              | IPv6, get the AAAA answer quickly, reach the IPv6 endpoint,
              | awesome. 9% only have IPv4 anyway, get the IPv4 endpoint,
              | also fine. But for 1% the AAAA query never returns; a few
              | milliseconds later the IPv4 connection succeeds and the
              | AAAA query is abandoned, so who cares.
             | 
              | I'd guess that if you build something which needs SRV?
             | to "Just work" in 2025, not "nice to have" but as a
             | requirement, you probably lose 1-2% of your potential users
             | for that. It might be worth it. But if you need 100% you'll
             | want a fallback. I suggest built-in DoH to, say,
             | Cloudflare.
        
           | perching_aix wrote:
           | I guess I did forget that me using Cloudflare and Google as
           | my DNS is not a normal setup to have...
           | 
           | But surely it doesn't have to be so black and white? TLSA
           | enforcement is not even a hidden feature flag in mainstream
           | web clients, it's just completely non-existent to my
           | knowledge.
        
         | cpach wrote:
         | DNSSEC has aged very poorly. I also believe it operates at the
          | wrong layer. When you surf to chase.com you want to be sure
          | that the website you see is actually JPMorganChase and not
          | Mallory's fake bank site. That's why we have HTTPS and the
          | WebPKI. If your local DNS server is poisoned somehow that's
          | obviously not good, but it cannot easily send you to a fake
         | version of chase.com.
         | 
         | Part of why it's so hard for Mallory to create a fake version
         | of a bank site is Certificate Transparency. It makes it much
         | much harder to issue a forged certificate that a browser such
         | as Chrome, Safari or Firefox will accept.
         | 
         | For further info about the flaws of DNSSEC I can recommend this
         | article: https://sockpuppet.org/blog/2015/01/15/against-dnssec/
          | It's from 2015 but I don't think anything has really changed
          | since then.
        
           | perching_aix wrote:
           | There are a lot of things that are different for sure since
           | that article's release, for example the crypto, but also the
           | existence of DoH/DoT and that it is leaps and bounds more
           | deployed. They also talk about key pinning, but key pinning
           | has been dead for a while and replaced by exactly CT.
           | 
           | I'm also not sure how much to trust the author. The writing
           | is very odd language wise and they seem to have quite the axe
           | to grind even with just public CA-based PKI, let alone their
           | combination. The FAQ they link to also makes no sense to me:
           | 
           | > Under DNSSEC/DANE, CA certificates still get validated. How
           | could a SIGINT agency forge a TLS certificate solely using
           | DNSSEC? By corrupting one of the hundreds of CAs trusted by
           | browsers.
           | 
            | It's literally what I'd want TLSA enforcement to combat.
        
             | tptacek wrote:
             | The existence of DoH hurts DNSSEC, it doesn't help it.
             | While privacy is the motivating use case for DoH, it's also
             | the case that on-path attackers can't corrupt the results
             | of a DoH query; they have to move upstream of it.
             | 
             | The dream of TLSA as a bulwark against suborned CAs has
             | always been problematic, because the security of TLSA
             | records collapses down to that of the TLD operators, the
             | most popular of which are state actors or proxies for them,
             | and most of the remainder are essentially e-commerce firms,
             | not trust anchors.
             | 
             | But that doesn't matter, because TLSA as an _alternative_
             | to the WebPKI is already dead on arrival. So many people
             | have problematic access to DNS that no browser can ship
             | hard-fail DANE; in the (extraordinarily unlikely) future
             | world where mainstream browsers do DANE, everybody will
             | have soft-fail DANE falling back to the WebPKI. So, instead
              | of a small number of (state-run!) PKI roots, you'll have
             | the thousands of legacy operators _plus_ the state-run PKI
             | roots.
             | 
             | This problem motivated the design of "stapling" protocols,
             | where we'd basically throw away the DNS part of the
             | protocol, and just keep the TLSA records, and attach them
             | to the TLS handshake. For several years, this was the last
             | best hope for DANE adoption (read Geoff Huston on this,
             | he's a DANE supporter and he's great), and it all fell
             | apart because nobody could get the security model right.
             | 
             | It's at this point I like to remind people that the
             | browsers basically had to shake down the CAs to get
             | Certificate Transparency to happen. They held almost all
             | the cards (except for antitrust claims, which were wielded
             | against them) --- "comply with CT, or we'll remove you from
             | our root program". But browsers can't do that with DNS TLD
             | operators; they hold none of the cards. So, in addition to
             | the fact that there's no "DNS Transparency" on the horizon,
             | there's also none of the leverage required to actually get
             | it deployed.
             | 
             | DANE does not work. DNSSEC is a dead letter. It's long past
             | time for people to move on. I have a lot of hope for what
             | we can accomplish with ubiquitous DoH-like lookups.
        
       | braiamp wrote:
       | I am on Debian Firefox 135.0.1 and https://no-sct.badssl.com/
       | doesn't error out as expected. Is Debian doing something
       | different?
        
         | masfuerte wrote:
         | I do get the warning using the same Firefox version on Windows.
         | 
         | Note that the link in the article is mangled. The link text is
         | <https://no-sct.badssl.com/>, but the actual href points to
         | <https://certificate.transparency.dev/useragents/>. Your
         | comment has the correct link which gives the warning.
        
         | jeroenhd wrote:
         | 135.0.1 on Ubuntu is warning me. Maybe Mozilla is doing a
         | delayed rollout?
         | 
         | For context, in about:config my
         | security.pki.certificate_transparency.mode is set to 2.
         | According to
         | https://wiki.mozilla.org/SecurityEngineering/Certificate_Tra...
         | if it's on 0 (disabled) or 1 (not enforcing, collecting
         | telemetry only), you can enable it.
         | 
          | I can imagine Mozilla shipping that setting as 1 by default
          | (collecting telemetry on sites you visit) and Debian
          | overriding it for privacy purposes.
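          | 
          | If you want to opt in without waiting for the rollout, the
          | equivalent user.js line would be (value 2 = enforce, per the
          | wiki page above):
          | 
          |     user_pref("security.pki.certificate_transparency.mode", 2);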
        
           | lxgr wrote:
           | Does the browser actually communicate with any external
           | service for enforcing CT?
           | 
           | I was under the impression it just checked the certificate
           | for an inclusion proof, and actual monitoring of consistency
           | between these proofs and logs is done by non-browser
           | entities.
        
             | tgsovlerkhgsel wrote:
              | No, but I assume Mozilla was first collecting telemetry to
              | see whether enabling CT would cause user-visible errors.
        
               | lxgr wrote:
               | Ah, good point - presumably 2 also sends telemetry to
               | Mozilla?
        
               | tgsovlerkhgsel wrote:
               | I would expect (without having checked) that both 1 and 2
               | send telemetry to Mozilla if and only if the global
               | telemetry switch is on (which I think it is by default).
        
         | lxgr wrote:
         | 135.0.1 on macOS is warning me as well.
        
         | BSDobelix wrote:
         | Librewolf 135.0.1 is working as expected (FreeBSD, Windows and
         | Linux)
        
         | saint_yossarian wrote:
         | I'm on Debian sid / Firefox 135.0.1-1 and do get the warning.
        
         | wooque wrote:
         | Debian 12 with Firefox 135.0.1 and I do get a warning.
        
       | greatgib wrote:
       | In theory it is good, but somehow it is also a big threat to
       | privacy and security of your infrastructure.
       | 
       | No need anymore to scan your network to map the complete
       | endpoints of your infrastructure!
       | 
       | And it's a new single point of control and failure!
        
         | Eikon wrote:
         | You're essentially advocating for security through obscurity.
         | 
         | The fact that public infrastructure is mappable is actually
         | beneficial. It helps enforce best practices rather than relying
         | on the flawed 'no one will discover this endpoint' approach.
         | 
         | > And it's a new single point of control and failure!
         | 
         | This reasoning is flawed. X.509 certificates themselves embed
         | SCTs.
         | 
         | While log unavailability may temporarily affect new certificate
         | issuance, there are numerous logs operated by diverse
         | organizations precisely to prevent single points of failure.
         | 
         | Certificate validation doesn't require active log servers once
         | SCTs are embedded.
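          | 
          | For example, you can see the SCTs embedded in a typical
          | publicly trusted certificate with a reasonably recent
          | OpenSSL (example.com is just a placeholder host; the
          | extension is printed as "CT Precertificate SCTs"):
          | 
          |     openssl s_client -connect example.com:443 </dev/null 2>/dev/null \
          |       | openssl x509 -noout -text | grep -A 4 'CT Precertificate SCTs'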
        
           | cle wrote:
            | There are tradeoffs and it's not all-or-nothing. There's a
           | reason security clearances exist, and that's basically
           | "security through obscurity".
           | 
           | The argument here is that the loss of privacy and the
           | incentives that will increase centralization might not be
           | worth the gain in security for some folks, but good luck
           | opting out. It basically requires telling bigco about your
           | domains. How convenient for crawlers...
        
           | tzs wrote:
           | > You're essentially advocating for security through
           | obscurity
           | 
           | So? The problem with security through obscurity is when it is
           | the only security you are using. I didn't see anything in his
           | comment that implied his only protection was the secrecy of
           | his endpoints.
           | 
           | Security through obscurity can be fine when used in addition
           | to other security measures, and has tangible benefits in a
           | significant fraction of real world situations.
        
             | grayhatter wrote:
             | > So? The problem with security through obscurity is when
             | it is the only security you are using. I didn't see
             | anything in his comment that implied his only protection
             | was the secrecy of his endpoints.
             | 
              | Directly stated or unintentionally implied, that's an
              | implication you're allowed to infer when obscurity is the
              | only thing listed, because it's *very* common that it _is_
              | the only defense mechanism. Also, when given the choice
              | between mentioning something that works (literally any
              | other security measure) and mentioning something well
              | known to fail more often than it works (obscurity), you're
              | supposed to mention the functioning one and omit the
              | non-functioning one. https://xkcd.com/463/
             | 
             | > Security through obscurity can be fine when used in
             | addition to other security measures,
             | 
              | No, it has subtle downsides as well. It changes the
              | behavior of everything that interacts with the system.
              | Humans constantly overvalue the actual value of security
              | through obscurity, and will make decisions based on that
              | misconceived notion. I once heard an engineer tell me, "I
              | didn't know you could hit this endpoint with curl". The
              | mental model of permitting secrets to be used as part of
              | security is actively harmful to security, much more than
              | it has ever been shown to benefit it. Thus, the cure here
              | is to affirmatively reject security through obscurity.
             | 
              | We should treat it the same way we treat goto. Is goto
              | useful? Absolutely. Are there places where it improves
              | code? Another absolutely. Did code quality as a whole
              | improve once SWE collectively shunned goto? Yes! Security
              | through obscurity is causing the exact same class of
              | issues. And until the whole industry adapts to the
              | understanding that it's actually more harmful than useful,
              | we still let subtle bugs like "I thought no one knew about
              | this" sneak in.
             | 
             | We're not going to escape this valley while people are
             | still advocating for security theatre. We all collectively
             | need to enforce the idea that secrets are dangerous to
             | software security.
             | 
             | > and has tangible benefits in a significant fraction of
             | real world situations.
             | 
              | So does racial profiling, but humans have proven over and
              | over and over again that we're incapable of not misusing
              | it in a way that's also actively harmful. And again, when
              | there are options that are better in every way, it's
              | malpractice to use the error-prone methods.
        
               | Eikon wrote:
               | Thank you for putting this up so clearly!
        
         | tialaramex wrote:
         | "Passive DNS" is a thing people sell. If people connect to your
         | systems and use public DNS servers, chances are I can "map the
         | complete endpoints of your infrastructure" without touching
         | that infrastructure for a small cost.
         | 
         | If client X doesn't want client-x.your-firm.example to show up
         | by all means obtain a *.your-firm.example wildcard and let them
         | use any codename they like - but know that they're going to
         | name their site client-x.your-firm.example, because in fact
         | they don't properly protect such "secrets".
         | 
         | "Blue Harvest" is what secrets look like. Or "Argo" (the actual
         | event not the movie, although real life is much less dramatic,
         | they didn't get chased by people with guns, they got waved
         | through by sleepy airport guards, it's scary anyway but an
         | audience can't tell that).
         | 
         | What you're talking about is like my employer's data centre
         | being "unlisted" and having no logo on the signs. Google finds
         | it with an obvious search, it's not actually secret.
        
         | lxgr wrote:
         | > And it's a new single point of control and failure!
         | 
         | That's why there is a mandatory minimum of several unaffiliated
         | logs that each certificate has to be submitted to.
         | 
          | If all of these were to catastrophically fail, it would still
          | always be possible for browsers or central monitors to fall
          | back to trusting certificates logged by exactly these logs,
          | without inclusion verification.
        
         | perching_aix wrote:
         | It does leak domain name info, but then you do still have the
         | option to use a wildcard certificate or set up a private CA
         | instead of relying on public ones, which likely makes more
         | sense when dealing with a private resource anyways.
         | 
         | I guess there might be a scenario where you need "secret"
          | domains to be publicly resolvable and use distinct certs, but an
         | example escapes me.
        
         | bawolff wrote:
         | > In theory it is good, but somehow it is also a big threat to
         | privacy and security of your infrastructure.
         | 
          | This is silly. Certificates have to be in CT logs regardless of
          | whether Firefox validates them or not.
          | 
          | Additionally, this doesn't apply to private CAs, so internal
          | infrastructure is probably not affected unless you are using
          | public web PKI on it.
        
       | elmo2you wrote:
       | I may be (legitimately) flagged for asking a question that may
        | sound antagonizing ... but asked with sincerity: is it at all smart
       | to mention Firefox and transparency in the same sentence, at
       | least at this particular moment in time?
       | 
       | While this no doubt is an overall win, at least for most and in
       | most cases, afaik this isn't completely without problems of its
       | own. I just hope it won't lead to a systemd-like situation, where
       | a cadre of (opinionated) people with power get to decide what's
       | right from wrong, based on their beliefs about what might only be
       | a subset of reality (albeit their only/full one at that).
       | 
       | Not trying to be dismissive here. Just have genuine concerns and
       | reservations. Even if mostly intuitively for now; no concrete
       | ones yet. Maybe it's just a Pavlov-reaction, after reading the
       | name Firefox. Honestly can't tell.
        
         | lxgr wrote:
         | You're spot on: You are reacting seemingly without
         | understanding the fundamentals of what you are reacting to.
         | 
         | Certificate Transparency [1] is an important technology that
         | improves TLS/HTTPS security, and the name was not invented by
         | Mozilla to my knowledge.
         | 
         | If Firefox were to implement a hypothetical IETF standard
         | called "private caching", would you also be cynical about
         | Firefox "doing something private at this point in time" without
         | even reading up what the technology in question does?
         | 
         | [1] https://en.wikipedia.org/wiki/Certificate_Transparency
        
           | elmo2you wrote:
           | > You're spot on: You are reacting seemingly without
           | understanding the fundamentals of what you are reacting to.
           | 
           | What if I did (understand)? What if I knew a thing or two
           | about it, even some lesser known details and side-effects?
           | Maybe including a controversy or two, or at least an odd
           | limitation and potential hazard at that. But, you correctly
           | do point out that Firefox isn't to blame for implementing
           | somebody else's "standard". Responsible for any and all
           | consequences? Nonetheless, certainly yes.
           | 
           | Aside from now probably not being the best of times for
           | Firefox, my main (potential) concern still stands. However,
           | it is hardly a Firefox-only one, I'll give it that.
        
             | bawolff wrote:
             | > What if I did (understand)?
             | 
              | I think it's pretty clear you don't.
             | 
             | > What if I knew a thing or two about it, even some lesser
             | known details and side-effects?
             | 
             | Then you would explicitly mention them instead of alluding
             | to them.
             | 
             | People who know what they are talking about actually bring
             | up the things that they are concerned about. They don't
             | just say, i know an issue but im not going to tell you what
             | it is.
        
         | perching_aix wrote:
          | I guess this is a good lesson in what happens when the
          | reasoning one would typically (and unfortunately) bring to a
          | mainstream political thread meets a topic from another area
          | of life instead, particularly a technical one.
         | 
         | Especially this:
         | 
         | > where a cadre of (opinionated) people with power get to
         | decide what's right from wrong, based on their beliefs about
         | what might only be a subset of reality (albeit their only/full
         | one at that).
         | 
         | This is always true. There's no arrangement where you can
         | outsource reasoning and decisionmaking (by choice or by
         | coercion) but also not. That's a contradiction.
        
           | elmo2you wrote:
           | > This is always true. There's no arrangement where you
           | entrust someone else with decisionmaking (by choice or not
           | nonwithstanding) but then they're somehow not the ones
           | performing the decisionmaking afterwards.
           | 
            | I'm well aware of that. In itself there isn't a problem with
           | it, in principle at least. Right until it leads to bad
           | decisions being pushed through, and more often in ignorance
           | rather than malice. I personally only have a real problem
           | with it when people or tech ends up harmed or even destroyed,
           | just because of ignorance rather than deliberate arbitrary
           | choices (after consideration, hopefully).
           | 
           | To be clear, I'm not saying that any of that is the case
            | here. But let's just say that browser vendors in general, and
           | Mozilla as of lately in particular, aren't on my "I trust you
           | blindly at making the right decisions" list.
        
             | perching_aix wrote:
             | I do see pretty massive problems with it, such as those you
             | list off, but the unfortunate truth is that one cannot know
             | or do everything themselves. So usually it's not even a
             | choice but a coercive scenario.
             | 
             | For example, say I want to ensure my food is safe to eat.
             | That would require farmland where I can grow my own food.
             | Say I buy some, but then do I have the knowledge and the
             | means to figure out whether the food I grew is _actually_
             | safe to eat? After all, I just bought some random plot of
              | farmland, how would I know what was in it? Maybe it wasn't
             | even the land that's contaminated but instead the wind
             | brought over some chance contamination? And so on.
        
             | lxgr wrote:
             | > browser vendors in general, and Mozilla as of lately in
             | particular, aren't on my "I trust you blindly at making the
             | right decisions" list.
             | 
             | That's entirely fair. But what does this have to do with
             | Mozilla's decision to enforce Certificate Transparency in
             | Firefox?
             | 
             | If you have a concrete concern, voicing it could lead to a
             | much more productive discussion than exuding a general aura
             | of distrust, even if warranted.
        
         | bawolff wrote:
         | > is at all smart to mention Firefox and transparency in the
         | same sentence, at least at this particular moment in time?
         | 
         | What are you expecting them to do? Rename the technology 1984
         | style?
         | 
         | > I just hope it won't lead to a systemd-like situation, where
         | a cadre of (opinionated) people with power get to decide what's
         | right from wrong, based on their beliefs about what might only
         | be a subset of reality (albeit their only/full one at that).
         | 
          | This is a nonsensical statement. I mean that literally. It
         | does not make sense.
         | 
         | > Just have genuine concerns and reservations
         | 
         | Do you? Because it doesn't sound like it.
        
       | samgranieri wrote:
       | Hmm. I wonder how this will work with certificates generated by
       | enterprise or private certificate authorities. Specifically, I
        | use Caddy for local web development and it generates a snake-oil
        | CA for anything on *.localhost using code from step-ca.
       | 
       | I also use step-ca and bind to run a homelab top level domain and
       | generate certs using rfc2136. I have to install that root ca cert
       | everywhere, but it's worth it
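        | 
        | For anyone curious, a minimal Caddyfile for that kind of
        | local setup might look like this (the hostname is just an
        | example; "tls internal" tells Caddy to use its locally
        | generated CA):
        | 
        |     myapp.localhost {
        |         tls internal
        |         respond "hello from local dev"
        |     }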
        
         | Habgdnv wrote:
         | As of now, such stricter certificate requirements only apply to
         | publicly trusted CAs that ship with the browser. Custom-added
         | CAs are not subject to these requirements--this applies to all
         | major browsers.
         | 
         | I haven't tested Firefox's implementation yet, but I expect
         | your private CA to continue working as expected since it is
         | manually added.
         | 
         | Private CAs can:
         | 
         | * Issue longer certificates, even 500 years if you want. Public
         | CAs are limited to 1 year I think, or 2? I think it was 1..
         | 
         | * Can use weaker algorithms or older standards if they want.
         | 
         | * Not subject to browser revocation policies - no need for
         | OCSP/CRL etc.
         | 
         | * More things that I do not know?
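          | 
          | As an illustration of the first point, a self-signed private
          | root valid for roughly 500 years can be minted with plain
          | openssl (all names here are placeholders):
          | 
          |     openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
          |       -days 182500 -subj "/CN=Example Private Root CA" \
          |       -keyout ca.key -out ca.crt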
        
           | Uvix wrote:
           | Public CAs are currently limited to 398 days (effectively 13
           | months).
        
       | snailmailman wrote:
        | I have had the flag that enables this setting turned on for quite
        | some time. It's never caused any issues. I have only seen it pop
        | up once, for a cert that I had _just_ issued a second prior. The
        | cert was logged properly and the page loaded another second
        | later. Very quick.
        
       | ocdtrekkie wrote:
       | Highlight point: They are just using Chrome's transparency logs.
       | Yet again, Firefox chooses to be subservient to Google's view of
       | the world.
        
       | ruuda wrote:
       | Through this article, a few links away, I learnt about tiling
       | logs, explained in https://research.swtch.com/tlog.
        
       | einpoklum wrote:
        | It seems Mozilla is making Firefox irrelevant through its
        | collection of data on users and its plan to have users consent to
        | that data being collected and passed on to third parties. So what
        | it does with certificates may not be very important, very soon.
        
         | user3939382 wrote:
         | Yeah in light of their license change, my reaction to this is
         | "who cares".
        
       ___________________________________________________________________
       (page generated 2025-03-01 23:00 UTC)