[HN Gopher] Why I no longer have an old-school cert on my HTTPS ...
       ___________________________________________________________________
        
       Why I no longer have an old-school cert on my HTTPS site
        
       Author : mcbain
       Score  : 388 points
       Date   : 2025-05-23 10:56 UTC (1 days ago)
        
 (HTM) web link (rachelbythebay.com)
 (TXT) w3m dump (rachelbythebay.com)
        
       | moxli wrote:
       | https://archive.is/QPcLY
        
         | skrebbel wrote:
         | why? it isn't paywalled or something right?
        
           | qwertox wrote:
           | This site can't be reached rachelbythebay.com took too long
           | to respond.
        
             | luckman212 wrote:
             | Site is working fine for me, East coast US.
        
               | rcarmo wrote:
               | Same. Europe.
        
           | neuroticnews25 wrote:
           | Author blacklists some ISPs [0]:
           | 
           | >I contacted Rachel and she said - and this is my poor
           | paraphrasing from memory - that the IP ban was something she
           | intentionally implemented but I got caught as a false
           | positive
           | 
           | [0] https://news.ycombinator.com/item?id=42599359
        
             | dan_hawkins wrote:
              | Yeah, I can't reach their website from Denmark for some
              | unknown reason. On top of that the most recent update of
             | their RSS feed server fxxxd up my news reader so I'm even
             | less inclined to see whatever they do because it looks like
             | they're not very competent technology-wise.
        
           | moxli wrote:
           | The page wasn't loading when I tried to open it. I thought it
           | must be the Hacker News hug of death.
        
       | neogodless wrote:
       | Oh parts of this remind me of having to write an HMAC signature
       | for some API calls. I like to start in Postman, but the
       | provider's supplied Postman collection was fundamentally broken.
       | I tried and tried to write a pre-request script over a day or
       | two, and ended up giving up. I want to get back to it, but it's
       | frustrating because there's no feedback cycle. Every request
       | fails with the same 401 Unauthorized error, so you are on your
       | own for figuring out which piece of the script isn't doing
       | _quite_ the right thing.
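The frustration above comes from how these schemes work: the client and server must build the exact same canonical string before hashing, and any byte-level mismatch just yields a 401. A generic sketch of the pattern; the newline-joined layout, header names, and secret here are all invented for illustration, not any particular provider's scheme:

```python
import hashlib
import hmac
import time

def sign_request(secret: str, method: str, path: str, body: str,
                 timestamp: int) -> str:
    """Hex-encoded HMAC-SHA256 over a canonical string.

    The newline-joined layout below is made up; every provider defines
    its own, and reproducing it byte-for-byte is exactly the part that
    fails silently with a 401.
    """
    canonical = "\n".join([method.upper(), path, str(timestamp), body])
    return hmac.new(secret.encode(), canonical.encode(),
                    hashlib.sha256).hexdigest()

# Headers a provider might expect alongside the request:
ts = int(time.time())
sig = sign_request("example-secret", "POST", "/v1/orders", '{"id":1}', ts)
headers = {"X-Timestamp": str(ts), "X-Signature": sig}
```

About the only feedback loop available is printing the canonical string on your side and diffing it against whatever reference implementation the provider ships.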
        
       | ndsipa_pomu wrote:
       | I was amazed by them having so much distrust of the various
       | clients. Certbot is typically in the repositories for things like
       | Debian/Ubuntu.
       | 
       | My favourite client is probably https://github.com/acmesh-
       | official/acme.sh
       | 
       | If you use a DNS service provider that supports it, you can use
       | the DNS-01 challenge to get a certificate - that means that you
       | can have the acme.sh running on a completely different server
       | which should help if you're twitchy about running a complex
       | script on it. It's also got the advantage of allowing you to get
       | certificates for internal/non-routable addresses.
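The value DNS-01 asks you to publish is small enough to compute independently, which also helps when debugging a client. A sketch following RFC 8555 section 8.4; the token and account-key thumbprint below are placeholders:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME uses throughout."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    """Value for the _acme-challenge TXT record (RFC 8555 sec. 8.4):
    base64url(SHA-256(token "." key-thumbprint))."""
    key_authorization = f"{token}.{account_key_thumbprint}"
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

# Placeholder inputs; a real token comes from the ACME server and the
# thumbprint from your account key (RFC 7638).
txt = dns01_txt_value("placeholder-token", "placeholder-thumbprint")
```

Since only the DNS record matters, this can run anywhere that can update your zone, which is why the client needn't live on the web server at all.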
        
         | skywhopper wrote:
         | Certbot goes out of its way to be inscrutable about what it's
         | doing. It munges your web server config (temporarily) to handle
         | http challenges, and for true sysadmins who are used to having
         | to know all the details of what's going on, that sort of script
         | is a nightmare waiting to happen.
         | 
         | I assume certbot is the client she's alluding to that
         | misinterprets one of the factors in the protocol as hex vs
         | decimal and somehow things still work, which is incredibly
         | worrisome.
        
           | jeroenhd wrote:
            | With the HTTP implementation that's true, but certbot's
            | DNS challenge plugins don't touch your server config. As
            | an added bonus, you can use that
           | to also obtain wildcard certificates for your subdomains so
           | different applications can share the same certificate (so you
           | only need one single ACME client).
        
           | claudex wrote:
           | You can configure certbot to write in a directory directly
           | and it won't touch your web server config.
        
           | castillar76 wrote:
           | Having my ACME client munge my webserver configs to obtain a
           | cert was one of the supreme annoyances about using them -- it
           | felt severely constraining on how I structured my configs,
           | and even though it's a blip, I hated the double restart
           | required to fetch a cert (restart with new config, restart
           | with new cert).
           | 
           | Then I discovered the web-root approach people mention here
           | and it made a huge difference. Now I have the HTTP snippet in
           | my server set to serve up ACME challenges from a static
           | directory and push everything else to HTTPS, and the ACME
           | client just needs write permission to that directory. I can
           | dynamically include that snippet in all of the sites my
           | server handles and be done.
           | 
           | If I really felt like it, I could even write a wrapper
           | function so the ACME client doesn't even need restart
           | permissions on the web-server (for me, probably too much to
           | bother with, but for someone like Rachel perhaps worthwhile).
        
             | ndsipa_pomu wrote:
             | A wrapper function may be overkill when you can do
              | something like this:
              | 
              |     letsencrypt renew --non-interactive \
              |       --post-hook "systemctl reload nginx"
        
               | castillar76 wrote:
               | Oh definitely, but her point was she didn't want the ACME
               | client having the rights to frob the webserver -- I
               | figured that meant restart-rights too. :)
        
           | ndsipa_pomu wrote:
           | > It munges your web server config (temporarily) to handle
           | http challenges
           | 
            | I run it in "webroot" mode on nginx servers so it's just a
            | matter of including the relevant config file in your HTTP
            | sections (likely before redirecting to HTTPS) so that
            | "/.well-known/acme-challenge/" works correctly. Then when you
            | do run certbot, it can put the challenge file into the
            | webroot and nginx will automatically serve it. This allows
            | certbot to do its thing without needing to do anything with
            | nginx.
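The webroot approach boils down to a single file write. A minimal sketch of the only permission the ACME client actually needs, per RFC 8555 section 8.3; the webroot path and values are hypothetical:

```python
from pathlib import Path

def publish_http01(webroot: str, token: str, thumbprint: str) -> Path:
    """Write an HTTP-01 challenge file so the web server's existing
    static config serves it at /.well-known/acme-challenge/<token>.
    The file body is the key authorization: token "." account-key
    thumbprint (RFC 8555 sec. 8.3)."""
    challenge_dir = Path(webroot) / ".well-known" / "acme-challenge"
    challenge_dir.mkdir(parents=True, exist_ok=True)
    path = challenge_dir / token
    path.write_text(f"{token}.{thumbprint}")
    return path

# Hypothetical webroot; the web server keeps serving, nothing reloads.
publish_http01("/tmp/acme-webroot", "placeholder-token",
               "placeholder-thumbprint")
```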
        
         | christina97 wrote:
         | I used to like them, then they somehow sold out to zerossl and
         | switched the default there from LE after an update.
         | 
         | Pinned to an old version and looking for a replacement right
         | now.
        
           | Bender wrote:
           | That annoyed me as well given the wording on the ZeroSSL site
           | suggested one has to create an account which is not true. _I
           | had hit an error using DNS-01 at the time._ They have an
           | entirely different page for ACME clients but it is not _or
           | was not_ linked from anywhere on the main page.
           | 
           | If anyone else ran into that it's just a matter of adding
           | --server letsencrypt
        
             | castillar76 wrote:
             | You can also permanently change your default to LE --
             | acme.sh actually has instructions for doing so in their
             | wiki.
             | 
             | I rather liked using ZeroSSL for a long time (perhaps just
             | out of knee-jerk resistance to the "Just drink the
             | Koolaid^W^W^Wuse Let's Encrypt! C'mon man, everyone's doing
             | it!" nature of LE usage), but of late ZeroSSL has gotten so
             | unreliable that I've rolled my eyes and started swapping
             | things back to LE.
        
           | ndsipa_pomu wrote:
           | I only started using it after the default was ZeroSSL, but
           | it's easy to specify LetsEncrypt instead
        
         | corford wrote:
         | Agree with the acme.sh recommendation. It's my favourite by far
         | (especially, as you point out, when leveraging with DNS-01
         | challenges so you can sidestep most of the security risks the
         | article author worries about)
        
         | egorfine wrote:
          | certbot is complexity creep at its finest. I'd love to hear
         | Rachel's take on it.
         | 
         | +1 for acme.sh, it's beautiful.
        
         | JoshTriplett wrote:
         | Certbot is definitely one of the strongest arguments _against_
          | ACME and Let's Encrypt.
         | 
         | Personally, I find that tls-alpn-01 is even nicer than dns-01.
         | You can run a web server (or reverse proxy) that listens to
         | port 443, and nothing else, and have it automatically obtain
         | and renew TLS certificates, with the challenges being sent via
         | TLS ALPN over the same port you're already listening on.
         | Several web servers and reverse proxies have support for it
         | built in, so you just configure your domain name and the email
         | address you want to use for your Let's Encrypt account, and you
         | get working TLS.
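At the wire level, the hook for tls-alpn-01 is just an ALPN protocol name, "acme-tls/1" (RFC 8737). A minimal sketch of that knob using Python's ssl module; a real implementation must also answer the validation handshake with a self-signed certificate carrying the acmeIdentifier extension, which servers with built-in support handle internally:

```python
import ssl

# A server context that advertises the ACME validation protocol
# alongside normal HTTP traffic on the same port 443 listener.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_alpn_protocols(["acme-tls/1", "http/1.1"])

# After accepting a connection, selected_alpn_protocol() on the
# socket tells you whether this is a validation probe (serve the
# challenge certificate) or an ordinary client (serve the site).
```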
        
           | Shadowmist wrote:
           | Does this only work if LE can reach port 443 on one of your
           | servers/proxies?
        
             | JoshTriplett wrote:
             | Yes. If you want to create certificates for a private
             | server you have to use a different mechanism, such as
             | dns-01.
        
         | xorcist wrote:
          | acme.sh is 8000 lines, still an order of magnitude better
          | than certbot for something security-critical, but not
          | great.
          | 
          | tiny-acme.py is 200 lines, easy to audit and incorporate
          | parts into your own infrastructure. It works well for the
          | tiny work it does, but it doesn't support anything more
          | modern.
        
         | 12_throw_away wrote:
         | Dunno about the protocol, but man, working with certbot and
         | getting it do what I wanted was ... well, a lot more work than
         | I would have guessed. The hooks system was so much trouble that
         | I ended up writing my own.
         | 
         | But yeah, can definitely recommend DNS-01 over HTTP-01, since
         | it doesn't involve implicitly messing with your server
         | settings, and makes it much easier to have a single locked
         | server with all the ACME secrets, and then distribute the certs
         | to the open-to-the-internet web servers.
        
         | notherhack wrote:
         | I feel like acme.sh is the kind of client she's ranting about.
         | 8000 lines of shell code in acme.sh and more in dozens of user-
         | contributed hook scripts, and over 1000 open issues on github?
         | 
         | Personally I like https://github.com/dehydrated-io/dehydrated.
         | Same concept as acme.sh but only 2500 lines of shell and 54
         | open issues. You do have to roll your own hook script though.
         | 
         | Curiously, first commits for both acme.sh and dehydrated were
         | in December 2015. Maybe they both took a security class at uni
         | that fall.
        
       | skywhopper wrote:
       | I identify with this so much because of my own revulsion for the
       | ACME protocol and the available tooling for using it--and SSL
       | tooling in general for that matter--and because this is also
       | representative of my process for figuring out this sort of low
       | priority technical issue that I have to _understand_ before I can
       | implement, in a way that clearly most folks in the industry don't
       | care about understanding.
        
       | tux3 wrote:
       | JOSE/JWK is indeed some galactically overengineered piece of
       | spec, but the rest seems.. fine?
       | 
       | There are private keys and hash functions involved. But base64url
       | and json aren't the worst web crimes to have been inflicted upon
       | us. It's not _that_ bad, is it?
        
         | oneplane wrote:
         | I personally don't see the overengineering in JOSE; as you
         | mention, a JWK (and JWKs) is not much more than the RSA key
         | data we already know and love but formatted for Web and HTTP.
         | It doesn't get more reasonable than that. JWTs, same story,
         | it's just JSON data with a standard signature.
         | 
         | The spec (well, the RFC anyway) is indeed classically RFC-ish,
         | but the same applies to HTTP or TCP/IP, and I haven't seen the
         | same sort of complaints about those. Maybe it's just resistance
         | to change? Most of the specs (JOSE, ACME etc) aren't really
         | complex for the sake of complexity, but solve problems that
         | aren't simple problems to solve simply in a simple fashion. I
         | don't think that's bad at all, it's mostly indicative of the
         | complexity of the problem we're solving.
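To make the "just RSA key data formatted for the web" point concrete: a public RSA JWK is the modulus and exponent as unpadded base64url of big-endian bytes (RFC 7518), and the RFC 7638 thumbprint, which ACME uses to identify the account key, is a hash over a canonical serialization of exactly those members. A sketch with toy numbers; a real modulus is 2048+ bits:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def rsa_public_jwk(n: int, e: int) -> dict:
    """RFC 7518 sec. 6.3: 'n' and 'e' are big-endian bytes,
    base64url-encoded without padding."""
    def int_b64(i: int) -> str:
        return b64url(i.to_bytes((i.bit_length() + 7) // 8, "big"))
    return {"kty": "RSA", "n": int_b64(n), "e": int_b64(e)}

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638: SHA-256 over the required members serialized with
    sorted keys and no whitespace. ACME uses this as the account-key
    identifier inside challenge key authorizations."""
    required = {k: jwk[k] for k in ("e", "kty", "n")}
    canonical = json.dumps(required, separators=(",", ":"),
                           sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

# Toy modulus; note e=65537 always encodes as the familiar "AQAB".
jwk = rsa_public_jwk(n=3233, e=65537)
thumb = jwk_thumbprint(jwk)
```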
        
           | lmz wrote:
           | Imagine coming from JWK and having to encode that public key
           | into a CSR or something with that attitude.
        
             | oneplane wrote:
             | Imagine writing your own security software when there are
             | proven systems that just take that problem out of your
             | hands so you don't need to complain about it.
        
               | lmz wrote:
               | I'm _agreeing_ with you that the author is complaining
                | too much. Going the other way they would probably go
                | "and then we have to somehow encode the numbers '1 2 840
                | 113549 1 1' to mark the key type".
        
           | unscaled wrote:
           | I would argue that JOSE _is_ complex for the sake of
            | complexity. It's not nearly as bad as old cryptographic
           | standards (X.509 and the PKCS family of standards) and
           | definitely much better than XMLDSig, but it's still a lot
           | more complex than it needs to be.
           | 
           | Some examples of gratuitous complexity:
           | 
            | 1. Supporting too many goddamn algorithms. Keeping RSA
            | and HMAC-SHA256 for legacy-compatible stuff, and Ed25519
            | and XChaCha20-Poly1305 for regular use would have been
            | better. Instead we support both RSA with PKCS#1 v1.5
            | signatures and RSA-PSS with MGF1, as well as ECDH with
            | every possible curve in theory (in practice only 3 NIST
            | prime curves).
           | 
           | 2. Plethora of ways to combine JWE and JWS. You can encrypt-
           | then-sign or sign-then-encrypt. You can even create multiple
           | layers of nesting.
           | 
           | 3. Different "typ"s in the header.
           | 
           | 4. RSA JWKs can specify the d, p, q, dq, dp and qi values of
           | the RSA private key, even though everything can be derived
           | from "p" and "q" (and the public modulus and exponent "n" and
           | "e").
           | 
           | 5. JWE supports almost every combination of key encryption
           | algorithm, content encryption algorithm and compression
           | algorithm. To make things interesting, almost all of the
           | options are insecure to a certain degree, but if you're not
           | an expert you wouldn't know that.
           | 
           | 6. Oh, and JWE supports password-based key derivation for
           | encryption.
           | 
            | 7. On the other hand, JWS is smarter. It doesn't need
            | this fancy shmancy password-based key derivation
            | thingamajig! Instead,
           | you can just use HMAC-SHA256 with any key length you want. So
           | if you fancy encrypting your tokens with a cool password like
           | "secret007" and feel like you're a cool guy with sunglasses
           | in a 1990s movie, just go ahead!
           | 
            | This is just some of the things off the top of my head. JOSE
           | is bonkers. It's a monument to misguided overengineering. But
           | the saddest thing about JOSE is that it's still much simpler
           | than the standards which predated it: PKCS#7/CMS, S/MIME and
           | the worst of all - XMLDSig.
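On point 4 in the list above: the redundant RSA private-key members really are one-liners to derive; they exist so constrained hardware can skip the modular inversions. A sketch using the textbook toy primes; real keys use primes of roughly 1024 bits or more:

```python
import math

def rsa_private_members(p: int, q: int, e: int) -> dict:
    """Derive every private JWK member from the two primes and the
    public exponent: d via the Carmichael totient, then the CRT
    values dp, dq and qi."""
    n = p * q
    d = pow(e, -1, math.lcm(p - 1, q - 1))  # private exponent
    return {
        "n": n, "e": e, "d": d,
        "dp": d % (p - 1),    # CRT exponent modulo p
        "dq": d % (q - 1),    # CRT exponent modulo q
        "qi": pow(q, -1, p),  # CRT coefficient, q^-1 mod p
    }

# Textbook toy primes p=61, q=53:
k = rsa_private_members(61, 53, 17)
assert pow(pow(42, k["e"], k["n"]), k["d"], k["n"]) == 42  # round-trip
```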
        
             | oneplane wrote:
             | It's bonkers if you don't need it, just like JSONx (JSON-
             | as-XML) is bonkers if you don't need it. But standards
             | aren't for a single individual need, if they were they
             | wouldn't be standards. And some people DO need these
             | variations.
             | 
             | Take your argument about order of operations or algorithms.
             | Just because you might not need to do it in an alternate
             | order or use a legacy (and broken) algorithm doesn't mean
             | nobody else does. Keep in mind that this standard isn't
             | exactly new, and isn't only used in startups in San
             | Francisco. There are tons of systems that use it that might
             | only get updated a handful of times each year. Or long-
             | lived JWTs that need to be supported for 5 years. Not going
             | to replace hardware that is out on a pole somewhere just
             | because someone thought the RFC was too complicated.
             | 
             | Out of your arguments, none of them require you to do it
             | that way. Example: you don't have to supply d, dq, dp or qi
             | if you don't want to. But if you communicate with some
             | embedded device that will run out of solar power before it
             | can derive them from the RSA primitives, you will
             | definitely help it by just supplying it on the big beefy
             | hardware that doesn't have that problem. It allows you to
             | move energy and compute cost wherever it works best for the
             | use case.
             | 
             | Even simpler: if you use a library where you can specify a
             | RSA Key and a static ID, you don't have to think about any
             | of this; it will do all of it for you and you wouldn't even
             | know about the RFC anyway.
             | 
             | The only reason someone would need to know the details is
             | if you don't use a library or if you are the one writing
             | it.
        
             | folmar wrote:
             | `cfssl` shows how easy could getting certificates signed be
             | for the typical use case.
             | 
             | https://github.com/cloudflare/cfssl
        
         | unscaled wrote:
         | Yes, JOSE is certainly overengineered and JWK is arguably
         | somewhat overengineered as well.
         | 
         | But "the rest" of ACME also include X.509 certificates and
         | PKCS#10 Certificate Signing Requests, which are in turn based
         | on ASN.1 (you're fortunate enough you only need DER encoding)
         | and RSA parameters. ASN.1 and X.509 are devilishly complex if
         | you don't let openssl do everything for you and even if you do.
         | The first few paragraphs are all about making the correct CSR
         | and dealing with RSA, and encoding bigints the right way (which
         | is slightly different between DER and JWK to make things more
         | fun).
         | 
         | Besides that I don't know much about the ACME spec, but the
         | post mentions a couple of other things :
         | 
         | So far, we have (at least): RSA keys, SHA256 digests, RSA
         | signing, base64 but not really base64, string concatenation,
         | JSON inside JSON, Location headers used as identities instead
         | of a target with a 301 response, HEAD requests to get a single
         | value buried as a header, making one request (nonce) to make
         | ANY OTHER request, and there's more to come.
         | 
         | This does sound quite complex. I'm just not sure how much
         | simpler ACME could be. Overturning the clusterfuck that is
         | ASN.1, X.509 and the various PKCS#* standards has been a lost
         | cause for decades now. JOSE is something I would rather do
          | without, but if you're writing an IETF RFC, your only other
         | option is CMS[1], which is even worse. You can try to offer a
         | new signature format, but that would be shut down for being
         | "simpler and cleaner than JOSE, but JOSE just has some warts
         | that need to be fixed or avoided"[2].
         | 
         | I think the things you're left with that could have been
         | simplified _and_ accepted as a standard are the APIs
         | themselves, like getting a nonce with a HEAD request and
         | storing identifiers in a Location header. Perhaps you could
         | have removed signatures (and then JOSE) completely and rely on
          | client IDs and secrets since we're already running over TLS,
         | but I'm not familiar enough with the protocol to know what
         | would be the impact. If you really didn't need any PKI for the
         | protocol itself here, then this is a magnificent edifice of
         | overengineering indeed.
         | 
         | [1] https://datatracker.ietf.org/doc/html/rfc5652 [2]
         | https://mailarchive.ietf.org/arch/msg/cfrg/4YQH6Yj3c92VUxqo-...
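The parts listed in the quoted paragraph (base64url-but-not-base64, JSON inside JSON, string concatenation, the nonce) compose like this. A sketch of the JWS signing input an ACME client builds per RFC 8555; every URL and value here is hypothetical:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """The "base64 but not really base64": URL-safe alphabet,
    padding stripped."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jws_signing_input(nonce: str, url: str, kid: str,
                      payload: dict) -> bytes:
    """Protected header and payload, each JSON-serialized, base64url'd
    and joined with '.': the exact byte string the account key signs.
    The nonce comes from a prior HEAD request to the server's newNonce
    endpoint -- the "one request (nonce) to make ANY OTHER request"
    dance from the quote."""
    protected = {"alg": "RS256", "kid": kid, "nonce": nonce, "url": url}
    return (b64url(json.dumps(protected).encode()) + "." +
            b64url(json.dumps(payload).encode())).encode()

# Hypothetical endpoints and nonce:
signing_input = jws_signing_input(
    nonce="hypothetical-nonce",
    url="https://acme.example/new-order",
    kid="https://acme.example/acct/1",
    payload={"identifiers": [{"type": "dns", "value": "example.org"}]})
```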
        
           | GoblinSlayer wrote:
           | >ASN.1 and X.509 are devilishly complex
           | 
           | Most of it is unused though, only CN, SANs and public key are
           | used.
        
             | tialaramex wrote:
             | You really shouldn't need CN (but it's convenient for
             | humans) however there are a bunch of other interesting
              | things in the X.509 certificate, let's look at the one for
             | this site:
             | 
             | Issuer: We need to know who issued this cert, then we can
             | check whether we trust them and whether the signature on
             | the certificate is indeed from them, and potentially repeat
             | this process - this cert was issued by Let's Encrypt's E5
             | intermediate
             | 
             | Validity: We need to know when this cert was or will be
             | valid, a perfectly good certificate for 2019 ain't much
             | good now, this one is valid from early May until early
             | August
             | 
             | Now we get a public key, in this case a nice modern
             | elliptic curve P-256 key
             | 
             | We need to know how the signature works, in this case it's
             | ECDSA with SHA-384
             | 
             | And we need a serial number for the certificate, this
             | unique number helps sidestep some nasty problems and also
             | gives us an easy shorthand if reporting problems,
             | 05:6B:9D:B0:A1:AE:BB:6D:CA:0B:1A:F0:61:FF:B5:68:4F:5A will
              | never be any other cert, only this one.
             | 
             | We get a mandatory notice that this particular certificate
             | is NOT a CA certificate, it's just for a web server, and we
             | get the "Extended key use" which says it's for either
             | servers or for clients (Let's Encrypt intends to cease
             | offering "for client" certificates in the next year or so,
             | today they're the default)
             | 
             | Then we get a URL for the CRL where you can find out if
             | this certificate (or others like it) were revoked since
             | issuance, info with a URL for OCSP (also going away soon)
             | and a URL where you can get your own copy of the issuer's
             | certificate if you somehow do not have that.
             | 
             | We get a policy OID, this is effectively a complicated way
             | to say "If you check Let's Encrypt's formal policy
             | documents, this certificate was specifically issued under
             | the policy identified with this OID", these do change but
             | not often.
             | 
             | Finally we get two embedded SCTs, these are proof that two
             | named Certificate Transparency Log services have seen this
             | certificate, or rather, the raw data in the certificate,
             | although they might also have the actual certificate.
             | 
             | So, quite a lot more than you listed.
             | 
             | [A correct decoder also needs to actually verify the
             | signature, I did not list that part, obviously ignoring the
             | signature would be a bad idea for a live system as then
             | anybody can lie about anything]
        
               | bostik wrote:
                | > _You really shouldn't need CN (but it's convenient for
               | humans)_
               | 
               | Is this finally the case now? About 5 years back IIS
               | would fail to load a certificate without a CN, despite
               | the field being deprecated since 2000.
               | 
               | And who would run IIS? A bunch of janky/dodgy/shady
               | marketing affiliates, at least. Quite common in the
               | gambling industry.
        
               | tialaramex wrote:
               | I am _definitely_ not telling you that it will work in
                | all software, yeah. Only that it should work and,
                | well, that's not useful engineering advice.
        
       | 1a527dd5 wrote:
       | I don't understand the tone of aggression against ACME and their
       | plethora of clients.
       | 
       | I know it isn't a skill issue because of who the author is. So I
       | can only imagine it is some sort of personal opinion that they
       | dislike ACME as a concept or the tooling around ACME in general.
       | 
        | We've been using LE for a while (since 2019 I think) for a
        | handful of sites, and the best nonsense client _for us_ was
       | https://github.com/do-know/Crypt-LE/releases.
       | 
       | Then this year we've done another piece of work this time against
       | the Sectigo ACME server and le64 wasn't quite good enough.
       | 
       | So we ended up trying:-
       | 
       | - https://github.com/certbot/certbot on GitHub Actions, it was
       | fine but didn't quite like the locked down environment
       | 
       | - https://github.com/go-acme/lego huge binary, cli was
       | interestingly designed and the maintainer was quite rude when
       | raising an issue
       | 
       | - https://github.com/rmbolger/Posh-ACME our favourite, but we
       | ended up going with certbot on GHA once we fixed the weird issues
       | around permissions
       | 
        | Edit: Re-read it. The tone isn't aimed at ACME or the
        | clients. It's the spec itself. ACME idea good, ACME
        | implementation bad.
        
         | immibis wrote:
         | Some people don't want to be forced to run a bunch of stuff
         | they don't understand on the server, and I agree with them.
         | 
         | Sadly, security is a cat and mouse game, which means it's
         | always evolving and you're forced to keep up - and it's
         | inherent by the nature of the field, so we can't really blame
         | anyone (unlike, say, being forced to integrate with the latest
         | Google services to be allowed on the Play Store). At least you
         | get to write your own ACME client if you want to. You don't
         | have to use certbot, and there's no TPM-like behaviour locking
         | you out of your own stuff.
        
           | spockz wrote:
           | Given that keys probably need to be shared between multiple
           | gateway/ingresses, how common is it to just use some HSM or
           | another mechanism of exchanging the keys with all the
           | instances? The acme client doesn't have to run on the servers
           | itself.
        
             | tialaramex wrote:
             | > The acme client doesn't have to run on the servers
             | itself.
             | 
             | This is really important to understand if you care about
             | either: Actually engineering security at some scale or
             | knowing what's actually going on in order to model it
             | properly in your head.
             | 
             | If you just want to make a web site so you can put up a
             | blog about your new kitten, any of the tools is fine, you
             | don't care, click click click, done.
             | 
             | For somebody like Rachel or many HN readers, knowing enough
             | of the technology to understand that the ACME client
             | needn't run on your web servers is crucial. It also means
             | you know that when some particular client you're evaluating
             | needs to run on the web server that it's a limitation of
             | that client not of the protocol - birds can't all fly, but
             | flying is totally one of the options for birds, we should
             | try an eagle not an emu if we want flying.
        
             | immibis wrote:
             | You could if your domain was that valuable. Most aren't.
        
           | hannob wrote:
            | > Some people don't want to be forced to run a bunch of
            | > stuff they don't understand on the server, and I agree
            | > with them.
           | 
           | Honest question:
           | 
           | * Do you understand OS syscalls in detail?
           | 
           | * Do you understand how your BIOS initializes your hardware?
           | 
           | * Do you understand how modern filesystems work?
           | 
           | * Do you understand the finer details of HTTP or TCP?
           | 
           | Because... I don't. But I know enough about them that I'm
           | quite convinced each of them is a lot more difficult to
           | understand than ACME. And all of them _and a lot more stuff_
           | are required if you want to run a web server.
        
             | frogsRnice wrote:
             | Sure - but people are still free to decide where they draw
             | the line.
             | 
             | Each extra bit of software is an additional attack surface
             | after all
        
             | sussmannbaka wrote:
             | This point is so tired. I don't understand how a thought
             | forms in my neurons, eventually matures into a decision and
             | how the wires in my head translate this into electrical
             | pulses to my finger muscles to type this post so I guess I
             | can't have opinions about complexity.
        
             | kjs3 wrote:
             | I hear some variation of this line of 'reasoning' about
             | once a week, and it's always followed by some variation of
             | "...and that's why we shouldn't have to do all this
             | security stuff you want us to do".
        
             | fc417fc802 wrote:
             | An OS is (at least generally) a prerequisite. If minimalism
             | is your goal then you'd want to eliminate tangentially
             | related things that aren't part of the underlying
             | requirements.
             | 
             | If you're a fan of left-pad I won't judge but don't expect
             | me to partake without bitter complaints.
        
             | snowwrestler wrote:
             | I get where you're going with this, but in this particular
             | case it might not be relevant because there's a decent
             | chance that Rachel By The Bay does actually understand all
             | those things.
        
           | g-b-r wrote:
           | > Some people don't want to be forced to run a bunch of stuff
           | they don't understand on the server
           | 
           | It's not just about not understanding, it's that more complex
           | stuff is inherently more prone to security vulnerabilities,
           | however well you think you reviewed its code.
        
             | Avamander wrote:
             | > It's that more complex stuff is inherently more prone to
             | security vulnerabilities
             | 
             | That's overly simplifying it and ignores the part where the
             | simple stuff is not secure to begin with.
             | 
             | In the current context you could take a HTTP client with a
             | formally verified TLS stack, would you really say it's
             | inherently more vulnerable than a barebones HTTP client
             | talking to a server over an unencrypted connection? I'd say
             | there's a lot more exposed in that barebones client.
        
               | g-b-r wrote:
               | The alternative of the article was ACME vs other ways of
               | getting TLS certificates, not https vs http.
               | 
                | Of course plain http would generally be much more
                | dangerous than an encrypted connection, however complex.
           | tptacek wrote:
           | Non-ACME certs are basically over. The writing has been on
           | the wall for a long time. I understand people being squeamish
           | about it; we fear change. But I think it's a hopeful thing:
           | the Web PKI is evolving. This is what that looks like: you
            | can't evolve _and_ retain everyone's prior workflows, and
           | that has been a pathology across basically all Internet
           | security standards work for decades.
        
             | ipdashc wrote:
             | ACME is cool (compared to what came before it), but I'm
             | kind of sad that EV certs never seemed to pan out at all. I
             | feel like they're a neat concept, and had the potential to
             | mitigate a lot of scams or phishing websites in an ideal
             | world. (That said, discriminating between "big companies"
             | and "everyone else who can't afford it" would definitely
             | have some obvious downsides.) Does anyone know why they
             | never took off?
        
               | johannes1234321 wrote:
               | > Does anyone know why they never took off?
               | 
               | Browser vendors at some point claimed it confused users
               | and removed the highlight (I think the same browser
               | vendors who try to remove the "confusing" URL bar ...)
               | 
               | Aside from that EV certificates are slow to issue and
               | phishers got similar enough EV certs making the whole
               | thing moot.
        
               | tialaramex wrote:
               | EV can't actually work. It was always about branding for
               | the for-profit CAs so that they have a premium product
               | which helps the line go up. Let me give you a brief
               | history - you did ask.
               | 
               | In about 2005, the browser vendors and the Certificate
               | Authorities began meeting to see if they could reach some
               | agreement as neither had what they wanted and both might
               | benefit from changes. This is the creation of the
               | CA/Browser Forum aka CA/B Forum which still exists today.
               | 
               | From the dawn of SSL the CAs had been on a race to the
               | bottom on quality and price.
               | 
               | Initially maybe somebody from a huge global audit firm
                | that owns a CA turns up on a plane and talks to your VP of
               | New Technology about this exciting new "World Wide Web"
               | product, maybe somebody signs a $1M deal over ten years,
               | and they issue a certificate for "Huge Corporation, Inc"
               | with all the HQ address details, etc. and oh yeah,
               | "www.hugecorp.example" should be on there because of that
               | whole web thing, whatever that's about. Nerd stuff!
               | 
               | By 2005 your web designer clicks a web page owned by some
               | bozo in a country you've never heard of, types in the
               | company credit card details, company gets charged $15
               | because it's "on sale" for a 3 year cert for
                | www.mycompany.example and mail.mycompany.example is thrown
                | in
               | for free, so that's nice. Is it secure? Maybe? I dunno, I
               | think it checked my email address? Whatever. The "real
               | world address" field in this certificate now says "Not
               | verified / Not verified / None" which is weird, but maybe
               | that's normal?
               | 
               | The CAs can see that if this keeps up in another decade
               | they'll be charging $1 each for 10 year certificates,
               | they need a better product and the browser vendors can
               | make that happen.
               | 
               | On the other hand the browser vendors have noticed that
               | whereas auditors arriving by aeroplane was a bit much,
               | "Our software checked their email address matched in the
               | From line" is kinda crap as an "assurance" of "identity".
               | 
               | So, the CA/B Baseline Requirements aka BRs are one
               | result. Every CA agreed they'd do at least what the
               | "baseline" required and in practice that's basically all
               | they do because it'd cost extra to do more so why bother.
               | The BRs started out pretty modest - but it's amazing what
               | you find people were doing when you begin writing down
               | the basics of what obviously they shouldn't do.
               | 
               | For example, how about "No issuing certificates for names
               | which don't exist" ? Sounds easy enough right? When
               | "something.example" first comes into existence there
               | shouldn't already be certificates for "something.example"
               | because it didn't exist... right? Oops, lots of CAs had
               | been issuing those certificates, reasoning that it's
               | probably fine and hey, free money.
               | 
                | Gradually the BRs got stricter, improving the quality of
                | this baseline product in terms of both the technology and
                | the business processes. This has been an enormous boon:
                | because it's an industry-wide agreement, it ratchets
                | things up for everybody, so there's no race to the bottom
                | on quality, since your competitors aren't allowed to do
                | worse than the baseline. On price, the same can't be
                | said; zero-cost certificates are what Let's Encrypt is
                | most famous for, after all.
               | 
               | The other side of the deal is what the CAs wanted, they
               | wanted UI for their new premium product. That's EV.
               | Unlike many of the baseline requirements, this is very
               | product focused (although to avoid being an illegal
               | cartel it is forbidden for CA/B Forum to discuss
               | products, pricing etc.) and so it doesn't make much
               | technical sense.
               | 
               | The EV documents basically say you get all of the
               | Baseline, plus we're going to check the name of your
               | business - here's how we'll check - and then the web
               | browser is going to display that name. It's implied that
               | these extra checks cost more money (they do, and so this
               | product is much more expensive). So improvements to that
               | baseline do help still, but they also help everybody who
               | didn't buy the premium EV product.
               | 
               | Now, why doesn't this work in practice? The DNS name or
               | IP address in an "ordinary" certificate can be compared
               | to reality automatically by the web browser. This site
               | says it is news.ycombinator.com, it has a certificate for
               | news.ycombinator.com, that's the same OK. Your browser
               | performs this check, automatically and seamlessly, for
               | every single HTTP transaction. Here on HN that's per page
               | load, but on many sites you're doing transactions as you
               | click UI or scroll the page, each is checked.
               | 
                | With EV the checks must be done by a human: is this site
                | really "Bob's Burgers"? Actually wait, is it really
                | "Bob's Burgers of Ohio, US"? Worse: probably, although
               | you know them as Bob's Burgers, legally, as you'd see on
               | their papers of incorporation they are "Smith Restaurant
               | Holdings Inc." and they're registered in Delaware because
               | of course they are.
               | 
               | So now you're staring at the formal company name of a
                | business and trying to guess whether that's legitimate or
               | a scam. But remember you can't just do this check once,
               | scammers might find a way to redirect some traffic and so
               | you need to check every individual transaction like your
               | web browser does. Of course it's a tireless machine and
               | you are not.
               | 
               | So in practice this isn't effective.
        
               | immibis wrote:
               | Still sounds better than nothing. And gives companies an
               | incentive to register under their actual names.
        
               | tialaramex wrote:
               | I'm not convinced on either, the mindless automation is
               | always effective so you just don't need to think about
               | it, whereas for EV you need to intimately understand
               | exactly which transactions you verified and what that
               | means - the login HTML was authentic but you didn't check
               | the Javascript? The entire login page was checked but
               | HTTP POST of your password was not? The redirect to
               | payment.mybank.example wasn't checked? Only the images
               | were checked?
               | 
               | Imagine explaining to my mother how to properly check
               | this, then imagine explaining why the check she just made
               | is wrong now because the bank changed how their login
               | procedure works.
               | 
               | We _could_ have attempted something with better security,
                | although nowhere close to foolproof, but the CAs were
               | focused on a profitable product not on improving
               | security, and I do not expect anyone to have another bite
               | of that cherry.
               | 
               | As to the incentive to register, this is a cart v horse
               | problem. Most businesses do not begin with a single
               | unwavering vision of their eventual product and branding,
               | they iterate, and that means the famous branding will
               | need an expensive corporate change just to make the EV
               | line up properly, that's just not going to happen much of
               | the time, so people get used to seeing the "wrong" name
               | and once that happens this is worthless.
               | 
               | Meanwhile crooks can spend a few bucks to register a
               | similar-sounding name and registration authorities don't
               | care, while the machine sees at a glance the differences
               | between bobs-burgers.example and robs-burgers.example and
               | bobsburgers.example, the analogous business registrations
               | look similar enough that humans would click right past.
        
               | ipdashc wrote:
               | This was a fun read. Thanks for the explanation!
        
               | bandrami wrote:
               | Phishers also got EV certs.
               | 
               | The big problem with PKI is that there are known bad (or
               | at least sketchy) actors on the big CA lists that
               | realistically can't be taken off that list.
        
               | akerl_ wrote:
               | What's an example?
               | 
               | We're in an era where browsers have forced certificate
               | transparency and removed major vendor CAs when they've
               | issued certificates in violation of the browsers'
               | requirements.
               | 
               | The concern about bad/sketchy CAs in the list feels
               | dated.
        
               | bandrami wrote:
               | Look at the list of state actors in your certificates
               | bundle, for a start
        
               | solatic wrote:
               | How big of a problem is it really, with CAA records and
               | FIDO2 or passkeys?
               | 
               | CAA makes sure only one CA signs the cert for the real
                | domain. FIDO2 prevents phishing on a similar-looking
               | domain. EV would force a phisher to get a similar-looking
               | corporate name, but it's beside the main FIDO2
               | protection.
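                | A CAA record pinning issuance to a single CA is a
                | one-line zone entry. A sketch (domain, CA, and contact
                | address are all placeholders):

```
; BIND-style zone file sketch; example.com is a placeholder
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

                | The "issue" tag restricts which CA may sign; "iodef"
                | tells conforming CAs where to report violation attempts.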
        
               | amiga386 wrote:
               | Because they actively thwarted security.
               | 
               | https://arstechnica.com/information-
               | technology/2017/12/nope-...
               | 
               | https://web.archive.org/web/20191220215533/https://stripe
               | .ia...
               | 
               | > this site uses an EV certificate for "Stripe, Inc",
               | that was legitimately issued by Comodo. However, when you
               | hear "Stripe, Inc", you are probably thinking of the
               | payment processor incorporated in Delaware. Here, though,
               | you are talking to the "Stripe, Inc" incorporated in
               | Kentucky.
               | 
               | There's a lot of validation that's difficult to get
               | around in DV (Domain Validation) and in DNS generally.
               | Unless you go to every legal jurisdiction in the world
                | and open businesses and file for and be granted trademarks,
               | you _cannot_ guarantee that no other person will have the
               | same visible EV identity as you.
               | 
               | It's up to visitors to know that apple.com is where you
               | buy Apple stuff, while apple.net, applecart.com,
                | 4ppl3.com, arrle.som, example.com/https://apple.com
               | are not. But if they can manage that, they can trust
               | apple.com more than they could any URL with an "Apple,
                | Inc." EV certificate. Browsers that show the URL bar tend
                | to highlight the top-level domain prominently, and they
                | reject DNS names with mixed scripts, to stop scammers
                | fooling you. It's working better than EV.
        
           | throw0101b wrote:
            | > _Some people don't want to be forced to run a bunch of
           | stuff they don't understand on the server, and I agree with
           | them._
           | 
            | There are a number of shell-based ACME clients whose only
            | prerequisites are OpenSSL and cURL. You're probably relying
            | on OpenSSL and cURL for a bunch of things already.
           | 
           | If you can read shell code you can step through the logic and
           | understand what they're doing. Some of them (e.g., acme.sh)
           | often run as a service user (e.g., default install from
           | FreeBSD ports) so the code runs unprivileged: just add a
           | _sudo_ (or _doas_ ) config to allow it to restart
           | Apache/nginx.
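            | That sudo/doas grant can be a single rule. A doas.conf
            | sketch (the "acme" user name and paths are assumptions)
            | letting the service user reload nginx and nothing else:

```
# /usr/local/etc/doas.conf (FreeBSD; "acme" user and paths are assumptions)
permit nopass acme as root cmd /usr/sbin/service args nginx reload
```

            | Because the args clause is exact, the acme user can reload
            | nginx but can't stop it or run any other service command.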
        
         | diggan wrote:
         | > I don't understand the tone of aggression against ACME and
         | their plethora of clients.
         | 
         | The older posts on the same website provided a bit more context
         | for me to understand today's post better:
         | 
         | - "Why I still have an old-school cert on my https site" -
         | January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/
         | 
         | - "Another look at the steps for issuing a cert" - January 4,
         | 2023 - https://rachelbythebay.com/w/2023/01/04/cert/
        
         | lucideer wrote:
          | > _I don't understand the tone of aggression against ACME and
         | their plethora of clients._
         | 
         | > _ACME idea good, ACME implementation bad._
         | 
         | Maybe I'm misreading but it sounds like you're on a similar
         | page to the author.
         | 
         | As they said at the top of the article:
         | 
         | > _Many of the existing clients are also scary code, and I was
          | not about to run any of them on my machines. They haven't
         | earned the right to run with privileges for my private keys
         | and/or ability to frob the web server (as root!) with their
         | careless ways._
         | 
          | This might seem harsh, but I think it's a pretty fair
          | perspective to have when running security-sensitive processes.
        
           | giancarlostoro wrote:
           | Im not a container guru by any means (at least not yet?) but
           | would docker not suffice these concerns?
        
             | TheNewsIsHere wrote:
             | My reading of the article suggested to me that the author
             | took exception to the code that touched the keying
             | material. Docker is immaterial to that problem. I won't
              | presume to speak for Rachel By The Bay (mother didn't raise a
             | fool, after all), but I expect Docker would be met with a
             | similar regard.
             | 
             | Which I do understand. Although I use Docker, I mainly use
             | it personally for things I don't want to spend much time
             | on. I don't really like it over other alternatives, but it
             | makes standing up a lab service stupidly easy.
        
             | fpoling wrote:
             | The issue is that the client needs to access the private
             | key, tell web server where various temporary files are
             | during the certificate generation (unless the client uses
             | DNS mode) and tell the web server about a new certificate
             | to reload.
             | 
             | To implement that many clients run as a root. Even if that
              | root is in a docker container, this is needlessly elevated
             | privileges especially given the complexity (again,
             | needless) of many clients.
             | 
             | The sad part is that it is trivial to run most of the
             | clients with an account with no privileges that can access
             | very few files and use a unix socket to tell the web server
             | to reload the certificate. But this is not done.
             | 
              | And then, ideally, web servers should, if not implement,
              | then at least facilitate ACME protocol implementations:
              | for example, redirecting validation requests from ACME
              | servers to another port with a one-liner in the config.
              | But this is not the case.
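              | The one-liner redirect described above could look like
              | this in nginx (the port number is an arbitrary choice):

```
# nginx sketch: hand ACME HTTP-01 challenges to an unprivileged
# responder on a high port instead of serving them as root
location /.well-known/acme-challenge/ {
    proxy_pass http://127.0.0.1:8402;
}
```

              | The challenge responder then needs no access to the web
              | server's keys, config, or privileged ports.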
        
               | GoblinSlayer wrote:
               | It's cheap. If the client was done today, it would be
               | based on AI.
        
               | ptx wrote:
               | Apache comes with built-in ACME support. Just enable the
               | mod_md module:
               | https://httpd.apache.org/docs/2.4/mod/mod_md.html
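                | A minimal sketch of that setup, going by the mod_md
                | docs (domain names are placeholders):

```
# httpd.conf fragment; mod_md obtains and renews the certs itself
LoadModule md_module modules/mod_md.so

MDomain example.com www.example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
</VirtualHost>
```

                | Note there's no SSLCertificateFile: mod_md supplies the
                | certificate for the managed domain on its own.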
        
               | tialaramex wrote:
               | But the requirements you listed aren't actually
               | requirements of ACME, they're lazy choices you _could_
                | make, but they aren't necessary. Some clients do better.
               | 
               | For example the client needs a Certificate Signing
               | Request, one way to achieve that is to either have the
               | client choose the private keys or give it access to a
               | chosen key, but the whole _point_ of a CSR is that you
                | don't need the private key; the CSR can be made by
               | another system, including manually by a human and it can
               | even be re-used repeatedly so that you don't need new
               | ones until you decide to replace your keys.
               | 
               | Yes, if we look back at my hopes when Let's Encrypt
               | launched we can be very disappointed that although this
               | effort was a huge success almost all the server vendors
               | continued to ship garbage designed for a long past era
               | where HTTPS is a niche technology they barely support.
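                | As a sketch of that split: the key and a reusable CSR
                | can be made offline with plain OpenSSL (1.1.1+ for
                | -addext; file names and domains are placeholders), so
                | the ACME client only ever handles the CSR:

```shell
# Generate the private key once, on a machine the ACME client never touches.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out example.key

# Build a CSR for the names you want; it can be re-submitted at every
# renewal until you decide to rotate the key.
openssl req -new -key example.key -subj "/CN=example.com" \
    -addext "subjectAltName=DNS:example.com,DNS:www.example.com" \
    -out example.csr

# The CSR's signature can be checked without the private key.
openssl req -in example.csr -verify -noout
```

                | The client then only needs read access to example.csr
                | and somewhere to write the issued certificate chain.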
        
               | toast0 wrote:
               | I don't know that it's accurate, but at the beginning, it
               | felt like using certbot was the only supported way to use
               | ACME/LE, and it really wanted to do stuff as root and
               | restart your webserver whenever.
               | 
               | Or you could run Caddy which had a built in ACME client,
               | but then you're running an extra daemon.
               | 
               | apache_mod_md eventually came along which works for me,
               | but it's also got some lazy things (it mostly just
               | manages requesting certs, you've got to have a frequent
               | enough reload to pick them up; I guess that's ok because
               | I don't think public Apache ever learned to periodically
               | check if it needs to reopen access logs when they're
               | rotated, so you probably reload Apache from time to time
               | anyway)
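                | For what it's worth, the "frequent enough reload" can
                | be a single cron entry (the schedule is arbitrary):

```
# root crontab sketch: graceful reload so Apache picks up renewed certs
# and reopens rotated logs
0 4 * * 0 apachectl graceful
```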
               | 
               | Before that was workable, I did need some certs and used
               | acme.sh by hand, and it was nicer than trusting a big
               | thing running in a cron and restarting things, but it was
                | also inconvenient because I had to remember to go do it.
        
               | tialaramex wrote:
               | > I don't know that it's accurate, but at the beginning,
               | it felt like using certbot was the only supported way to
               | use ACME/LE, and it really wanted to do stuff as root and
               | restart your webserver whenever.
               | 
                | It's fair to say that on day one the only launch client
                | was Certbot, although it wasn't called "Certbot" yet;
                | if that's the name you remember, by then it was no
                | longer the only one. Given that it wasn't guaranteed
                | this would be a success (like the American Revolution
                | or the Harry Potter books, it seems obvious in
                | hindsight, but hindsight comes too late), it's
                | understandable that they didn't
               | spend lots of money developing a variety of clients and
               | libraries you might want.
        
             | rsync wrote:
             | Yes, it does.
             | 
             | I run acme in a non privileged jail whose file system I can
             | access from _outside_ the jail.
             | 
             | So acme sees and accesses nothing and I can pluck results
             | out with Unix primitives from the outside.
             | 
             | Yes, I use dns mode. Yes, my dns server is also a
             | (different) jail.
        
             | lucideer wrote:
             | I use docker for the same reasons as the author's
             | reservations - I combine a docker exec with some of my own
             | loose automation around moving & chmod-ing files &
             | directories to obviate the need for the acme client to have
             | unfettered root access to my system.
             | 
             | Whether it's a local binary or a dockerised one, that
             | access still needs to be marshalled either way & it can get
             | complex facilitating that with a docker container. I
             | haven't found it too bad but I'd really rather not need
             | docker for on-demand automations.
             | 
             | I give plenty* of services root access to my system, most
             | of which I haven't written myself & I certainly haven't
             | audited their code line-by-line, but I agree with the
             | author that you do get a _sense_ from experience of the
             | overall hygiene of a project  & an ACME client has yet to
             | give me good vibes.
             | 
             | * within reason
        
               | paul_h wrote:
                | Copilot suggests:
                | 
                |     docker run --rm \
                |       -v /srv/mywebsite/certs:/acme.sh/certs \
                |       -v /srv/mywebsite/public/.well-known/acme-challenge:/acme-challenge \
                |       neilpang/acme.sh --issue \
                |       --webroot /acme-challenge \
                |       -d yourdomain.com \
                |       --cert-file /acme.sh/certs/cert.pem \
                |       --key-file /acme.sh/certs/key.pem \
                |       --fullchain-file /acme.sh/certs/fullchain.pem
               | 
               | I don't know why it's suggesting `neilpang` though, as he
               | no longer has a fork.
        
               | lucideer wrote:
                | Yeah, I'm not running anything LLMs spit at me in a
                | security-sensitive context.
               | 
               | That example is not so bad - you've already pointed out
               | the main obvious supply-chain attack vector in
               | referencing a random ephemeral fork, but otherwise it's
               | certonly (presumably neil's default) so it's the simplest
                | case. Many clients have more... intrusive defaults that
                | prioritise first-run cert onboarding, which opens up
                | more surface area for error.
        
           | dwedge wrote:
           | This is the same author that threw everyone into a panic
           | about atop and turned out to not really have found anything.
        
             | ezekiel68 wrote:
             | Agreed and -- in particular -- I don't recall seeing any
             | kind of "everybody get back into the pool" follow-up after
             | the developers of atop quickly addressed the issue with an
             | update. At least not any kind of follow-up that got the
             | same kind of press as the initial alarm.
        
           | dangus wrote:
           | I disagree, the author is overcomplicating and overthinking
           | things.
           | 
           | She doesn't "trust" tooling that basically the entire
           | Internet including major security-conscious organizations are
           | using, essentially letting perfect get in the way of good.
           | 
           | I think if she were a less capable engineer she would just
           | set that shit up using the easiest way possible and forget
           | about it like everyone else, and nothing bad would happen.
           | Download nginx proxy manager, click click click, boom I have
            | a wildcard cert, who cares?
           | 
           | I mean, this is her https site, which seems to just be a
           | blog? What type of risk is she mitigating here?
           | 
           | Essentially the author is so skilled that she's letting
           | perfect get in the way of good.
           | 
           | I haven't thought about certificates for years because it's
           | not worth my time. I don't really care about the tooling,
           | it's not my problem, and it's never caused a security issue.
           | Put your shit behind a load balancer and you don't even need
           | to run any ACME software on your own server.
        
             | nothrabannosir wrote:
             | Sometimes I wonder how y'all became programmers. I learned
             | basically everything by SRE-larping on my shitty nobody-
             | cares-home-server for years and suddenly got paid to do it
             | for real.
             | 
             | Who do you think they hire to manage those LBs for you?
             | People who never ran any ACME software, or people who have
             | a blog post turning over every byte of JSON in the protocol
             | in excruciating detail?
        
           | thayne wrote:
            | No, the author seems opposed to the specification of ACME
            | itself, not just the implementation of the clients.
           | 
           | And a lot of the complaints ultimately boil down to not
           | liking JWS. And I'm not really sure what she would have
           | preferred there. ASN.1, which is even more complicated? Some
           | bespoke format where implementations can't make use of
           | existing libraries?
        
             | imtringued wrote:
             | This is exactly the impression I got here.
             | 
             | I would have had sympathy for the disdain for certbot, but
             | certbot wasn't called out and that isn't what the blog post
             | is about at all.
        
       | jeroenhd wrote:
       | There's something to be said for implementing stuff like this
       | manually for the experience of having done it yourself, but the
       | author's tone makes it sound like she hates the protocol and all
       | the extra work she needs to do to make the Let's Encrypt setup
       | work.
       | 
       | Kind of makes me wonder what kind of stack her website is running
       | on that something like a lightweight ACME library
       | (https://github.com/jmccl/acme-lw comes to mind, but there's a
       | C++ library for ESP32s that should be even more lightweight)
       | loading in the certificates isn't doing the job.
        
         | mschuster91 wrote:
         | > but the author's tone makes it sound like she hates the
         | protocol and all the extra work she needs to do to make the
         | Let's Encrypt setup work.
         | 
         | The problem is, SSL is a _fucking hot, ossified mess_. Many of
         | the noted core issues, especially the weirdnesses around
          | encoding and bitfields, are due to historical baggage of
          | ASN.1/X.509. It's not fun to deal with at all... the math alone
         | is bad enough, but the old abstractions to store all the
         | various things for the math are simply constrained by the
         | technological capabilities of the late '80s.
         | 
         | There would have been a chance to at least _partially_ reduce
         | the mess with the introduction of LetsEncrypt - basically, have
         | the protocol transmit all of the required math values in a
          | decent form and get an X.509 cert back - and HTTP/2, but that
         | wasn't done because it would have required redeveloping a bunch
         | of stuff from scratch whereas one can build an ACME CA with,
         | essentially, a few lines of shell script, OpenSSL and six
         | crates of high proof alcohol to drink away one's frustrations
         | of dealing with OpenSSL, and integrate this with all software
         | and libraries that exist there.
        
           | jeroenhd wrote:
           | There's no easy way to "just" transmit data in a foolproof
           | manner. You practically need to support CSRs as a CA anyway,
           | so you might as well use the existing ASN.1+X509 system to
           | transmit data.
           | 
           | ASN.1 and X509 aren't all that bad. It's a comprehensively
           | documented binary format that's efficient and used
           | everywhere, even if it's hidden away in binary protocols you
           | don't look at every day.
           | 
           | Unlike what most people seem to think, ACME isn't something
           | invented just for Let's Encrypt. Let's Encrypt was certainly
           | the first high-profile CA to implement the protocol, but
           | various CAs (free and paid) have their own ACME servers and
           | have had them for ages now. It's a generic protocol for
           | certificate authorities to securely do domain validation and
           | certificate provisioning that Let's Encrypt implemented
           | first.
           | 
           | The unnecessarily complex parts of the protocol when writing
           | a from-the-ground-up client are complex because ACME didn't
           | reinvent the wheel, and reused existing standard protocols
           | instead. Unfortunately, that means having to deal with JWS,
           | but on the other hand, it means most people don't need to
           | write their own ACME-JWS-replacement-protocol parsers. All
           | the other parts are complex because the problem ACME is
           | solving is actually quite complex.
           | 
           | The author wrote [another
           | post](https://rachelbythebay.com/w/2023/01/03/ssl/) about the
           | time they fell for the lies of a CA that promised an "easier"
           | solution. That solution is pretty much ACME, but with more
           | manual steps (like registering an account, entering domain
           | names).
           | 
           | I personally think that for this (and for many other
           | protocols, to be honest) XML would've been a better fit as
           | its parsers are more resilient against weird data, but these
           | days talking about XML will make people look at you like
            | you're proposing COBOL. Hell, even exchanging raw, binary
           | ASN.1 messages would probably have gone over pretty well, as
           | you need ASN.1 to generate the CSR and request the
           | certificate anyway. But, people chose "modern" JSON instead,
           | so now we're base64 encoding values that JSON parsers will
           | inevitably fuck up instead.
        
             | GoblinSlayer wrote:
             | The described protocol looks like rewording of X509 with
             | json syntax, but you still have X509, as a result you have
             | two X509. Replay nonce is used straightforwardly as serial
             | number, termsOfServiceAgreed can be extension, and CSR is
             | automatically signed in the process of generation.
        
             | schoen wrote:
             | > Unlike what most people seem to think, ACME isn't
             | something invented just for Let's Encrypt. Let's Encrypt
             | was certainly the first high-profile CA to implement the
             | protocol, but various CAs (free and paid) have their own
             | ACME servers and have had them for ages now. It's a generic
             | protocol for certificate authorities to securely do domain
             | validation and certificate provisioning that Let's Encrypt
             | implemented first.
             | 
             | This depends on whether you're speaking as a matter of
             | history. ACME was originally invented and implemented by
             | the Let's Encrypt team, but _in the hope_ that it could
             | become an open standard that would be used by other CAs.
             | That hope was eventually borne out.
        
           | schoen wrote:
           | Yes, we actually considered the "have the protocol transmit
           | all of the required math values in a decent form and get an
           | x.509 cert back" version, but some people who were interested
           | in using Let's Encrypt were apparently very keen on being
           | able to use an existing external CSR. So that became
           | mandatory in order not to have two totally separate code
           | paths for X.509-based requests and non-X.509-based requests.
           | 
           | An argument for this is that it makes it theoretically
           | possible for devices that have no knowledge of anything about
           | PKI since the year 2000, and/or no additional
           | programmability, to use Let's Encrypt certs (obtained on
           | their behalf by an external client application). I have, in
           | fact, subsequently gotten something like that to work as a
           | consultant.
        
             | mschuster91 wrote:
             | Yikes. Guessed as much. Thanks for your explanation.
             | 
             | As for oooold devices - doesn't LetsEncrypt demand key
             | lengths and hash algorithms nowadays that simply weren't
             | implemented back then?
        
               | schoen wrote:
               | Yes, I guess 2000 would actually be an exaggeration these
               | days, as you can't use SHA-1 or 1024-bit RSA subject
               | keys. So you could maintain compatibility with oold
               | devices, but not with oooold devices.
        
       | z3t4 wrote:
       | At some stage you need to update your TXT records, and if you
       | register a wildcard domain you have to do it twice for the same
       | request! And you have to propagate these TXT records _twice_ to
       | all your DNS servers, and wait for some third party like google
       | dns to request the TXT record. And it all has to be done within a
       | minute in order to not time out. DNS servers are not made to
       | change records from one second to another and rely heavily on
       | caching, so I'm lucky that I run my own DNS servers, but good
        | luck doing this if you are using something like an anycast DNS
       | service.
        
         | XorNot wrote:
         | Or just use the HTTP protocol, which works fine.
        
           | fpoling wrote:
           | For wildcard certificates DNS is the only option.
        
         | castillar76 wrote:
         | Fortunately that's only needed if you're using the DNS
         | validation method -- necessary if you're getting wildcards
         | (but...eek, wildcards). For HTTP-01, no DNS changes are needed
         | unless you want to add CAA records to block out other CAs.
        
           | elric wrote:
           | Wildcard Certificates are your friend if you don't want all
           | of your hostnames becoming public knowledge.
        
             | 12_throw_away wrote:
             | Having tried it myself, I can highly recommend a security
             | posture that doesn't depend on the secrecy of any
             | particular URL :)
        
               | elric wrote:
               | Ah yes, that old chestnut again. Because there's only
               | ever one way to secure anything, and defense in depth is
               | apparently meaningless. Comments like this are not
               | helpful.
        
             | castillar76 wrote:
             | You're not wrong: there's definitely evidence, for
             | instance, of savvy attackers watching the CT logs for
             | things like newly-instantiated WordPress servers and then
             | attacking them before the admins have set the initial
             | password on them. (Which is really a WP problem, but I
             | digress.) So there's benefit in not having the internals of
             | your infrastructure writ large in public CT logs.
             | 
             | My problem is with the selected solution: wildcard
             | certificates are a huge compromise waiting to happen. They
             | give an attacker the ability to impersonate _anything_ in
             | my infrastructure for as long as the cert is valid (and
              | even a week is a _long_ time for that). Worse, if I'm then
             | distributing the wildcard to everything on my internal
             | network that needs to do anything over HTTPS, that's a lot
             | of potential attack points. (If it's just one TLS-
             | terminating bastion host that's very tightly secured,
             | then...maybe. _Maybe_. But it almost never stays that way.)
             | 
             | To me, it's a much better security tradeoff to accept the
             | hostname problem (or run my own CA internally for stuff
             | that doesn't need a public cert) and avoid wildcards
             | entirely.
        
         | Arnavion wrote:
         | You can have multiple TXT records for the same domain
         | identifier, and the ACME server will look through all of them
         | to find the one that it expects. So for an order that requests
         | SANs example.org and *.example.org, where the server asks for
         | two authorizations to be completed for _acme-
         | challenge.example.org, you can create both TXT records at the
         | same time.
         | 
         | https://datatracker.ietf.org/doc/html/rfc8555#section-8.4
         | 
         | >2. Query for TXT *records* for the validation domain name
         | 
         | >3. Verify that the contents of *one of the TXT records* match
         | the digest value
         | 
         | (Emphasis mine.)
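A sketch of the validation step those RFC lines describe (function names are mine, not from any real ACME implementation; per RFC 8555, the published TXT value is base64url(SHA-256(keyAuthorization))):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME uses throughout."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def txt_value(key_authorization: str) -> str:
    """What the client publishes: base64url(SHA-256(keyAuthorization))."""
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

def server_validates(txt_records: list[str], key_authorization: str) -> bool:
    """RFC 8555 section 8.4: accept if *any* of the TXT records matches."""
    expected = txt_value(key_authorization)
    return any(record == expected for record in txt_records)

# Both authorizations (example.org and *.example.org) can publish their
# records under _acme-challenge.example.org at the same time:
records = [txt_value("tokenA.thumbprint"), txt_value("tokenB.thumbprint")]
assert server_validates(records, "tokenA.thumbprint")
assert server_validates(records, "tokenB.thumbprint")
```

So neither challenge's record clobbers the other; the server just scans the full TXT record set for its expected digest.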
        
       | sam_lowry_ wrote:
       | I am running an HTTP-only blog and it's getting harder every year
       | not to switch to HTTPS.
       | 
        | For instance, WhatsApp cannot open HTTP links anymore.
        
         | projektfu wrote:
         | You can proxy it, which for a small server might be the best
         | way to avoid heavy traffic, through caching at the proxy.
        
         | g-b-r wrote:
         | For god's sake, however complex ACME might be it's better than
         | not supporting TLS
        
           | sam_lowry_ wrote:
           | Why? The days of MITM boxes injecting content into HTTP
           | traffic are basically over, and frankly they never were a
           | thing in my part of the world.
           | 
           | I see no other reason to serve content over HTTPS.
        
             | JoshTriplett wrote:
             | > Why? The days of MITM boxes injecting content into HTTP
             | traffic are basically over
             | 
              | The _reason_ you don't see many MITM boxes injecting
             | content into HTTP anymore is _because_ of widespread HTTPS
             | adoption and browsers taking steps to distrust HTTP, making
             | MITM injection a near-useless tactic.
             | 
             | (This rhymes with the observation that some people now
             | perceive Y2K as overhyped fear-mongering that amounted to
             | nothing, without understanding that immense work happened
             | behind the scenes to avert problems.)
        
               | sam_lowry_ wrote:
               | How do browsers distrust HTTP, exactly?
        
               | castillar76 wrote:
               | They've been making it harder and harder to serve things
               | over HTTP-only for a while now. Steps like marking HTTP
               | with big "NOT SECURE" labels and trying to auto-push to
               | HTTP have been pretty effective. (With the exception of
               | certain contexts, I think this is a generally good trend,
               | FWIW.)
        
               | g-b-r wrote:
               | I have to change two settings to be able to see plain
                | http things, and luckily I only need to do so a handful
                | of times a year.
               | 
               | If I'm really curious about your plain http site I'll
               | check it out through archive.org, and I'm definitely not
               | going to keep visiting it frequently.
               | 
               | It's been easy to live with forced https for at least
               | five years (and for at least the last ten with https
               | first, with confirmations for plain http).
        
               | castillar76 wrote:
               | I was genuinely taken aback by visiting a restaurant
               | website last night that was only served over HTTP.
               | (Attempting to hit it with HTTPS generated a cert error
               | because their shared provider didn't have a cert for it.)
               | These days, I'd gotten so used to things just being over
               | HTTPS all the time that the warnings in the browser
               | _actually worked_ to grab my attention.
               | 
               | It's a restaurant that's been here with the same menu
               | since the 1970s, and their website does absolutely
               | nothing besides pass out information (phone number, menu,
               | directions), so they probably put it up in 2002 and
               | haven't changed it since. It was just a startling
               | reminder of how ubiquitous HTTPS has gotten.
        
               | JoshTriplett wrote:
               | They show any site served over HTTP as _explicitly_ not
               | secure in the address bar (making HTTPS the  "default"
               | and HTTP the visibly dangerous option), they limit many
                | web APIs to sites served over HTTPS
                | (https://developer.mozilla.org/en-
                | US/docs/Web/Security/Secure...), they block or upgrade
               | mixed-content by default (HTTPS sites cannot request
               | HTTP-only resources anymore), they require HTTPS for
               | HTTP/2 and HTTP/3, they increasingly attempt HTTPS to a
               | site _first_ even if linked /typed as http, they warn
               | about downloads over http, and they're continuing to
               | ratchet up such measures over time.
        
               | fc417fc802 wrote:
               | > they increasingly attempt HTTPS to a site first even if
               | linked/typed as http
               | 
               | And can generally be configured by the user not to
               | downgrade to http without an explicit prompt.
               | 
               | Honestly I disagree with the refusal to support various
               | APIs over http. Making the (configurable last I checked)
               | prompt mandatory per browser session would have sufficed
               | to push all mainstream sites to strictly https.
        
               | JoshTriplett wrote:
               | > And can generally be configured by the user not to
               | downgrade to http without an explicit prompt.
               | 
               | Absolutely, and this works quite well on the current web.
               | 
               | > Honestly I disagree with the refusal to support various
               | APIs over http.
               | 
               | There are multiple good reasons to do so. Part of it is
               | pushing people to HTTPS; part of it is the observation
               | that if you allow an API over HTTP, you're allowing that
               | API to any attacker.
        
               | fc417fc802 wrote:
               | > if you allow an API over HTTP, you're allowing that API
               | to any attacker.
               | 
               | In the scenario I described you're doing that only after
               | the user has explicitly opted in on a case by case basis,
               | and you're forcing a per-session nag on them in order to
               | coerce mainstream website operators to adopt the secure
               | default.
               | 
               | At that point it's functionally slightly more obtuse than
               | adding an exception for a certificate (because those are
               | persistent). Rejecting the latter on the basis of
               | security is adopting a position that no amount of user
               | discretion is acceptable. At least personally I'm
               | comfortable disagreeing with that.
               | 
               | More generally, I support secure defaults but almost
               | invariably disagree with disallowing users to shoot
               | themselves in the foot. As an example, I expect a stern
               | warning if I attempt to uninstall my kernel but I also
               | expect the software on my device to do exactly what I
               | tell it to 100% of the time regardless of what the
               | developers might have thought was best for me.
        
               | JoshTriplett wrote:
               | > More generally, I support secure defaults but almost
               | invariably disagree with disallowing users to shoot
               | themselves in the foot.
               | 
               | I agree with this. But also, there is a strong degree to
               | which users _will_ go track down ways (or follow random
               | instructions) to shoot themselves in the foot if some
               | site they care about says  "do this so we can function!".
               | I do think, in cases where there's value in collectively
               | pushing for better defaults, it's sometimes OK for the "I
               | can always make my device do exactly what I tell it to
               | do" escape hatch to be "download the source and change it
               | yourself". Not every escape hatch gets a setting, because
               | not every escape hatch is supported.
        
               | fc417fc802 wrote:
               | > escape hatch to be "download the source and change it
               | yourself"
               | 
               | In theory I agree. In practice building a modern browser
               | is neither easy nor is it feasible to do so on an average
               | computer.
               | 
               | Given that a nag message would suffice to coerce all
               | mainstream operators (thus accomplishing the goals I've
               | seen stated) I'm left unconvinced. That said, I'm not
               | particularly bothered since the trivial workaround is a
               | self signed certificate and the user adding an exception.
               | It's admittedly a case of principle rather than practice.
        
               | foobiekr wrote:
               | If browser vendors really cared, they would disable
               | javascript on non-https sites.
        
               | GoblinSlayer wrote:
               | https://googleprojectzero.blogspot.com/2025/03/blasting-
               | past...
        
               | 0xCMP wrote:
               | If you try to open anything with just HTTP on an iOS
               | device (e.g. "Hey, look at this thing I made and have
               | served on the public internet! <link>") it just won't
               | load.
               | 
               | This was my experience sending a link to someone who
               | primarily uses an iPad and is non-technical. They were
               | not going to find/open their Macbook to see the link.
        
               | schoen wrote:
               | https://en.wikipedia.org/wiki/Preparedness_paradox
        
               | JoshTriplett wrote:
               | Thank you!
        
             | g-b-r wrote:
             | You see no reason for privacy, ok
        
           | bigstrat2003 wrote:
           | There's no good reason to serve a blog over TLS. You're not
           | handling sensitive data, so unencrypted is just fine.
        
             | cAtte_ wrote:
             | relevant blog post and HN discussion:
             | https://news.ycombinator.com/item?id=22146291
        
             | foobiekr wrote:
             | The reason is to prevent your site from becoming a watering
             | hole where malicious actors use it to inject malware into
             | the browsers of your users.
             | 
             | TLS isn't for you, it's for your readers.
        
               | GoblinSlayer wrote:
               | Ads will inject it anyway.
        
               | nssnsjsjsjs wrote:
               | If there are ads.
        
               | dijit wrote:
               | but like, who's doing that?
               | 
               | Maybe the answer is disabling the JS runtime on non-TLS
               | sites, maybe that has the added benefit of making the web
               | (documents and "posters") light again.
               | 
               | SMS is unencrypted, phone calls are unencrypted- yet we
               | don't worry nearly as much about people injecting content
                | or modifying things. Because we trust our providers,
               | largely, but the capability 100% exists for that; with no
               | actual recourse. With browsing the internet we do have
               | recourse- optionally use a VPN.
               | 
               | All of this security theatre is just moving the trust
               | around, I would much rather make laws that protect the
               | integrity of traffic sent via ISPs than add to the
               | computational waste from military grade encrypting the
               | local menu for the pizza shop.
               | 
               | Worse still, the pizza shop won't go through the effort
               | so they either won't bother having a website or will put
               | it on facebook or some other crazy centralised platform.
               | 
               | I'll tell you something, I trust my ISP (Bahnhof- famous
               | for protecting thepiratebay) a lot more than I trust
               | Facebook not to do weird moderation activities.
        
             | throw0101b wrote:
              | > _You're not handling sensitive data, so unencrypted is
             | just fine._
             | 
             | Except when an adversary MITMs your site and injects an
             | attack to one of your readers:
             | 
             | * https://www.infoworld.com/article/2188091/uk-spy-agency-
             | uses...
             | 
             | Further: tapping glass is a thing, and if the only traffic
             | that is encrypted is the "important" or "sensitive" stuff,
             | then it sticks out in the flow, and so attackers know to
             | focus just on that. If _all_ traffic is encrypted, then it
             | 's much harder for attackers to figure out what is
             | important and what is not.
             | 
             | So by encrypting your "unimportant" data you add more noise
             | that has to be sifted through.
        
           | agarren wrote:
           | Why? I can understand the argument that you don't want an ISP
           | or a middlebox injecting ads or scripts (valid I think even
           | if I've never encountered it to my knowledge), but otherwise
           | you're publishing content intended for the world. There's
           | presumably nothing especially sensitive that you need to hide
           | on the wire.
        
             | mort96 wrote:
             | > (valid I think even if I've never encountered it to my
             | knowledge)
             | 
             | Visitors to your website may encounter it. Do you not care
             | that your visitors may be seeing ads?
             | 
             | You're also leaking what your visitors are reading to their
             | ISPs and governments. Maybe you don't consider anything you
             | write about to be remotely sensitive, but how critically do
             | you examine that with every new piece you write?
             | 
             | If you wrote something which _could_ be sensitive to
             | readers in some parts of the world (something about
             | circumventing censorship, something critical of some
             | religion, something involving abortion or other forms of
             | healthcare that some governments are cracking down on), do
             | you then add SSL at that point? Or do you refrain from
             | publishing it?
             | 
             | Personally, I like the freedom to just not think about
             | these things. I can write about whatever I want, however
             | controversial it might be in some regions, no matter how
             | dangerous it is for some people to be found reading about
             | it, and be confident that my readers can expect at least a
             | baseline of safety because my blog, like pretty much every
             | other in the world today, uses cryptography to ensure that
             | governments and ISPs can't use deep packet inspection to
             | scan the words they read or use MITM to inject things into
             | my blog. Does it really matter? Well _probably_ not for my
             | site specifically, but across all the blogs and websites in
             | the world and all the visitors in the world, that 's a
             | whole lot of "probably not"s which all combine together
             | into a huge "almost definitely".
        
               | nssnsjsjsjs wrote:
               | I agree but SSL doesn't encrypt the IP so there is that
                | information leaked. That isn't to say don't use SSL, but
                | rather that it isn't 100% privacy protection from middlemen.
        
               | mort96 wrote:
               | Yeah, it's unfortunate that it doesn't hide the IP
               | address you connect to. But my thinking is: for someone
               | to hypothetically get in trouble for reading my blog if I
               | don't use SSL, their ISP/government just needs to inspect
               | the packets they receive and scan for particular words or
               | phrases, and they'll have hard proof that the person
               | accessed a particular article. If I use SSL, those
               | governments must have specifically flagged my web
               | server's IP address, and they won't have any hard proof
               | about what content was accessed.
               | 
               | So while it's not perfect, it's pretty good, and a very
               | low investment for me as a website owner :)
        
         | pornel wrote:
         | You're making a mistake assuming that the push for HTTPS-only
         | Web is about protecting the content of your site.
         | 
         | The problem is that mere existence of HTTP is a vulnerability.
         | Users following any insecure link to anywhere allow MITM
         | attackers to inject any content, and redirect to any URL.
         | 
         | These can be targeted attacks against vulnerabilities in the
         | browser. These can be turning browsers into a botnet like the
         | Great Cannon. These can be redirects, popunders, or other
         | sneaky tab manipulation for opening phishing pages for other
         | domains (unrelated to yours) that do have important content.
         | 
          | Your server probably won't even be contacted during such an
          | attack. Insecure URLs to your site are the vulnerability. Don't
         | spread URLs that disable network-level security.
        
       | ThePowerOfFuet wrote:
       | With the greatest respect to Rachel, ain't _nobody_ got time for
       | that.
        
       | egorfine wrote:
       | > import JSON (something I use as little as possible)
       | 
       | This makes me wonder what world of development she is in. Does
       | she prefer SOAP?
        
         | codeduck wrote:
         | Given her experience and work history, it's much more likely
         | that she views any text-based protocol as an unnecessary
         | abstraction over simply processing raw TCP.
        
           | horsawlarway wrote:
           | Is this a joke? I don't even know where to begin with this
           | comment... It reads like a joke, but I suspect it's not?
           | 
           | TCP is just a bunch of bytes... You can't process a bunch of
           | bytes without understanding what they are, and that requires
           | signaling information at a different level (ex - in the bytes
           | themselves as a defined protocol like SSH, SCP, HTTP, etc -
           | or some other pre-shared information between server and
           | client [the worst of protocols - custom bullshit]).
        
             | codeduck wrote:
             | parent mentioned SOAP as an alternative to JSON. I was
             | being glib about the fact that the engineer who wrote this
             | blog post is a highly-regarded sysadmin and SRE who tinkers
             | on things ranging from writing her own build systems to
             | playing with RF equipment.
        
               | horsawlarway wrote:
               | Sure. Between the two comments, I think the SOAP joke is
               | a lot better.
        
             | lesuorac wrote:
             | > or some other pre-shared information between server and
             | client [the worst of protocols - custom bullshit])
             | 
             | Why is this worse than JSON?
             | 
             | "{'protected': {'protected': { 'protected': 'QABE' }}}" is
             | just as custom as 66537 imo. It's easier to reverse
             | engineer than 66537 but that's not less custom.
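For readers unfamiliar with the nesting being debated here: an ACME request body is JSON whose fields are base64url-encoded JSON. A minimal sketch (the header fields, CA URL, and payload are made-up illustration values, and the signature is a placeholder, not a real ES256 signature over the account key):

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWS requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    """Re-add the padding that b64url() stripped, then decode."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Hypothetical values; a real client signs with its account key.
protected = {"alg": "ES256", "nonce": "abc123",
             "url": "https://ca.example/acme/new-order"}
payload = {"identifiers": [{"type": "dns", "value": "example.org"}]}

body = {
    "protected": b64url(json.dumps(protected).encode()),
    "payload": b64url(json.dumps(payload).encode()),
    "signature": b64url(b"\x00" * 64),  # placeholder bytes, not a signature
}

# Decoding peels the layers back off: JSON inside base64url inside JSON.
inner = json.loads(b64url_decode(body["protected"]))
assert inner["nonce"] == "abc123"
```

The layering looks redundant, but it is standard JWS: the signature is computed over the exact base64url strings, which sidesteps JSON's lack of a canonical serialization.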
        
             | dmwilcox wrote:
              | __attribute__((packed)) structs with an enum at the front
              | indicating their type. The receivers use a switch to figure
              | out which sub-message to interpret the struct as...
             | 
             | It isn't pretty and you better be able to control the
             | rollout of both sides because backwards compatibility will
             | not be a thing. But I'll take it over a giant pile of code
             | generation in many circumstances
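The same tag-then-switch scheme can be sketched in Python with the `struct` module (message types and field layouts are invented for illustration; `"<"` means little-endian with no padding, the wire-level analogue of a packed C struct):

```python
import struct
from enum import IntEnum

class MsgType(IntEnum):  # the enum at the front of every message
    PING = 1
    SET_NAME = 2

# "<" = little-endian, no alignment padding.
PING_FMT = "<BI"        # type tag, sequence number
SET_NAME_FMT = "<B16s"  # type tag, fixed-size name field

def encode_ping(seq: int) -> bytes:
    return struct.pack(PING_FMT, MsgType.PING, seq)

def encode_set_name(name: str) -> bytes:
    return struct.pack(SET_NAME_FMT, MsgType.SET_NAME, name.encode())

def decode(buf: bytes):
    (tag,) = struct.unpack_from("<B", buf)
    if tag == MsgType.PING:            # the receiver's switch on the tag
        _, seq = struct.unpack(PING_FMT, buf)
        return ("ping", seq)
    if tag == MsgType.SET_NAME:
        _, name = struct.unpack(SET_NAME_FMT, buf)
        return ("set_name", name.rstrip(b"\x00").decode())
    raise ValueError(f"unknown message type {tag}")

assert decode(encode_ping(42)) == ("ping", 42)
assert decode(encode_set_name("rachel")) == ("set_name", "rachel")
```

As the comment notes, this only works when you control both endpoints: adding a field changes every offset, so there is no backward compatibility to speak of.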
        
         | hansvm wrote:
         | JSON is slow, not particularly comfortable for humans to work
         | with, uses dangerous casts by default, is especially dangerous
         | when it crosses library or language boundaries, has the
         | exponential escaping problem when people try to embed
         | submessages, relies on each client to appropriately validate
         | every field, doesn't have any good solution for binary data, is
         | prone to stack overflow when handling nested structures, etc.
         | 
         | If the author says they dislike JSON, especially given the tone
         | of this article with respect to nonsensical protocols, I highly
         | doubt they approve of SOAP.
        
           | egorfine wrote:
           | > JSON is [...]
           | 
           | What would you suggest instead given all these cons?
        
             | Y_Y wrote:
             | Fixing all of those at once might be a bit too much to ask,
             | but I have some quick suggestions. I'd say for a more
             | robust JSON you could try Dhall. If you just want to
             | exchange lumps of data between programs I'd use Protobuf.
             | If you want simple and freeform I'd go with good old sexps.
             | 
             | https://github.com/dhall-lang/dhall-lang
             | 
             | https://protobuf.dev/
             | 
             | https://en.wikipedia.org/wiki/S-expression
        
               | imtringued wrote:
               | Suggesting protobuf as alternative to JSON is crazy talk.
        
               | Y_Y wrote:
               | You must not have seen the abject misuses of JSON I've
               | seen then
        
         | wolf550e wrote:
         | Her webserver outputs logs in protobuf, so I think she likes
         | binary serialization.
        
       | liampulles wrote:
       | I appreciate the author calling this stuff out. The increasing
       | complexity of the protocols that the web is built on is not a
       | problem for developers who simply need to find a tool or client
       | to use the protocol, but it is a kind of regulatory capture that
       | ensures only established players will be the ones able to meet
       | the spec required to run the internet.
       | 
       | I know ACME alone is not insurmountably complex, but it is
       | another brick in the wall.
        
         | charcircuit wrote:
         | These protocols all have open source implementations. And as AI
         | gets stronger this barrier will get smaller and smaller.
        
           | chrisandchris wrote:
            | So instead of designing simpler protocols (like HTTP/1.1),
            | we do not care and let AI figure it out? Sounds great to
            | me... /s
        
       | tialaramex wrote:
       | One of the things this gestures at might as well get a brief
       | refresher here:
       | 
        | Subject Alternative Name (SAN) is not an alternative in the
        | sense of an alias. SANs exist because the X.509 certificate
        | standard is, as its name might suggest, intended for the X.500
        | directory system, a system from the 20th century which was
        | never actually deployed. Mozilla (back then the Netscape
        | Corporation) didn't like re-inventing wheels, and this standard
        | for certificates already existed, so they used it in their new
        | "Secure Sockets" technology. But X.509 has no Internet names,
        | so at first they just put names in as plain text. However,
        | X.500 was intended to be infinitely extensible, so we can just
        | invent an _alternative_ naming scheme, and that's what the SANs
        | are. This is why they're mandatory for certificates in the Web
        | PKI today - these are the Internet's names for things, so
        | they're mandatory when talking about the Internet. They're
        | described in detail in PKIX, the IETF document standardising
        | the use of X.509 for the Internet.
       | 
        | There are several types of name we can express as SANs, but in
        | a certificate the two you'll commonly see are dnsName - the
        | same ASCII names you'd see in URLs, like "news.ycombinator.com"
        | or "www.google.com" - and ipAddress - a 32-bit integer
        | typically spelled as four dotted decimals, 10.20.30.40 (an
        | IPv6 128-bit address works here too, don't worry).
       | 
       | Because the SANs aren't just free text a machine can reliably
       | parse them which would doubtless meet Rachel's approval. The
       | browser can mindlessly compare the bytes in the certificate
       | "news.ycombinator.com" with the bytes in the actual DNS name it
       | looked up "news.ycombinator.com" and those match so this cert is
       | for this site.
       | 
       | With free text in a CN field like a 1990s SSL certificate (or,
       | sadly, many certificates well into the 2010s because it was
       | difficult to get issuers to comply properly with the rules and
       | stop spewing nonsense into CN) it's entirely possible to see a
       | certificate for " 10.200.300.400" which well, what's that for? Is
       | that leading space significant? Is that an IP address? But those
       | numbers don't even fit in one byte each I hope our parser copes!
        
         | p_ing wrote:
          | Did browsers ever strictly require a SAN? They certainly
          | didn't even as of ~10 years ago. Yes, it is "required", but
          | CN only has worked for quite some time. I find this trips up
          | some IT admins who are still used to only supplying a CN and
          | don't know what a SAN is.
        
           | tialaramex wrote:
           | > Did browsers ever strictly require a SAN;
           | 
           | Yes, all the popular browsers require this.
           | 
           | > they certainly didn't even as of ~10 years ago?
           | 
           | That's true, ten years ago it was likely that if a browser
           | required this they would see unacceptably high failure rates
           | because CAs were non-compliant and enforcement wasn't good
           | enough. Issuing certs which would fail PKIX was prohibited,
           | but so is speeding and yet people do that every day. CT
           | improved our ability to inspect what was being issued and
           | monitor fixes.
           | 
           | > Yes, it is "required", but CN only has worked for quite
           | some time.
           | 
           | No trusted CA will issue "CN only" for many years now, if you
           | could obtain such a certificate you'd find it won't work in
           | any popular browser either. You can read the Chromium or
           | Mozilla source and there just isn't any code to look in CN,
           | the browser just parses the SANs.
           | 
            | > I find this trips up some IT admins who are still used
            | to only supplying a CN and don't know what a SAN is.
           | 
           | In most cases this is a sign you're using something crap like
           | openssl's command line to make CSRs, and so you're probably
           | expending a lot of effort filling out values which will be
           | ignored by the CA and yet not offered parameters you did
           | need.
        
             | p_ing wrote:
             | You're forgetting that browsers deal with plenty of
             | internal-only CAs. Just because a public CA won't issue a
             | CN only cert doesn't mean an internal CA won't. That is why
             | I'm curious to know if browsers /strictly/ require SANs,
             | yet. Not something I've tested in a long time since I
             | started supporting public-only websites/cloud infra.
             | 
             | As you noted about OpenSSL, Windows CertSvr will allow you
             | to do CN only, too.
        
               | tialaramex wrote:
               | I mean, no, I'm not forgetting that, of course your
               | private CA can issue whatever nonsense you like, to this
               | day - and indeed several popular CAs are designed to do
               | just that as you noted. Certificates which ignore this
               | rule won't work in a browser though, or in some other
               | modern software.
               | 
               | Chromium published an "intent to remove" and then
               | actually removed the CN parsing in 2017, at that point
               | EnableCommonNameFallbackForLocalAnchors was available for
               | people who were still catching up to policy from ~15
               | years ago. The policy override flag was removed in 2018,
               | after people had long enough to fix their shit.
               | 
               | Mozilla had already made an equivalent change before
               | that, maybe it worked for a few more years in Safari? I
               | don't have a Mac so no idea.
        
         | fanf2 wrote:
         | Tedious and pedantic note:
         | 
         | You can't mindlessly compare the bytes of the host name: you
         | have to know that it's the presentation format of the name, not
         | the DNS wire format; you have to deal with ASCII case
         | insensitivity; you have to guess what to do about trailing dots
         | (because that isn't specified); you have to deal with wildcards
         | (being careful to note that PKIX wildcard matching is different
         | from DNS wildcard matching).
         | 
         | It's not as easy as it should be!
        
           | tialaramex wrote:
           | In practice it's much easier than you seem to have understood
           | 
           | The names PKIX writes into dnsName are exactly the same as
           | the hostnames in DNS. They are defined to always be Fully
            | Qualified, and yet not to have a trailing dot; you don't
            | have to _like_ that, but it's specified, and it's exactly
            | how the web browsers worked already 25+ years ago.
           | 
           | You're correct that they're not the on-wire label-by-label
           | DNS structure, but they are the canonical human readable DNS
           | name, specifically the Punycode encoded name, so [the
           | website] https://xn--j1ay.xn--p1ai/ the Russian registry
            | which most browsers will display in Cyrillic, has its names
           | stored in certificates the same way as it is handled in DNS,
           | as Punycode "xn--j1ay.xn--p1ai". In software I've seen the
           | label-by-label encoding stuff tends to live deep inside DNS-
           | specific code, but the DNS name needed for comparing with a
           | certificate does not do this.
           | 
           | You don't need to "deal with" case except in the sense that
           | you ignore it, DNS doesn't handle case, the dnsName in SANs
           | explicitly doesn't carry this, so just ignore the case bits.
           | Your DNS client will do the case bit wiggling entropy hack,
           | but that's not in code the certificate checking will care
           | about.
           | 
           | You do need to care about wildcards, but we eliminated the
           | last very weird certificate wildcards because they were
           | minted only by a single CA (which argued by their reading
           | they were obeying PKIX) and that CA is no longer in business
           | 'cos it turns out some of the stupid things they were doing
           | even a creative lawyerly reading of specifications couldn't
           | justify. So the only use actually enabled today is replacing
           | one DNS label at the front of the name. Nothing else is used,
           | no suffixes, no mid-label stuff, no multi-label wildcards, no
           | labels other than the first.
           | 
           | Edited to better explain the IDN situation hopefully
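The matching rules described above (ASCII case-insensitivity, a wildcard allowed only as the entire leftmost label) fit in a few lines. A minimal sketch, assuming IDN handling has already produced the Punycode form on both sides:

```python
def san_matches(san: str, hostname: str) -> bool:
    # DNS names are ASCII case-insensitive; SANs never carry a trailing
    # dot, so strip one from the looked-up hostname before comparing.
    san_labels = san.lower().split(".")
    host_labels = hostname.lower().rstrip(".").split(".")
    if len(san_labels) != len(host_labels):
        return False
    for i, (s, h) in enumerate(zip(san_labels, host_labels)):
        if i == 0 and s == "*":
            continue  # a wildcard may replace only the whole first label
        if s != h:
            return False
    return True

print(san_matches("news.ycombinator.com", "News.Ycombinator.COM."))  # True
print(san_matches("*.example.com", "www.example.com"))               # True
print(san_matches("*.example.com", "a.b.example.com"))               # False
```

The last case falls out of the label-count check: a single-label wildcard cannot span two labels, so multi-level subdomains don't match.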
        
             | cryptonector wrote:
             | > You don't need to "deal with" case except in the sense
             | that you ignore it, DNS doesn't handle case
             | 
             | The DNS is case-insensitive, though only for ASCII. So you
             | have to compare names case-insensitively (again, for
             | ASCII). It _is_ possible to have DNS servers return non-
             | lowercase names! E.g., way back when sun.com's DNS servers
             | would return Sun.COM if I remember correctly. So you do
             | have to be careful about this, though if you do a case-
             | sensitive, memcmp()-like comparison, 999 times out of 1,000
             | everything will work, and you won't fail open when it
             | doesn't.
        
               | tialaramex wrote:
               | You're correct that your DNS queries and answers can set
               | the case bit, but the protocol design always said that
               | while the answers must match the query, the case isn't
               | actually significant. For a long time that's just an
               | obscure Um Actually nerd trivia question, but traditional
               | (before DPRIVE) DNS is very old and is mostly a UDP
               | protocol, so people try to spoof it, let's follow that
               | story:
               | 
               | At first: The only anti-spoof defence provided in DNS is
               | an ID number, initially people are like 1, 2, 3, 4, 5...
               | and so the attacker watches you use these IDs then makes
               | you ask a question, "6: A? mybank.example" and
               | simultaneously they answer "6: A 10.20.30.40" before your
               | real DNS server could likely respond - choosing their own
               | IP address. Six is the expected ID, you just got spoofed
               | and will visit their fake bank.
               | 
               | So then: DNS clients get smarter using randomisation for
               | the ID, 23493, 45390, 18301... this helps considerably,
               | but bandwidth is cheap and those IDs are only 16-bit so a
               | bad guy can actually send you 65536 answers and get a
               | 100% success rate with spoofing, or more realistically
               | maybe they send 1000 answers and still have more than 1%
               | success.
               | 
               | Today as a further improvement, we use Paul Vixie's bit
               | 0x20 hack which uses every letter of a DNS label to hide
               | one bit of entropy in the case in addition to the random
               | ID. Now the attacker has to try not only different IDs
               | but different case spellings, A? mYbANk.eXAmpLE or maybe
               | A? MyBANk.EXAMple or A? mybaNK.EXamPLE -- only responses
               | with the right case match, the others are ignored.
               | 
               | So, because of all this security shenanigans, your DNS
               | client knows that case in DNS queries doesn't matter and
               | will do what we want for this purpose.
               | 
               | [Edited: fixed a few typos]
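The 0x20 trick described above is just randomizing the case of each letter in the outgoing query name; a toy sketch (the seeded `Random` is only for reproducibility):

```python
import random

def dns0x20(name: str, rng: random.Random) -> str:
    # Each ASCII letter hides one bit of entropy in its case; the real
    # resolver records the pattern and ignores answers that don't echo
    # it back exactly.
    return "".join(
        c.upper() if c.isalpha() and rng.random() < 0.5 else c.lower()
        for c in name
    )

print(dns0x20("mybank.example", random.Random(1)))
```

With 13 letters in "mybank.example", this adds roughly 13 bits of entropy on top of the 16-bit query ID a blind spoofer must guess.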
        
       | amiga386 wrote:
       | Things change over time.
       | 
       | Part of not wanting to let go is the sunk cost fallacy. Part of
       | it is being suspicious of being (more) dependent on someone else
       | (than you are already dependent on a different someone else).
       | 
       | (As an aside, the n-gate guy who ranted against HTTPS in general
       | and thought static content should just be HTTP also thought like
       | that. Unfortunately, as I'm at a sketchy cafe using their wifi,
       | his page currently says I should click here to enter my bank
       | details, and I should download new cursors, and oddly doesn't
       | include any of his own content at all. Bit weird, but of course I
       | can trust _he_ didn't modify his page, and it's just a silly
       | unnecessary imposition on him that _I_ would like him to use
       | HTTPS)
       | 
       | Unfortunately for those rugged individuals, you're in a worldwide
       | community of people who _want_ themselves, and you, to be
       | dependent on someone else. We're still going with "trust the
       | CAs" as our security model. But with certificate transparency and
       | mandatory stapling from multiple verifiers, we're going with
       | "trust but verify the CAs".
       | 
       | Maximum acceptable durations for certificates are coming down,
       | down, down. You have to get new ones sooner, sooner, sooner. This
       | is to limit the harm a rogue CA or a naive mis-issuing CA can do,
       | as CRLs just don't work.
       | 
       | The only way that can happen is with automation, and being
       | _required_ to prove you still own a domain and/or a web-server
       | on that domain, to a CA, on a regular basis. No "deal with this
       | once a year" anymore. That's _gone_ and it's not coming back.
       | 
       | It's good to know the whole protocol, and yes certbot can be
       | overbearing, but Debian's python3-certbot +
       | python3-certbot-apache integrates perfectly with how Debian has
       | set up apache2. It shouldn't be a hardship.
       | 
       | And if you don't like certbot, there are lots of other ACME
       | clients.
       | 
       | And if you don't like Let's Encrypt, there are other entities
       | offering certificates via the ACME protocol (YMMV, do you trust
       | them enough to vouch for you?)
        
         | pixl97 wrote:
         | > thought static content should just be HTTP
         | 
         | Yep, I've seen that argument so many times and it should never
         | make sense to anyone that understands MITM.
         | 
         | The only way it could possibly work is if the static content
         | were signed somehow, but then you'd need the browser to
         | support another protocol, and you'd need a way to exchange
         | keys securely, for example like signed RPMs. It would be less
         | expensive as the encryption happens once, but is it worth
         | having yet another implementation?
        
           | XorNot wrote:
            | For caching purposes and content distribution, an
            | unencrypted signed protocol would've helped a lot. Every
            | Linux packaging format having to bake one in via GPG is a
            | huge pain.
        
             | PhilipRoman wrote:
              | I think it would be enough to just have a widely
              | supported hash://sha256... protocol with a hint of what
              | host(s) is known to provide the object (falling back to
              | something DHT based maybe). There is
              | https://www.rfc-editor.org/rfc/rfc6920.html but I
              | haven't seen any support for it.
        
           | drob518 wrote:
           | The argument doesn't even make sense for static content
           | ignoring mitm attacks.
        
             | pixl97 wrote:
             | There is no such thing as static content. There is only
             | content. Bits are sent to your browser which it then
             | applies the DOM to.
             | 
             | If you want to ensure the bits that were sent from the
             | server to your browser they must be signed in some method.
        
               | drob518 wrote:
               | Right, that's exactly my point.
        
               | pixl97 wrote:
               | Oh sorry I misunderstood your statement, yes, that's
               | correct.
        
           | RiverCrochet wrote:
            | Well there is the integrity attribute.
           | https://www.w3schools.com/Tags/att_script_integrity.asp
        
             | pixl97 wrote:
             | Pretty useless in this case if I control the stream going
             | to you. The main page defining the integrity would have to
             | be encrypted.
             | 
             | Maybe you could have a mixed use case page in the browser
             | where you had your secure context, then a sub context of
             | unencrypted protected objects, that could possibly increase
             | caching. With that said, looks like another fun hole
             | browser makers would be chasing every year or so.
        
           | bigstrat2003 wrote:
           | > Yep, I've seen that argument so many times and it should
           | never make sense to anyone that understands MITM.
           | 
           | Rather, it's that most people simply don't need to care about
           | MITM. It's not a relevant attack for most content that can be
           | reasonably served over HTTP. The goal isn't to eliminate
           | every security threat possible, it's to eliminate the ones
           | that are actually a problem for your use case.
        
             | pixl97 wrote:
             | MITM is a risk to everyone. End of story.
             | 
              | The browser content model knows nothing about whether
              | the data it's receiving is static or not.
             | 
             | ISPs had already shown time and again they'd inject content
             | into http streams for their own profit. BGP attacks routed
             | traffic off to random places. Simply put the modern web
             | should be zero trust at all.
        
             | kbolino wrote:
             | MITM is a very real threat in any remotely public place.
             | Coffee shop, airport, hotel, municipal WAN, library, etc. I
             | honestly wouldn't put that much trust in a lot of
             | residential/commercial broadband setups or
             | hosting/colocation providers either. It does not matter
             | what is intended to be served, because it can be replaced
             | with anything else. Innocuous blog? Transparently replaced
             | with a phishing site. Harmless image? Rewritten to appear
             | the same but with a zero-day exploit injected.
             | 
             | There's no such thing as "not worth the effort to secure"
             | because neither the site itself nor its content matters,
             | only the network path from the site to the user, which is
             | not under the full control of either party. These need not
             | be, and usually aren't, targeted attacks; they'll hit
             | anything that can be intercepted and modified, without a
             | care for what it's meant to be, where it's coming from, or
             | who it's going to.
             | 
              | Viewing it as an A-to-B interaction - where A is a good-
              | natured blogger and B is a tech-savvy reader, and that's
              | all there is to it - is archaic and naive to the point of
              | being dangerous. It is really an A-to-Z interaction where
             | even if A is a good-natured blogger and Z is a tech-savvy
             | user, parties B through Y all get to have a crack at
             | changing the content. Plain HTTP is a protocol for a high-
             | trust environment and the Internet has not been such a
             | place for a very long time. It is unfortunate that party A
             | (the site) must bear the brunt of the security burden, but
             | that's the state of things today. There were other ways to
             | solve this problem but they didn't get widespread adoption.
        
       | orion138 wrote:
       | Not the main point of the article, but the author's comments on
       | Gandi made me wonder:
       | 
       | What registrar do people recommend in 2025?
        
         | sloped wrote:
          | Porkbun is my favorite.
        
           | graemep wrote:
           | It seems to be what Rachel decided on.
           | 
              | Must be other good ones? I somewhat prefer something in
              | the UK (but have been using Gandi, so it's not
              | essential).
        
             | mattl wrote:
             | Gandi prices went way way up. I've been using Porkbun too.
        
             | jsheard wrote:
             | I don't know about the UK, but if you want to keep things
             | in Europe then I can vouch for Netim in France.
             | 
             | INWX in Germany also seems well regarded but I haven't used
             | them.
        
           | floren wrote:
           | I've been on Namecheap for years but I'm ready to move just
           | because they refuse to support dynamic AAAA records. How's
           | Porkbun on that front?
        
             | sloped wrote:
             | Not sure, I use dnsimple for dns and wrote my own little
             | service to update my A record, no ip6 in my corner of the
             | world so have not checked for AAAA record support.
        
             | new23d wrote:
             | Use AWS Route53?
        
         | KolmogorovComp wrote:
         | Any feedback on CF one?
        
           | jsheard wrote:
           | CF sells domains at cost so you're not going to beat them on
           | price, but the catch is that domains registered through them
           | are locked to their infrastructure, you're not allowed to
           | change the nameservers. They're fine if you don't need that
           | flexibility and they support the TLDs you want.
        
         | samch wrote:
         | Since you asked, I use Cloudflare for my registrar. I can't
         | really say if it's objectively better or worse than anybody
         | else, but they seemed like a good choice when Google was in the
         | process of shutting off their registry service.
        
         | memset wrote:
         | I have moved to porkbun.
         | 
         | I have built a registrar in the past and have a lot of arcane
         | knowledge about how they work. Just need to figure out a way to
         | monetize!
        
         | 0xCMP wrote:
         | I use Cloudflare for everything I can and then currently use
         | Namecheap for anything it doesn't support. I haven't tried
         | Porkbun mostly because I'm okay with what I have already.
        
         | upofadown wrote:
         | I recently had to bail on Gandi. I had a special requirement,
         | being Canadian, in that I didn't want to use a registrar in the
         | USA. I found a Canadian registrar that seemed to have the
         | technical stuff reasonably worked out (many don't) and had easy
         | to understand pricing:
         | 
         | https://grape.ca/
        
         | teddyh wrote:
          | Like I frequently[1] advise[2]:
         | 
         | Don't look to large, well-known registrars. I would suggest
         | that you look for local registrars _in your area_. The TLD
          | registry for your country/area usually has a list of the
         | authorized registrars, so you can simply search that for
         | entities with a local address.
         | 
         | Disclaimer: I work at such a small registrar, but you are
         | probably not in our target market.
         | 
         | 1. <https://news.ycombinator.com/item?id=32095499>
         | 
         | 2. <https://news.ycombinator.com/item?id=32507784>
        
           | benlivengood wrote:
           | I miss the days when Network Solutions had a permanent option
           | to switch/sign-up with a PGP key, binding all future
           | communications and change requests to it.
           | 
           | I forget how they handled key expiration/revocation...
        
         | benlivengood wrote:
         | After Google ditched Domains I moved to Route53. I guess the
         | only downside is that it doesn't handle some TLDs?
         | 
         | What you want from a registrar is to keep existing for many
         | years and resilience to social engineering, and AWS seems like
         | the next best thing to Google which you famously can't even
         | talk to for a social engineering attempt. I expect AWS account
         | management to be almost as good as Gaia, but don't really know
         | how hard social engineering is.
        
         | e40 wrote:
         | Been with namecheap for as long as I can remember. Hasn't
         | changed at all. Still great.
        
         | quicksilver03 wrote:
         | Distribute your domains across 2, or better 3, registrars, so
         | if one does something stupid with your domains at least the
         | others keep working.
        
       | bananapub wrote:
       | tangentially, for anyone looking to make their lives easier, you
       | can run `acme-dns` on a spare 53/udp somewhere, CNAME the
       | _acme-challenge. from your real DNS hosting to that, then have
       | `lego` or whatever do DNS challenges via acme-dns - no need to
       | let inscrutable scripts touch your real DNS config, no need for
       | anything to touch your HTTP config.
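Concretely, the delegation looks something like this (all the names below are hypothetical placeholders):

```
; In the real zone, a one-time CNAME hands the challenge name off:
_acme-challenge.example.com.  CNAME  d420c923-bbd7.auth.acme-dns.example.net.

; The ACME client then updates only the acme-dns server (via its HTTP
; API), which serves the TXT record under the CNAME target; the real
; zone never changes again.
```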
        
         | elric wrote:
         | I wish DNS providers offered more granular access control. Some
         | offer an API key per zone, others have a single key which
         | grants access to every single zone in your account. I haven't
         | come across any that offer "acme-only" APIs.
         | 
          | It's on my long list of potential side projects, but I don't
          | think I'll ever get around to it.
        
         | Arnavion wrote:
         | You can also use an NS record directly instead of CNAME'ing to
         | a different domain.
        
       | eadmund wrote:
       | > So, yes, instead of saying that "e" equals "65537", you're
       | saying that "e" equals "AQAB". Aren't you glad you did those
       | extra steps?
       | 
       | Oh JSON.
       | 
        | For those unfamiliar with the reason here, it's that JSON
        | parsers cannot be relied upon to treat numbers properly. Is
        | 4723476276172647362476274672164762476438 a valid JSON number?
        | Yes, of course it is. What will a JSON parser do with it?
        | Probably silently truncate it to a 64-bit or 63-bit integer or
        | a float, or, if you're very lucky, emit an error (a _good_
        | JSON decoder written in a _sane_ language like Common Lisp
        | would of course just return the number, but few of us are so
        | lucky).
       | 
       | So the only way to reliably get large integers into and out of
       | JSON is to encode them as something else. Base64-encoded big-
       | endian bytes is not a terrible choice. Silently doing the wrong
       | thing is the root of many security errors, so it not wrong to
       | treat every number in the protocol this way. Of course, then one
       | loses the readability of JSON.
       | 
       | JSON is better than XML, but it really isn't great. Canonical
       | S-expressions would have been far preferable, but for whatever
       | reason the world didn't go that way.
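The round trip is easy to check; a quick sketch of the JWK/ACME convention the comment describes (minimal big-endian bytes, base64url, padding stripped):

```python
import base64
import json

# 65537 survives Python's json module, but many parsers (e.g.
# JavaScript's JSON.parse) would silently round a big integer through
# a 64-bit float, which is exactly the hazard being avoided:
big = 4723476276172647362476274672164762476438
assert json.loads(json.dumps(big)) == big  # not portable across languages

def int_to_b64url(n: int) -> str:
    # Minimal big-endian byte string, base64url-encoded, no '=' padding
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

print(int_to_b64url(65537))  # AQAB
```

65537 is 0x010001, three bytes, which is why the familiar RSA exponent always appears as the four characters "AQAB".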
        
         | drob518 wrote:
         | Seems like a large integer can always be communicated as a
         | vector of byte values in some specific endian order, which is
         | easier to deal with than Base64 since a JSON parser will at
         | least convert the byte value from text to binary for you.
         | 
         | But yea, as a Clojure guy sexprs or EDN would be much better.
        
         | matja wrote:
         | Aren't JSON parsers technically not following the standard if
         | they don't reliably store a number that is not representable by
         | a IEEE754 double precision float?
         | 
          | It's a shame JSON parsers usually default to performance
          | rather than correctness, instead of using bignums for
          | numbers.
        
           | q3k wrote:
           | Have a read through RFC7159 or 8259 and despair.
           | 
           | > This specification allows implementations to set limits on
           | the range and precision of numbers accepted
           | 
           | JSON is a terrible interoperability standard.
        
             | matja wrote:
             | So a JSON parser that cannot store a _2_ is technically
             | compliant? :(
        
               | q3k wrote:
               | Yep. Or one that parses it into a 7 :)
        
               | kevingadd wrote:
               | I once debugged a production issue that boiled down to "A
               | PCI compliance .dll was messing with floating point
               | flags, causing the number 4 to unserialize as 12"
        
               | xeromal wrote:
               | That sounds awful. lol
        
               | chasd00 wrote:
               | > Or one that parses it into a 7 :)
               | 
               | if it's known and acceptable that LLMs can hallucinate
               | arguments to an API then i don't see how this isn't
               | perfectly acceptable behavior either.
        
               | reichstein wrote:
               | JSON is a text format. A parser must recognize the text
               | `2` as a valid production of the JSON number grammar.
               | 
               | Converting that text to _any_ kind of numerical value is
               | outside the scope of the specification. (At least the
               | JSON.org specification, the RFC tries to say more.)
               | 
                | As a textual format, when you use it for data
               | interchange between different platforms, you should
               | ensure that the endpoints agree on the _interpretation_,
               | otherwise they won't see the same data.
               | 
               | Again outside of the scope of the JSON specification.
        
               | tsimionescu wrote:
               | A JSON parser has to check if a numeric value is actually
               | numeric - the JSON {"a" : 123456789} is valid, but {"a" :
               | 12345678f} is not. Per the RFC, a standards-compliant
               | JSON parser can also refuse {"a": 123456789} if it
               | considers the number is too large.
        
               | deepsun wrote:
               | The more a format restricts, the more useful it is. E.g.
               | if a format allows pretty much anything and it's up to
               | parsers to accept or reject it, we may as well say "any
               | text file" (or even "any data file") -- it would allow
               | for anything.
               | 
               | Similarly to a "schema-less" DBMS -- you will still have
               | a schema, it will just be in your application code, not
               | enforced by the DBMS.
               | 
               | JSON is a nice balance between convenience and
               | restrictions, but it's still a compromise.
        
           | kens wrote:
           | > Aren't JSON parsers technically not following the standard
           | if they don't reliably store a number that is not
           | representable by a IEEE754 double precision float?
           | 
           | That sentence has four negations and I honestly can't figure
           | out what it means.
        
             | umanwizard wrote:
             | "The standard technically requires that JSON parsers
             | reliably store numbers, even those that are not
             | representable by an IEEE double".
             | 
             | (It seems this claim is not true, but at least that's what
             | the sentence means.)
        
             | alterom wrote:
             | _> > Aren't JSON parsers technically not following the
             | standard if they don't reliably store a number that is not
             | representable by a IEEE754 double precision float?_
             | 
             | >That sentence has four negations and I honestly can't
             | figure out what it means.
             | 
             | This example is halfway as bad as the one Orwell gives in
              | my favorite essay, "Politics and the English Language"1.
             | 
             | Compare and contrast:
             | 
             |  _> I am not, indeed, sure whether it is not true to say
             | that the Milton who once seemed not unlike a seventeenth-
             | century Shelley had not become, out of an experience ever
             | more bitter in each year, more alien (sic) to the founder
             | of that Jesuit sect which nothing could induce him to
             | tolerate._
             | 
             | Orwell has much to say about either.
             | 
             | _____
             | 
             | 1https://www.orwellfoundation.com/the-orwell-
             | foundation/orwel...
        
               | NooneAtAll3 wrote:
               | that Orwell quote can be saved a lot by proper
               | punctuation
               | 
               | I am not, indeed, sure*,* whether it is not true to say
               | that the Milton *(*who once seemed not unlike a
               | seventeenth-century Shelley*)* had not become *-* out of
               | an experience *-* ever more bitter in each year, more
               | alien (sic) to the founder of that Jesuit sect*,* which
               | nothing could induce him to tolerate.
        
             | NooneAtAll3 wrote:
             | Aren't {X}? -> isn't it true that {X}?
             | 
             | {X} = JSON parsers technically [are] not following the
             | standard if {reason}
             | 
             | {reason} = [JSON parsers] don't reliably store a number
             | that {what kind of number?}
             | 
             | {what kind of number} = number that is not representable by
             | a IEEE754 double precision float
             | 
             | seems simple
        
         | em-bee wrote:
         | as someone who started the s-expression task on
         | rosettacode.org, i approve. if you need an s-expression parser
         | for your language, look here
         | https://rosettacode.miraheze.org/wiki/S-expressions (the
         | canonical url is https://rosettacode.org/wiki/S-expressions but
         | they have DNS issues right now)
        
         | tempodox wrote:
         | "Worse is better" is still having ravaging success.
        
         | marcosdumay wrote:
         | What I don't understand is why you (and a lot of other people)
         | just expect S-expression parsers to not have the exact same
         | problems.
        
           | 01HNNWZ0MV43FF wrote:
           | I think they mean that Common Lisp has bigints by default
        
             | ryukafalz wrote:
             | As do Scheme and most other Lisps I'm familiar with, and
             | integers/floats are typically specified to be distinct. I
             | think we'd all be better off if that were true of JSON as
             | well.
             | 
             | I'd be happy to use s-expressions instead :) Though to GP's
             | point, I suppose we might then end up with JS s-expression
             | parsers that still treat ints and floats interchangeably.
        
               | petre wrote:
                | And in addition to that are unable to distinguish between
               | a string "42" and a number 42.
        
           | eadmund wrote:
           | Because canonical S-expressions don't have numbers, just
           | atoms (i.e., byte sequences) and lists. It is up to the using
           | code to interpret "34" as the string "34" or the number 34 or
           | the number 13,108 or the number 13,363, which is part of the
           | protocol being used. In most instances, the byte sequence is
           | probably a decimal number.
           | 
           | Now, S-expressions as used for programming languages such as
           | Lisp _do_ have numbers, but again Lisp has bignums. As for
           | parsers of Lisp S-expressions written in other languages: if
           | they want to comply with the standard, they need to support
           | bignums.
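The four readings of the atom mentioned above can be checked directly (a quick illustration, not part of any csexp library):

```python
data = b"34"  # a two-byte canonical S-expression atom

print(data.decode("ascii"))            # the string "34"
print(int(data))                       # the decimal number 34
print(int.from_bytes(data, "big"))     # 13108 (0x3334, big-endian bytes)
print(int.from_bytes(data, "little"))  # 13363 (0x3433, little-endian bytes)
```

Which reading is correct is a property of the protocol, not of the bytes.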
        
             | its-summertime wrote:
             | "it can do one of 4 things" sounds very much like the pre-
             | existing issue with JSON
        
             | tsimionescu wrote:
             | You can write JSON that exclusively uses strings, so this
             | is not really relevant. Sure, maybe it can be considered an
             | advantage that s-expressions force you to do that, though
             | it can also be seen just as easily as a disadvantage. It
             | certainly hurts readability of the format, which is not a
             | 0-cost thing. This is also why all Lisps use more than
             | plain sexps to represent their code: having different
             | syntax for different types helps.
        
             | motorest wrote:
             | > Because canonical S-expressions don't have numbers, just
             | atoms (i.e., byte sequences) and lists.
             | 
             | If types other than string and a list bother you, why don't
             | you stick with those types in JSON?
        
         | ncruces wrote:
         | Go can decode numbers losslessly as strings:
         | https://pkg.go.dev/encoding/json#Number
         | 
         | json.Number is (almost) my _"favorite"_ arbitrary decimal:
         | https://github.com/ncruces/decimal?tab=readme-ov-file#decima...
         | 
         | I'm half joking, but I'm not sure why S-expressions would be
         | better here. There are LISPs that don't do arbitrary precision
         | math.
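Python's stdlib parser can do something similar in spirit to Go's json.Number by overriding its number hooks so every numeric literal survives as its source text (a sketch; the document key is invented):

```python
import json

# parse_int/parse_float receive the literal text of each number;
# passing `str` keeps it as-is instead of converting to int/float.
doc = json.loads('{"big": 10000000000000000000000000000001}',
                 parse_int=str, parse_float=str)
print(doc["big"])  # '10000000000000000000000000000001'
```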
        
           | mise_en_place wrote:
           | For actual SERDES, JSON becomes very brittle. It's better to
           | use something like protobuf or cap'n'proto for such cases.
        
           | eadmund wrote:
           | > Go can decode numbers losslessly as strings:
           | https://pkg.go.dev/encoding/json#Number
           | 
           | Yup, and if you're using JSON in Go you really do need to be
           | using Number exclusively. Anything else will lead to pain.
           | 
           | > I'm half joking, but I'm not sure why S-expressions would
           | be better here. There are LISPs that don't do arbitrary
           | precision math.
           | 
           | Sure, but I'm referring specifically to
           | https://www.ietf.org/archive/id/draft-rivest-sexp-13.html,
            | which only has lists and bytes, and so numbers are always just
           | strings and it's up to the program to interpret them.
        
         | kangalioo wrote:
         | But what's wrong with sending the number as a string? `"65537"`
         | instead of `"AQAB"`
        
           | ayende wrote:
           | Too likely that this would not work because silent conversion
           | to number along the way
        
             | iforgotpassword wrote:
              | Then just prefixing it with an underscore or any random
              | letter would've been fine, but of course base64-encoding
              | the binary representation makes you look so much smarter.
        
           | comex wrote:
           | The question is how best to send the modulus, which is a much
           | larger integer. For the reasons below, I'd argue that base64
           | is better. And if you're sending the modulus in base64, you
           | may as well use the same approach for the exponent sent along
           | with it.
           | 
           | For RSA-4096, the modulus is 4096 bits = 512 bytes in binary,
           | which (for my test key) is 684 characters in base64 or 1233
           | characters in decimal. So the base64 version is much smaller.
           | 
           | Base64 is also more efficient to deal with. An RSA
           | implementation will typically work with the numbers in binary
           | form, so for the base64 encoding you just need to convert the
           | bytes, which is a simple O(n) transformation. Converting the
           | number between binary and decimal, on the other hand, is
           | O(n^2) if done naively, or O(some complicated expression
           | bigger than n log n) if done optimally.
           | 
           | Besides computational complexity, there's also implementation
           | complexity. Base conversion is an algorithm that you normally
           | don't have to implement as part of an RSA implementation. You
           | might argue that it's not hard to find some library to do
           | base conversion for you. Some programming languages even have
           | built-in bigint types. But you typically want to avoid using
           | general-purpose bigint implementations for cryptography. You
           | want to stick to cryptographic libraries, which typically aim
           | to make all operations constant-time to avoid timing side
           | channels. Indeed, the apparent ease-of-use of decimal would
           | arguably be a bad thing since it would encourage implementors
           | to just use a standard bigint type to carry the values
           | around.
           | 
           | You could argue that the same concern applies to base64, but
           | it should be relatively safe to use a naive implementation of
           | base64, since it's going to be a straightforward linear scan
           | over the bytes with less room for timing side channels
           | (though not none).
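The size arithmetic above is easy to verify with a random (non-key) 4096-bit value:

```python
import base64
import secrets

# A mock 4096-bit modulus, forced to use the full bit width. Not a real key.
modulus = secrets.randbits(4096) | (1 << 4095)
raw = modulus.to_bytes(512, "big")

print(len(base64.b64encode(raw)))  # 684 characters in base64
print(len(str(modulus)))           # ~1233 digits in decimal
```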
        
             | nssnsjsjsjs wrote:
             | Ah OK so: readable, efficient, consistent; pick 2.
        
           | foobiekr wrote:
           | Cost.
        
           | deepsun wrote:
           | PHP (at least old versions I worked with) treats "65537" and
           | 65537 similarly.
        
             | red_admiral wrote:
             | That sounds horrible if you want to transmit a base64
             | string where the length is a multiple of 3 and for some
             | cursed reason there's no letters or special characters
             | involved. If "7777777777777777" is your encoded string
             | because you're sending a string of periods encoded in BCD,
             | you're going to have a fun time. Perhaps that's karma for
             | doing something braindead in the first place though.
        
           | red_admiral wrote:
           | This sounds like an XY problem to me. There is already an
           | alternative that is at least as secure and only requires a
           | single base-64 string: Ed25519.
        
           | shiandow wrote:
           | Converting large integers to decimal is nontrivial,
           | especially when you don't trust languages to handle large
           | numbers.
           | 
           | Why you wouldn't just use the hexadecimal that everyone else
           | seems to use I don't know. There seems to be a rather
           | arbitrary cutoff where people prefer base64 to hexadecimal.
        
         | JackSlateur wrote:
         | Is this ok ?
         | 
         |     Python 3.13.3 (main, May 21 2025, 07:49:52) [GCC 14.2.0] on linux
         |     Type "help", "copyright", "credits" or "license" for more information.
         |     >>> import json
         |     >>> json.loads('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
         |     47234762761726473624762746721647624764380000000000000000000000000000000000000000000
        
           | jazzyjackson wrote:
           | yes, python falls into the sane language category with
           | arbitrary-precision arithmetic
        
             | faresahmed wrote:
              | Not so much,
              | 
              |     >>> s = "1" + "0"*4300
              |     >>> json.loads(s)
              |     ...
              |     ValueError: Exceeds the limit (4300 digits) for integer
              |     string conversion: value has 4301 digits; use
              |     sys.set_int_max_str_digits() to increase the limit
             | 
             | This was done to prevent DoS attacks 3 years ago and have
             | been backported to at least CPython 3.9 as it was
             | considered a CVE.
             | 
             | Relevant discussion:
             | https://news.ycombinator.com/item?id=32753235
             | 
             | Your sibling comment suggests using decimal.Decimal which
             | handles parsing >4300 digit numbers (by default).
        
               | lifthrasiir wrote:
               | This should be interpreted as a stop-gap measure before a
               | subquadratic algorithm can be adopted. Take a look at
               | _pylong.py in new enough CPython.
        
           | teddyh wrote:
            | I prefer
            | 
            |     >> import json, decimal
            |     >> j = "47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
            |     >> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
            |     Decimal('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
            | 
            | This way you avoid this problem:
            | 
            |     >> import json
            |     >> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
            |     >> json.loads(j)
            |     0.47234762761726473
            | 
            | And instead can get:
            | 
            |     >> import json, decimal
            |     >> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
            |     >> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
            |     Decimal('0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
        
           | sevensor wrote:
           | Just cross your fingers and hope for the best if your data is
           | at any point decoded by a json library that doesn't support
           | bigints? Python's ability to handle them is beside the point
           | of they get mangled into ieee754 doubles along the way.
        
         | cortesoft wrote:
         | > Canonical S-expressions would have been far preferable, but
         | for whatever reason the world didn't go that way.
         | 
         | I feel like not understanding why JSON won out is being
         | intentionally obtuse. JSON can easily be hand written, edited,
         | and read for most data. Canonical S-expressions are not as easy
         | to read and much harder to write by hand; having to prefix
         | every atom with a length makes it very tedious to write by
         | hand. If you have a JSON object you want to hand edit, you can
         | just type... for a Canonical S-expression, you have to count
         | how many characters you are typing/deleting, and then update
         | the prefix.
         | 
         | You might not think the ability to hand generate, read, and
         | edit is important, but I am pretty sure that is a big reason
         | JSON has won in the end.
         | 
         | Oh, and the Ruby JSON parser handles that large number just
         | fine.
        
           | eadmund wrote:
           | > I feel like not understanding why JSON won out is being
           | intentionally obtuse.
           | 
           | I didn't feel like my comment was the right place to shill
           | for an alternative, but rather to complain about JSON. But
           | since you raise it.
           | 
           | > JSON can easily be hand written, edited, and read for most
           | data.
           | 
           | So can canonical S-expressions!
           | 
           | > Canonical S-expressions are not as easy to read and much
           | harder to write by hand; having to prefix every atom with a
           | length makes is very tedious to write by hand.
           | 
           | Which is why the advanced representation exists. I contend
           | that this:
            |     (urn:ietf:params:acme:error:malformed
            |       (detail "Some of the identifiers requested were rejected")
            |       (subproblems
            |         ((urn:ietf:params:acme:error:malformed
            |            (detail "Invalid underscore in DNS name \"_example.org\"")
            |            (identifier (dns _example.org)))
            |          (urn:ietf:params:acme:error:rejectedIdentifier
            |            (detail "This CA will not issue for \"example.net\"")
            |            (identifier (dns example.net))))))
           | 
           | is far easier to read than this (the first JSON in RFC 8555):
            |     {
            |       "type": "urn:ietf:params:acme:error:malformed",
            |       "detail": "Some of the identifiers requested were rejected",
            |       "subproblems": [
            |         {
            |           "type": "urn:ietf:params:acme:error:malformed",
            |           "detail": "Invalid underscore in DNS name \"_example.org\"",
            |           "identifier": {
            |             "type": "dns",
            |             "value": "_example.org"
            |           }
            |         },
            |         {
            |           "type": "urn:ietf:params:acme:error:rejectedIdentifier",
            |           "detail": "This CA will not issue for \"example.net\"",
            |           "identifier": {
            |             "type": "dns",
            |             "value": "example.net"
            |           }
            |         }
            |       ]
            |     }
           | 
            | > for a Canonical S-expression, you have to count how many
           | characters you are typing/deleting, and then update the
           | prefix.
           | 
           | As you can see, no you do not.
        
             | eximius wrote:
             | For you, perhaps. For me, the former is denser, but
             | crossing into a "too dense" region. The JSON has
             | indentation which is easy on my poor brain. Also, it's nice
             | to differentiate between lists and objects.
             | 
             | But, I mean, they're basically isomorphic with like 2
              | things exchanged ({} and [] instead of (); implicit vs
             | explicit keys/types).
        
               | josephg wrote:
               | Yeah. I don't even blame S-expressions. I think I've just
               | been exposed to so much json at this point that my visual
               | system has its own crappy json parser for pretty-printed
               | json.
               | 
               | S expressions may well be better. But I don't think S
               | expressions are better _enough_ to be able to overcome
               | json's inertia.
        
             | eddythompson80 wrote:
             | > is far easier to read than this (the first JSON in RFC
             | 8555):
             | 
             | It's not for me. I'd literally take anything over csexps.
             | Like there is nothing that I'd prefer it to. If it's the
             | only format around, then I'll just roll my own.
        
               | justinclift wrote:
               | > Like there is nothing that I'd prefer it to.
               | 
               | May I suggest perl regex's? :)
        
             | thayne wrote:
             | Your example uses s-expressions, not canonical
             | s-expressions. Canonical s expressions[1] is basically a
             | binary format. Each atom/string is prefixed by a decimal
             | length of the string and a colon. It's advantage over
             | regular s expressions is that there is no need to escape or
             | quote strings with whitespace, and there is only a single
             | possible representation for a given data structure. The
             | disadvantage is it is much harder to write and read by
             | humans.
             | 
             | As for s-expressions vs json, there are pros and cons to
             | each. S-expressions don't have any way to encode type
             | information in the data itself, you need a schema to know
             | if a certain value should be treated as a number or a
             | string. And it's subjective which is more readable.
             | 
             | [1]: https://en.m.wikipedia.org/wiki/Canonical_S-
             | expressions
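The length-prefixed canonical form described above is only a few lines to emit. A minimal sketch (the `csexp` helper is invented; atoms map to bytes, lists to tuples/lists):

```python
def csexp(obj) -> bytes:
    # Canonical form per the Rivest draft: each atom is "<len>:<bytes>",
    # lists are parenthesized, with no whitespace. One value, one encoding.
    if isinstance(obj, bytes):
        return str(len(obj)).encode() + b":" + obj
    return b"(" + b"".join(csexp(x) for x in obj) + b")"

print(csexp([b"identifier", [b"dns", b"example.net"]]))
# b'(10:identifier(3:dns11:example.net))'
```

The single-representation property is what makes the canonical form suitable for hashing and signing; the readability cost is exactly the length prefixes shown here.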
        
               | dietr1ch wrote:
               | The length thing sounds like an editor problem, but we
               | have wasted too much time in coming up with syntax that
               | pleases personal preferences without admitting we would
               | be better off moving away from text.
               | 
               | 927 can be avoided, but it's way harder than it seems,
               | which is why we have the proliferation of standards that
               | fail to become universal.
        
               | eadmund wrote:
               | > Your example uses s-expressions, not canonical
               | s-expressions.
               | 
               | I've always used 'canonical S-expressions' to refer to
               | Rivest's S-expressions proposal:
               | https://www.ietf.org/archive/id/draft-rivest-
               | sexp-13.html, a proposal which has canonical, basic
               | transport & advanced transport representations which are
               | all equivalent to one another (i.e., every advanced
               | transport representation has a single canonical
               | representation). I don't know where I first saw it, but
               | perhaps it was intended to distinguish from other
               | S-expressions such as Lisp's or Scheme's?
               | 
               | Maybe I should refer to them as 'Rivest S-expressions' or
               | 'SPKI S-expressions' instead.
               | 
               | > S-expressions don't have any way to encode type
               | information in the data itself, you need a schema to know
               | if a certain value should be treated as a number or a
               | string.
               | 
               | Neither does JSON, as this whole thread indicates. This
                | applies to other data types, too: while a Rivest
                | expression could be
                | 
                |     (date [iso8601]2025-05-24T12:37:21Z)
                | 
                | JSON is stuck with:
                | 
                |     {
                |       "date": "2025-05-24T12:37:21Z"
                |     }
               | 
               | > And it's subjective which is more readable.
               | 
               | I really disagree. The whole reason YAML exists is to
               | make JSON more readable. Within limits, the more data one
               | can have in a screenful of text, the better. JSON is so
               | terribly verbose if pretty-printed that it takes up
               | screens and screens of text to represent a small amount
               | of data -- and when not pretty-printed, it is close to
               | trying to read a memory trace.
               | 
               | Edit: updated link to the January 2025 proposal.
        
               | antonvs wrote:
               | That Rivest draft defines canonical S-expressions to be
               | the format in which every token is preceded by its
               | length, so it's confusing to use "canonical" to describe
               | the whole proposal, or use it as a synonym for the
               | "advanced" S-expressions that the draft describes.
               | 
               | But that perhaps hints at some reasons that formats like
               | JSON tend to win popularity contests over formats like
               | Rivest's. JSON is a single format for authoring and
               | reading, which doesn't address transport at all. The name
               | is short, pronounceable (vs. "spikky" perhaps?), and
               | clearly refers to one thing - there's no ambiguity about
               | whether you might be talking about a transport encoding
                | instead.
               | 
               | I'm not saying these are good reasons to adopt JSON over
               | SPKI, just that there's a level of ambition in Rivest's
               | proposal which is a poor match for how adoption tends to
               | work in the real world.
               | 
               | There are several mechanism for JSON transport encoding -
               | including plain old gzip, but also more specific formats
               | like MessagePack. There isn't one single standard for it,
               | but as it turns out that really isn't that important.
               | 
               | Arguably there's a kind of violation of separation of
               | concerns happening in a proposal that tries to define all
               | these things at once: "a canonical form ... two transport
               | representations, and ... an advanced format".
        
               | kevin_thibedeau wrote:
               | > clearly refers to one thing
               | 
               | Great, this looks like JSON. Is it JSON5? Does it expect
               | bigint support? Can I use escape chars?
        
               | antonvs wrote:
               | You're providing an example of my point. People don't, in
               | general, care about any of that, so "solving" those
               | "problems" isn't likely to help adoption.
               | 
               | To your specific points:
               | 
               | 1. JSON5 didn't exist when JSON adoption occurred, and in
               | any case they're pretty easy to tell apart, because JSON
               | requires keys to be quoted. This is a non-problem. Why do
               | you think it might matter? Not to mention that the
               | existence of some other format that resembles JSON is
               | hardly a reflection on JSON itself, except perhaps as a
               | compliment to its perceived usefulness.
               | 
               | 2. Bigint support is not a requirement that most people
               | have. It makes no difference to adoption.
               | 
               | 3. Escape character handling is pretty well defined in
               | ECMA 404. Your point is so obscure I don't even know
               | specifically what you might be referring to.
        
               | thayne wrote:
               | I agree with most of what you said, but json's numbers
               | are problematic. For one thing, many languages have
               | 64-bit integers, which can't be precisely represented as
               | a double, so serializing such a value can lead to subtle
               | bug if it is deserialized by a parser that only supports
               | doubles. And deserializing in languages that have
               | multiple numeric types is complicated, since the parser
               | often doesn't have enough context to know what the best
               | numeric type to use is.
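The int64-vs-double hazard is easy to demonstrate: 2**53 + 1 is a perfectly valid 64-bit integer, but the nearest double is 2**53, so a double-only parser silently shifts it by one.

```python
import json

n = 2**53 + 1
print(json.dumps(n))  # "9007199254740993" -- Python serializes it exactly
print(int(float(n)))  # 9007199254740992 -- what a double-based decoder sees
```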
        
               | wat10000 wrote:
               | JSON also had the major advantage of having an enormous
               | ecosystem from day 1. It was ugly and kind of insecure,
               | but the fact that every JavaScript implementation could
               | already parse and emit JSON out of the box was a huge
               | boost. It's hard to beat that even if you have the best
               | format in the world.
        
               | antonvs wrote:
               | Haha yes, that does probably dwarf any other factors.
               | 
               | But still, I think if the original JSON spec had been
               | longer and more comprehensive, along the lines of
               | Rivest's, that could have limited JSON's popularity, or
               | resulted in people just ignoring parts of it and focusing
               | on the parts they found useful.
               | 
               | The original JSON RFC-4627 was about 1/3rd the size of
               | the original Rivest draft (a body of 260 lines vs. 750);
               | it defines a single representation instead of four; and
               | e.g. the section on "Encoding" is just 3 sentences. Here
               | it is, for reference:
               | https://www.ietf.org/rfc/rfc4627.txt
        
               | wat10000 wrote:
               | We already see that a little bit. JSON in theory allows
               | arbitrary decimal numbers, but in practice it's almost
               | always limited to numbers that are representable as an
               | IEEE-754 double. It used to allow UTF-16 and UTF-32, but
               | in practice only UTF-8 was widely accepted, and that
               | eventually got reflected in the spec.
               | 
               | I'm sure you're right. If even this simple spec exceeded
               | what people would actually use as a real standard, surely
               | anything beyond that would also be left by the wayside.
        
             | michaelcampbell wrote:
             | > is far easier to read than this
             | 
             | Readability is a function of the reader, not the medium.
        
             | remram wrote:
             | This doesn't help with numbers at all, though. Any textual
             | representation of numbers is going to have the same problem
             | as JSON.
        
             | NooneAtAll3 wrote:
             | > I contend that this is far easier to read than this
             | 
              | oh boi, that's some Lisp-like vs C-like level of holy war
             | you just uncovered there
             | 
             | and wooow my opinion is opposite of yours
        
           | pharrington wrote:
           | The entire reason ACME exists is because you are never
           | writing or reading the CSR by hand.
           | 
           | So of course, ACME is based around a format whose entire
            | raison d'être is being written and read by hand.
           | 
           | It's weird.
        
             | thayne wrote:
             | The reason json is a good format for ACME isn't that it is
             | easy to read and write by hand[1], but that most languages
             | have at least one decent json implementation available, so
             | it is easier to implement clients in many different
             | languages.
             | 
             | [1]: although being easy to read by humans is an advantage
             | when debugging why something isn't working.
        
           | motorest wrote:
           | > I feel like not understanding why JSON won out is being
           | intentionally obtuse. JSON can easily be hand written,
           | edited, and read for most data.
           | 
           | You are going way out of your way to try to come up with ways
           | to rationalize why JSON was a success. The ugly truth is far
           | simpler than what you're trying to sell: it was valid
           | JavaScript. JavaScript WebApps could parse JSON with a call
           | to eval(). No deserialization madness like XML, no need to
           | import a parser. Just fetch a file, pass it to eval(), and
           | you're done.
        
             | amne wrote:
             | it's in the name after all: [j]ava[s]cript [o]bject
             | [n]otation
        
             | jaapz wrote:
             | But also, all the other reasons written by the person you
             | replied to
        
           | beeflet wrote:
           | you can use a program to convert between s-expressions and a
           | more readable format. In a world where canonical
           | s-expressions rule, this "more readable format" would
           | probably be an ordinary s-expression
        
           | lisper wrote:
           | > Canonical S-expressions are not as easy to read and much
           | harder to write by hand
           | 
           | You don't do that, any more than you read or write machine
           | code in binary. You read and write regular S-expressions (or
           | assembly code) and you translate that into and out of
           | canonical S expressions (or machine code) with a tool (an
           | assembler/disassembler).
        
             | cortesoft wrote:
             | I have written by hand and read JSON hundreds of times. You
             | can tell me I shouldn't, but I am telling you I do. Messing
             | around with an API with curl, tweaking a request object
             | slightly for testing something, etc.
             | 
             | Reading happens even more times. I am constantly printing
             | out API responses when I am coding, verifying what I am
             | seeing matches what I am expecting, or trying to get an
             | idea of the structure of something. Sure, you can tell me I
             | shouldn't do this and I should just read a spec, but in my
             | experience it is often much faster just to read the JSON
             | directly. Sometimes the spec is outdated, just plain wrong,
             | or doesn't exist. Being able to read the JSON is a regular
             | part of my day.
        
               | lisper wrote:
               | I think there may be a terminological disconnect here.
               | S-expressions and _canonical_ S-expressions are not the
                | same thing. S-expressions (non-canonical) are
                | comparable to JSON, intended to be read and written by
                | humans, and actually much easier to read and write than
                | JSON because they use less punctuation.
               | 
               | https://en.wikipedia.org/wiki/S-expression
               | 
               | A _canonical_ S-expression is a binary format, intended
               | to be both generated and parsed by machines, not humans:
               | 
               | https://en.wikipedia.org/wiki/Canonical_S-expressions
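For a concrete feel for the machine-oriented format lisper describes: a canonical S-expression is built from length-prefixed byte strings (netstrings), not human-readable tokens. A minimal Python sketch of Rivest-style encoding (the helper names are my own):

```python
def atom(data: bytes) -> bytes:
    # A csexp atom is "<decimal length>:<raw bytes>" -- a netstring.
    return str(len(data)).encode() + b":" + data


def lst(*items: bytes) -> bytes:
    # A csexp list wraps its already-encoded items in parentheses.
    return b"(" + b"".join(items) + b")"


# The RSA public exponent 65537 as raw big-endian bytes, inside a list:
enc = lst(atom(b"e"), atom(b"\x01\x00\x01"))
assert enc == b"(1:e3:\x01\x00\x01)"
```

The length prefixes make the encoding unambiguous and byte-exact, which is why it suits signing, but also why nobody writes it by hand.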
        
         | TZubiri wrote:
         | It feels like malpractice to use json in encryption
        
           | red_admiral wrote:
           | Sadly JWT and friends are "standard". In theory the
           | representation and the data are independent and you can
           | marshal and unmarshal correctly.
           | 
           | In practice, "alg:none" is a headache and everyone involved
           | should be ashamed.
        
         | mindcrime wrote:
         | JSON is better than XML, but it really isn't great.
         | 
         | JSON doesn't even support comments, c'mon. I mean, it's handy
         | for some things, but I don't know if I'd say "JSON is better
         | than XML" in any universal sense. I still go by the old saw
         | "use the right tool for the job at hand". In some cases maybe
         | it's JSON. In others XML. In others S-Exprs encoded in EBCDIC
         | or something. Whatever works...
        
           | deepsun wrote:
           | Yup, imagine if HTML was JSON-like, not XML-like.
        
         | supermatt wrote:
         | As you said - it's not really a problem with the JSON structure
         | and format itself, but the underlying parser, which is
         | specifically designed to map to the initial js types. There are
         | parsers that don't have this problem, but then the JSON itself
         | is not portable.
         | 
         | The problem with your solution is that it's also not portable
         | for the same reason (it's not part of the standard), and the
         | reason that it wasn't done that way in the first place is
         | because it wouldn't map to those initial js types!
         | 
          | FYI, you can easily work around this by using the replacer and
          | reviver hooks that are part of the standard stringify and parse
          | APIs and treating numbers differently. But again, the JSON isn't
          | portable to places without those replacers/revivers.
         | 
         | I.e, the real problem is treating something that looks like
         | json as json by using standards compliant json parsers - not
         | the apparent structure of the format itself. You could fix this
         | problem in an instant by calling it something other than JSON,
         | but people will see it and still use a JSON parser because it
         | looks like JSON, not because it is JSON.
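The replacer/reviver point has an analogue outside JavaScript too; Python's json module exposes `parse_int`/`parse_float` hooks, so a sketch of a "lossless" parse looks like:

```python
import json
from decimal import Decimal

# Python's analogue of a reviver: parse every JSON number as a Decimal
# so nothing is silently rounded to a double.
doc = json.loads(
    '{"e": 9007199254740993}',
    parse_int=Decimal,
    parse_float=Decimal,
)
assert doc["e"] == Decimal(9007199254740993)

# A double-only parser would round the same literal, 2**53 + 1,
# down to 9007199254740992.
```

As supermatt says, the catch is that such output is only "JSON" to consumers that install the same hooks.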
        
           | zelphirkalt wrote:
           | Isn't the actual problem that it is supposed to map to JS
           | types, which are badly designed, and thus being infectious
           | for other ecosystems, that don't have these defects?
        
         | tsimionescu wrote:
         | This seems like a just-so story. Your explanation could make
         | some sense if we were comparing {"e" : "AQAB"} to {"e" :
         | 65537}, but there is no reason why that should be the
         | alternative. The JSON {"e" : "65537"} will be read precisely
         | the same way by any JSON parser out there. Converting the
         | string "65537" to the number 65537 is exactly as easy (or
         | hard), but certainly unambiguous, as converting the string
         | "AQAB" to the same number.
         | 
         | Of course, if you're doing this in JS and have reasons to think
         | the resulting number may be larger than the precision of a
         | double, you have a huge problem either way. Just as you would
         | if you were writing this in C and thought the number may be
         | larger than what can fit in a long long. But that's true
         | regardless of how you represent it in JSON.
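Both conversions are indeed one-liners; a Python sketch comparing the two routes:

```python
import base64

# Decimal-string route: one parse.
assert int("65537") == 65537

# Base64url route, as ACME actually uses: "AQAB" decodes to the
# big-endian bytes 0x01 0x00 0x01, i.e. the same exponent.
s = "AQAB"
raw = base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))  # restore padding
assert raw == b"\x01\x00\x01"
assert int.from_bytes(raw, "big") == 65537
```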
        
           | deepsun wrote:
           | Some parsers, like PHP, may treat 65537 and "65537" the same.
           | Room for vulnerability.
        
             | int_19h wrote:
             | Why would they do so? It's semantically distinct JSON, even
             | JS itself treats it differently?
        
               | hiciu wrote:
               | It's PHP. Handling numbers in PHP is complicated enough
               | that a reasonable person would not trust it by default.
               | 
                | https://www.php.net/manual/en/language.types.numeric-strings...
        
               | dwattttt wrote:
               | Time for a trip to the Abbey of Hidden Absurdities.
               | 
               | http://www.thecodelesscode.com/case/161
        
           | pornel wrote:
           | For very big numbers (that could appear in these fields),
           | generating and parsing a base 10 decimal representation is
           | way more cumbersome than using their binary representation.
           | 
           | The DER encoding used in the TLS certificates uses the big
           | endian binary format. OpenSSL API wants the big endian binary
           | too.
           | 
           | The format used by this protocol _is_ a simple one.
           | 
           | It's almost exactly the format that is needed to use these
           | numbers, except JSON can't store binary data directly.
           | Converting binary to base 64 is a simple operation (just bit
           | twiddling, no division), and it's easier than converting
           | arbitrarily large numbers between base 2 and base 10. The
           | 17-bit value happens to be an easy one, but other values may
           | need thousands of bits.
           | 
           | It would be silly for the sender and recipient to need to use
           | a BigNum library when the sender has the bytes and the
           | recipient wants the bytes, and neither has use for a decimal
           | number.
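The size asymmetry pornel describes can be made concrete; in this sketch, random bytes stand in for a real modulus:

```python
import base64
import secrets

# 256 random bytes standing in for a 2048-bit RSA modulus.
modulus = secrets.token_bytes(256)

# base64url is pure bit regrouping: every 3 bytes become 4 characters.
b64 = base64.urlsafe_b64encode(modulus).rstrip(b"=")
assert len(b64) == 342  # ceil(256 * 8 / 6) characters, padding stripped

# The decimal form needs arbitrary-precision arithmetic: repeatedly
# dividing a 2048-bit integer by 10.
decimal = str(int.from_bytes(modulus, "big"))
assert len(decimal) <= 617  # log10(2**2048) is about 616.5
```

Python hides the bignum work behind `int`, but a C client would need a library like OpenSSL's BN just to print the decimal form, while the base64 form falls out of the bytes directly.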
        
         | mnahkies wrote:
         | I'm still haunted by a bug caused by the JSON serializer our C#
         | apps were using emitting bigints as JSON numbers, only for the
         | JavaScript consumers to mangle them silently.
         | 
         | Kinda blows my mind that the accepted behavior is to just
         | overflow and not raise an exception.
         | 
         | I try to stick to strings for anything that's not a 32 bit int
         | now.
        
         | fulafel wrote:
         | Is the correct number implementation really the exception? The
         | first 2 json decoders I just tried (Python & Clojure) worked
         | correctly with that example.
        
         | rendaw wrote:
         | Canonical S-expressions don't have an object/mapping type,
         | which means you can't have generic tooling unambiguously
         | perform certain common operations like data merges.
        
         | rr808 wrote:
         | > JSON is better than XML
         | 
         | hard disagree on that one.
        
           | motorest wrote:
           | > hard disagree on that one.
           | 
           | Ok, thanks for your insight.
        
             | rr808 wrote:
             | You're welcome! I hope it was helpful.
        
         | zubspace wrote:
         | Wouldn't it just solve a whole lot of problems if we could just
         | add optional type declarations to json? It seems so simple and
         | obvious that I'm kinda dumbfounded that this is not a thing
         | yet. Most of the time you would not need it, but it would
         | prevent the parser from making a wrong guess in all those edge
         | cases.
         | 
         | Probably there are types not every parser/language can accept,
         | but at least it could throw a meaningful error instead of
         | guessing or even truncating the value.
        
           | ivanbakel wrote:
           | I doubt that would fix the issue. The real cause is that
           | programmers mostly deal in fixed-size integers, and that's
           | how they think of integer values, since those are the
           | concepts their languages provide. If you're going to write a
           | JSON library for your favourite programming language, you're
           | going to reach for whatever ints are the default, regardless
           | of what the specs or type hints suggest.
           | 
           | Haskell's Aeson library is one of the few exceptions I've
           | seen, since it only parses numbers to 'Scientific's
           | (essentially a kind of bigint for rationals.) This makes the
           | API very safe, but also incredibly annoying to use if you
           | want to just munge some integers, since you're forced to
           | handle the error case of the unbounded values not fitting in
           | your fixed-size integer values.
           | 
           | Most programmers likely simply either don't consider that
           | case, or don't want to have to deal with it, so bad JSON
           | libraries are the default.
        
           | movpasd wrote:
           | This is actually a deliberate design choice, which the
           | breathtakingly short JSON standard explains quite well [0].
            | The designers deliberately didn't introduce any semantics and
            | pushed all that to the implementors. I think this is a
           | defensible design goal. If you introduce semantics, you're
           | sure to annoy someone.
           | 
           | There's an element of "worse is better" here [1]. JSON
           | overtook XML exactly because it's so simple and solves for
           | the social element of communication between disparate
           | projects with wildly different philosophies, like UNIX byte-
           | oriented I/O streams, or like the C calling conventions.
           | 
           | ---
           | 
           | [0] https://ecma-international.org/publications-and-
           | standards/st...
           | 
           | [1] https://en.wikipedia.org/wiki/Worse_is_better
        
         | josephg wrote:
         | The funny thing about this is that JavaScript the language has
         | had support for BigIntegers for many years at this point. You
         | can just write 123n for a bigint of 123.
         | 
         | JSON could easily be extended to support them - but there's no
         | standards body with the authority to make a change like that.
         | So we're probably stuck with json as-is forever. I really hope
         | something better comes along that we can all agree on before I
         | die of old age.
         | 
         | While we're at it, I'd also love a way to embed binary data in
         | json. And a canonical way to represent dates. And comments. And
         | I'd like a sane, consistent way to express sum types. And sets
         | and maps (with non string keys) - which JavaScript also
         | natively supports. Sigh.
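The grammar itself wouldn't even need to change for big integers: JSON numbers already have unbounded precision on paper, and some parsers preserve it. A Python illustration:

```python
import json

# The JSON grammar places no bound on number size, and Python's
# parser keeps full precision:
huge = 10**40 + 1
text = json.dumps(huge)
assert json.loads(text) == huge
assert len(text) == 41  # all 41 digits survive

# A JavaScript consumer parsing the same text would get a rounded
# double back -- which is why adding a BigInt literal like 123n to
# JSON would require every deployed parser to change at once.
```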
        
           | aapoalas wrote:
           | It's more a problem of support and backwards compatibility.
           | JSON and parsers for it are so ubiquitous, and the spec
           | completely lacks any versioning support, that adding a
           | feature would be a breaking change of horrible magnitude, on
           | nearly all levels of the modern software infrastructure
           | stack. I wouldn't be surprised if some CPUs might break from
           | that :D
           | 
           | JSON is a victim of its success: it has become too big to
           | fail, and too big to improve.
        
           | Sammi wrote:
            | There are easy workarounds to getting bigints in JSON:
            | https://github.com/GoogleChromeLabs/jsbi/issues/30#issuecomm...
        
         | ownedthx wrote:
         | The numerical issues here are due to JavaScript, not JSON.
        
         | llm_nerd wrote:
         | >JSON is better than XML
         | 
         | JSON is still hack garbage compared to XML from the turn of the
         | millennia. Like most dominant tech standards, JSON took hold
         | purely because many developers are intellectually lazy and it
         | was easier to slam some sloppy JSON together than to understand
         | XML.
         | 
         | XML with XSD, XPath and XQuery is simply a glorious
         | combination.
        
       | matja wrote:
       | Lucky that 415031 is prime :)
       | 
       | The steps described in the article sound familiar to the process
       | done in the early 2000's, but I'm not sure why you'd want to make
       | it hard for yourself now.
       | 
       | I use certbot with "--preferred-challenges dns-01" and "--manual-
       | auth-hook" / "--manual-cleanup-hook" to dynamically create DNS
       | records, rather than needing to modify the webserver config (and
        | the security/access risks that come with that). It just needs putting
       | the cert/key in the right place and reloading the
       | webserver/loadbalancer.
        
       | wolf550e wrote:
       | Implementing an ACME client in python using pyca/cryptography (or
       | in Go) would be fine, but why do it in C++ ?
        
         | mr_toad wrote:
         | Not everyone wants to deal with maintaining Python and untold
         | dependencies on their web server. A C++ binary often has no
         | additional dependencies, and even if it does they'll be dealt
         | with by the OS package manager.
        
           | wolf550e wrote:
           | I think uv[1] basically solved this problem for python
           | scripts. Go creates statically linked executables that are
           | easy to deploy.
           | 
           | 1 - https://docs.astral.sh/uv/guides/scripts/
        
           | fireflash38 wrote:
           | Docker/podman?
        
             | dmitrygr wrote:
              | Not everyone wants to spin up a multi-GB container when
              | an 80KB C++ binary would do...
        
       | dang wrote:
       | Related from before:
       | 
       |  _Why I still have an old-school cert on my HTTPS site_ -
       | https://news.ycombinator.com/item?id=34242028 - Jan 2023 (63
       | comments)
        
       | Arnavion wrote:
       | If you want to actually implement an ACME client from first
       | principles, reading the RFC (plus related RFCs for JOSE etc) is
       | probably easier than you think. I did exactly that when I made a
       | client for myself.
       | 
       | I also wrote up a digested description of the issuance flow here:
       | https://www.arnavion.dev/blog/2019-06-01-how-does-acme-v2-wo...
       | It's not a replacement for reading the RFCs, but it presents the
       | information in the sequence that you would follow for issuance,
       | so think of it like an index to the RFC sections.
        
         | anishathalye wrote:
         | Implementing an ACME client is part of the final lab assignment
         | for MIT's security class:
         | https://css.csail.mit.edu/6.858/2023/labs/lab5.html
        
           | Bluecobra wrote:
            | Nice, thanks! I've been wanting to learn it, as dealing with
            | cert expirations every year is a pain. My guess is that we
           | will have 24 hour certs at some point.
        
             | justusthane wrote:
             | I don't know about 24 hours, but it will be 47 days in
             | 2029.
        
           | jazzyjackson wrote:
           | Looks like a good class; is it only available to enrolled
           | students? videos seem to be behind a log-in wall.
        
             | anishathalye wrote:
             | Looks like the 2023 lectures weren't uploaded to YouTube,
             | but the lectures from earlier iterations of the class,
             | including 2022, are available publicly. For example, see
             | the YouTube links on https://css.csail.mit.edu/6.858/2022/
             | 
             | (6.858 is the old name of the class, it was renamed to
             | 6.5660 recently.)
        
       | heraldgeezer wrote:
       | Devs need to be sysadmins also.
        
       | donnachangstein wrote:
       | OpenBSD has a dead-simple lightweight ACME client (written in C)
       | as part of the base OS. No need to roll your own. I understand it
       | was created because existing alternatives ARE bloatware and
       | against their Unixy philosophy.
       | 
       | Perhaps the author wasn't looking hard enough. It could probably
       | be ported with little effort.
        
         | zh3 wrote:
          | Or uacme [0] - a little bit of C that's been running perfectly
         | since endless battery failures with the LE python client made
         | us look for something that would last longer.
         | 
         | [0] https://github.com/ndilieto/uacme
        
         | seanw444 wrote:
         | Yeah, was looking for someone to comment this. I use it. Works
         | great.
        
         | tialaramex wrote:
         | When I last checked this client is a classic example of OpenBSD
         | philosophy not understanding why security is the way it is.
         | 
         | This client really wants the easy case where the client lives
         | on the machine which owns the name and is running the web
         | server, and then it uses OpenBSD-specific partitioning so that
         | elements of the client can't easily taint one another if
         | they're defective
         | 
         | But, the ACME protocol would allow actual air gapping - the
         | protocol doesn't care whether the machine which needs a
         | certificate, the machine running an ACME client, and the
         | machine controlling the name are three separate machines,
          | that's fine, which means if we _do not_ use this OpenBSD
          | all-in-one client we can have a web server which literally
          | doesn't do ACME at all, an ACME client machine which has no
          | permission
         | to serve web pages or anything like that, and name servers
         | which also know nothing about ACME and yet the whole system
         | works.
         | 
         | That's more effort than "I just install OpenBSD" but it's how
         | this was designed to deliver security rather than putting all
         | our trust in OpenBSD to be bug-free.
        
           | donnachangstein wrote:
           | I said it was dead-simple and you delivered a treatise
           | describing the most complex use case possible. Then maybe
           | it's not for you.
           | 
           | Most software in the OpenBSD base system lacks features on
           | purpose. Their dev team frequently rejects patches and
           | feature requests without compelling reasons to exist. Less
           | features means less places for things to go wrong means less
           | chance of security bugs.
           | 
           | It exists so their simple webserver (also in the base system)
           | has ACME support working out of the box. No third party
           | software to install, no bullshit to configure, everything
           | just works as part of a super compact OS. Which to this day
           | still fits on a single CD-ROM.
           | 
           | Most of all no stupid Rust compiler needed so it works on
           | i386 (Rust cannot self-host on i386 because it's so bloated
           | it runs out of memory, which is why Rust tools are not
           | included in i386).
           | 
           | If your needs exceed this or you adore complexity then feel
           | free to look elsewhere.
        
         | rollcat wrote:
         | Came here to mention this.
         | 
         | Man page: https://man.openbsd.org/man1/acme-client.1
         | 
         | Source:
         | https://github.com/openbsd/src/tree/master/usr.sbin/acme-cli...
        
       | Aachen wrote:
       | They thought it was too complex and therefore insecure so the
       | natural solution was to roll your own implementation and now they
       | feel comfortable running that version of it?!
       | 
       | Edit: to be clear, I'd not be too surprised if their homegrown
       | client survives an audit unscathed, I'm sure they're a great
        | coder, but the odds just don't seem better than the
       | alternative of using an existing client that was already audited
       | by professionals as well as other people
        
       | anonymousiam wrote:
       | "Skip the first 00 for some inexplicable reason" is something
       | that caught me a few months ago. I was comparing keys in a script
       | and they did not match because of the leading 00.
       | 
       | Does anyone know why they're there?
        
         | aaronmdjones wrote:
         | If the leading bit is set, it could be interpreted as a signed
         | negative number. Prepending 00 guarantees that this doesn't
         | happen.
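A small Python sketch of the sign ambiguity aaronmdjones describes (the value is arbitrary):

```python
# Big-endian bytes of a positive number whose top bit happens to be set.
n = 0xC0FFEE
raw = n.to_bytes(3, "big")  # b'\xc0\xff\xee'

# An unsigned reader recovers the value unchanged...
assert int.from_bytes(raw, "big") == n

# ...but a signed (two's-complement) reader -- which is what an ASN.1
# INTEGER mandates -- sees a negative number:
assert int.from_bytes(raw, "big", signed=True) < 0

# Prepending 0x00 makes the signed reading unambiguous again:
assert int.from_bytes(b"\x00" + raw, "big", signed=True) == n
```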
        
           | anonymousiam wrote:
           | Okay, so why isn't it done consistently? Some tools report
           | the leading 00 and some don't.
           | 
           | I don't really buy this explanation. It's a very large
           | unsigned number. Everyone knows this. Is there some arbitrary
           | precision library in use that forces large integers to be
           | signed? Even if it were signed, or had the MSB set, it
           | wouldn't change any of the bits, so the key would still be
           | the same. So why would we care about the sign?
        
             | mras0 wrote:
             | The standard format for RSA private keys is ASN.1
             | (https://www.rfc-editor.org/rfc/rfc8017#appendix-C) with
             | the components encoded as INTEGERs. An INTEGER is always
             | signed in ASN.1, so you need the leading 0 byte if the MSB
             | of your positive number is set.
             | 
             | OpenSSL is just dumping the raw bytes comprising the value.
             | Tools that don't show a leading zero in this case are doing
             | a bit more interpretation (or just treating it as an
             | unsigned value) to show you the number you expect.
        
             | aaronmdjones wrote:
             | > Okay, so why isn't it done consistently? Some tools
             | report the leading 00 and some don't.
             | 
             | This is probably a bug (where an unsigned integer with its
             | high bit set is not printed with a leading 00) and should
             | be reported.
             | 
             | Note that RSA key moduli generated by OpenSSL will always
             | have the high bit set, and so will always have 00 prepended
             | when you ask it to print them. The same is not necessarily
             | true of other integers.
             | 
              | This is trivial to demonstrate:
              | 
              |     $ while true ; do openssl genrsa 1024 2>/dev/null |
              |       openssl rsa -text 2>/dev/null | grep -A1 modulus |
              |       tail -n1 | egrep -v '^\s*00:[89abcdef]' ; done
             | 
             | > I don't really buy this explanation. It's a very large
             | unsigned number. Everyone knows this.
             | 
             | Everyone knows that an RSA modulus is a very large unsigned
             | number yes. Not everyone knows that every number is
             | unsigned.
             | 
             | > Is there some arbitrary precision library in use that
             | forces large integers to be signed?
             | 
             | OpenSSL's own BN (BigNum) library, which tests if the high
             | bit is set in the input (line 482):
             | 
              | https://github.com/openssl/openssl/blob/a0d1af6574ae6a0e3872...
             | 
             | > Even if it were signed, or had the MSB set, it wouldn't
             | change any of the bits, so the key would still be the same.
             | So why would we care about the sign?
             | 
             | Because the encoding doesn't care about the context. RFC
             | 3279 specifies that the modulus and exponent are encoded as
             | INTEGERs:
             | 
             | https://datatracker.ietf.org/doc/html/rfc3279#section-2.3.1
             | 
             | ... and INTEGERs are signed (which means OpenSSL has to use
             | signedness == SIGNED):
             | 
              | https://learn.microsoft.com/en-us/windows/win32/seccertenrol...
              | 
              |     Integer values are encoded into a TLV triplet that
              |     begins with a Tag value of 0x02. The Value field of
              |     the TLV triplet contains the encoded integer if it is
              |     positive, or its two's complement if it is negative.
              |     If the integer is positive but the high order bit is
              |     set to 1, a leading 0x00 is added to the content to
              |     indicate that the number is not negative.
             | 
             | See also the canonical specification (page 15, section
              | 8.3): https://www.itu.int/ITU-T/studygroups/com17/languages/X.690-...
             | 
             | This is exactly the same way that signed integers are
             | represented in e.g. x86 (minus the leading tag and length
             | fields) -- if the leading bit is set, the number is
             | negative.
             | 
             | You're right that it wouldn't change any of the key's bits,
             | but it would change the math performed on them, in a manner
             | that would break it.
        
               | mras0 wrote:
               | > moduli generated by OpenSSL will always have the high
               | bit set
               | 
                | Correct for 1024, but...
                | 
                |     openssl genrsa 1023 | openssl rsa -text -noout
               | 
               | :)
               | 
               | Also just noticed that openssl rsa actually has a
               | -modulus switch so you can make do with "cut -b9-"
        
               | aaronmdjones wrote:
               | Ah, yeah, should have clarified that for any 8n-bit
               | moduli it will be set.
        
               | anonymousiam wrote:
               | Excellent reply. Thank you!
        
             | BlackFly wrote:
             | Java BigInteger is signed.
             | 
             | https://docs.oracle.com/javase/8/docs/api/java/math/BigInte
             | g...
             | 
             | You deserialize with a leading 0x80 you don't get a working
             | key. You can check the bit or you can just prepend a 0x00.
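Python's int.from_bytes with signed=True fails the same way as Java's BigInteger(byte[]) constructor here; a toy two-byte "modulus" makes the point (a real modulus is just longer):

```python
# A modulus whose first byte has the high bit set parses as a negative
# number in a signed big-integer constructor; prepending 0x00 fixes it.
modulus = bytes([0x80, 0x01])       # toy stand-in for a real modulus

wrong = int.from_bytes(modulus, "big", signed=True)
right = int.from_bytes(b"\x00" + modulus, "big", signed=True)

print(wrong)   # -32767: garbage key material
print(right)   # 32769: the intended value
```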
        
       | abujazar wrote:
       | Lost me at "make an RSA key". RSA is ancient.
        
         | wolf550e wrote:
         | They also support P-256 and P-384 ECDSA, but I think 4096 bit
         | RSA is ok for account key.
         | 
         | https://letsencrypt.org/docs/integration-guide/#supported-ke...
        
         | Slasher1337 wrote:
         | yes, LetsEncrypt really should support ed25519. Sadly, they do
         | not.
        
         | upofadown wrote:
          | The Pythagorean theorem is thousands of years old. Still
          | valid. RSA ... still secure.
        
       | jlundberg wrote:
        | acme_tiny.py is a good choice of client for anyone who doesn't
        | want to write a client from scratch -- but still wants to
        | review the code.
        
       | mcpherrinm wrote:
       | I'm the technical lead for the Let's Encrypt SRE/infra team. So I
       | spend a lot of time thinking about this.
       | 
       | The salt here is deserved! JSON Web Signatures are a gnarly
       | format, and the ACME API is pretty enthusiastic about being
       | RESTful.
       | 
       | It's not what I'd design. I think a lot of that came via the IETF
       | wanting to use other IETF standards, and a dash of design-by-
       | committee.
       | 
       | A few libraries (for JWS, JSON and HTTP) go a long way to making
       | it more pleasant but those libraries themselves aren't always
       | that nice, especially in C.
       | 
       | I'm working on an interactive client and accompanying
       | documentation to help here too, because the RFC language is a bit
       | dense and often refers to other documents too.
        
         | dwedge wrote:
         | What is she talking about that you have to pay for certs if you
         | want more than 3? Am I about to get a bill for the past 5 years
         | or did she just misunderstand?
        
           | belorn wrote:
            | To quote the article (or rather, the 2023 article, which
            | is the one mentioning the number 3):
           | 
           | "Somehow, a couple of weeks ago, I found this other site
           | which claimed to be better than LE and which used relatively
           | simple HTTP requests without a bunch of funny data types."
           | 
           | "This is when the fine print finally appeared. This service
           | only lets you mint 90 day certificates on the free tier.
           | Also, you can only do three of them. Then you're done. 270
           | days for one domain or 3 domains for 90 days, and then you're
           | screwed. Isn't that great? "
           | 
            | She doesn't mention what this "other site" is.
        
             | jchw wrote:
             | FWIW, it is ZeroSSL. I want there to be more major ACME
             | providers than just LE, but I'm not sure about ZeroSSL,
             | personally. It seems to have the same parent company as
             | IdenTrust (HID Global Corporation). Probably a step up from
             | Honest Achmed but recently I recall people complaining that
             | their EV code signing certificates were not actually
             | trusted by Windows which is... Interesting.
        
               | killjoywashere wrote:
               | IdenTrust participates in the US Federal PKI ecosystem,
               | so they likely have strong incentives to charge
               | exorbitantly. Those free certs are probably meant to
               | facilitate development of gov-specific capabilities by
               | random subcontractors long enough to figure out how to
               | structure a contract mod that passes the anticipated cost
               | onto the government.
               | 
               | Don't hate the player, hate the game.
        
               | AStonesThrow wrote:
               | > Honest Achmed
               | 
               | I had to stop and Google that, wondering if it was a
               | pastiche of "Akbar & Jeff's Certificate Hut"...
               | 
               | https://bugzilla.mozilla.org/show_bug.cgi?id=647959
        
               | jchw wrote:
               | I'm glad to give you an xkcd 1053 moment. Honest Achmed
               | is one for the books.
        
               | nickf wrote:
               | ZeroSSL is owned by Identrust, but the infra is operated
               | by another CA. Also Microsoft killed EV codesigning early
               | last year - not stopping it working, just making it
               | identical to 'normal' codesigning certs.
        
               | mkup wrote:
               | Could you please provide more info on this topic, e.g. a
               | link? I intended to buy EV code signing certificate as a
               | sole proprietor to fix long-standing problem with my
               | software when Windows Defender pops up every time I
               | release a new version. Is EV code signing certificate no
               | longer a viable solution to this problem? Is there no
               | longer a difference between EV and non-EV code signing
               | certificate?
        
               | nickf wrote:
               | Sure: https://learn.microsoft.com/en-us/security/trusted-
               | root/prog...
               | 
               | 3.D.3 covers the details about EV CS.
        
               | arccy wrote:
               | Google's CA offers them for free via ACME
               | https://pki.goog/
        
               | jchw wrote:
               | That's pretty cool, though it does seem that you need to
               | authenticate with a GCP account. A little bit less
               | convenient. I do think there are actually a few other
                | providers of ACME out there that require registration
                | beforehand; ZeroSSL actually offers it without pre-
                | registration, like Let's Encrypt.
        
               | rmetzler wrote:
               | A while ago I saw that acme.sh now uses ZeroSSL by
               | default.
               | 
               | https://github.com/acmesh-
               | official/acme.sh/blob/42bbd1b44af4...
        
               | _hyn3 wrote:
               | "We now have another confirmation on Twitter that remote
               | code is executed and a glimpse into what the script is...
               | _it appears to be benign._ "
               | 
               | https://github.com/acmesh-official/acme.sh/issues/4659
               | 
               | It was not. Don't use acme.sh.
        
               | rsync wrote:
               | I went down the acme/HiCA/RCE rabbit hole a year or so
               | ago and, while I don't remember the specifics, my
               | _feeling_ was that the RCE was not that dangerous and was
               | put into place by greedy scammers thwarting the rules of
               | cert (re)selling and not by shadowy actors trying to
               | infiltrate sensitive infra ...
               | 
               | Is there new information ? Was my impression wrong ?
        
               | birktj wrote:
               | Buypass provides ACME certificates as well [1]. The usage
               | limits are not quite as generous as LE, but they work
               | pretty well in my experience.
               | 
               | [1] https://www.buypass.com/products/tls-ssl-
               | certificates/read-m...
        
         | cryptonector wrote:
         | > JSON Web Signatures are a gnarly format
         | 
         | They are??
         | 
         | As someone who wallows in ASN.1, Kerberos, and PKI, I don't
         | find JWS so "gnarly". Even if you're open-coding a JSON Web
          | Signature, it will be easier than open-coding S/MIME, CMS,
         | Kerberos, etc. Can you explain what is so gnarly about JWS?
         | 
          | Mind you, there are problems with JWT. Mainly that HTTP user-
          | agents don't know how to fetch the darned things, because
          | there is no standard for how to discover them, when you
          | should honor a request for them, etc.
        
           | asimops wrote:
           | Don't you think you are falling for classic whataboutism
           | here?
           | 
           | Just because ASN.1 and friends are exceptionally bad, it does
           | not mean that Json Web * cannot be bad also.
        
             | cryptonector wrote:
             | > Don't you think you are falling for classic whataboutism
             | here?
             | 
             | I do not. This sort of codec complexity can't be avoided.
             | And ASN.1 is NOT "exceptionally bad" -- I rather like
             | ASN.1. The point was not "wait till you see ASN.1", but
             | "wait till you see Kerberos" because Kerberos requires a
             | very large amount of client-side smarts -- too much really
             | because it's got more than 30 years of cruft.
        
           | mcpherrinm wrote:
           | I'd take ASN.1/DER over JWS any day :) It's the weekend and I
           | don't feel I have the energy to launch a full roast of JWS,
           | but to give some flavour, I'll link
           | 
           | https://auth0.com/blog/critical-vulnerabilities-in-json-
           | web-...
           | 
           | Implementations can be written securely, but it's too easy to
           | make mistakes.
           | 
           | Yeah, there's worse stuff from the 90s around, but JOSE and
           | ACME is newer than that - we could have done better!
           | 
           | Alas, it's not changing now.
           | 
           | I think ASN.1 has some warts, but I think a lot of the
           | problems with DER are actually in creaky old tools. People
           | seem way happier with Protobuf, for example: I think that's
           | largely down to tooling.
        
             | cryptonector wrote:
             | The whole not validating the signatures thing is a problem,
             | yes. That can happen with PKI certificates too, but those
             | have been around longer and -perhaps because one needed an
             | ASN.1 stack- only people with more experience wrote PKI
             | stacks than we see in the case of JWS?
             | 
             | I think Protocol Buffers is a disaster. Its syntax is worse
             | than ASN.1 because you're required to write in tags, and it
             | is a TLV encoding very similar to DER so... why _why_ does
             | PB exist? Don't tell me it's because there were no ASN.1
             | tools around -- there were no PB tools around either!
        
         | tasuki wrote:
         | > and the ACME API is pretty enthusiastic about being RESTful
         | 
         | Without looking at it, are you sure about that?
         | 
         | I once used to know what REST meant. Are you doing REST as in
         | HATEOAS or as in "we expose some http endpoints"?
        
           | peanut-walrus wrote:
           | REST has for a long long time meant "rpc via json over http".
           | HATEOAS is a mythical beast nobody has ever seen in the wild.
        
             | hamburglar wrote:
             | Eh, I think that's what it meant for a while. I've now
             | interacted with enough systems that have rigor about
             | representing things as resources that have GET urls and
             | doing writes with POST etc that I don't think it's always
             | the ad hoc RPC fest it once was. It may be rare to see
             | according-to-hoyle HATEOAS but REST is definitely no longer
             | in the "nobody actually does this" category.
        
           | mcpherrinm wrote:
           | Everything is an object, identified by a URL. You start from
           | a single URL (the directory), and you can find all the rest
           | of the resources from URLs provided from there.
           | 
           | ACME models everything as JSON objects, each of which is
           | identified by URL. You can GET them, and they link to other
           | objects with Location and Link headers.
           | 
           | To quote from the blog post:
           | 
           | > Dig around in the headers of the response, looking for one
           | named "Location". Don't follow it like a redirection. Why
           | would you ever follow a Location header in a HTTP header,
           | right? Nope, that's your user account's identifier! Yes, you
           | are a URL now.
           | 
            | I don't know if it's the pure ideal of HATEOAS, but it's about
           | as close as I've seen in use.
           | 
           | It has the classic failing though: it's used by scripts which
           | know exactly what they want to do (get a cert), so the
           | clients still hardcode the actions they need. It just adds a
           | layer of indirection as they need to keep track of URLs.
           | 
           | I would have preferred if it was just an RPC-over-HTTP/JSON
           | with fixed endpoints and numeric object IDs.
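The flow described above can be sketched. The endpoint URLs below are made up, but the directory field names are the ones RFC 8555 defines:

```python
import json

# Everything hangs off one directory object whose values are URLs.
# A client hardcodes the *names* of the actions it wants and follows
# whatever URL the directory hands back -- the layer of indirection.
directory = json.loads("""
{
  "newNonce":   "https://acme.example/acme/new-nonce",
  "newAccount": "https://acme.example/acme/new-acct",
  "newOrder":   "https://acme.example/acme/new-order",
  "revokeCert": "https://acme.example/acme/revoke-cert",
  "keyChange":  "https://acme.example/acme/key-change"
}
""")

order_url = directory["newOrder"]   # where a cert request would start
print(order_url)
```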
        
             | tasuki wrote:
             | That's pretty good! Better than 99% claims of REST for
             | sure! Thanks for the long reply.
        
       | dwedge wrote:
       | I really don't understand why this blog gets so much traction
       | here. Ranting against rss scrapes, FUD about some atop
       | vulnerability that turned out to be nothing, and thinking you
       | have to pay for acme certs and caring about the way it's parsed?
        
         | AStonesThrow wrote:
         | Well, I will perhaps endure flak or downvotes for pointing a
         | few things out, but Rachel:
         | 
         | - Is female [TIL the term "wogrammer"]
         | 
         | - Works for Facebook [formerly Rackspace and Google] so an
         | undeniably _Big MAMAA_
         | 
         | - Has been blogging prolifically for at least 14 years [let's
         | call it 40 years: she admin'd a BBS at age 12]
         | 
         | - Website is custom self-hosted; very old school and
         | accessible; no ads or popup bullshit
         | 
         | - Probably has more CSE/SWE experience+talent in her little
         | pinky finger than 80% of HN commenters
         | 
         | https://medium.com/wogrammer/rachel-kroll-7944eeb8c692
         | 
         | So I'd say that her position and experience command enough
         | respect that we cannot judge her merely by peeking at a few
         | trifling journal entries.
        
           | dwedge wrote:
           | And yet that's exactly what HN does
        
         | renewiltord wrote:
         | Community favourite. Once you hit a critical mass with a
         | community you will always be read by them.
        
       | patrickmay wrote:
       | ACME aside, I love the description of how the OP iterated to a
       | solution via a combination of implementing simple functions and
       | cussing. That is a beautiful demonstration of what it means to be
       | an old school hacker.
        
       | viraptor wrote:
       | I understand the issues listed, but some assumptions at the
        | beginning are not really problems in practice. Or at least not
        | if your alternative is a custom implementation.
       | 
       | > They haven't earned the right to run with privileges for my
       | private keys and/or ability to frob the web server (as root!)
       | 
       | None of that is needed. You can setup the update system in
       | isolation, redirect the required paths and copy the keys
       | manually. None of that needs to run as root either if you can set
       | the permissions correctly or delegate actions to other processes.
        
       | benlivengood wrote:
       | Apache (is anyone else still using that?) now ships with an
       | official ACME module, which is nice.
       | 
       | Professionally it's been cert-manager.
       | 
       | I haven't paid for a TLS certificate in almost a decade I guess.
        
         | wlonkly wrote:
         | The Caddy[1] webserver also has built-in ACME. It has all the
         | problems Rachel mentioned, of course, because now it's an ACME
         | client embedded in an even bigger piece of software, but it's
         | handy for sure!
         | 
         | I don't know much about Caddy scalability but it's worked great
         | for my personal sites.
         | 
         | [1] https://caddyserver.com/
        
       | arkadiyt wrote:
       | > Make an RSA key of 4096 bits. Call it your personal key.
       | 
        | This is bad advice - making a 4096-bit key slows down visitors
        | of your website and in practice buys no more security than a
        | 2048-bit key (if someone can break a 2048-bit RSA key they'll
        | break the LetsEncrypt intermediate cert and can MITM your
        | site). You should use a 2048-bit leaf certificate here.
        
         | nothrabannosir wrote:
         | Amateur question: does a 4096 not give you more security
         | against passive capture and future decrypting? Or is the
          | intermediate also a factor in such an offline attack?
        
           | arkadiyt wrote:
           | > does a 4096 not give you more security against passive
           | capture and future decrypting?
           | 
            | If the server was using a key exchange that did not support
            | forward secrecy then yes. But:
            | 
            |     % echo | openssl s_client -connect rachelbythebay.com:443 2>/dev/null | grep Cipher
            |     New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
            |     Cipher    : ECDHE-RSA-AES256-GCM-SHA384
            | 
            | ^ they're using ECDHE (elliptic-curve Diffie-Hellman),
            | which provides forward secrecy.
        
             | nothrabannosir wrote:
             | I thought FS only protected _other_ sessions from leak of
             | your current session key. How does it protect against
             | passive recording of the session and later attacking of the
             | recorded session in the future?
        
               | arkadiyt wrote:
               | If using a non-FS key exchange (like RSA) then the value
               | that the session key is derived from (the pre-master
               | secret) is sent over the wire encrypted using the
               | server's public key. If that session is recorded and in
               | the future the server's private key is obtained, it can
               | be used to decrypt the pre-master secret, derive the
               | session key, and decrypt the entire session.
               | 
               | If on the other hand you use a FS key exchange (like
               | ECDHE), and the session is recorded, and the server's
               | private key is obtained, the session key cannot be
               | recovered (that's a property of ECDHE or any forward-
               | secure key exchange), and none of the traffic is
               | decryptable.
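A toy finite-field Diffie-Hellman exchange (using the Mersenne prime 2^127 - 1, far too small for real use) shows why: the long-term certificate key never enters the derivation, and both sides can discard their ephemeral secrets.

```python
import secrets

# Toy ephemeral Diffie-Hellman. Real TLS uses ECDHE over standardized
# curves; the forward-secrecy structure is the same.
p = 2**127 - 1                      # a Mersenne prime -- toy-sized only
g = 3

a = secrets.randbelow(p - 2) + 1    # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1    # server's ephemeral secret
A = pow(g, a, p)                    # sent in the clear
B = pow(g, b, p)                    # sent in the clear

shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server

# An eavesdropper records A and B; the server's long-term RSA key was
# only used to *sign* the exchange, so stealing it later recovers
# nothing. Both sides now throw a and b away.
del a, b
```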
        
               | nothrabannosir wrote:
               | Thanks I think I understand better now!
        
               | dingaling wrote:
               | > the session key cannot be recovered
               | 
               | Of course it can, but only for that specific session.
        
               | KAMSPioneer wrote:
               | No, my GP is correct: if the server's RSA private key is
               | compromised it does not allow decryption of any
               | previously-recorded sessions.
               | 
               | You would need to compromise the _ephemeral session key_
               | which is difficult because it is discarded by both
               | parties when the session is closed.
               | 
               | Compromising the RSA key backing the certificate allows
               | _future_ impersonations of the server, which is a
               | different attack altogether.
        
           | upofadown wrote:
           | The certificate is for authentication of the server. It has
           | nothing to do with the encryption of the data.
           | 
           | Basically forward secrecy is where both the sender and
           | receiver throw away the key after the data is decrypted. That
           | way the key is not available for an attacker to get access to
           | later. If the attacker can find some way other than access to
           | the key to decrypt the data then forward secrecy has no
           | benefit.
        
         | Arnavion wrote:
         | My webhost only supports RSA keys, so I use an RSA-4096 key
         | just to annoy them into supporting EC keys.
        
         | asimops wrote:
         | The key in question is the acme account key though, correct?
        
       | jongjong wrote:
       | The whole DNS and TLS system is overcomplicated and was designed
       | to allow a small number of orgs to dominate the internet.
       | 
       | I've been thinking of using a Blockchain to register domain name-
       | to-IP-address mappings in a cryptographically secure way and then
       | writing a Chrome extension which connects to the Blockchain (to
       | any known peer IP; it would start from a few hardcoded seed node
       | IPs and discover more peers as is standard for most Blockchains
       | and P2P protocols; or you could host your own blockchain node
       | locally and use it as your personal DNS service) and then the
       | extension can do DNS lookups on-chain. The Chrome extension could
       | act as an alternative address bar; type the address in there and
       | it would read the Blockchain to figure out the IP address,
       | completely bypassing the whole mess of a DNS system and the
       | current centralized mess of an internet... We could then store
       | all the certs on-chain in a similar way, just make it support the
       | bare minimum necessary to get the browser to shut up and accept
       | the cert whilst maintaining the essential cryptographic security
       | guarantees.
       | 
       | It's kind of ridiculous how easy it would be, technically, to
       | create an alternative internet and DNS system. I think there are
       | already similar solutions like Brave browser supporting a
       | parallel internet with .eth domains but they don't seem to get
       | much attention. There needs to be search engines for these
       | alternative internets to get things going.
       | 
       | Surely once the current internet gets spammed into oblivion and
        | becomes devoid of opportunities, there should be an incentive
        | to create an alternative system from scratch. Surely there is a
       | point when a network of scarce data is better than one of
       | abundant spam data.
        
         | oasisaimlessly wrote:
         | See also: Namecoin [1], the first altcoin.
         | 
         | [1]: https://en.wikipedia.org/wiki/Namecoin
        
       | mastazi wrote:
       | > About six months ago, I realized that it was probably time to
       | get away from Gandi as a registrar and also SSL provider
       | (reseller).
       | 
       | I've recently heard similar takes about Gandi but I am out of the
       | loop, can someone explain the controversy? Currently using it for
       | one of my domains and I'd like to know more.
       | 
       | EDIT: tried googling but results are about people whose name is
       | either Gandi or Ghandi, could not find much about Gandi the
       | company.
        
         | agarren wrote:
         | I'm seeing a lot about price hikes, new service charges, and
         | bad customer service since the acquisition. I noticed the price
         | hike with one of my domains through them, I paid it but was
         | close to reconsidering. Thinking I'll try transferring it now.
         | 
         | https://techrights.org/n/2023/12/14/_Video_Lessons_to_be_Lea...
         | 
         | https://blog.cogitactive.com/website/gandi-outrageous-price/
        
           | mastazi wrote:
           | I see, I hadn't noticed because back when I got my domain I
           | paid for a multi-year offer. Thanks for that, when my paid-
           | for period ends I will look into alternatives.
        
       | chrisweekly wrote:
       | > "So far, we have (at least): RSA keys, SHA256 digests, RSA
       | signing, base64 but not really base64, string concatenation, JSON
       | inside JSON, Location headers used as identities instead of a
       | target with a 301 response, HEAD requests to get a single value
       | buried as a header, making one request (nonce) to make ANY OTHER
       | request, and there's more to come.
       | 
       | We haven't even scratched the surface of creating an order,
       | dealing with authorizations and challenges, the whole "key
       | thumbprint" thing, what actually goes into those TXT records, and
       | all of that other fun stuff."
       | 
       | Yikes. It's almost unbelievable. What a colossal tangle of
       | complexity. Thank you for sharing the fruits of your labors.
       | Great writing style and content.
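For the curious, the JWS envelope those complaints refer to looks roughly like this. The nonce, URLs, and signature below are stand-ins (stdlib Python has no RSA, so the signing step is stubbed out):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # "base64 but not really base64": URL-safe alphabet, padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# The protected header: alg, a fresh nonce (fetched via its own
# request), the target URL, and your account URL as your identity.
protected = b64url(json.dumps({
    "alg": "RS256",
    "nonce": "stand-in-nonce",
    "url": "https://acme.example/acme/new-order",
    "kid": "https://acme.example/acme/acct/1",   # you are a URL now
}).encode())

payload = b64url(json.dumps({
    "identifiers": [{"type": "dns", "value": "example.com"}],
}).encode())

# Real clients RSA-sign "protected.payload"; stubbed out here.
signing_input = f"{protected}.{payload}".encode()
signature = b64url(b"%d-byte signing input" % len(signing_input))

# JSON inside JSON: the envelope's fields are themselves encoded JSON.
envelope = json.dumps({"protected": protected,
                       "payload": payload,
                       "signature": signature})
print(envelope)
```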
        
       | red_admiral wrote:
       | It bugs me that we're still on RSA/4096. Ed25519 has fewer
       | parameters to mess around with (no custom exponent or modulus),
        | keys and signatures are shorter and have a well-defined format
       | (just binary data) and there's no network byte order confusion.
       | 
       | Meanwhile, ECDSA is so complex to write that most people will get
       | it wrong and end up with a security hole that makes the NSA
       | happy.
        
       | AndyMcConachie wrote:
       | My main gripe about certbot is that it requires a plugin to
        | essentially send a SIGHUP to apache/nginx/etc. OpenBSD's acme-
        | client got this right and left that functionality out.
        
       | pornel wrote:
       | The numbers are sent in this peculiar format, because that's how
       | they are stored in the certificates (DER encoding in x509 uses
       | big endian binary), and that's the number format that OpenSSL API
       | uses too.
       | 
       | It looks silly for a small value like 65537, but the protocol
       | also needs to handle numbers that are thousands of bits long. It
       | makes sense to consistently use the same format for all numbers
       | instead of special-casing small values.
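A sketch of that encoding, assuming the base64url-without-padding convention ACME inherits from JOSE:

```python
import base64

# Encode an integer the way ACME/JOSE does: big-endian bytes,
# base64url, padding stripped. The same code path handles 65537 and a
# 2048-bit modulus alike.
def b64url_uint(n: int) -> str:
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

print(b64url_uint(65537))   # AQAB -- the "e" seen in nearly every JWK
```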
        
       | greatgib wrote:
        | I was totally horrified at all the acme clients I saw that
        | were a hell of complexity and dependencies, installing random
        | things from the internet, until I found acme.sh, which is a
        | breeze.
        | 
        | You just need bash, no weird dependencies, nothing too
        | complicated. Easy to manipulate.
        
       | gr4vityWall wrote:
       | I loved how she explained it in a tutorial-like way at the end.
       | But I didn't enjoy the disdain and the tone of the rest of the
       | article.
        
       | gethly wrote:
       | I too am not a fan of ACME and LE. I'd rather manually buy and
       | activate the certificate once per year rather than deal with the
       | "automation" that everyone is constantly overjoyed with. And that
       | was the case for Gethly.com since the beginning. But the prices
       | of certificates are not cheap and they provide ZERO advantage
       | over the free LE certificates. Especially the wildcard
       | certificates, which are the only types that make any sense
        | whatsoever anyway. So a decision was made to switch to LE with
       | DNS challenge, which is the only type that supports wildcard
       | certificates. Long story short, DNS provider had this built-in
       | and so now it is a 50:50 automation and manual work. DNS provider
       | sends a notification when certificate is going to expire and to
       | get the new one and that is about it. A matter of two minutes of
       | manual labour to sign into DNS provider's interface, copy the
       | certificate and paste it into actively running application that
       | then simply distributes it to all services. Longer story - doing
       | the DNS challenge with DNS provider's API was doable, but ACME
       | failed to detect updated records so luckily, instead of wasting
       | time trying to get it working, DNS provider got us covered from
       | the get-go.
        
       | ddtaylor wrote:
       | > Random side note: while looking at existing ACME clients, I
       | found that at least one of them screws up their encoding of the
       | publicExponent and ends up interpreting it as hex instead of
       | decimal. That is, instead of 65537, aka 0x10001, it reads it as
       | 0x65537, aka 415031! > > Somehow, this anomaly exists and
       | apparently doesn't break anything? I haven't actually run the
       | client in question, but I imagine people are using it since it's
       | in apt.
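The anomaly is easy to reproduce with a bare parse; this just demonstrates the arithmetic, not the client in question:

```python
# Reading the decimal string "65537" with a hex parser silently yields
# a different public exponent.
exponent_text = "65537"

print(int(exponent_text, 10))   # 65537  (0x10001, the intended value)
print(int(exponent_text, 16))   # 415031 (0x65537, the misparse)
```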
        
       | Tractor8626 wrote:
        | Don't all those complexities come more from using C++ than
        | ACME?
        | 
        | Is a base64-encoded number really something to be mad about?
        
       ___________________________________________________________________
       (page generated 2025-05-24 23:01 UTC)