[HN Gopher] Google Public DNS's approach to fight against cache ...
___________________________________________________________________
Google Public DNS's approach to fight against cache poisoning
attacks
Author : tatersolid
Score : 139 points
Date : 2024-04-07 11:46 UTC (11 hours ago)
(HTM) web link (security.googleblog.com)
(TXT) w3m dump (security.googleblog.com)
| LinuxBender wrote:
| I see most of the bot traffic hiding behind Google and
| Cloudflare. I appreciate that. They take a lot of load off my
| servers. If I had one request of Google it would be to remove the
| TTL cap of 21600 or raise it to 86400 to further reduce the
| traffic that comes through from them as my TTL records are very
| high on purpose for my own reasons that I will not debate. I know
| memory is still a limit. CF seems to honor my TTLs, with the
| caveat that I have to wait for each node behind their anycast
| clusters to cache the record, but that's fine.
|
| As a side note, whatever group in Google is running DNS is doing
| a great job. I do not see malformed garbage coming from
| them. _Some people watch birds when they retire, I watch packets
| ... and critters._
| WirelessGigabit wrote:
| I aspire to be you when (if) I retire.
| freedomben wrote:
| > _my TTL records are very high on purpose for my own reasons
| that I will not debate._
|
| Why are your records high? Is the load on your servers too
| intense otherwise? (not trying to debate, am curious because
| I've really appreciated your comments/insights in the past)
| contingencies wrote:
| Guess - possibly some sort of outbound port-knock
| alternative: if (SYN->dest_port && TTL > X) { ACK };
| egberts1 wrote:
| this
| ThePowerOfFuet wrote:
| How is this supposed to work, exactly?
| radicaldreamer wrote:
| my comment has people asking a lot of questions already
| answered by my comment
| ape4 wrote:
| Good to hear the world's infrastructure relies on a kludge like
| case randomization.
| bradfitz wrote:
| Longer domain names are more secure!
| jamespwilliams wrote:
| And conversely, short domains like Google's own g.co are less
| secure!
| akira2501 wrote:
| That's how you maintain backwards compatibility while improving
| security. I'm not sure lamenting the imperfection is valuable,
| but it is a worthwhile lesson for those designing new
| protocols.
|
| If it becomes popular enough you will certainly face future
| security challenges you failed to even imagine. Leave some room
| for that.
|
| Otherwise, this is great work.
| AndyMcConachie wrote:
| Weird that they don't even mention that Google's public DNS
| performs DNSSEC validation. It does, and that's the ultimate
| defense against cache poisoning attacks.
| tptacek wrote:
| No, it obviously isn't, because less than 5% of zones are
| signed, and an even smaller fraction of important zones (the
| Moz 500, the Tranco list) are.
|
| That's why they don't mention DNSSEC: because it isn't a
| significant security mechanism for Internet DNS. It's also why
| they do mention ADoT: because it is.
|
| I think this is also why DNSSEC advocates are so fixated on
| DANE, which is (necessarily) an even harder lift than getting
| DNSSEC deployed: because the attacks DNSSEC were ostensibly
| designed to address are now solved problems.
|
| Note also that if ADoT rolls out all the way --- it's already
| significantly more available than DNSSEC! --- there won't even
| be a real architectural argument for DNSSEC anymore, because
| we'll have end-to-end cryptographic security for the DNS.
|
| Thanks for calling this out! That Google feels case
| randomization is more important than DNSSEC is indeed telling.
| tialaramex wrote:
| > the attacks DNSSEC were ostensibly designed to address are
| now solved problems.
|
| Maybe you can help figure out where you went wrong here by
| explaining what - in your understanding - were the problems
| that DNSSEC was "ostensibly designed to address" ?
|
| > because we'll have end-to-end cryptographic security for
| the DNS.
|
| In 1995 a researcher who was annoyed about people snooping
| his passwords over telnet invented a protocol (and gave away
| a free Unix program) which I guess you'd say delivers "end-
| to-end cryptographic security" for the remote shell. Now,
| when you go into a startup and you find they've set up a
| bunch of ad hoc SSH servers and their people are just
| agreeing to all the "Are you sure you want to continue ..?"
| messages, do you think "That's fine, it's end-to-end
| cryptographic security" ? Or do you immediately put that on
| the Must Do list for basic security because it's an obvious
| vulnerability ?
| Hazelnut2465 wrote:
| > there won't even be a real architectural argument for
| DNSSEC anymore
|
| ADoT relies on NS records being DNSSEC-signed.
|
| The TLS certificates that ADoT relies on need to be hashed
| into TLSA records (DANE, DNSSEC).
| tptacek wrote:
| https://educatedguesswork.org/posts/dns-security-adox/
| Hazelnut2465 wrote:
| And? The IETF RFC draft for ADoT still specifies that it
| relies on DNSSEC.
|
| https://datatracker.ietf.org/doc/draft-dickson-dprive-
| adot-a...
| LinuxBender wrote:
| Just guessing but it could be the lack of adoption. Despite
| having climbed rapidly in the last few years [0] the percentage
| is still very low. [1]
|
| [0] - https://www.verisign.com/en_US/company-
| information/verisign-...
|
| [1] - https://www.statdns.com/
| omoikane wrote:
| The low adoption of DNSSEC might be due to posts like these:
|
| https://news.ycombinator.com/item?id=36171696 - Calling time
| on DNSSEC: The costs exceed the benefits (2023)
|
| And also many news regarding validation failures:
|
| https://hn.algolia.com/?q=dnssec
| tptacek wrote:
| The rabbit hole on people gradually pulling up stakes on
| DNSSEC goes deeper than that; I'd say the canary in the
| coal mine is probably Geoff Huston switching from "of
| course we're going to DNSSEC everything" to "are we going
| to DNSSEC anything?":
|
| https://www.potaroo.net/ispcol/2023-02/dnssec.html
|
| (Geoff Huston is an Internet infrastructure giant.)
|
| But really it all just boils down to the fact that the DNS
| zones that matter --- the ones at the busy end of the fat
| tail of lookups --- just aren't signed, despite 25 years of
| work on the standard. IPv6 is gradually mainstreaming; in
| countries where registrars auto-sign zones, DNSSEC is
| growing too, but very notably in countries where people
| have a choice, DNSSEC deployment is stubbornly stuck in the
| low single digit percentages, and the zones that are
| getting signed are disproportionately not in the top 10,000
| of the Tranco list.
| 8organicbits wrote:
| For end users, TLS is the key protection. I don't care if my
| DNS is poisoned, MITMed, or malicious: if the IP address I
| connect to can't present a valid TLS cert, then I don't
| proceed.
|
| If you can't securely authenticate your server (as HTTPS/TLS
| does) you have other problems too.
| singpolyma3 wrote:
| Unfortunately it's quite easy for some actors to get a valid
| TLS cert https://notes.valdikss.org.ru/jabber.ru-mitm/
| phicoh wrote:
| DNS is where the web would be if browsers hadn't basically
| forced websites to support HTTPS.
|
| The reasoning is that DNS is not important enough to go through
| the trouble of deploying DNSSEC. These days TLS is often cited
| as the reason DNSSEC is not needed.
|
| At the same time we see a lot of interest in techniques to
| prevent cache poisoning and other spoofing attacks. Suddenly in
| those cases DNS is important.
|
| If all DNS client software dropped UDP source port randomization
| and randomized IDs, then lots of people would be very upset,
| because DNS security is more important than claimed.
|
| DNS cookies are also an interesting case. They can stop most
| cache poisoning attacks. But from the google article, big DNS
| authoritatives do not deploy them.
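phicoh's point about source ports and IDs can be put in numbers: a blind off-path spoofer has to match both the 16-bit transaction ID and the UDP source port, so randomizing the port takes the per-packet success odds from 1 in 2^16 to roughly 1 in 2^32. A back-of-the-envelope sketch (the usable ephemeral port range is somewhat smaller than 2^16 in practice):

```python
TXID_BITS = 16  # DNS transaction ID
PORT_BITS = 16  # randomized UDP source port (upper bound; the real
                # ephemeral range is a bit under 2**16 ports)

# Odds that a single blind spoofed packet matches everything:
fixed_port_odds = 2**TXID_BITS                 # ID only
random_port_odds = 2**(TXID_BITS + PORT_BITS)  # ID + source port

print(f"fixed source port:      1 in {fixed_port_odds:,}")   # 1 in 65,536
print(f"randomized source port: 1 in {random_port_odds:,}")  # 1 in 4,294,967,296
```

Case randomization stacks another bit per letter of the query name on top of this, which is why it meaningfully raises the bar even though it is, as noted above, a kludge.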
| tptacek wrote:
| The key thing to note is that anti-poisoning countermeasures
| deployed at major authority servers scale to provide value
| without incurring cost for every (serverside) Internet
| "user", and DNSSEC doesn't. A lot of these things seem like
| (really, are) half-measures, but their cost/benefit ratio is
| drastically different, which is why they get rolled out so
| quickly compared to DNSSEC, which is more akin to a forklift
| upgrade.
| phicoh wrote:
| There is no "quickly" in the Google article. It took them
| ages to roll out 0x20. Cookies are not very well supported.
| And then the elephant in the room is the connection between
| the stub resolver and the public DNS resolvers.
|
| The interesting thing is what happens when BGP is used to
| redirect traffic to DNS servers:
| https://www.thousandeyes.com/blog/amazon-route-53-dns-and-
| bg...
| tptacek wrote:
| Did it take 25 years? That's the baseline. :)
| throwaway458864 wrote:
| If we didn't have the web, all networking above OSI L4 on all
| operating systems would have been encrypted by default. A
| simple set of syscalls and kernel features could have enabled
| it. But since the web was there, and popularized a solution
| for secure communications (TLS + HTTP), everyone just jumped
| on that bandwagon, and built skyscrapers on top of a used
| books store.
|
| The weird irony is it's the old "worse is better" winning
| again. HTTP and TLS are fairly bad protocols, in their own
| ways. But put them together and they're better than whatever
| else exists. It's just too bad we didn't keep them and ditch
| the browser.
| tptacek wrote:
| Can you articulate what you believe is bad about TLS?
| mjl- wrote:
| Wondering what prompted the blog post. The recent publication of
| RFC 9539?
|
| It would be interesting to hear how often the google dns servers
| see attempts to poison their cache. The mitigations probably
| prevent folks from even trying, but the numbers would be
| interesting.
|
| The OARC 40 presentation PDF mentions that cookie deployment is
| low for large operators but open source software has compliant
| implementations. Are large operators writing their own dns
| servers, but badly? I would think there wouldn't be many custom
| implementations, and that you would be able to detect which
| software nameservers are running, each with known capabilities.
| But from the way the numbers are presented it seems they only
| look at behaviour without considering software (versions).
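The cookies whose low deployment mjl- mentions are RFC 7873 DNS Cookies: the client sends a 64-bit client cookie, and the server answers with a server cookie derived from it, the client's source IP, and a server secret, so an off-path spoofer cannot forge a valid pair. A simplified sketch (the interoperable RFC 9018 construction uses SipHash-2-4 and also covers version and timestamp fields; HMAC-SHA-256 stands in here purely for illustration, and the names are hypothetical):

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(16)  # hypothetical per-server secret

def server_cookie(client_cookie: bytes, client_ip: str) -> bytes:
    """Derive a server cookie bound to the client cookie and source IP.
    Simplified relative to RFC 9018, which uses SipHash-2-4 and hashes
    in version and timestamp fields as well."""
    mac = hmac.new(SERVER_SECRET, client_cookie + client_ip.encode(),
                   hashlib.sha256)
    return mac.digest()[:8]  # server cookies are 8 to 32 bytes

client_cookie = os.urandom(8)  # the client's 64-bit cookie
cookie = server_cookie(client_cookie, "192.0.2.1")

# The real client can replay its pair on later queries; a spoofer
# using a different source IP will not produce a verifying cookie:
print(cookie == server_cookie(client_cookie, "192.0.2.1"))     # True
print(cookie == server_cookie(client_cookie, "198.51.100.7"))  # False
```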
| rainsford wrote:
| I'm a little surprised Google only implemented case randomization
| by default in 2022 considering it's been around since 2008.
| Presumably they had concerns about widespread compatibility?
| Although my understanding is that for a lot of DNS servers it
| just worked without any specific implementation effort...but
| maybe there was a long tail of random server types Google was
| concerned about.
| spydum wrote:
| This is so weird to see. Just this morning I was checking thru my
| public authoritative NS query logs, and noticed the random
| capitalization. I had also noticed this in a similar work
| environment roughly end of 2023, but attributed it to people just
| doing DNS wordlist bruteforcing to find stuff (couldn't explain
| the case, but figured it was some evasion).
|
| Today I let my curiosity dive deeper and quickly found the ietf
| publication on 0x20 encoding and this article.
|
| Just odd to see others post it to hn on the same day..
| coincidences are weird.
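The randomized capitalization spydum observed is the resolver side of 0x20 encoding: flip each letter of the QNAME to a random case on the way out, and accept an answer only if the question section echoes the exact mixed-case name back. A minimal illustration (not real resolver code; actual implementations operate on the wire-format name):

```python
import random

def encode_0x20(qname: str, rng: random.Random) -> str:
    """Randomize the case of each ASCII letter in a query name."""
    return "".join(
        c.upper() if c.isalpha() and rng.random() < 0.5 else c.lower()
        for c in qname
    )

def response_accepted(sent_qname: str, echoed_qname: str) -> bool:
    """The echoed name must match case-sensitively; a blind spoofer
    has to guess the case of every letter."""
    return sent_qname == echoed_qname

rng = random.Random()
sent = encode_0x20("security.googleblog.com", rng)  # e.g. 'sEcURitY.gOogLebLog.Com'
print(response_accepted(sent, sent))             # True
print(response_accepted(sent, sent.swapcase()))  # False
```

This also explains jeffbee's broken servers: anything that re-cases the echoed question instead of copying it verbatim fails the comparison.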
| jeffbee wrote:
| I first heard about this 0x20 scheme around 2015 when I was
| working on a DNS cache (also at Google, but not for the public
| DNS team). I noticed and had to work around the fact that some
| servers were responding in vixie-case even when the requests were
| not. Those servers would be broken if the requests were paying
| attention to 0x20, right? I wonder what software was doing that.
| MarkSweep wrote:
| That means the longer your domain name, the less susceptible it
| is to cache poisoning attacks, right? Since there are more
| possible case variations.
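MarkSweep's intuition can be quantified: each ASCII letter in the query name contributes one bit of 0x20 entropy (digits, dots, and hyphens have no case), so a longer, letter-heavy name gives an off-path spoofer exponentially more variations to guess. A quick sketch:

```python
def case_entropy_bits(name: str) -> int:
    """Bits of 0x20 entropy in a DNS name: one per ASCII letter
    (digits, dots, and hyphens don't vary in case)."""
    return sum(1 for c in name if c.isalpha())

# A short name like g.co has only 3 letters -> 8 case variations,
# while a longer name yields a far larger search space.
print(case_entropy_bits("g.co"))                     # 3
print(case_entropy_bits("security.googleblog.com"))  # 21
```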
___________________________________________________________________
(page generated 2024-04-07 23:00 UTC)