[HN Gopher] I read the federal government's Zero-Trust Memo so y...
___________________________________________________________________
I read the federal government's Zero-Trust Memo so you don't have
to
Author : EthanHeilman
Score : 311 points
Date : 2022-01-27 15:06 UTC (7 hours ago)
(HTM) web link (www.bastionzero.com)
(TXT) w3m dump (www.bastionzero.com)
| adreamingsoul wrote:
| Here in Norway we have BankID which uses MFA. To access any
| government, banking, or official system you have to authenticate
| with your BankID.
|
| It's simply amazing.
| Sesse__ wrote:
| BankID: A system with a secret spec, where the bank holds your
| secret key, there is no transparency log whatsoever (so you
| have no idea what your bank used that secret key for), can be
| used to authenticate as yourself almost everywhere, and where
| you can get huge, legally binding bank loans in minutes (and
| transfer the money away) with no further authentication.
|
| Oh, and if you choose to not participate in this system, enjoy
| trying to find out the results of your covid test :-) (I ended
| up getting a Buypass card, but they officially support only
| Windows and macOS.)
| zajio1am wrote:
| Here in Czechia we have BankID and it is problematic:
|
| 1) No verification that the user trusts that particular bank to
| perform this service. Most banks just deployed BankID for all
| their customers.
|
| 2) No verification between bank and government ensuring that a
| particular person can be represented by a particular bank. In
| principle a bank could impersonate a person even if that person
| has no legal relation with that bank.
|
| 3) Bank authentication is generally bad. Either login+SMS, or
| proprietary smartphone applications. No FIDO U2F or any token
| based systems.
|
| Fortunately, there are also alternatives for identification to
| government services:
|
| 1) Government ID card with smartcard chip. But not everyone has
| the new version of the ID card (the old version has no chip). It
| also requires separate hardware (smartcard reader) and some
| software middleware.
|
| 2) MojeID service (mojeid.cz) that uses FIDO U2F token.
|
| Disclaimer: working for CZ.NIC org that also offers MojeID
| service.
| mormegil wrote:
| #2 and partially #1 are solved by regulation and reputation:
| banks are a highly regulated business, and BankID support
| requires a specific security audit.
|
| Ad #3: FIDO is basically unusable for banking. It's designed
| for user authentication, not transaction signatures which
| banks need (and must do because of the PSD2 regulation).
| jollybean wrote:
| That's all good except for the 'bank' part.
|
| It was expedient, but banks are not the orgs that should be
| running that.
|
| Every nation needs to turn their driver's license and passport
| authorities into a 'Ministry of Identity' and issue fobs and
| passwords that can be used on the basis of some standard. Or
| something like that, maybe quasi distributed.
| kelnos wrote:
| I hear people say all the time that, in the US, the Postal
| Service would be great for this, and I can't help but agree.
| Sure, they'd have to develop in-house expertise around these
| sorts of security systems (just as any new federal government
| agency put in charge of this would have to do), which could
| be difficult. But they have the ability to distribute forms,
| documentation, and tokens to pretty much everyone in the US,
| with physical locations nearly everywhere that can be used to
| reach those who don't have physical addresses.
| brimble wrote:
| There's significant bi-partisan resistance, in the US, to
| anything like a national ID, unfortunately, with the result
| that we have one _anyway_ (because of course we do, the modern
| world doesn't work without it) it's just an ad-hoc combination
| of other forms of ID, terrible to work with, heavily reliant on
| commercial 3rd parties, unreliable, and laughably insecure. But
| the end result is still a whole bunch of public and private
| databases that personally identify us and contain tons of
| information--kind of by necessity, actually, since our ID is a
| combination of tons of things.
|
| It's a very frustrating situation. Worst of both worlds.
| seniorThrowaway wrote:
| I've done some thinking about this, and a possible solution
| is a bunch of cross-signed CAs like the Federal common
| policy / FPKI for cross trust amongst federal agencies, but
| done at a state DMV / DPS level. Driver's licenses / state
| IDs could have certs embedded into the cards and then be used
| for things like accessing government websites, banks, etc.
| Yes there are some access concerns, and some privacy concerns
| that this is in essence a national ID, but what we have now
| is horribly broken, and we're already being tracked. We get
| all the downside of pervasive tracking, but none of the
| upside.
| currency wrote:
| Would that look anything like the REAL ID system?[0]
|
| [0]https://www.tsa.gov/real-id
| thomascgalvin wrote:
| In America, about 30% of the population would start screaming
| about the Mark of the Beast if we tried to roll out something
| like this.
| toomuchtodo wrote:
| Which is why you ignore them. No reason for a nation to be
| held back by this type of person. Same reason you don't take
| cancer treatment advice from someone who suggests juicing.
| ketzo wrote:
| That 30% of the population translates to about 45% of
| federal elected representatives. Not quite as easy as
| "ignoring them," sadly.
| [deleted]
| jandrewrogers wrote:
| There is a large contingent of non-religious people who are
| against it on civil liberties grounds. The resistance to it
| truly crosses both parties, and it requires the cooperation
| of the States, which makes it politically non-viable as a
| practical matter.
| kelnos wrote:
| The thing I don't get about the non-religious arguments is
| that we already have a national ID, it's just a patchwork
| system of unreliable, not-particularly-secure forms of
| identification that are a pain in the ass for a regular
| citizen to have to deal with. And the REAL ID stuff
| essentially makes state IDs conform to a national ID
| specification anyway.
|
| And regardless, if you do want a national US ID, you just
| get a passport, and it'll be accepted as a form of ID
| everywhere a state-issued driver's license or state ID is
| accepted. Of course, in this case it's technically
| voluntary, and many Americans don't travel internationally
| and don't bother to get a passport.
| jandrewrogers wrote:
| Many State governments do not recognize a US passport as
| valid ID. This was unexpected when I first encountered an
| example of it, but apparently that is normal and I was
| just the last person to find out. The REAL ID legislation
| only regulates processing and format, there is no
| enforceable requirement to share that with the Federal
| government and many States (both red and blue) do not in
| practice. States recognize the ID of other States, as is
| required by the Constitution.
|
| Because there is no official national ID system, you can
| do virtually everything Federally with a stack of
| affidavits and pretty thin "evidence" that you are who
| you claim to be. They strongly prefer that you have
| something resembling ID but it isn't strictly required.
| This also creates a national ID bootstrapping problem
| insofar as millions of Americans don't have proof that
| they are Americans because there was never a requirement
| of having documentary evidence. As a consequence,
| government processes are forgiving of people that have no
| "real" identification documents because so many people
| have fallen through the cracks historically.
|
| Of course, this has been widely abused historically, so
| the US government has relatively sophisticated methods
| for "duck typing" identities by inference these days.
| paganel wrote:
| What happens to the people who are not banked?
| mkohlmyr wrote:
| We have that in Sweden too. As an expat it's a complete
| nightmare for me from day one. Getting my bank to successfully
| issue it was impossible.
|
| First, in the days before mobile bank-id, they sent windows-
| only hardware as I recall. Then came the days of
| letters/cards/hardware getting lost in the mail.
|
| I gave up on it in the end. I have multiple things (banking-
| wise) I no longer have online access to because of it.
|
| If you're going to make one system to rule them all you need to
| make sure the logistics actually work.
| fire wrote:
| I wonder if the recommendation for context-aware auth also
| includes broader adoption of Impossible Travel style checks?
|
| For context, Impossible Travel is typically defined as an
| absolute minimum travel time between two points based on the
| geographical distance between them, with the points themselves
| being derived from event-associated IPs via geolocation
|
| The idea is that if a pair of events breaches that minimum travel
| time by some threshold, it's a sign of credential compromise.
| It's effective for mitigating active session theft, for example,
| as any out-of-region access would violate the aforementioned
| minimum travel time between locations and produce a detectable
| anomaly.
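|
| A minimal sketch of such a check (the speed threshold and the
| lat/lon inputs are illustrative - a real system would derive
| them from a GeoIP database and pad for its error):
|
|     import math
|
|     EARTH_RADIUS_KM = 6371.0
|     MAX_SPEED_KMH = 900.0  # hypothetical; roughly airliner speed
|
|     def haversine_km(lat1, lon1, lat2, lon2):
|         # Great-circle distance between two points, in km.
|         p1, p2 = math.radians(lat1), math.radians(lat2)
|         dp = math.radians(lat2 - lat1)
|         dl = math.radians(lon2 - lon1)
|         a = (math.sin(dp / 2) ** 2
|              + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
|         return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))
|
|     def impossible_travel(ev1, ev2):
|         # ev = (lat, lon, unix_time) for a login event.
|         dist = haversine_km(ev1[0], ev1[1], ev2[0], ev2[1])
|         hours = abs(ev2[2] - ev1[2]) / 3600.0
|         if hours == 0:
|             return dist > 0  # simultaneous logins from two places
|         return dist / hours > MAX_SPEED_KMH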
| judge2020 wrote:
| Is this practical? I would imagine with how peering can get
| better/worse in an instant (and continuously change as
| different routers pick up new routes) you can't use ping to
| measure this, and geoip databases don't seem like a source you
| could trust, especially with CGNAT throwing you onto some
| generic IP with a geoIP that everyone else in a 200 mile radius
| also gets.
| wmf wrote:
| Most likely GeoIP information is used as one of many inputs
| to a neural net that decides whether you can log on or not
| (see tons of "Google locked me out" examples).
| staticassertion wrote:
| This is pretty incredible. These aren't just good practices,
| they're fairly bleeding-edge best practices.
|
| 1. No more SMS and TOTP. FIDO2 tokens only.
|
| 2. No more unencrypted network traffic - including DNS, which is
| such a recent development and they're mandating it. Incredible.
|
| 3. _Context aware_ authorization. So not just "can this user
| access this?" but attestation about device state! That's
| extremely cutting edge - almost no one does that today.
|
| My hope is that this makes things more accessible. We do all of
| this today at my company, except where we can't - for example, a
| lot of our vendors don't offer FIDO2 2FA or webauthn, so we're
| stuck with TOTP.
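|
| For a sense of what 3 means in practice, a hypothetical sketch
| (all names and fields here are made up, not from the memo):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class DeviceState:
|         disk_encrypted: bool
|         os_patched: bool
|         attested: bool  # e.g. hardware-backed attestation passed
|
|     def authorize(user_groups, device: DeviceState, resource):
|         # Not just "can this user access this?" but also
|         # "is the device they're on in a known-good state?"
|         if resource == "payroll" and "finance" not in user_groups:
|             return False
|         return (device.disk_encrypted and device.os_patched
|                 and device.attested)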
| nextos wrote:
| > 1. No more SMS and TOTP. FIDO2 tokens only.
|
| SMS is bad due to MITM and SIM cloning. In the EU many banks
| still use smsTAN, and it leads to lots of security breaches. It's
| frustrating some don't offer any alternatives.
|
| However, is FIDO2 better than chipTAN or similar? I like simple
| airgapped 2FAs, but I'm not an expert.
| tptacek wrote:
| The major advantage of FIDO2 is that it's difficult to phish.
| SIM cloning is not the primary reason organizations are now
| advocating against SMS 2FA.
| vmception wrote:
| Force banks to do this, immediately. They can levy it on any
| organization with a banking license or that wants access to FEDWire
| or the ACH system. Force it for SWIFT access too, if the bank
| has an online banking system for users.
| criddell wrote:
| I asked my bank about their 16 character limit on password
| length because it suggests they are saving the password
| rather than some kind of hash. Their response - don't worry
| about it, you aren't responsible for fraud.
|
| Banks aren't going to want to implement any changes that cost
| more (in system changes and customer support) than the fraud
| they prevent.
| meepmorp wrote:
| Also, "Password policies must not require use of special
| characters or regular rotation."
|
| They even call out the fact that it's a proven bad practice
| that leads to weaker passwords - and such policies must be gone
| from government systems within one year of the memo's publication.
| It's delightful.
| mooreds wrote:
| To be fair, this has been part of the NIST guidelines since
| March 2020. A whole appendix was added to justify it:
| https://pages.nist.gov/800-63-3/sp800-63b.html#appA
| gkop wrote:
| Way earlier than that, even.
|
| > Verifiers SHOULD NOT impose other composition rules
| (mixtures of different character types, for example) on
| memorized secrets
|
| Earliest draft in Wayback Machine, dated June 2016. Lots of
| other good stuff from 800-63 dates back this early too.
|
| https://web.archive.org/web/20160624033024/https://pages.nis...
| dragonwriter wrote:
| SHOULD NOT and MUST NOT are very different from a
| compliance perspective.
|
| The former usually means something between nothing at all
| and "you can do it but you have to write paperwork that
| no one will actually read in detail, but someone will
| maybe check the existence of, if you do".
|
| The latter means "do it and you are noncompliant".
| dllthomas wrote:
| https://datatracker.ietf.org/doc/html/rfc2119 is a good
| reference, although those precise definitions may or may
| not be in effect in any particular situation (including
| this one).
|
| See also https://datatracker.ietf.org/doc/html/rfc6919
| atuladhar wrote:
| Somewhat unrelated, but hopefully this also means
| TreasuryDirect will get rid of its archaic graphical keyboard
| that disables the usage of password managers.
|
| (Graphical keyboards are an old technique to try to defeat
| key loggers. A frequent side effect of a site using a
| graphical keyboard is that the developer has to make the
| password input field un-editable directly, which prevents
| password managers from working, unless you use a user script
| to make the field editable again.)
| tbirdz wrote:
| Just saying this in case it will help you. For
| treasurydirect, you can use inspect element and change the
| value="" field on the password element, and paste in your
| password from your password manager. It's not as convenient
| as autofill from your password manager, but it sure beats
| using the graphical keyboard.
| xoa wrote:
| Yeah, while the clear and correct focus overall is on moving
| away from passwords entirely (FINALLY!!!!!) it's still nice
| to see something immediately actionable on at least improving
| policies in the meantime since those should be very low-
| hanging fruit. Although one thing I don't see mentioned is
| doing away with (or close enough) password max character
| limits and requiring that everything get hashed as step 1.
| Along with rotation and silly complex rules, stupidly low
| character limits are the other big irritation with common
| systems. If passwords must be used they should be getting
| hashed client-side anyway (and then again server-side) so the
| server should be getting a constant set of bits no matter
| what the user is inputting. There isn't really any need at
| all for character limits at this point. If anything it's the
| opposite, minimums should be a lot higher. If someone is
| forced to use at least 20-30 characters, say, that
| essentially requires a password manager or diceware. And
| sheer length helps even bad practices.
|
| But maybe they didn't bother giving much more effort to
| better passwords because they really don't want those to
| stick around at all and good for them. Password managers
| themselves are a bandaid on the fundamentally bad practice of
| using a symmetric factor for authentication.
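|
| To make the length-limit point concrete, a sketch of server-side
| handling (parameters illustrative; a vetted KDF like argon2 or
| bcrypt is what you'd actually deploy):
|
|     import hashlib, hmac, os
|
|     def hash_password(password, salt=None):
|         # Arbitrary-length input always yields a fixed-size
|         # digest, so a 16-character cap buys the server nothing.
|         salt = salt or os.urandom(16)
|         digest = hashlib.pbkdf2_hmac(
|             "sha256", password.encode(), salt, 600_000)
|         return salt, digest
|
|     def verify(password, salt, digest):
|         return hmac.compare_digest(
|             hash_password(password, salt)[1], digest)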
| carbonx wrote:
| I worked for the govn't ~20 years ago - in IT - and even I
| hated our password policies. I just kept iterating the same
| password because we had to change it every 6 weeks.
| CGamesPlay wrote:
| I am a bit concerned that this will be read as "Password
| policies must require the use of no special characters",
| possibly as a misguided attempt to push people away from
| adding using "Password123!" as the password. I wish the memo
| had spelled out a little more clearly that there's nothing
| wrong with special characters, but they shouldn't be
| required. Also, is a whitespace a special character?
| pm90 wrote:
| If we were to stop using special characters and only use
| human friendly phrases (eg "jupiterIsTheSmallestPlanet") it
| wouldn't be the end of the world.
| suns wrote:
| Yea, imagine the implications of federal agencies all
| implementing this successfully. Can't wait to see what the
| trickle down(?) effect is.
| chasd00 wrote:
| lots of billable hours for the various consulting firms.
| "we'er so happy we can hardly count" - Pink Floyd
| dev-3892 wrote:
| "downstream" might be the word you're looking for
| c0l0 wrote:
| I think 3. is very harmful for actual, real-world use of Free
| Software. If only specific builds of software that are on a
| vendor-sanctioned allowlist, governed by the signature of a
| "trusted" party to grant them entry to said list, can
| meaningfully access networked services, all those who compile
| their own artifacts (even from completely identical source
| code) will be excluded from accessing that remote side/service.
|
| Banks and media corporations are doing it today by requiring a
| vendor-sanctioned Android build/firmware image, attested and
| allowlisted by Google's SafetyNet
| (https://developers.google.com/android/reference/com/google/a...),
| and it will only get worse from here.
|
| Remote attestation really is killing practical software
| freedom.
| shadowgovt wrote:
| > If only specific builds of software that are on a vendor-
| sanctioned allowlist
|
| Yes, but for government software this is a bog-standard
| approach. Not even "the source code is publicly viewable to
| everyone" is sufficient scrutiny to pass government security
| muster; _specific_ code is what gets cleared, and
| modifications to that code must also be cleared.
| tablespoon wrote:
| >> 3. Context aware authorization. So not just "can this user
| access this?" but attestation about device state! That's
| extremely cutting edge - almost no one does that today.
|
| > I think 3. is very harmful for actual, real-world use of
| Free Software. If only specific builds of software that are
| on a vendor-sanctioned allowlist, governed by the signature
| of a "trusted" party to grant them entry to said list, can
| meaningfully access networked services, all those who compile
| their own artifacts (even from completely identical source
| code) will be excluded from accessing that remote
| side/service.
|
| Is that really a problem? In practice wouldn't it just mean
| you can only use employer-provided and certified devices? If
| they want to provide their employees some Free Software-based
| client system, that configuration would be on the whitelist.
| shbooms wrote:
| I think from the viewpoint of a business/enterprise
| environment, yes you're right, context-aware authorization
| is a good thing.
|
| But I think the point of your parent comment's reply was
| that the inevitable adoption of this same technology in
| the consumer-level environment is a bad thing. Among other
| things, it will allow big tech companies to have a
| stronger grip on what software/platforms are OK to use/not
| use.
|
| If your employer forces you to, say, only use a certain
| version of Windows as your OS in order to do your job,
| that's generally acceptable to most people.
|
| But if your TV streaming provider tells you you have to use
| certain version of Windows to consume their product, that's
| not considered acceptable to a good deal of people.
| btbuilder wrote:
| I think browser-based streaming is the only scenario
| impacted. Apps can already interrogate their platform and
| make play/no play decisions.
|
| They are also already limiting (weakly) the max number of
| devices that can play back, which requires some level of
| device identification, just not at the confidence
| required for authentication.
| dathinab wrote:
| Well, the fact that I can't do credit card payments for
| some banks if I don't have an iPhone or a non-rooted
| Google Android phone is a problem which already exists.
|
| Worse, supposedly this is for security, but attackers
| who pulled off a privilege escalation tend to have
| enough ways to make sure that none of this detection
| finds them.
|
| In the end it just makes sure you can't mess with your
| own credit card 2FA process by not allowing you to
| control the device you own.
| ryukafalz wrote:
| This should be obvious from your comment but I think it's
| worth calling something out explicitly here: a bank that
| does that is mandating that you accept either Apple's or
| Google's terms of service. That's a lot of power to give
| to two huge companies.
|
| I think we'd do well to provide the option to use open
| protocols when possible, to avoid further entrenching the
| Apple/Google duopoly.
| pdonis wrote:
| _> Is that really a problem? In practice wouldn't it just
| mean you can only use employer-provided and certified
| devices?_
|
| That's fine for employees doing work for their employers.
| It's not fine for personal computing on personal devices
| that have to be able to communicate with a wide variety of
| other computers belonging to a wide variety of others,
| ranging from organizations like banks to other individuals.
| seibelj wrote:
| Reproducible builds are a thing, I don't know how widespread
| they are. I know the monero project has that built in so
| everyone compiles the exact same executable regardless of
| environment, and can verify the hash against the official
| version https://github.com/monero-project/monero
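|
| The verification step is then just a hash comparison; a sketch
| (the path and the published digest are placeholders):
|
|     import hashlib
|
|     def sha256_of(path, chunk=1 << 20):
|         h = hashlib.sha256()
|         with open(path, "rb") as f:
|             while block := f.read(chunk):
|                 h.update(block)
|         return h.hexdigest()
|
|     # Compare your own build against the project's published hash.
|     official = "<digest from the signed release notes>"
|     print(sha256_of("build/monerod") == official)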
| c0l0 wrote:
| Let me elaborate on the problem I do have with remote
| attestation, no matter if I can verify that the signed
| binary is identical with something I can build on my own.
|
| I use LineageOS on my phone, and do not have Google Play
| Services installed. The phone only meaningfully interacts
| with a very few and most basic Google services, like an
| HTTP server for captive portal detection on Wifi networks,
| an NTP server for setting the clock, etc. All other "high-
| level" services that I am aware of, like Mail, Calendaring,
| Contacts, Phone, Instant Messaging, etc., are either
| provided by other parties that I feel more comfortable
| with, or that I actually host myself.
|
| Now let's assume that I would want or have to do
| online/mobile banking on my phone - that will generally
| only work with the proprietary app my bank provides me
| with. Even if I choose to install their unmodified APK,
| (any lack of) SafetyNet will not attest my LineageOS-
| powered phone as "kosher" (or "safe and secure", or
| "healthy", or whatever Google prefers calling it these
| days), and the app might refuse to work. As a consequence, I'm
| effectively unable to interact via the remote service
| provided by my bank, because they believe they've got to
| protect me from the OS/firmware build that I personally
| chose to use.
|
| Sure, "just access their website via the browser, and do
| your banking on their website instead!", you might say, and
| you'd be right for now. But with remote attestation broadly
| available, what prevents anyone from also using that for
| the browser app on my phone, esp. since browser security is
| deemed so critical these days? I happen to use Firefox from
| F-Droid, and I doubt any hypothetical future SafetyNet
| attestation routine will have it pass with the same flying
| colors that Google's own Chrome from the Play Store would.
| I'm also certain that "Honest c0l0's Own Build of Firefox
| for Android" wouldn't get the SafetyNet seal of approval
| either, and with that I'd be effectively shut off from
| interacting with my bank account from my mobile phone
| altogether. The only option I'd have is to revert back to a
| "trusted", "healthy" phone with a manufacturer-provided
| bootloader, firmware image, and the mandatory selection of
| factory-installed, non-removable crapware that I am never
| going to use and/or (personally) trust that's probably
| exfiltrating my personal data to some unknown third
| parties, sanctified by some few hundreds of pages of EULA
| and "Privacy" Policy.
|
| With app stores on all mainstream and commercially
| successful desktop OSes, the recent Windows 11 "security
| and safety"-related "advances" Microsoft introduced by (as
| of today, apparently still mildly) requiring TPM support,
| and supplying manufacturers with "secure enclave"-style
| add-on chips of their own design ("Pluton", see
| https://www.techradar.com/news/microsofts-new-security-chip-...),
| I can see this happening to desktop computing as
| well. Then I can probably still compile all the software I
| want on my admittedly fringe GNU/Linux system (or let the
| Debian project compile it for me), but it won't matter much
| - because any interaction with the "real" part of the world
| online that isn't made by and for software freedom
| enthusiasts/zealots will refuse to interact with the non-
| allowlisted software builds on my machine.
|
| It's going to be the future that NoTCPA et al. fought against
| in the early 00s, and I really do dread it.
| kelnos wrote:
| I hadn't thought about extending this attestation to the
| browser build as a way to lock down web banking access.
| That's truly scary, as my desktop Linux build of Firefox
| might not qualify, if this sort of thing would come to
| pass.
| chaxor wrote:
| Wow, the monero project looks like they have some great
| ideas. I like this reproducible build - may try to get my
| team to work towards that. It seems like monero has more of
| a focus on use as a real currency, so hopefully it isn't
| drawing in the speculative people and maintains its real
| use.
| nybble41 wrote:
| Reproducible builds allow the user of the software to
| verify the version that they are using or installing. They
| do not, by themselves, allow the sort of remote attestation
| which would permit a service to verify the context for
| authentication--the user, or a malicious actor, could
| simply modify the device to lie about the software being
| run.
|
| Secure attestation about device state requires something
| akin to Secure Boot (with a TPM), and in the context of a
| BYOD environment precludes the device owner having full
| control of their own hardware. Obviously this is not an
| issue if the organization only permits access to its
| services from devices it owns, but no organization should
| have that level of control over devices owned by employees,
| vendors, customers, or anyone else who requires access to
| the organization's services.
| InitialLastName wrote:
| > no organization should have that level of control over
| devices owned by employees, vendors, customers, or anyone
| else who requires access to the organization's services.
|
| It seems like the sensible rule of thumb is: If your
| organization needs that level of control, it's on your
| organization to provide the device.
| jacobr1 wrote:
| Or we could better adopt secure/confidential computing
| enclaves. This would allow the organization to have
| control over the silo'd apps and validate some degree of
| security (code tampering, memory encryption, etc) but not
| need to trust that other apps on the device or even the
| OS weren't compromised.
| nybble41 wrote:
| Secure enclaves are still dependent on someone other than
| the owner (usually the manufacturer) having ultimate
| control over the device. Otherwise the relying party has
| no reason to believe that the enclave is secure.
| wizzwizz4 wrote:
| I'm uncomfortable letting organisations have control over
| the software that runs on _my_ hardware. (Or, really, any
| hardware I 'm compelled to use.)
|
| Suppose the course I've been studying for the past three
| years now uses $VideoService, but $VideoService uses
| remote attestation and gates the videos behind a retinal
| scan, ten distinct fingerprints, the last year's GPS
| history and the entire contents of my hard drive? [1] If I
| could spoof the traffic to $VideoService, I could get the
| video anyway, but every request is signed by the secure
| enclave. (I can't get the video off somebody else,
| because it uses the webcam to identify when a camera-like
| object is pointed at the screen. They can't bypass that,
| because of the remote attestation.)
|
| If I don't have ten fingers, and I'm required to scan ten
| fingerprints to continue, and I can't send fake data
| because my computer has betrayed me, what recourse is
| there?
|
| [1]: exaggeration; no real-world company has quite these
| requirements, to my knowledge
| Signez wrote:
| Let's note that this very concerning problem only arises if
| organizations take an allowlist approach to this "context
| aware authorization" requirement.
|
| Detecting _changes_ -- and enforcing escalation in that case
| -- can be enough, e.g. "You always uses Safari on macOS to
| connect to this restricted service, but now you are using
| Edge on Windows? Weird. Let's send an email to a relevant
| person / ask for a MFA confirmation or whatever."
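|
| A toy sketch of that change-detection idea (fields and storage
| are hypothetical):
|
|     last_seen = {"alice": ("Safari", "macOS")}
|
|     def needs_step_up(user, browser, os):
|         # Escalate (MFA prompt, notification email) only when the
|         # login context differs from what we saw last time.
|         prev = last_seen.get(user)
|         return prev is not None and prev != (browser, os)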
| hansvm wrote:
| Somebody made the front page here a few days ago because
| they were locked out of Google with no recourse from
| precisely that kind of check.
| freedomben wrote:
| It wasn't I, but this has been an absolute _plague_ on an
| organization I work with. There are only 3 people, and we
| all have need to access some accounts but they are
| personal accounts. Also, the boss travels a lot, often to
| international destinations. Every time he flies I can
| almost guarantee we'll face some new nightmare. The
| worst is "we noticed something is a tiny bit different
| with you but we won't tell you what it is. We've emailed
| you a code to the email account that you are also locked
| out of because something is a tiny bit different with
| you. Also we're putting a flag on your account so it
| raises holy hell the next 36 times you log in."
| dpatterbee wrote:
| I feel like the issue with the post you mention was the
| absence of recourse rather than the locking out itself.
| pishpash wrote:
| Who gets to decide what changes are kosher? Sounds like
| bureaucratic behavior modeling.
| [deleted]
| dathinab wrote:
| If something like that is good enough to fulfill the
| requirements, that would be good.
|
| Some services already work like that - Discord, I think.
| reilly3000 wrote:
| How could a build be verified to be the same code without
| some kind of signature? You can't just validate a SHA, that
| could be faked from a client.
|
| If you want to get a package that is in the Arch core/ repo,
| doesn't that require a form of attestation?
|
| I just don't see a slippery slope towards dropping support
| for unofficial clients, we're already at the bottom where
| they are generally and actively rejected for various reasons.
|
| Still, the Android case is admittedly disturbing, it feels a
| lot more personal to be forced to use certain OS builds; that
| goes beyond the scope of how I would define a client.
| lupire wrote:
| "Software freedom" doesn't really make sense when the
| software's function is "using someone else's software".
| You're still at the mercy of the server (which is why remote
| attestation is even interesting in the first place).
|
| If you want to use free software, only connect to Affero GPL
| services and don't use nonfree services, and don't consume
| nonfree content.
| wizzwizz4 wrote:
| This is a bad take. You're at the mercy of the server a lot
| less than people seem to think; a free YouTube client like
| VLC remains useful. A free Microsoft Teams client would be
| useful. I allege that free VNC clients are also useful,
| even if there's non-free software on the other end.
| no_time wrote:
| >Remote attestation really is killing practical software
| freedom.
|
| Which will continue marching forward without pro-user
| legislation. Which is extraordinarily unlikely to happen since
| the government has a vested interest in this development.
| nonameiguess wrote:
| In practice, the DoD right now uses something called AppGate,
| which downloads a script on-demand to check for device
| compliance, and it supports free software distributions, but
| the script isn't super sophisticated and relies heavily on
| being able to detect the OS flavor and assumes you're using
| the blessed package manager, so right now it only works for
| Debian and RedHat descended Linux flavors. It basically just
| goes down a checklist of STIG guidelines where they are
| practical to actually check, and doesn't go anywhere near the
| level of expecting you to have a signed bootloader and a TPM
| or checking that all of the binaries on your device have been
| signed.
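|
| For flavor, a hypothetical checklist-style probe in that spirit
| (an illustration, not the actual AppGate script; the checks are
| made up):
|
|     import subprocess
|
|     CHECKS = [
|         # (description, shell command that exits 0 when compliant)
|         ("firewall active",
|          "systemctl is-active --quiet firewalld"
|          " || systemctl is-active --quiet ufw"),
|         ("screen lock configured",
|          "test -f /etc/dconf/db/local.d/00-screensaver"),
|     ]
|
|     def run_checks():
|         ok = True
|         for desc, cmd in CHECKS:
|             passed = subprocess.run(cmd, shell=True).returncode == 0
|             print(("PASS" if passed else "FAIL"), desc)
|             ok = ok and passed
|         return ok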
| alksjdalkj wrote:
| Totally locking down a computer to just a pre-approved set of
| software is a huge step towards securing it from the kind of
| attackers most individuals, companies, and governments are
| concerned with. Sacrificing "software freedom" for that kind
| of security is a trade off that the vast majority of users
| will be willing to make - and I think the free software
| community will need to come to terms with that fact at some
| point and figure out what they want to do about it.
| lupire wrote:
| Free software doesn't really work in a networked untrusted
| world.
| pdonis wrote:
| _> Totally locking down a computer to just a pre-approved
| set of software is a huge step towards securing it from the
| kind of attackers most individuals, companies, and
| governments are concerned with._
|
| No, it isn't. It's a way for corporations and governments
| to restrict what people can do with their devices. That
| makes sense if you're an employee of the corporation or the
| government, since organizations can reasonably expect to
| restrict what their employees can do with devices they use
| for work, and I would be fine with using a separate device
| for my work than for my personal computing (in fact that's
| what I do now). But many scenarios are not like that: for
| example, me connecting with my bank's website. It's not
| reasonable or realistic to expect that to be restricted to a
| limited set of pre-approved software.
|
| The correct way to deal with untrusted software on the
| client is to just...not trust the software on the client.
| Which means you need to verify the user by some means that
| does not require trusting the software on the client. That
| is perfectly in line with the "zero trust" model advocated
| by this memo.
| reginaldo wrote:
| It depends on the level of attestation required. A simple
| client certificate should suffice for the majority of the
| non-DoD applications.
| kelnos wrote:
| It "should" suffice, but entities like banks and media
| companies are already going beyond this. As the parent
| points out, many financial and media apps on Android will
| just simply not work if the OS build is not signed by a
| manufacturer on Google's list. Build your own Android ROM
| (or even use a build of one of the popular alternative
| ROMs) and you lose access to all those apps.
| bigiain wrote:
| I'm not even so sure I'm totally against banks doing that
| either.
|
| From where I sit right now, I have within arms reach my
| MacBook, a Win11 Thinkpad, a half a dozen Raspberry Pis
| (including a 400), 2 iPhones only one of which is rooted,
| an iPad (unrooted) a Pinebook, a Pine Phone, and 4
| Samsung phones one with its stock Android7 EOLed final
| update and three rooted/jailbroken with various Lineage
| versions. I have way way more devices running open source
| OSen than unmolested Apple/Microsoft/Google(+Samsung)
| provided Software.
|
| My unrooted iPhone is the only one of them I trust to
| have my banking app/creds on.
|
| I'd be a bit pissed if Netflix took my money but didn't
| run where I wanted it, but they might be already, I only
| ever really use it on my AppleTV and my iPad. I expect
| I'd be able to use it on my MacBook and thinkpad, but
| could be disappointed, I'd be a bit surprised if it ran
| on any of my other devices listed...
| mindslight wrote:
| Putting a banking app on your pocket surveillance device
| is one of the least secure things you can do. What
| happens if you're mugged, forced to login to your
| account, and then based on your balance it escalates to a
| kidnapping or class resentment beatdown? Furthermore,
| what happens if the muggers force you to transfer money
| and your bank refuses to roll back as unauthorized
| because their snake oil systems show that everything was
| "secure" ?
| ethbr0 wrote:
| The clearer way to put this is: when faced with a
| regulatory requirement, most of the market will choose
| whatever pre-packaged solution most easily satisfies the
| requirement.
|
| In the case of client attestation, this is how we get
| "Let Google/Apple/Microsoft handle that, and use what
| they produce."
|
| And as an end state, this leads to a world where large,
| for-profit companies provide the only whitelisted solutions,
| because they're the largest user bases and offer a turn-key
| feature, and the market doesn't want to do additional custom
| work to support alternatives.
| wyldfire wrote:
| > I think 3. is very harmful for actual, real-world use of
| Free Software.
|
| It has been a long, slow but steady march in this direction
| for a while [1]. Eventually we will also bind all network
| traffic to the individual human(s) responsible. 'Unlicensed'
| computers will be relics of the past.
|
| [1] https://boingboing.net/2012/01/10/lockdown.html
| enriquto wrote:
| The dystopia described in Stallman's "The right to read" is
| almost here... and we don't even get to colonize the solar
| system.
| codemac wrote:
| Google does 1, 2, and 3 internally. If you join
| https://landing.google.com/advancedprotection/ you can get
| something similar for personal public accounts.
| EthanHeilman wrote:
| We've been building to these goals at bastionzero so I've been
| living it every day, but it feels validating and also really
| strange to see the federal government actually get it.
| dc-programmer wrote:
| For anyone interested in 3, Google's BeyondCorp whitepapers are
| an excellent starting point
| mnd999 wrote:
| Bleeding edge or complete fantasy? This is going to be very
| very expensive and guess who's going to be paying for it?
| pitaj wrote:
| What's wrong with TOTP?
| tptacek wrote:
| It's very phishable. Attackers will send text messages to
| your users saying "Hi, this is Steve with the FooCorp
| Security Team; we're sorry for the inconvenience, but we're
| verifying everyone's authentication. Can you please reply
| with the code on your phone?"
|
| It's even worse with texted codes because it's inherently
| credible in the moment because the message knows something
| you feel it shouldn't --- that you just got a 2FA code. You
| have to deeply understand how authentication systems work to
| catch why the message is suspicious.
|
| You can't fix the problem with user education, because
| interacting with your application is almost always less than
| 1% of the mental energy your users spend doing their job, and
| they're simply not going to pay attention.
| bradstewart wrote:
| They also come from (seemingly) random phone numbers and/or
| short codes, with absolutely no way to verify them.
| Sesse__ wrote:
| It authenticates the user to the service, but not the service
| to the user, so it's vulnerable to phishing (or MITM, of
| course, if you don't have TLS).
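|
| You can see why from how the code is computed (RFC 6238) - it's
| just a short-lived value derived from the shared secret and the
| clock, so anything that gets the user to reveal it can replay it
| within the window. A minimal sketch:
|
|     import base64, hashlib, hmac, struct, time
|
|     def totp(secret_b32, period=30, digits=6):
|         key = base64.b32decode(secret_b32, casefold=True)
|         counter = struct.pack(">Q", int(time.time()) // period)
|         mac = hmac.new(key, counter, hashlib.sha1).digest()
|         offset = mac[-1] & 0x0F
|         code = struct.unpack(">I", mac[offset:offset + 4])[0]
|         return str((code & 0x7FFFFFFF) % 10 ** digits).zfill(digits)
|
| Nothing in that ties the code to the site you're typing it into,
| which is exactly the property FIDO2/WebAuthn adds.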
| jaycroft wrote:
| I was wondering the same thing - here's an article I found
| that describes both approaches. Not being in the cryptography
| space myself I can't comment on how accurate it is, but
| passes my engineering smell test.
|
| https://blog.trezor.io/why-you-should-never-use-google-authe...
|
| Edit - sorry that this is really an ad for the writer's
| products. On the other hand, there's a hell of a bounty for
| proving them insecure / untrustworthy, whatever your feelings
| on "the other crypto".
| tptacek wrote:
| Yeah these are very dumb arguments against TOTP.
| [deleted]
| [deleted]
| tims33 wrote:
| Really pleasantly surprised at how progressive this memo is. It
| will be interesting to see the timelines put in place to make the
| transition.
|
| Btw - I'd love to see the people who put this memo together re-
| evaluate the ID.me system they're implementing for citizens given
| how poor the identity verification is.
| YeBanKo wrote:
| I have an issue with using ID.me for government websites,
| because it is a privately owned company. Online authentication
| at this point seems as important as USPS service and warrants
| being owned and developed by the government itself.
| unethical_ban wrote:
| TOTP is not going anywhere for much of the Internet. Hold on
| while I get a Yubikey to my dad who thinks "folders can't be in
| other folders" because that's not how they work in real life.
|
| TOTP is a great security enhancement, and while phishable,
| considerably raises the bar for an attacker.
|
| The fact that TOTP is mentioned as a bad practice in this
| document is an indicator that this should _not_ be considered a
| general best practices guide. It is a valid best practice guide
| for a particular use case and particular user base.
| adgjlsfhk1 wrote:
| the advantage of fido2/webauthn is actually biggest for non
| techies. tech people are the ones who won't fall for bad
| phishing attempts. stopping malicious logins from fake sites is
| a massive win.
| tptacek wrote:
| Yubikeys aren't the serious long-term alternative to TOTP;
| software keys embedded in phones are what we're going to end up
| with.
| imrejonk wrote:
| > Today's email protocols use the STARTTLS protocol for
| encryption; it is laughably easy to do a protocol downgrade
| attack that turns off the encryption.
|
| This can be solved with DANE, which is based on DNSSEC. When
| properly configured, the sending mailserver will force the use of
| STARTTLS with a trusted certificate. The STARTTLS+DANE
| combination has been a mandatory standard for governmental
| organizations in the Netherlands since 2016.
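|
| A sketch of the lookup side using dnspython (the mail host is a
| placeholder; a real MTA must also insist the answer is DNSSEC-
| validated):
|
|     import dns.resolver
|
|     # TLSA record for SMTP on port 25 of the receiving MX host.
|     answers = dns.resolver.resolve("_25._tcp.mx.example.org", "TLSA")
|     for rr in answers:
|         # usage/selector/mtype describe how to match the pinned
|         # certificate data carried in rr.cert.
|         print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())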
| philosopher1234 wrote:
| Is DANE @tptacek-approved? You say DNSSEC and it triggers my
| internal alarm bells.
| jollybean wrote:
| They should have given out free identity fobs with vaccines.
|
| I'm only half joking.
|
| It's really just a matter of changing gears - you carry a
| physical key to your house, car, and your online life. You lose
| the key, you have to go through a bit of pain to get a new one.
|
| But establishing that norm is beyond the purview of anyone it
| seems.
|
| Perhaps one of those advanced Nordic countries will have the
| wherewithal, it seems Estonia is ahead of all of us but we don't
| pay attention.
|
| But this doc looks good.
| PufPufPuf wrote:
| The .cz domain managing company, CZ.NIC, actually gave away
| free GoTrust keys to people who promised to use them for
| e-government services.
| solatic wrote:
| Meh. OMB also mandated moving to IPv6 more than a decade ago:
| https://www.cio.gov/assets/resources/internet-protocol-versi...
|
| Nobody cares. It just gets postponed forever.
| scarmig wrote:
| This sounds really beautiful, and I am saving the link for future
| reference.
|
| I'm curious about the DNS encryption recommendation. My
| impression was that DNSSEC was kind of frowned upon as doing
| nothing that provides real security, at least according to the
| folks I try to pay attention to. Are these due to differing
| perspectives in conflict, or am I missing something?
| dsr_ wrote:
| DNS over TLS (DoT) and DNS over HTTPS (DoH).
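|
| For illustration, the JSON flavor of DoH against a public
| resolver (endpoint and record are illustrative):
|
|     import json, urllib.request
|
|     req = urllib.request.Request(
|         "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
|         headers={"Accept": "application/dns-json"})
|     with urllib.request.urlopen(req) as resp:
|         print(json.load(resp).get("Answer"))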
| uncomputation wrote:
| > "Enterprise applications should be able to be used over the
| public internet."
|
| Isn't exposing your internal domains and systems outside VPN-
| gated access a risk? My understanding is this means
| internaltool.faang.com should now be publicly accessible.
| enriquto wrote:
| As I understand it, this sentence says that the application
| should be safe even if it was exposed to the public internet,
| not that it needs to be exposed. It is a good practice to
| secure everything even if it's visible only internally. The
| "perimeter defense" given by a VPN can be a plus, but never the
| only line of defense.
| jaywalk wrote:
| No, the memo pretty clearly says that VPNs need to go away.
| MattPalmer1086 wrote:
| It says that VPNs and other network tunnels should not be
| relied on.
|
| Where does it say they should go away?
| nybble41 wrote:
| "Further, Federal applications cannot rely on network
| perimeter protections to guard against unauthorized
| access. Users should log into applications, rather than
| networks, _and enterprise applications should eventually
| be able to be used over the public internet_. In the
| near-term, every application should be treated as
| internet-accessible from a security perspective. As this
| approach is implemented, _agencies will be expected to
| stop requiring application access be routed through
| specific networks_, consistent with CISA's zero trust
| maturity model."
|
| "Actions ... 4. Agencies must identify at least one
| internal-facing FISMA Moderate application and make it
| fully operational and accessible over the public
| internet."
| shkkmo wrote:
| Which is saying that agencies have to stop relying on /
| requiring VPNs for authorization and access control, not
| that any user has to stop using VPNs.
| nybble41 wrote:
| It's true that they didn't mandate detecting and blocking
| accesses from VPNs, if the user chooses to connect
| through one. However, they pretty clearly are saying that
| the application _should_ be exposed to the public
| Internet, which is the opposite of what enriquto
| claimed[0] earlier in this thread:
|
| > As I understand it, this sentence says that the
| application should be safe even if it was exposed to the
| public internet, not that it needs to be exposed.
|
| [0] https://news.ycombinator.com/item?id=30103558
| [deleted]
| [deleted]
| servercobra wrote:
| The memo does say each agency needs to pick one system that
| is not internet accessible and make it accessible in the next
| year. The way I read this memo is pushing that VPNs don't add
| much in the way of security (if you follow the rest of the
| memo) and should be removed.
| tptacek wrote:
| The other way to read that part of the memo is that the
| exercise of exposing an application on the public Internet
| is a forcing function that will require agencies to build
| application security skills necessary whether or not they
| use VPNs. Note that the memo demands agencies find a single
| FISMA-Moderate service to expose.
| Sesse__ wrote:
| internaltool.faang.com _is_ publicly accessible, as in, you can
| get to the login page.
| formerly_proven wrote:
| It's a different framing to get rid of figleafs. Everything has
| to be built so that it actually has a chance of being secure -
| if your state of mind is "this is exposed to the public
| internet", BS excuses like "this is only exposed to the
| TotallySecure intranet" don't work any more, because they don't
| work in the first place. Perimeter security only works in
| exceedingly narrow circumstances which don't apply - and
| haven't applied for a long time[1] - to 99.999 % of corporate
| networks.
|
| [1] Perimeter-oriented security thinking is probably the #1
| enabler for ransomware and lateral movement of attackers in
| general.
| 3np wrote:
| For anyone confused about the term "figleaf", I assume it's a
| reference to fig leafs being used by Renaissance artists to
| mask genitalia. So "things concealing the naked truth"
| approximately.
| jsmith99 wrote:
| It's older than that: it's a biblical reference to Adam and
| Eve covering themselves.
| 3np wrote:
| My memory serves me wrong; thought that it being a fig
| leaf in particular was newer than the Bible but it's not
| (Genesis 3:7)
| bspammer wrote:
| I believe Google uses https://cloud.google.com/beyondcorp
| nonameiguess wrote:
| The point isn't to actually expose your internal services. It's
| to not assume that attackers can't breach your network
| perimeter. Internal traffic should be treated with the same
| level of trust as external traffic, that is, none at all.
| tptacek wrote:
| It is a risk. The discourse on VPNs is messy. It's true that
| you shouldn't rely solely on VPNs for access control to
| applications. It's also true that putting important services
| behind a VPN significantly reduces your attack surface, and
| also puts you in a position to get your arms around monitoring
| access.
|
| The right way to set this stuff up is to have a strong modern
| VPN (preferably using WireGuard, because the implementations of
| every other VPN protocol are pretty unsafe) with SSO
| integration, _and_ to have the applications exposed by that VPN
| also integrate with your SSO. Your users are generally on the
| VPN all day, and they're logging in to individual applications
| or SSH servers via Okta or Google.
|
| "RIP VPNs" is not a great take.
| SecurityLagoon wrote:
| Thanks for saying this. This was exactly my take. Saying
| goodbye to VPNs just completely ignores the risk of RCE
| vulnerabilities on your services. You can have a VPN that
| still brings you into a zero trust network.
| tptacek wrote:
| It does essentially define away the authentication bypass
| problem, which is a class of vulnerability we still
| regularly find in modern web applications. To say nothing
| of the fact that no human has ever implemented a SAML RP
| without a game-over vulnerability. Seems like a self-
| evidently bad plan.
| emptysongglass wrote:
| I don't like VPNs. I think there's better ways of protecting
| our infrastructure without them. AWS offers a lot of
| technologies for doing just that.
|
| A VPN is another failure layer: when it goes down, all of
| your remote workers are hosed. The productivity losses are
| immense. I've seen it first-hand. The same for bastion hosts.
| Some tiny misconfiguration that sneaks in and everybody is
| fubared.
|
| Bastion hosts and VPNs: we have better ways of protecting our
| valuables that are also a huge win for worker mobility and
| security.
| tptacek wrote:
| That's true of legacy VPNs like OpenVPN, and less true of
| modern VPNs. But either way: a VPN is a meaningful attack
| surface reduction _for all internal apps_ that doesn't
| require individual apps to opt in or stage changes, and
| doesn't require point-by-point auditing of every app. Most
| organizations I've worked with would be hard-pressed to
| even generate an inventory of all their internal apps, let
| alone an assurance that they're properly employing web
| application security techniques to ensure that they're safe
| to expose on the Internet.
|
| We're just going to disagree about this.
| count wrote:
| If you can get the govt to drop FIPS or WireGuard to change
| to ...different crypto, WireGuard would take off like
| hotcakes.
|
| I'd be flogging tailscale so hard!
|
| Stupid policies.
| mjg59 wrote:
| It's not clear to me that a VPN endpoint is a meaningfully
| smaller attack surface than an authenticating proxy? The VPN
| approach has a couple of downsides:
|
| * You don't have a central location to perform more granular
| access control. Per-service context aware access restrictions
| (device state, host location, that sort of thing) need to be
| punted down to the services rather than being centrally
| managed.
|
| * Device state validation is either a one-shot event or,
| again, needs to be incorporated into the services rather than
| just living in one place.
|
| I love Wireguard and there's a whole bunch of problems it
| solves, but I really don't see a need for a VPN for access to
| most corporate resources.
| tptacek wrote:
| Sure: an authenticating proxy serves the same purpose. I
| agree that unless you have a pretty clever VPN
| configuration, you're losing the device context stuff,
| which matters a lot in some places.
|
| I'd do:
|
| * SSO integration on all internal apps.
|
| * An authenticating proxy if the org that owned it was
| sharp and had total institutional buy-in both from
| developers and from ops.
|
| * A WireGuard VPN otherwise.
| mjg59 wrote:
| If you have the institutional buy-in to handle auth being
| done at the proxy level, that gets you away from having
| to implement SSO per service. I agree that doing this
| well isn't trivial, but in the long term there's a
| reasonably compelling argument that it makes life easier
| for both developers and ops.
| zajio1am wrote:
| Well, we can just move from traditional VPNs to IPSec in
| transport mode.
| mjg59 wrote:
| $ host buganizer.corp.google.com
|
| buganizer.corp.google.com is an alias for
| uberproxy.l.google.com.
|
| uberproxy.l.google.com has address 142.250.141.129
|
| uberproxy.l.google.com has IPv6 address 2607:f8b0:4023:c0b::81
|
| Google's corp services _are_ publicly accessible in that sense
| - but you 're not getting through the proxy without valid
| credentials and (in most cases) device identity verification.
| 3np wrote:
| There are different ways to look at it. From a defense-in-depth
| perspective, you are right. That is, however, one of the main
| points of a zero-trust environment (or you could say Zero
| Trust), which is a kind-of-new trend that much has been written
| about.
|
| Think about it this way: In the context of ransomware attacks,
| a lot of times it's game over once an internal agent is
| compromised. The premise of zero trust is that once an attacker
| is "inside the wall", they gain basically nothing. Compromising
| one service or host would mean having no avenue for escalation
| from there.
|
| I wouldn't say it's objectively better (maybe by the time I
| retire I can make a call on that), but it's a valid strategy.
| Certainly better than relying on perimeter-based security like
| VPN alone, as opposed to it being just one layer of DiD,
| though.
| rodgerd wrote:
| The thing is that over-focus on perimeter security is still a
| huge problem, and one reason that e.g. ransomware owns orgs
| with depressing regularity. There's nothing _wrong_ with
| perimeter controls in and of themselves. But they become a
| substitute for actually securing what's on the internal
| network, so once you've bypassed the perimeter, it's all too
| easy to roam at will.
|
| The people over-relying on perimeter security are the folks
| buying a big sixties car and assuming that chrome bumpers are
| a substitute for seatbelts and traction control.
| ctime wrote:
| The real crux of the issue is the long-tail of applications which
| were never conceived with anything _but_ network-based trust.
| I'm certain the DoD is absolutely packed with these, probably for
| nearly every workflow.
|
| The reason this was so "easy" for Google (and some other
| companies, like GitLab[2]) to realize most of these goals is that
| they are a web-based technology company - fundamentally the
| tooling and scalable systems needed to get started were web, so
| the transition was "free". Meaning, most of the internal apps
| were HTTP apps, built on internal systems, and the initial
| investment was just to make an existing proxied internal service
| external, behind a context-aware proxy [1].
|
| The hard part for most other companies (and the DoD) is figuring
| out what to do with protocols and workflows that aren't http or
| otherwise proxyable.
|
| [1] https://cloud.google.com/iap/docs/cloud-iap-context-aware-ac...
|
| [2] https://about.gitlab.com/blog/2019/10/02/zero-trust-at-gitla...
| amluto wrote:
| Many workflows are proxyable using fine grained IP-level or
| TCP-level security. (I believe that Tailscale does more or less
| this.). This can't support RBAC or per-user dynamic
| authentication particularly well, but it can at least avoid
| trusting an entire network.
| zrail wrote:
| Yeah, a thing that I wish Tailscale could do is hand off an
| attestation of some sort that says a TCP connection is being
| used by user X who is authorized by rule Y. Maybe "magic TLS
| client certs" is a thing coming on the horizon.
| wordsarelies wrote:
| As if Gov't does their own IT infrastructure...
|
| This is a windfall for Gov't contractors.
| golem14 wrote:
| Hey, at least they're 'our kind' of Gov't contractors. No
| $640 toilet seats here ;)
| ineedasername wrote:
| _> It tells us to stop rotating passwords_
|
| Finally! Maybe the places I've worked will finally listen. But I
| stopped reading TFA to praise this, so back to TFA.
| 0xffff2 wrote:
| NIST has made this recommendation for years. Sadly, I work for
| another branch of the Federal government and despite the NIST
| guidance I still have to rotate my password every 60 days.
| (Actually, the system starts sending me daily emails warning me 15
| days out, and the date is based on last change, so practically
| it's more like 45 days.)
| MrYellowP wrote:
| Inching closer to the complete digital lockdown, I see.
| [deleted]
| KarlKemp wrote:
| I'm somewhat unhappy that the "zero trust" terminology has
| caught on. The technology is fine, but trust is an essential
| concept in many parts of life[0], and positioning it as
| something to be avoided or abolished will just further erode the
| relationships that define a peaceful and civil society.
|
| 0: trade only works if the sum of your trust in the legal system,
| intermediaries, and counterparties reaches some threshold. The same
| is true of any interaction where the payoff is not immediate and
| assured, from taxes to marriage and friendship, and, no, it is
| not possible to eliminate it, nor would that be a society you'd
| want to live in. The only systems that do not rely on some trust
| that the other person isn't going to kill them are maximum-
| security prisons and the US president's security bubble. Both are
| asymmetric and still require trust in _some_ people, just not
| all.
| rodgerd wrote:
| "Zero assumption" would have been a better phrase, but that
| horse is not just out of the stable, he's met a nice lady horse
| and is raising a family of foals and grand-foals.
| userbinator wrote:
| _nor would that be a society you'd want to live in._
|
| 100% agreed. My first thought upon seeing the title of the
| article was "and we _trust_ that you did read it?"
|
| The term "zero trust" certainly has a very dystopian
| connotation to me. It reminds me of things like 1984.
| krb686 wrote:
| Couldn't agree more on this being bad terminology. Something is
| always implicitly trusted. Whether it's your root CA
| certificates, your Infineon TPM, the Intel hardware in your
| box, or something else. When I first saw this term pop up I
| thought it meant something completely different than it does, I
| guess because of the domain I work in.
| EthanHeilman wrote:
| Minimizing trust should always be a goal of a security system.
| If you can minimize trust without harming usability,
| compatibility, capability, security, cost, etc... you should do
| it.
|
| When we talk about trust we often mean different things:
|
| * In cryptography and security, by "trust" we mean parties or
| subsystems whose failure or compromise may cause the system to
| fail. I need to trust that my local city is not putting lead in
| the drinking water. If someone could design plumbing that
| removed lead from water and cost the same to install as regular
| pipes, then cities should install those pipes to reduce the
| cost of a trust failure.
|
| * In other settings, when we talk about trust we are often
| talking about trustworthiness. My local city is trustworthy, so
| I can drink the tap water without fear of lead poisoning.
|
| As a society we should both increase trustworthiness and reduce
| trust assumptions. Doing both of these will increase societal
| trust. I trust my city isn't putting lead in the drinking water
| because they are trustworthy but also because some independent
| agency tests the drinking water for lead. To build societal
| trust, verify.
| judge2020 wrote:
| The terminology stems from placing zero trust in the network
| you're on - just because someone can talk to a system doesn't
| mean they should be able to do anything; the user (via their
| user agent) should be forced to prove who they are before you
| trust them and before anything can be carried out.
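|
| As a toy illustration of authenticating the request rather than
| the network (the token and addresses here are made up):
|
|     # Toy per-request check: being able to reach this server
|     # grants nothing; every request must present a valid
|     # (hypothetical) bearer token.
|     from http.server import BaseHTTPRequestHandler, HTTPServer
|
|     VALID_TOKENS = {"demo-token"}       # stand-in for a verifier
|
|     class Handler(BaseHTTPRequestHandler):
|         def do_GET(self):
|             auth = self.headers.get("Authorization", "")
|             if auth.removeprefix("Bearer ") not in VALID_TOKENS:
|                 self.send_response(401)  # reachable != trusted
|                 self.end_headers()
|                 return
|             self.send_response(200)
|             self.end_headers()
|             self.wfile.write(b"hello, verified caller\n")
|
|     HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()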
| coffeefirst wrote:
| Yeah, it's a terrible name. "Zero Assumptions" or similar might
| be clearer.
|
| Words matter. If nothing else, laypersons hear these terms and
| shape their understanding based on what they sound like.
| kstrauser wrote:
| The "trust" here largely refers to identity. Do you trust that
| everyone in your house is your relative, by virtue of the fact
| that they're in your house? That falls down when you have a
| burglar. Similarly, is it good to trust that everyone on your
| corporate network is an employee, and therefore should have
| employee-level access to all the resources on that network? I
| wouldn't recommend it.
| KarlKemp wrote:
| No, but I trust the people I regularly interact with and
| therefore allow them to be in my home. Nobody trusts people
| just because they happen to be in their home. To the extent
| that trust can go to "zero", my fear is it will harm the
| (existing) first form of trust, which is vital, and have
| little impact on the stupid latter definition of trust.
|
| I know tech operates on different definitions/circumstances
| here. That's why the word "zero" is so wrong here, because it
| seems to go out of its way to make the claim that less trust
| is always better.
|
| Call it "zero misplaced trust" or "my database doesn't want
| your lolly", whatever.
| Terretta wrote:
| I'm not sure that ...
|
| > _"discontinue support for protocols that register phone numbers
| for SMS or voice calls, supply one-time codes, or receive push
| notifications."_
|
| ... necessarily means TOTP.
|
| Could be argued "supply" means code-over-the-wire, so all three
| share a threat of MITM or interception: SMS or calls, "supplied"
| codes, and push. Taken that way, all three fail the "something I
| have" check. So arguably one could take "supply one-time codes"
| to rule out both what HSBC does and what Apple does, pushing a
| one-time code displayed together with a map to a different
| device (but sometimes the same device).
|
| I'd argue TOTP is more akin to an open soft token: after initial
| delivery of the shared secret it works entirely offline, and it
| passes the "something I have" check.
| kelnos wrote:
| No, I'd expect it does include TOTP. Read it as "discontinue
| support for protocols that supply one-time codes". A TOTP app
| would fall under that description.
|
| TOTP apps are certainly better than getting codes via SMS, but
| they're still susceptible to phishing. The normal attack there
| is that the attacker (who has already figured out your
| password) signs into your bank account, gets the MFA prompt,
| and then sends an SMS to the victim, saying something like
| "Hello, this is a security check from Your Super Secure Bank.
| Please respond with the current code from your Authenticator
| app." Then they get the code and enter it on their side, and
| are logged into your bank account. Sure, many people will not
| fall for this, but some people will, and that minority still
| makes this attack worthwhile.
|
| A hardware security token isn't vulnerable to this sort of
| attack.
| Terretta wrote:
| > _via SMS_
|
| Or push, or other supply of a code from somewhere. It's just
| oddly worded, sounding like the code in all 3 cases is coming
| over the wire.
|
| Granted, phishing is a different story, but in practice I see
| Yubikeys left permanently inserted in their laptop hosts,
| requiring even less intervention.
| Godel_unicode wrote:
| All of this is really government we-don't-pick-winners speak
| for Yubikeys.
| sodality2 wrote:
| My OpenSK chip begs to differ.
| https://github.com/google/OpenSK
| Godel_unicode wrote:
| The disclaimer on the linked page agrees with me.
|
| "This project is proof-of-concept and a research
| platform. It is NOT meant for a daily usage. The
| cryptography implementations are not resistant against
| side-channel attacks."
| Terretta wrote:
| This makes sense. :-)
| count wrote:
| PIV/CAC Smartcards.
| Godel_unicode wrote:
| That's an interesting subject, since there has been a lot
| of government push for PIV but the internet has
| essentially decided that FIDO2/webauthn are the way
| forward and making them work with PIV is non-trivial.
| paganel wrote:
| > Do not give long-lived credentials to your users.
|
| This screams "we'll use more post-it notes for our passwords
| than before", or maybe the real world this memo is addressed
| to is different from the real (work-related) world I know.
| the_jeremy wrote:
| It specifically calls out not requiring regular password
| rotation. "Short-lived credentials" refers to tokens with
| expirations, not the password you use to log in to the service
| that gives you the token.
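|
| A rough sketch of the distinction (the signing key, TTL, and
| claim names are made up): the password gets you a token, and
| only the token expires.
|
|     # Toy short-lived bearer token: HMAC-signed claims with an
|     # expiry, issued after the user authenticates once.
|     import base64
|     import hashlib
|     import hmac
|     import json
|     import time
|
|     SECRET = b"server-side-signing-key"  # made-up demo key
|
|     def mint(user: str, ttl: int = 900) -> str:
|         claims = {"sub": user, "exp": int(time.time()) + ttl}
|         body = base64.urlsafe_b64encode(json.dumps(claims).encode())
|         sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
|         return body.decode() + "." + sig
|
|     def verify(token: str):
|         body, sig = token.rsplit(".", 1)
|         want = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
|         if not hmac.compare_digest(sig, want):
|             return None                   # forged or corrupted
|         claims = json.loads(base64.urlsafe_b64decode(body))
|         return claims if claims["exp"] > time.time() else None
|
|     print(verify(mint("alice")))          # valid claims for ~15 min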
| paganel wrote:
| Got it, thanks.
| Godel_unicode wrote:
| This was a very unfortunate choice of words by the author, as
| they don't mean credentials in the sense of what a user
| initially authenticates to the system with. Rather, they mean
| authentication tokens, be they Kerberos tickets, bearer tokens,
| etc.
|
| This memo in particular emphasizes the existing guidance the US
| government has issued around not expiring passwords. If you are
| a federal agency, you can have (and are in fact encouraged to
| have!) users with passwords that are unchanged for years.
|
| Edit: it's worth pointing out that the memo does a great job of
| laying this out. I work in security, so possibly there's some
| curse of knowledge at play, but I found the blog post explainer
| to be less clear than the memo it is explaining...
| tptacek wrote:
| The general attitude among practitioners now is that "post-it
| notes with passwords on them" is superior to the more common
| practice of "shitty passwords shared across multiple services".
| cpach wrote:
| Back in 2009 or so I stored my most-frequently used passwords
| on a piece of paper in my wallet.
|
| (These days I simply use 1Password.)
| mikewarot wrote:
| Does any of this protect against a zero-day exploit running on
| the client device?
| wmf wrote:
| No, but once the exploit is discovered they could use client
| posture information to prevent unpatched clients from logging
| on.
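|
| For instance, a toy posture gate (the OS names, dates, and
| report fields here are hypothetical) that refuses sign-in from
| clients reporting a patch level older than the fix:
|
|     # Toy device-posture check: deny login when the client's
|     # reported patch date predates the fix for a known exploit.
|     from datetime import date
|
|     MIN_PATCH = {"macos": date(2022, 1, 26),
|                  "windows": date(2022, 1, 11)}
|
|     def admit(report: dict) -> bool:
|         floor = MIN_PATCH.get(report["os"])
|         return floor is not None and report["patched_on"] >= floor
|
|     print(admit({"os": "macos", "patched_on": date(2022, 1, 27)}))  # True
|     print(admit({"os": "macos", "patched_on": date(2021, 12, 1)}))  # False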
___________________________________________________________________
(page generated 2022-01-27 23:00 UTC)