[HN Gopher] How we protect our most sensitive secrets from the m...
___________________________________________________________________
How we protect our most sensitive secrets from the most determined
attackers
Author : intunderflow
Score : 197 points
Date : 2021-11-18 14:28 UTC (8 hours ago)
(HTM) web link (monzo.com)
(TXT) w3m dump (monzo.com)
| MayeulC wrote:
| Yet, it seems like their app is closed-source and distributed via
| the android play store (it seems like the main way to interact
| with that bank?).
|
| I wish it was available as a reproducible build on F-Droid.
| That's how critical apps ought to be distributed.
|
| This is a common trope in security-oriented businesses: more
| security is a good thing, but the consumer often isn't afforded
| the same attention.
| alias_neo wrote:
| It's also worth noting (although there is nothing special
| about the security procedure described in OP's article; I'm in
| an industry that's been doing this for some time) that Monzo
| doesn't stop you from running their app on a rooted Android
| phone, unlike all of the other major banks, which block it in
| the misguided belief that doing so is somehow more secure, and
| whose app experiences are seriously sub-par compared to Monzo's.
| intunderflow wrote:
| We have an open API https://docs.monzo.com
| yawnr wrote:
| You want this private (banking) company's app to be open
| source? Lol
| OJFord wrote:
| Why not, really? I wouldn't expect it to be - just because
| it's so not the norm - but why not?
|
| If you're talking about security, surely a closed-source app
| is just obscurity; there should be (and it sounds like there
| are) better precautions in place, such that keeping the source
| closed doesn't add anything.
|
| If it's ripping off features.. meh, is it really development
| time in frontend logic/UI that keeps Monzo ahead of any
| competitors it's ahead of? (I'm saying that vaguely and
| unassertively because users seem to get very tribal and
| argumentative when it comes to 'challenger banks'!)
|
| If it's wanting to hide the API for whatever other 'it's
| proprietary' reason, well Monzo has a ('beta' and 'developer
| only' for years) public API anyway. And 'open' banking of
| course, so you can use TrueLayer or whatever anyway.
| MayeulC wrote:
| Is most of their added value in the client app? I certainly
| doubt it. The app is probably a front-end that queries
| various API endpoints.
|
| Who could use the source code? Their competitors? Maybe, but
| there isn't that much competition in that space, and actually
| developing the app is certainly not the most difficult part.
| Auditing it probably is, and using a competitor's open source
| app as a base is probably a no-go for a lot of established
| players. For what purpose would they copy a functioning app?
|
| One thing I can see as an argument against would be raising
| the bar for would-be copycats and phishing attempts. Not that
| it raises it too much either.
|
| Or is it to limit documentation of their API? I know security
| through obscurity is thought of as a good additional measure
| in traditional banking, but that doesn't seem to be the case
| here. And reverse-engineering an app wouldn't be that
| complicated.
|
| I would actually love an open source, cross-bank app. Not
| sure why it couldn't happen. I'm not sure either what the
| value proposition is for banks to try and limit that kind of
| development. Control the user's interactions with the bank as
| much as possible?
|
| It's like these banking apps requiring "safetynet" or
| similar: they trust google or the manufacturer more than
| myself, while I am the actual client!
| intunderflow wrote:
| If you're interested in how other folks handle super important
| private keys, I highly suggest reading the IANA documentation on
| the DNSSEC Root Zone and how they manage that:
|
| https://www.iana.org/dnssec/archive
|
| https://www.iana.org/dnssec/ceremonies
|
| https://www.youtube.com/channel/UChND9hEeJQjtLDFZ-m8U47A
| ziddoap wrote:
| Also of interest:
|
| https://www.cloudflare.com/dns/dnssec/root-signing-ceremony/
| GoblinSlayer wrote:
| Do you test this system, like drills with planted failures?
| intunderflow wrote:
| Do you mean testing if someone can break in or testing that our
| system can handle disaster recovery?
| GoblinSlayer wrote:
| Ability to interrupt when someone notices a hole in a bag, a
| non-working camera, a missing witness, administrator's
| dismissal.
| intunderflow wrote:
| These are covered by the exceptions procedure as described
| in the script. Just to clarify, the Ceremony Administrator
| is a per-ceremony role rather than a permanent occupation,
| so dismissal would simply mean replacement with another
| team member.
|
| Exceptions are evaluated, documented and acted upon from
| the moment they are discovered. The Ceremony Administrator
| decides what to do about them after consulting the Internal
| Witness (but the CA gets the final say). At the end of every
| ceremony (with exceptions or not) we ask every participant
| to sign a sheet saying they're happy we effectively haven't
| breached security (last page of here:
| https://monzo.com/static/docs/redacted-root-certificate-
| auth... ). If they don't sign, that would kick off a large
| discussion about whether we're happy to continue using
| these roots or replace them.
|
| From memory we've had about two exceptions, both very minor
| and both resolved (as ever, when you get into the room,
| stuff comes up that you couldn't have planned for; if I
| recall, one was something silly like a typo in the command
| documented in the script, so we just ran the command
| without the typo).
| GoblinSlayer wrote:
| Yes, the system is assumed to work correctly; that's
| sensible. But does it? What happens when some component
| starts to work in an unintended way? If the participants
| don't have experience with failures, how will they
| behave?
|
| I mean the case where the administrator denies an
| interruption. Rotation of administrators and signatures is
| nice, but there's still one person who acts as the most
| authoritative administrator. The bug to address here is
| conformity, see
| https://en.wikipedia.org/wiki/Asch_conformity_experiments
| wly_cdgr wrote:
| "For equipment that is easier to tamper with or where tampering
| could become a larger problem, we make unannounced visits at
| random to retail stores and purchase off-the-shelf products."
|
| Admit it...y'all just want a good reason to go to MicroCenter,
| right?
| tialaramex wrote:
| > (sort of like a private password)
|
| I think this is an unfortunate framing. The property of public
| key cryptography is that _unlike_ secrets such as passwords,
| only one party has the private key. This instantly eliminates an
| important class of potential problems.
|
| And unlike with something sophisticated like an asymmetric PAKE,
| I think it is relatively practical to explain this benefit to
| end users: Monzo's counter-parties can't lose a private key they
| don't have, and can't produce signatures as Monzo themselves. If
| there _are_ signatures that shouldn't exist, Monzo knows the
| problem necessarily must be with their systems.
|
| British banks rely far too much on trust. A symptom of this is
| that periodically a big merchant (e.g. a supermarket) will
| accidentally run a transaction feed twice (e.g. all card
| transactions at Tesco on Thursday happen twice). The bank _could_
| insist (as it does for its own personal account holders) that
| these transactions have unique IDs authenticated by the card,
| which would mean the duplicates are rejected, but it trusts
| these huge merchant customers, so all the transactions they say
| occurred are just assumed to be fine, and the result is that the
| banks eat a bunch of customer anger and costs. It was remarkable
| to me that this would happen in 1995. It's _outrageous_ that it
| still happens today, but it does.
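The duplicate-feed problem described above boils down to idempotency: if every transaction carries a unique, card-authenticated ID, a replayed feed is harmless. A minimal, hypothetical sketch (the names and structure are illustrative, not any bank's actual ledger):

```python
# Hypothetical sketch: rejecting a replayed merchant transaction feed by
# requiring a unique transaction ID per transaction. A feed accidentally
# submitted twice then applies each transaction exactly once.

def process_feed(feed, seen_ids):
    """Apply each transaction at most once; duplicates are rejected."""
    applied, rejected = [], []
    for txn in feed:
        if txn["id"] in seen_ids:
            rejected.append(txn)      # replayed (e.g. the feed ran twice)
        else:
            seen_ids.add(txn["id"])
            applied.append(txn)
    return applied, rejected

# The same feed submitted twice:
feed = [{"id": "t1", "amount": 1250}, {"id": "t2", "amount": 399}]
seen = set()
first, _ = process_feed(feed, seen)        # both transactions apply
second, dupes = process_feed(feed, seen)   # the duplicate run: all rejected
```

The point of the comment is that banks skip this check for large merchants they trust, so the duplicates go through.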
| dahart wrote:
| Just a side note, but I have to give a thumbs up to this site
| being the first one I've seen since cookie popup shenanigans
| started that allows me to decline tracking cookies and dismiss
| the popup in a single click.
| forgotmypw17 wrote:
| No cookie popup without JS. Props.
| po1nter wrote:
| In France, it's the law. Refusing cookies should be as easy as
| accepting them ( https://www.cnil.fr/fr/refuser-les-cookies-
| doit-etre-aussi-s... )
| ziddoap wrote:
| And yet, in the majority of cases, the flow is not as easy,
| and nothing is done about it.
|
| Jaywalking is illegal in many places too. It's an enforcement
| problem, not a legislative one.
| [deleted]
| dspillett wrote:
| It isn't the case with this site, but I generally mistrust
| "decline all" buttons without further investigation because on
| some sites it is only equivalent to unticking "consent" while
| leaving a pile of "legitimate interest" options enabled.
| Nextgrid wrote:
| They're just complying with the regulation. Any consent flow
| that does not allow you to easily decline does not actually
| comply with the regulation (at which point you can just not
| bother with a consent flow at all - you're still in breach, but
| at least you're not destroying the user experience).
| dspillett wrote:
| _> but at least you 're not destroying the user experience_
|
| That is exactly why many do it. The chance of getting fined
| is low, practically non-existent in most cases, and making
| the user experience difficult increases the chance of people
| just saying "fuck it" and agreeing[1]. Those high up the
| data collection chain also hope that annoying enough people
| with the bad UX will make them campaign to have the
| regulations rescinded[2] in their territory.
|
| ---
|
| [1] losing people like me who say "fuck it" and move on
| elsewhere is seen as a small price to pay
|
| [2] even though the regulation isn't at fault, the
| deliberately bad UX is
| johnchristopher wrote:
| For what it's worth the two top GDPR consent Wordpress plugins
| can be configured like that (it's actually their default if
| memory serves well).
|
| It's entirely up to the people authoring the website.
| trevcanhuman wrote:
| Yes. I was amazed by this. No 'more options' thingy that
| renders at a weird size on phones, no scrolling to 'save
| cookie changes', etc. This is how companies should treat
| customers if their business is not harvesting their data.
| fsflover wrote:
| I expected Qubes OS to be used somewhere, but it's not. By the
| way, it's sometimes more secure than air gapping:
| https://invisiblethingslab.com/resources/2014/Software_compa....
| intunderflow wrote:
| Qubes is awesome. I think the main thing it protects against,
| though, is an application you don't trust too much escalating
| its privileges (for example, a website escapes its sandbox into
| the browser and then tries to go up further). Since we're on an
| air-gapped, immutable system, all the tools we have on our CD
| have some implicit trust, so if you manage to get software that
| isn't playing nice onto the OS, you've probably done it through
| hijacking the OS CD anyway (I covered a similar-ish question
| here: https://community.monzo.com/t/how-we-protect-our-most-
| sensit... )
|
| We also don't run that many complex commands on the actual OS,
| it's basically one of three things 99% of the time:
|
| - Mount this CD containing some CSRs or etc
|
| - Ask the HSM to please sign this
|
| - Burn a CD with these files and then report the hash / PGP
| word list of the content on the CD
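The "report the hash of the burned CD" step can be sketched as plain digest comparison: both sides of the air gap compute a digest of the same bytes and compare out of band. A minimal illustration (the PGP word list Monzo mentions is just a human-readable encoding of bytes like these; it's omitted here):

```python
# Minimal sketch of the "report the hash of the burned files" step:
# hash the file contents with SHA-256 so people on both sides of the
# air gap can compare digests out of band (read aloud, written down, etc.)
import hashlib

def digest(data: bytes) -> str:
    """Hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# NIST test vector: SHA-256("abc")
print(digest(b"abc"))
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

In practice you'd hash the ISO image or each file on the disc, not a toy string, but the comparison step is the same.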
| captn3m0 wrote:
| Does coen boot from CD and switch to RAM? Or do you have
| 2-disk-drives to mount the second disk?
| intunderflow wrote:
| Coen boots off a CD and mounts itself into RAM as and when
| parts of it are loaded, but you still need the CD in the
| computer for it to load stuff not in RAM so we have a 2nd
| CD reader + writer for the material
|
| You can tell if you're using a part of the OS that hasn't
| been used in a session yet because you can hear the CD
| drive with the OS CD in it physically spin up :)
| beermonster wrote:
| QubesOS is security by compartmentalisation; it allows you to
| segregate and air gap hardware virtually (which requires
| VT-d/IOMMU and trusting that there's no Xen guest escape or
| hypervisor exploit). It's a neat project, though IMHO it would
| be weaker for the threat model you describe in your blog post.
| fsflover wrote:
| https://forum.qubes-os.org/t/the-benefits-and-drawbacks-
| of-a...
| mrb wrote:
| As an ex-InfoSec engineer for 12 years, my only concern is they
| seem to have no process to recover from a defective or destroyed
| HSM. But other than that it's quite good (though I have only
| skimmed the blog post.)
| intunderflow wrote:
| We have multiple identical backup HSMs in multiple different
| sites. Any one of these + a keyholder quorum can recover the
| system. The entire safe room burning down is part of our
| possible disasters list.
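The "keyholder quorum" idea above is classically implemented as threshold secret sharing: any k of n shares reconstruct the secret, while fewer reveal nothing. The post doesn't say which mechanism Monzo's HSMs use, so this is the textbook Shamir scheme as a sketch, not their actual implementation:

```python
# Textbook Shamir secret sharing over a prime field (requires Python 3.8+
# for pow(x, -1, p)). Illustrative only; real HSM quorums use vendor-
# specific smartcard mechanics.
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than the demo secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
```

With k=3 and n=5 keyholders, losing two shares (or two people being unavailable) doesn't block recovery, and no two colluding holders learn anything about the secret.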
| mrb wrote:
| Great!
| alecco wrote:
| The banking establishment should take note. They often
| overcomplicate things and rely on "security through obscurity".
| Monzo opening up their security architecture and even some of
| their source code puts them to shame. I really like the
| air-gapped notebook and custom OS on CD-R, and their
| ceremonies. It looks like they got very competent people to
| think it through properly.
|
| Source: formerly at one of the largest banks.
| pulse7 wrote:
| I hope air gapping will stay a niche. If it goes mainstream
| there will be more attacks on it...
| intunderflow wrote:
| There's some interesting attacks based on working out the
| sound of each key on a keyboard and then guessing keystrokes
| - https://github.com/ggerganov/kbd-audio
| GoblinSlayer wrote:
| Can a white noise machine help there?
| https://ggerganov.github.io/jekyll/update/2018/11/30/keytap-...
| says the method is sensitive to noise.
| LouisSayers wrote:
| Blasting some Meshuggah should do it
| GoblinSlayer wrote:
| Meshuggah is a fixed pattern that can be subtracted.
| Ideally the noise should be unpredictable.
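The point about subtraction can be shown in a few lines: masking a secret signal with a *known* pattern (a song) is reversible, while fresh random noise leaves nothing to subtract. A toy sketch with lists of ints standing in for audio samples:

```python
# Toy model of masking keystroke audio. A fixed, publicly known pattern
# (a song) can be subtracted back out; unpredictable noise cannot.
import random

keystrokes = [3, 0, 7, 0, 2, 0, 5]          # the "secret" signal
song       = [9, 1, 4, 4, 1, 9, 2]          # fixed, publicly known pattern
noise      = [random.randrange(100) for _ in keystrokes]  # fresh randomness

recorded_with_song  = [k + s for k, s in zip(keystrokes, song)]
recorded_with_noise = [k + n for k, n in zip(keystrokes, noise)]

# The attacker knows the song, so they subtract it and recover everything:
recovered = [r - s for r, s in zip(recorded_with_song, song)]
assert recovered == keystrokes

# With random noise there is nothing known to subtract; without the noise
# samples themselves, the attacker can only guess.
```

Real keystroke-audio attacks work on correlations rather than exact sample arithmetic, but the principle is the same: only unpredictable masking destroys information.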
| LouisSayers wrote:
| True, I guess they'd need to perform it live - as part of
| the security ceremony
| LouisSayers wrote:
| Preferably in ceremonial cloaks
| mathieubordere wrote:
| You could let the system ask for the characters of the
| password in random order. It would be a bit of a hassle,
| but it gives some added protection for largish passwords.
| hammock wrote:
| Airgapped machines ought to be housed in an airgapped room,
| equivalent to a SCIF.
| intunderflow wrote:
| Funny you mention this, because the SCIF tech specs from
| DNI.gov are cited a lot in our physical security envelope
| docs. It's an interesting but very terse read; I wouldn't
| read the whole thing for fun:
| https://www.dni.gov/files/Governance/IC-Tech-Specs-for-
| Const...
|
| I'd skip to Chapter 2: Risk Management and Chapter 3:
| Fixed Facility SCIF Construction
| beermonster wrote:
| "..it cannot connect to any wireless network, so we don't
| have to worry about it doing so maliciously. We've also
| taken other measures to frustrate attackers, for example we
| have physically removed the hard drive so there is no way
| to persist data on the laptop itself"
|
| Hopefully by physically removing them where possible? Also
| consider the microphone and speakers, if you haven't already...
| adontz wrote:
| Worked in 3 major banks in my country. All of this seems so
| overthought and overengineered, and it doesn't really address
| any real fraud risk. To me, personally, it looks more like a
| marketing move for geeks than a real necessity.
| lisper wrote:
| You should consider the possibility that your perceptions are
| the result of your not understanding the problem that these
| procedures are designed to solve rather than the people who
| instituted them being idiots.
| xchaotic wrote:
| What OP is trying to say is that a disproportionate amount of
| effort is spent on a rather narrow attack vector and it's
| still not 100% secure anyway. The people creating these
| procedures had goals but good UX was not one of them and
| definitely not the highest priority.
| lisper wrote:
| Yes, I get that. But a compromised root key is essentially
| synonymous with the end of your entire business (to say
| nothing of destroying the finances of many if not all of
| your customers) so the effort being expended doesn't seem
| disproportionate to me.
|
| If you feel differently, by all means go do business with a
| bank that doesn't waste all this effort.
| hogFeast wrote:
| Overthought and overengineered describes Monzo very well. Their
| business model isn't really banking but coming up with ways to
| extract money from investors, so this kind of overengineering
| is a strong business move for them.
| bigtones wrote:
| So your entire system relies on a single consumer grade laptop
| without any redundancy or ECC. Nice.
| intunderflow wrote:
| I think either the blog post is worded badly (sorry) or you've
| misread it, if you think that. I talked a bit about the laptop
| precautions here:
|
| https://news.ycombinator.com/item?id=29266521
|
| And why it's just a laptop here:
|
| https://news.ycombinator.com/item?id=29267685
|
| To be super clear, the actual private keys only ever exist on
| HSMs
| yabones wrote:
| I find it sort of ironic how the operating system they use,
| COEN[1], instructs people to turn off SELinux to build the image
| correctly...
|
| [1] https://github.com/iana-org/coen
| miles wrote:
| Seems like there'd be less need for SELinux on a network like
| theirs:
|
| "Our entire system is air-gapped, which means it is physically
| isolated from the outside world and has no way to connect to
| the internet."
| Johnny555 wrote:
| If your system is important enough to be air gapped, then
| it's important enough to have tight security on those air
| gapped computers. Air gapping is just one layer of security,
| it's not impossible to bridge the gap.
| miles wrote:
| What tangible benefits would SELinux bring to an airgapped
| network?
| sneak wrote:
| > _We've also taken other measures to frustrate attackers, for
| example we have physically removed the hard drive so there is no
| way to persist data on the laptop itself_
|
| This sentence made me believe in the incompetence of the authors.
| tornato7 wrote:
| Why do you think this shows incompetence?
| sneak wrote:
| There are about 10 other ways of persisting secret
| information on a system other than the hard disk. The claim
| they make is amateur hour.
|
| There are better ways of doing this. There are embedded
| systems that can run general-purpose OSes and cannot store
| permanent state, such as the Raspberry Pi 3 and earlier,
| that are much better suited to such things.
| Nextgrid wrote:
| Well, you have to start somewhere. You will never get perfect
| security; however, persisting malicious code on disk (and
| getting the machine to boot from it) is easy and standardized.
| Doing the same in firmware (or non-volatile storage in other
| parts of the machine) is going to be very manufacturer-specific
| and will require knowledge of the exact hardware, versions, etc.
| And because of the air gap you can't even tell whether your
| attempt worked (and of course, if your malware needs to
| exfiltrate data, it still needs to break out of the air gap).
|
| TLDR: it's not bulletproof, but it increases the effort required
| from an attacker significantly.
| intunderflow wrote:
| This is covered more here
| https://news.ycombinator.com/item?id=29266521
|
| Sorry for not being up to your standards I guess :(
| sneak wrote:
| competence/incompetence is not a compliment or slur.
|
| I am an incompetent welder, for example.
|
| My point is that if you are removing the hard disk to prevent
| state storage, that is ineffective, and if you think that it
| is effective (which you stated), you are objectively
| factually incorrect.
| xchaotic wrote:
| But does Monzo (or any other bank at this point) have a
| sustainable business model? Finance is getting commoditised so
| there's very little money to be made from saving accounts and
| Monzo has been loss making since the beginning. It's great that
| they have a slightly more modern software stack but it might not
| be enough to sustain a bank.
| hn_throwaway_99 wrote:
| Great writeup intunderflow. The question I have, though, is how
| this doesn't just "push" the security problem one level down.
|
| Point being that yes, you need to ensure that your root certs are
| super secure and that you have a fully documented, videotaped
| chain-of-custody. But then at some point you need to use that
| root cert to sign some other cert that DOES live in your
| infrastructure and is used to sign requests. How do you control
| your systems at the point where you need to use your cert (or a
| cert chain) to sign requests that by their nature must be
| connected to the Internet?
| intunderflow wrote:
| This will probably come in a future blog post (and also I don't
| lead this part of secret management, so I don't have all the
| context), but our intermediates mostly live in nicely locked
| down machines in AWS (avoiding exact product names because I
| don't want to tread on other people's toes)
|
| There's definitely a much bigger risk with an intermediate on
| the internet; however, we at least have the mitigation of being
| able to revoke it if it goes wonky. The main priorities I see
| in this space are: (1) minimising the chance of a compromise as
| much as possible (very clear and well-defined communication
| channels, so that as few things as possible are open to even
| being poked at for vulnz over the network) and (2) being ready
| to react if there is a compromise (revoke the cert and recover)
| hn_throwaway_99 wrote:
| Thanks very much for the response and the writeup, really
| appreciate it!
| Nextgrid wrote:
| If the intermediate is compromised you'll need to do a new
| key ceremony with the root to sign a new CRL, correct? And
| I'm assuming every component is configured to explicitly
| check CRLs and fail if those are unavailable for whatever
| reason, right? How does it ensure that the CRL it's getting
| is the latest CRL including the now-revoked certificate, and
| not an earlier one that's being replayed?
| intunderflow wrote:
| PKI revocation as a technology is definitely not the
| greatest thing in the world; the best protocol at the
| moment is OCSP with a specifically configured Validation
| Authority. But even then, quite a lot of OCSP
| implementations in modern software are configured to
| continue if they can't make a connection.
|
| Fortunately, because the usage of our CAs is tightly scoped
| to Monzo and our partners, we can reach out and explicitly
| ban a certificate from our machines (and tell partners to
| do the same) without too much trouble (as compared to
| public CAs, who have no chance of being able to do this),
| in addition to following normal PKI revocation procedures.
| intunderflow wrote:
| > If the intermediate is compromised you'll need to do a
| new key ceremony with the root to sign a new CRL, correct?
|
| You can use Validation Authorities to avoid needing the
| full on root that can issue new certs just to revoke an
| existing one.
| https://en.wikipedia.org/wiki/Validation_authority
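The replay concern raised above (an attacker serving an older CRL that predates the revocation) is usually addressed by the relying party tracking the monotonically increasing CRL number and the validity window. A simplified sketch of that check, with field names loosely following RFC 5280 (the structures here are toy stand-ins, not a real X.509 parser):

```python
# Sketch of an anti-replay check for CRLs: reject any CRL whose number is
# not strictly newer than the last one seen, or whose validity window has
# expired. Simplified; real implementations also verify the CRL signature.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CRL:
    crl_number: int              # monotonically increasing per issuer
    next_update: datetime        # deadline for fetching a fresher CRL
    revoked_serials: frozenset

def accept_crl(candidate: CRL, last_seen_number: int, now: datetime) -> bool:
    if candidate.crl_number <= last_seen_number:
        return False             # replay of an older (or the same) CRL
    if candidate.next_update <= now:
        return False             # stale: past its validity window
    return True

now = datetime(2021, 11, 18)
fresh = CRL(42, now + timedelta(days=7), frozenset({"0xBAD"}))
stale = CRL(41, now + timedelta(days=7), frozenset())  # pre-revocation copy
assert accept_crl(fresh, last_seen_number=41, now=now)
assert not accept_crl(stale, last_seen_number=41, now=now)
```

Note the limit of this approach: it only detects replays once the client has seen the newer CRL at least once, which is why short `next_update` windows (or OCSP with a hard-fail policy) matter.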
| feldrim wrote:
| Well, in that scenario, a reasonable approach would be to
| create intermediate certificates offline, deploy them where
| needed, secure them with less effort than the root but more
| than everything else, and issue the actual certificates in
| use via the intermediates.
| truthwhisperer wrote:
| First rule is don't talk to strangers about your internal
| procedures
| raesene9 wrote:
| From having worked in UK FS in the past this reminds me of a more
| modern version of how PKI key signing ceremonies used to happen
| :)
|
| It's an interesting contrast with how carefully root CA keys are
| handled in this kind of setup, compared to things like Kubernetes
| clusters, where you'll typically get 3+ root CA keys in the clear
| on the API server disks, and I've seen them committed to
| configmaps (e.g. in RKE's default setup), and even checked into
| GH repos...
| dilyevsky wrote:
| Why would you put the private key in a configmap? K8s has an
| API for signing user CSRs with the master CAs
| raesene9 wrote:
| For the case of RKE, which I mentioned, I _think_ they do it
| to have the cluster config available, but no idea why they
| use a configmap rather than a secret
| (https://github.com/rancher/rke/issues/1024 is the relevant
| issue)
| bob331 wrote:
| The biggest secret being that they are a business failure
| hacker_newz wrote:
| The PKI infrastructure for a bank is on a laptop?
| intunderflow wrote:
| The keys are on an HSM, but something has to nicely ask the HSM
| to do things, and an air-gapped laptop feels like a pretty
| sensible choice? (Especially given the precautions we've talked
| about)
| icare_1er wrote:
| It would be interesting to hear how efficient their whole IT/dev
| can be, given that the slightest change would apparently take
| several days, a trip to a CD-R shelf, etc., just to be allowed
| to start up a laptop and get to work?
| intunderflow wrote:
| This is only for secrets that are incredibly important to the
| point that their undetected loss could cause massive problems,
| I run these key ceremonies and I'm typing this from my bed on
| my work issued Macbook :)
| trevcanhuman wrote:
| Hey glc, FYI the GitHub repo [0] with the OS Code from Coen
| is not available, the url is probably broken.
|
| [0] https://github.com/monzo/coen
| intunderflow wrote:
| I should have linked to an internal page that redirected
| there so people didn't get a misleading 404, but the
| internal coen repo is only open to Monzo employees, not
| the public, because it contains some HSM-specific stuff :(
| sorry - the public coen (which we're a fork of) is open
| though, here: https://github.com/iana-org/coen
| kinard wrote:
| intunderflow - What process did you have to go through to get the
| OK to write and publish this article? I'm very impressed that
| you've published this.
| intunderflow wrote:
| Kickoff was 18th of October after I indicated to someone
| internally that I wanted to write something for the blog about
| this program (not gonna name them in case they don't want to be
| named :P)
|
| First draft (very rough) was 22nd October
|
| Second draft (toned around a bit) was 5th November
|
| Security review + a proper edit happened while I was on holiday
| in the week of 8th-12th November (lots of comments left for
| me); basically nothing got removed, I think one image and about
| 2-4 lines of text from memory
|
| I came back to a bunch of comments (nearly all on the Monzo
| Tone of Voice https://monzo.com/tone-of-voice/), went through
| those, proof-read and then got a green light from:
|
| - Engineering
|
| - Marketing + Press
|
| - Some other folks in Security (because of the content)
|
| And then we moved this over from Google Docs where I was
| drafting it into our Blog system (Contentful), had a quick skim
| to make sure it read properly and then published :)
| beermonster wrote:
| Kudos to you and Monzo. It's great you wanted to write and
| publish this and it says a lot about Monzo that they let you
| do it.
| lifeisstillgood wrote:
| Off topic, but I love the suggestion in "tone of voice" on
| how to test whether a sentence is written in passive or active
| voice. Just add "...by monkeys" to the end of the sentence. If
| the sentence still makes sense, it's passive, so rewrite it.
|
| e.g.
|
| A decision has been made to close your account ...by monkeys
|
| vs
|
| We decided to close your account ... by monkeys
| alecco wrote:
| > Monzonauts: you can see the code in the Coen repository
| https://github.com/monzo/coen
|
| 404 is it a private repo?
| [deleted]
| intunderflow wrote:
| Yeah this is confusing but that repository is basically just
| for Monzo employees, sorry :(
| https://news.ycombinator.com/item?id=29266757
|
| We're going to remove that link
| anned20 wrote:
| That is probably the reason it is targeted at Monzonauts (Monzo
| employees)
| netr0ute wrote:
| At this point, I'd just use pencil and paper.
| intunderflow wrote:
| (Disclaimer: I'm the author) the thought of using pencil and
| paper to do elliptic curves bends my mind
| _wldu wrote:
| It can be done by hand just like RSA.
| officeplant wrote:
| This gives me RSI
| intunderflow wrote:
| I'm not paid enough to generate primes for RSA by hand :P
| scoopertrooper wrote:
| Why generate them?
|
| https://www.amazon.com.au/first-100000-Prime-Numbers/dp/B089...
| matbatt38 wrote:
| They're all too small :(
| [deleted]
| jaggederest wrote:
| You can also use a deck of cards:
| https://en.wikipedia.org/wiki/Solitaire_(cipher)
| Veserv wrote:
| What does "the most determined attackers" mean in this context?
|
| Are we talking the real most determined attackers like the US or
| China who can easily deploy $10B (i.e. a team of ~3,000 people
| fulltime for 3 years) for a strategic advantage?
|
| Or $1B (i.e. ~300 people for 3 years) which is within reach of
| every government, international criminal organization, and
| thousands of multinationals?
|
| Or $100M (i.e. ~30 people for 3 years) which is now just a line
| item for those organizations?
|
| Or $10M (i.e. ~3 people for 3 years) which is 10x better than the
| "gold standard" used by every other bank in the world, but still
| profitable for small criminal outfits doing ransomware attacks?
|
| Are there any legally binding marketing statements made by any
| associated security executive on where they sit in these 4 orders
| of magnitudes of "most determined attackers"?
|
| It is frankly absurd that we continue to allow statements like
| "secure", "reasonably secure", "most determined", or whatever
| without expecting some degree of quantification within 4(!)
| orders of magnitude. If you cannot quantify or demarcate an
| accurate lower bound within 4 orders of magnitude, you probably
| have no idea what you are doing.
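The four tiers above all follow from one implied rate: the comment's figures work out to roughly $1.1M per person-year, a fully loaded cost back-derived from the numbers given, not a quoted figure. Making the arithmetic explicit:

```python
# Veserv's orders-of-magnitude arithmetic, made explicit. The per-person
# cost is an assumption back-derived from the comment's figures
# ($10B ~= 3,000 people x 3 years), not a quoted number.
COST_PER_PERSON_YEAR = 1_100_000  # USD, hypothetical fully loaded cost

def budget(people: int, years: int) -> int:
    """Total attacker budget implied by a team size and duration."""
    return people * years * COST_PER_PERSON_YEAR

for people in (3, 30, 300, 3000):
    print(f"~{people} people x 3 years ~= ${budget(people, 3):,}")
```

Each tier is exactly 10x the previous one, which is the comment's point: "most determined" spans four orders of magnitude without further qualification.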
| [deleted]
| d0mine wrote:
| In a world where a bank robbery is called "identity theft," why
| would they care? All they need is security theater.
| tralarpa wrote:
| > $10M (i.e. ~3 people for 3 years)
|
| Where can I apply?
| Nextgrid wrote:
| When it comes to a bank (especially one mostly centered on the
| UK; Monzo still has very little international capability), I
| don't think state-sponsored attackers are a major problem.
| Those rarely need the money, and as I mentioned previously,
| Monzo doesn't routinely do international/SWIFT transfers (if
| they can at all), so getting the money out would be tricky.
| WaitWaitWha wrote:
| > ... I don't think state-sponsored attackers are a major
| problem ...
|
| The small and new nature of Monzo makes it an attractive
| target.
|
| An attacker may well use a slightly smaller, slightly less
| established business as a pivot point into other, more
| established businesses. Since this is a 'bank' it will have
| some connection with other institutions. It does not need to
| be SWIFT or similar to be exploited.
|
| > ... don't think state-sponsored attackers are a major
| problem. Those rarely need the money ...
|
| I disagree with this too. State-sponsored APTs may very well
| take advantage of financial gains, both to enrich the
| attacking country, and throw off investigations. (e.g. DPRK,
| Iran)
|
| Just my opinion.
| insomniacity wrote:
| They're still a target if an individual they're targeting (to
| harass, find or discredit) banks there.
|
| They're still a target for someone wanting to cause
| disruption or engage in hybrid warfare - imagine a
| simultaneous hack of all the neobanks, possibly followed by a
| few of the traditional banks a week later for maximum impact?
|
| They're still a target for someone wanting to pivot into
| other organisations they might be connected to, such as card
| networks or regulators.
| [deleted]
| chespinoza wrote:
| This looks to me like an open invitation to be attacked. Banks
| tend not to describe openly the technologies or methodologies
| they use, for good reasons, and Monzo has been quite open about
| these matters for a long time (their use of Go, for instance).
| They're doing pretty well though, and since traditional banks
| are using quite old technologies, Monzo seems to be in a far
| less vulnerable position than traditional banking.
| throwaway744678 wrote:
| Plot twist: their procedures are actually completely different,
| and this blog is part of a larger scheme to confuse potential
| attackers.
| intunderflow wrote:
This isn't an invitation to attack us, except as part of our
| HackerOne program (please don't make my life more difficult),
| but to steal a line from someone else at Monzo: Kerckhoffs's
| principle probably has some cross-over relevance to this stuff.
|
| Can't really comment on why other banks don't talk about this
| so you'll have to draw your own conclusions on that :P
| LegitShady wrote:
| bragging about security is definitely increasing the
| likelihood that some random guy will say "oh yah, we'll see
| about that" in response to a marketing campaign centered
| around security.
| ziddoap wrote:
If Some Random Guy can just happen to compromise the
| processes described here after reading a few-hundred-word
| blog post, they should probably just go after the browser
| root cert programs, which use almost identical procedures and
| have also described them openly.
|
| Why go after Monzo when you can go after XYZ root cert
| trusted by every device out there?
| LegitShady wrote:
| because one initiated a marketing campaign about their
| security so as to get some attention and find a different
| vulnerability the company didn't know about.
| ziddoap wrote:
Publicly describing your security procedures is common
| (and encouraged) practice in infosec. See Cloudflare,
| Mozilla, Google, etc., who have all publicly described
| various security procedures (including key signing and
| key management).
|
| If you could actually do anything based on the knowledge
| here, you would go after a browser root cert because your
| hack would be exponentially more effective.
| LegitShady wrote:
| I wouldn't do anything, but I also wouldn't do a security
| driven marketing campaign. If someone thinks their root
| cert is safe it doesn't mean there isn't some other way
| to get access to user credentials that could allow
| compromise through some other avenue.
|
edit since I can't reply down level further: Yes, I would
| be very careful about security-related marketing and
| really consider if it's necessary at all.
| ziddoap wrote:
> _it doesn't mean there isn't some other way to get
| access to user credentials_
|
| You can always be hacked in some other way, so I guess we
| should never write anything about security ever?
|
> _edit since I can't reply down level further: Yes I
| would be very careful about security related marketing
| and really consider if its necessary at all._
|
Interestingly, the security people at Google, Cloudflare,
| Microsoft, and just about every other major tech company
| (and security company, certificate authority, etc.) agree
| that openly talking about security best practices is..
| well.. A best practice. And that keeping security
| practices secret (obscure, you could say) benefits no
| one.
|
Not sure why you have to shoe-horn marketing into every
| comment; literally anything a company posts is arguably
| marketing, so what's your point?
| intunderflow wrote:
| I hope this post doesn't come across as a brag as it's not
| meant to be, being arrogant about security only ends one
| way after all...
|
I published this because I want to be open about how we do
| these things, because I think we safely can be, and it shows
| that we care and don't just pay lip service to security.
|
| If you'd like to show us up for our security though, please
| do peek at our HackerOne program. I'm a program admin on it
| and I'd love to read more interesting reports
| https://hackerone.com/monzo <3 (I know it doesn't have a
| paid bounty yet, I'm advocating for it)
| LegitShady wrote:
It comes off as marketing based on security, which is no
| different from bragging, in my opinion. You have a program
| for dealing with security vulnerabilities; clearly this
| blog post was to tell people how good your security
| is. It's marketing.
| mrb wrote:
A lot of us InfoSec folks consider it very good practice to
| be 100% open about how security technologies are deployed. Ever
| heard of security by obscurity? It confers a false
| (dangerous!) sense of security, and prevents your security
| stack from being audited/reviewed even via casual comments, as
| is literally happening right now on HN.
| 123pie123 wrote:
| are you saying that it's best for companies to openly publish
| all their (network) security?
| ziddoap wrote:
| Publish procedures?
|
| Yes. Generally speaking, an open and auditable system is
| more robust and secure than a closed and non-auditable
| system.
|
| There are obviously limits (e.g. publishing what specific
| VLAN numbering scheme you use is obviously not helpful to
| anyone and just provides information that wasn't known).
|
| But yes, it is best practice to publish and accept feedback
| on generalized procedures.
|
| You should sign up for the MDSP dev-security-policy mailing
| list and see how the (open) conversations have continued to
| improve security for all.
| 123pie123 wrote:
Fully audited, absolutely yes; feedback comes internally
| from many (hundreds of) technical/network architects
| poking and prodding.
|
| I fail to see any positives in openly publishing
| anything unless you provide an extremely detailed
| view; I see only negatives.
| ziddoap wrote:
| None of what is described here is revolutionary or some novel
| form of security when dealing with PKI and root certs. Not sure
| how it opens up attacks or becomes an invitation to be
| attacked.
|
> _banks don't use to describe openly the technologies or
| methodologies they use for good reasons_
|
| I don't think it's "for good reasons". I think it's for "don't
| want to be found to still be using XP" reasons.
| throwaway894345 wrote:
| I agree that's a significant part of it (most banks don't
| actually do a very good job of security). The other part is
that most banks' leadership are finance people, while Monzo's
| leadership is largely technical (and consequently more
| open to discussing technology).
| 123pie123 wrote:
Not sure what bank you're referring to, but most finance
| companies I've dealt with in the past (in the UK) have
| extremely good technical people.
|
| Most of the stuff is secured using multiple layers, which
| can be a pain to modify.
|
| Any technical debt is evaluated as a risk and signed off by
| the business.
| throwaway894345 wrote:
Having good people is necessary but not sufficient for
| good security. In many cases, management isn't even
| aiming for compliance, much less _actual security_. This
| is based on multiple close friends and family in the
| banking and bank-auditing world in the US, as well as my
| own experience working on financial software (including
| the relevant regulatory and compliance bodies as well as
| the many banks who were our customers); can't speak to
| the UK specifically.
| 123pie123 wrote:
From my past UK financial security experience: security
| personnel can highlight and raise risks with any decision
| made by the business or other techies.
|
| The higher the risk, the more hoops you need to jump
| through for security sign-off (to the point where they
| will not sign it off), and the higher the risk, the higher
| up the management food chain you need to go for sign-off.
|
| Obviously, if the execs are not bothered about security
| then anything can happen.
|
| But if something goes wrong, a big finger will be pointing
| at the person who signed it off, which seems in itself
| a massive deterrent.
| icare_1er wrote:
I'd be interested to hear the other side of this story, i.e.
| how frustrating and slow it must be for IT/devs to implement
| any change within that fortress.
|
| Any Devops on HN to tell us about the last time they implemented
| a change on a Monzo API ?
| amalter wrote:
| It sounds like this is just for them to mint Root Certificates.
| I don't see how this elaborate ceremony would impact other
| aspects of development. It perhaps speaks to a very security
| conscious development culture - and that can impact agility.
| But when you "move money" - that's a decent tradeoff.
| captn3m0 wrote:
| Mint intermediate and root certs, more likely.
| intunderflow wrote:
| Mint roots, sign intermediate CSRs
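To make "mint roots, sign intermediate CSRs" concrete, here is a rough sketch driven via the `openssl` CLI (assumed to be on PATH), using Python's `subprocess`. The subject names, key sizes, and validity periods are illustrative only; in the real ceremony the root key would live inside an HSM and never touch disk — a plain key file stands in for it here purely for demonstration.

```python
import os
import subprocess
import tempfile

def run(*args):
    """Run an openssl command, raising on failure."""
    subprocess.run(args, check=True, capture_output=True)

workdir = tempfile.mkdtemp()
os.chdir(workdir)

# 1. Mint a root: generate a key and a self-signed root certificate.
run("openssl", "genrsa", "-out", "root.key", "2048")
run("openssl", "req", "-x509", "-new", "-key", "root.key",
    "-subj", "/CN=Example Root CA", "-days", "365", "-out", "root.pem")

# 2. Elsewhere, an intermediate key pair produces a CSR...
run("openssl", "genrsa", "-out", "inter.key", "2048")
run("openssl", "req", "-new", "-key", "inter.key",
    "-subj", "/CN=Example Intermediate CA", "-out", "inter.csr")

# 3. ...which the (normally offline) root signs during the ceremony.
run("openssl", "x509", "-req", "-in", "inter.csr", "-CA", "root.pem",
    "-CAkey", "root.key", "-CAcreateserial", "-days", "180",
    "-out", "inter.pem")

# The resulting intermediate chains back to the root.
out = subprocess.run(
    ["openssl", "verify", "-CAfile", "root.pem", "inter.pem"],
    capture_output=True, text=True)
print(out.stdout.strip())
```

Only the CSR and the signed intermediate certificate (public material) ever need to cross the air gap; the root private key stays put.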
| boffbowsh wrote:
| Monzo engineer here. 99.99% of changes don't require key
| ceremonies. When they do, we can usually book them with a few
| days notice. It's normally for things like onboarding big new
| systems and gets planned into a project from the beginning.
|
| The people who implemented this process are the same type of
| "IT/devs" or "Devops" you mention. There isn't a whole lot of
| throwing things over the wall to Ops, or for that matter
| putting up arduous processes against "normal" engineers.
| intunderflow wrote:
| This is only for managing the most important secrets of all
| which are kept offline in safes for most of the time,
| development work just happens on regular Macbooks, I'm
| currently sat in my Bedroom writing this and I run this program
| / authored this blog post.
|
| These sorts of precautions are pretty common at large
| companies, there's some good coverage of them generally here:
| https://en.wikipedia.org/wiki/Offline_root_certificate_autho...
| walrus01 wrote:
| > Each keyholder has a smart card which has been sealed in its
| own tamper evident bag.
|
| presumably this is kept in a locked cabinet, in a basement locked
| room, behind a sign "BEWARE OF THE LEOPARD"
| xpe wrote:
| > COTTONMOUTH-1, a USB cable manufactured by the NSA that looks
| like a normal USB cable
|
| It is a shame the article doesn't bother to cite nor link to its
| source.
| wongarsu wrote:
| That's probably a case of assuming that it's well known, the
| ANT Catalog leak was a pretty big thing around the time of the
| Snowden leaks [1]. If you're deep in a topic it's easy to lose
| sight of what's common knowledge and what isn't.
|
| Of course by now you can just buy similar devices on the free
| market, what used to be expensive high-tech in 2008 is now
| possible with commodity hardware.
|
| 1: https://en.wikipedia.org/wiki/NSA_ANT_catalog
| shaicoleman wrote:
| You can now buy similar things, e.g. the O.MG Cable,
|
| https://shop.hak5.org/products/o-mg-cable-usb-a
|
| https://news.ycombinator.com/item?id=28394035
| sodality2 wrote:
| Probably because it was top secret.
| vntok wrote:
| > We have between 6 and 12 Monzonauts who are keyholders, a
| certain number of them are necessary to unlock the Hardware
| Security Module (this is known as the quorum, I'm going to keep
| the exact number secret).
|
| I mean it's seven, it's literally written in the attached PDF
| describing the ceremony procedure.
| jaywalk wrote:
| Whoa, look at Hackerman over here! The Governor of Minnesota
| would like to have a word with you.
| intunderflow wrote:
| or is it twelve??? I edited this document before publishing it
| after all :P
|
| for all you know, I rolled two dice to decide how many
| keyholders I'd put in each part (which I would totally never
| do, would I?)
| [deleted]
| can16358p wrote:
Simple question: what would be the mitigation against
| security threats in the randomly bought off-the-shelf products?
| E.g. a case where there is a general hardware exploit that isn't
| targeted at Monzo explicitly, but since you've bought that item
| (e.g. a router) there is a possible attack vector.
| intunderflow wrote:
| A few things:
|
| - Since all the components we purchase are kept air-gapped,
| you'd need to already be on a machine in the air-gapped system
| (which isn't assembled and powered on except when we need to
| deal with key material) to exploit a vulnerability
|
- We're keeping what we trust coming out of a store as
| minimal as possible: for example, if we purchase a laptop we
| gut it of most of its components before it goes near our key
| material. Laptops don't need batteries (or even CMOS batteries)
| to run basic live systems, so they're going out.
|
- Compromising one part of the system won't let you sneak
| private key material out on its own, by design. The part we've
| tried to make the hardest is exfiltration: the only material
| that leaves the air-gapped system leaves either as QR codes on
| a screen or on CD-Rs that we keep for years in a safe in
| their own tamper-evident bags. This is all part of an effort
| to make sneaking private material out (even if you had
| full control of the system) as difficult as possible to do
| without being detected.
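The QR-codes-on-a-screen channel implies chunking: a certificate or similar blob usually exceeds one QR code's capacity, so it gets split into numbered, checksummed payloads and reassembled after scanning. The sketch below shows the idea; the chunk size, header format, and function names are all invented for illustration, and the actual QR rendering/scanning is left to whatever tooling sits on each side of the gap.

```python
import base64
import hashlib

QR_CHUNK = 1000  # bytes per QR payload (hypothetical capacity)

def to_qr_payloads(blob: bytes):
    """Split a blob (e.g. a DER-encoded certificate) into numbered,
    checksummed base64 chunks, one per QR code."""
    digest = hashlib.sha256(blob).hexdigest()
    chunks = [blob[i:i + QR_CHUNK] for i in range(0, len(blob), QR_CHUNK)]
    return [
        f"{n + 1}/{len(chunks)}:{digest[:8]}:{base64.b64encode(c).decode()}"
        for n, c in enumerate(chunks)
    ]

def from_qr_payloads(payloads):
    """Reassemble the blob scanned back from the QR codes and verify
    its checksum, regardless of scan order."""
    parts = sorted(payloads, key=lambda p: int(p.split("/", 1)[0]))
    blob = b"".join(base64.b64decode(p.split(":", 2)[2]) for p in parts)
    digest = parts[0].split(":", 2)[1]
    assert hashlib.sha256(blob).hexdigest()[:8] == digest, "checksum mismatch"
    return blob
```

A visual channel like this is attractive precisely because it is one-way and human-observable: nothing crosses the gap except what is literally displayed on screen.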
| namibj wrote:
Beware that normal CD-R uses organic dye subject to rotting.
| Use either something like M-DISC or normal BD-R discs (HTL,
| with an inorganic phase-change recording layer).
|
| I also suggest using at least dvdisaster (I suggest its RS03
| codec in augmented-image mode) or an equivalent (if you come
| across a proper competitor to dvdisaster RS03, please let me
| know).
| intunderflow wrote:
| I've heard about the rotting, we've got a plan in about a
| year or two to do a key ceremony to move stuff onto more
| permanent hardware we can retain for at least 20 years (our
| roots last 15)
|
| What that hardware will be is still an open question, but
| probably BD-R HTL.
|
| Also worth noting (I think it's in the script too from
| memory) we make at least 3 identical copies of each CD in
| case of CD failure.
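The three-identical-copies approach can be sketched as a byte-wise majority vote across the disc images. This is only a toy stand-in for real forward-error-correction tooling like the dvdisaster RS03 codec mentioned above, and it recovers correctly only while no byte position is corrupted on two copies at once:

```python
from collections import Counter

def majority_restore(copies):
    """Byte-wise majority vote across an odd number of copies of the
    same disc image. Recovers the original so long as each byte
    position is corrupted on fewer than half the copies."""
    assert len(copies) % 2 == 1, "need an odd number of copies"
    assert len({len(c) for c in copies}) == 1, "copies must match in length"
    return bytes(
        Counter(column).most_common(1)[0][0]  # the most frequent byte wins
        for column in zip(*copies)            # iterate positions in lockstep
    )
```

Reed-Solomon-style codes do strictly better than this for the same overhead, which is presumably why dvdisaster exists, but plain redundancy is trivial to verify by hand during a ceremony.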
| namibj wrote:
Yeah, from my understanding the rot is literally a
| fungus/bacteria colony that eats the dye. If you store
| the 3 copies separately in sealed containers, regular (no
| worse than yearly) inspections should probably suffice.
| The damage is visible; if anything looks off, put it back
| and immediately organize a preemptive data rescue. This can
| be obvious discoloration (likely with a gradient) or
| discrete specks; the latter can also happen to BD-R HTL,
| where they are "just" delamination. That is thankfully a
| fairly slow progressive effect, from (AFAIK) the acrylic
| glue slowly hydrolyzing, and can be counteracted by cold,
| constant-temperature, constant-low-humidity storage (an
| office cabinet with AC is better than a finished basement
| without AC).
| Nextgrid wrote:
| How do you secure physical access? Tamper evident bags and
| stuff are nice, but they'll only allow you to discover a
| breach after the fact, potentially days/weeks after the
| damage is done (said damage can be sneaky and just use the
| keys to extract sensitive data or introduce further malicious
| software into the infrastructure, which may remain there even
| after the keys themselves are rotated).
|
| Totally understandable if you can't/don't want to answer
| obviously.
| intunderflow wrote:
Everything lives in safes, and those safes are covered by
| cameras; if you try to drill the safes you'll be found (or
| heard) long before you manage to break in
| lisper wrote:
| Unless someone figures out how to compromise the cameras
| and microphones. (Maybe I've watched too many Hollywood
| heist movies?)
| joconde wrote:
| Hacking the closed-circuit cameras probably only works in
| a James Bond movie, since all you have to do is keep that
| circuit air-gapped to prevent it. I imagine that physical
| access to the cameras' control system is also guarded.
| can16358p wrote:
I believe there is redundancy, both in cameras being in
| multiple places and in delivery mechanisms (local copy +
| remote stream, etc.). I'm inclined to believe that there are
| more mitigations that haven't been described in the post,
| which is plausible.
| MayeulC wrote:
Have you gone fully paranoid: is your air-gapped system in
| a Faraday cage inside an anechoic chamber? Things like bus
| radio [1], coil whine, and power fluctuations can be used (it
| has been shown) to exfiltrate data.
|
| [1] https://github.com/fulldecent/system-bus-radio
| GoblinSlayer wrote:
In this case the private key is in the HSM, and the laptop has
| no access to any interesting information worth
| exfiltrating. The air gap is to prevent intrusion.
| intunderflow wrote:
We haven't gone as far as a Faraday cage inside an anechoic
| chamber because we think the risk of these attacks is
| (luckily) not big enough (yet) to justify them.
| Eventually though? Maybe.
|
There are also projects to guess keystrokes based on sound
| which don't even require you to be on the host device:
| https://news.ycombinator.com/item?id=29266783
| feldrim wrote:
Having worked on systems in the basements of TEMPEST-checked
| buildings, I can say that the attacker first needs to
| defeat the physical security measures and manage to
| get close to the devices near the servers storing,
| processing, and transmitting the classified information. After
| that first breach, in person or via a specific device such
| as a USB dongle used by a user, the software needs to
| infect air-gapped computers and find its way to the servers.
| Afterwards, the attacker needs to be close to the facility
| to listen to the radio whose waves managed to get through
| the fortified walls and Faraday cages. And if this
| scenario happens, there is a much bigger security problem
| than a bit of malware and an AM radio.
| Buttsite wrote:
| How do you sleep at night?
___________________________________________________________________
(page generated 2021-11-18 23:01 UTC)