[HN Gopher] A Docker footgun led to a vandal deleting NewsBlur's...
___________________________________________________________________
A Docker footgun led to a vandal deleting NewsBlur's MongoDB
database (2021)
Author : quectophoton
Score : 99 points
Date : 2023-02-19 18:16 UTC (4 hours ago)
(HTM) web link (blog.newsblur.com)
(TXT) w3m dump (blog.newsblur.com)
| ilyt wrote:
| > Turns out the ufw firewall I enabled and diligently kept on a
| strict allowlist with only my internal servers didn't work on a
| new server because of Docker.
|
| OOF. Those can be nasty. Many of the tools "helpfully" inject
| jumps to their chains at the very front of the table, libvirt
| example:
|
|     Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
|      pkts bytes target       prot opt in  out  source     destination
|     9105K 4151M LIBVIRT_INP  all  --  *   *    0.0.0.0/0  0.0.0.0/0
|
|     Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
|      pkts bytes target       prot opt in  out  source     destination
|         0     0 LIBVIRT_FWX  all  --  *   *    0.0.0.0/0  0.0.0.0/0
|         0     0 LIBVIRT_FWI  all  --  *   *    0.0.0.0/0  0.0.0.0/0
|         0     0 LIBVIRT_FWO  all  --  *   *    0.0.0.0/0  0.0.0.0/0
|
|     Chain OUTPUT (policy ACCEPT 229K packets, 39M bytes)
|      pkts bytes target       prot opt in  out  source     destination
|     6893K 1628M LIBVIRT_OUT  all  --  *   *    0.0.0.0/0  0.0.0.0/0
|
| So if you relied on your firewall blocking access and didn't
| test it, it's easy to be bamboozled.
|
| The cleaner approach is to let other tools manage only their own
| chains and to call those chains from your <firewall tool>, but
| not many tools allow for that; most want to take over your
| firewall entirely.
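|
| For illustration, a minimal sketch of that delegation pattern in
| plain iptables (the chain name is borrowed from libvirt; the
| interface and subnet are assumptions, and whether a given tool
| can actually be run this way varies):
|
|     # The tool populates only its own chain...
|     iptables -N LIBVIRT_INP
|     # ...and *you* decide when that chain is consulted:
|     iptables -A INPUT -i virbr0 -j LIBVIRT_INP
|     # Your own allowlist and default deny stay authoritative:
|     iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT
|     iptables -A INPUT -p tcp --dport 27017 -j DROP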
| ary wrote:
| Personally I blame the irritation and cost of setting up private
| subnets that can access the Internet via NAT in AWS VPCs in
| addition to Docker's interactions with UFW. Private VPC subnets
| with NAT Gateways are both unacceptably costly and complicated
| to set up (source: I recently dealt with the combination of NAT
| Gateways + the Network Firewall service in AWS). Throw in the
| need for VPC endpoints for many workloads and AWS networking
| starts to look like an extortion racket for those who want the
| majority of their infrastructure to be sequestered from the
| Internet.
| lambdasquirrel wrote:
| I'm not sure I'm understanding this entirely. I thought the way
| you should set it up is have e.g. your nginx / istio gateways
| exposed to the public internet on EIPs. You shouldn't have your
| databases exposed to the world, or the hardware running your
| backend services directly exposed either. Also, security
| groups. Yes, if you have to use NAT gateways for serving
| traffic, those are very expensive.
| ary wrote:
| What I'm noting is in line with your thinking.
|
| Whatever ways AWS marketing might encourage people to do
| things via their "Well Architected" series of publications,
| the reality is that the path of least resistance with VPC
| networking is what the linked article describes: everything
| on public subnets (with maybe security groups or network ACLs
| to restrict it somewhat).
|
| Splitting infrastructure into public/private subnets is very
| tedious, and can at times be difficult to debug. In some of
| my most recent work it took some time to understand why and
| how the components needed to be linked to make a combination
| of an Internet Gateway, Network Firewall, and NAT Gateway
| work together. Combine that with multiple private subnets
| residing in different AZs and you end up testing permutations
| and fussing with the "reachability" analyzer. It is this kind
| of thing that makes people go with the first thing that works
| so that they can move on to getting productive work done.
|
| Typically one wouldn't serve traffic via a NAT Gateway, and
| that's true for our case as well. The problem we ran into was
| not properly configuring VPC endpoints for a workload that
| heavily interacts with S3 and running up a tiny, but still
| not ideal, bill for network utilization over the NAT Gateway
| instead. It's not obvious to many people that interacting with
| AWS APIs happens over the public Internet even from within
| AWS infrastructure. The ongoing cost of NAT Gateways versus
| the free per-VPC Internet Gateway feels like it disincentivizes
| starting with a more secure setup.
|
| Ultimately what I'm trying to say is that it's somewhat
| negligent of AWS not to make an almost universal requirement
| of modern web and SaaS cloud networking simple and cheap to
| implement.
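|
| For the S3 case specifically, a gateway-type VPC endpoint
| routes S3 traffic privately at no hourly or data charge, so the
| NAT Gateway never sees it. A sketch with placeholder IDs:
|
|     aws ec2 create-vpc-endpoint \
|       --vpc-id vpc-0123456789abcdef0 \
|       --vpc-endpoint-type Gateway \
|       --service-name com.amazonaws.us-east-1.s3 \
|       --route-table-ids rtb-0123456789abcdef0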
| jacquesm wrote:
| It's a pity that the chances of nailing the perps is so low.
| Obviously docker _and_ the person that put this together share
| some of the blame but: the original internet would have never
| gotten off the ground if it wasn't for people cooperating with
| each other rather than trying to tear things down all the time.
|
| And with the chances of your average script kiddie/hacker/idiot
| getting caught being lower than a typical bike thief in Amsterdam
| this likely will not stop. It would be great if we could get a
| secure internet. But what we get instead is an internet that can
| only be kept running by large companies because they supposedly
| have the budget and the knowledge to do this at a scale that
| makes security affordable. Which means that a ton of innovation
| will simply never happen because for a large number of fledgling
| companies the decision between working on their business or
| working on the security of their business is a moot one, if they
| don't do the one the other will kill it and doing both at the
| same time is too costly.
|
| NewsBlur is useful, destroying it serves no purpose at all. And
| make no mistake: the hacker clearly aimed to simply destroy it
| and pretend they have copied the data, so they were more than
| willing to do just that, wanton destruction for a miserly amount
| of money. Whoever did this may think they're l33t and cool but I
| personally think they are utter trash.
| Temporary_31337 wrote:
| If we accept that this is the default state of the Internet
| (insecure) then I think it is a correct assumption and forces
| everyone to think twice before exposing anything to the
| Internet by default. Here the default Docker behaviour was very
| much at fault but also the assumption that a Linux level
| firewall is good for Docker. We have docker deployments that
| are completely hidden behind NAT in AWS and such things are a
| non issue.
| wkdneidbwf wrote:
| exactly. ideally, accidentally exposing your db to the
| internet should be all but impossible
| cm2187 wrote:
| I am also puzzled that a firewall rule would be the only
| thing protecting user data. If any machine behind your NAT
| gets compromised then it is open buffet to the attackers (in
| addition to a misconfigured NAT like in this example). It is
| a bit shocking that it is 2023 and people are still doing
| that.
| Tijdreiziger wrote:
| I have run into this firewall problem too (not in any sort of
| production environment, thankfully).
|
| The problem is that your mental model of a firewall tends to
| be that it sits 'in front of' all applications on your
| machine. Docker breaks this assumption by implicitly
| inserting rules into iptables.
|
| Worse still, if you then inspect the state of the firewall
| (using UFW), it does not show you any of the rules that
| Docker inserted, because those rules are on a separate
| iptables chain.
|
| The problem, then, is the interaction between Docker and UFW.
| UFW is designed as a 'simple' frontend for iptables, for
| situations where the advanced configurability of iptables is
| unneccessary. If you manage the firewall using UFW, you are
| not really supposed to also inject 'raw' iptables rules,
| especially not on non-standard chains. However, this is
| exactly what Docker does.
|
| Therefore, we end up in the hairy situation where Docker
| adjusts your firewall, but UFW does not provide any
| indication that anything has changed. Neither project
| considers this a bug, and they just point to each other for a
| solution.
|
| Personally, I think it should be on Docker to, at the very
| least, throw up some warnings when it is about to change
| firewall rules, as this is clearly unexpected behavior for
| many users (despite Docker's claims to the contrary).
| cpuguy83 wrote:
| Docker is not implicitly doing anything. The user requested
| for the port to get published and so docker published it.
|
| As nice as it would be for Docker to be more configurable
| in terms of where rules get inserted, that would not have
| helped here because the user already misconfigured things
| (should be using docker networks for db access).
| Technically docker even has a chain that you can throw
| rules into that will get hit before docker's rules... so
| it's there, just very manual (instead of, say, having a ufw
| mode).
| Tijdreiziger wrote:
| > The user requested for the port to get published and so
| docker published it.
|
| Yes, which runs counter to how every other application
| works. If you ask, say, sshd to listen on port 22, your
| firewall will still sit in front of it, unless you
| explicitly poke a hole in it.
| cpuguy83 wrote:
| Listening on a port and opening a port are different
| things. `-p` is for opening ports. Docker is not
| listening on behalf of the application. It is routing
| traffic to the application.
| ilyt wrote:
| Docker is implicitly overriding existing rules by putting its
| own ones in front. The least it should do is print a warning:
| "hey, there are other rules here, I'm adding my own, please
| verify."
|
| It should also have an option (I only found an all-or-nothing
| on/off one) to "just" create its DOCKER* chains and leave the
| jumping to them to the user.
| cpuguy83 wrote:
| > I only found "complete on/complete off" one
|
| Docker has a `DOCKER-USER` chain where the user can
| inject their own rules before docker's rules are run.
|
| But even then, the user flat out should not be using `-p`
| unless they _want_ to expose the service outside of the
| machine. That is the well documented networking model of
| docker. Docker also includes a network abstraction that
| should have been used here to give access to other
| services that need it and isolate it from the things that
| don't.
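|
| For reference, the DOCKER-USER route looks something like this
| (a sketch; the external interface and trusted range are
| assumptions, and note that DOCKER-USER sees forwarded traffic
| after DNAT, so source-based filtering is the simple option):
|
|     # DOCKER-USER is consulted before Docker's own rules; drop
|     # anything hitting published ports from the outside unless
|     # it comes from the trusted internal range:
|     iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/8 -j DROP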
| tgv wrote:
| Perhaps security should be the next innovation. But I don't
| think it will be a technical innovation: there's enough tech
| and know-how to keep script-kiddies out. It's the lack of
| interest, the hunt for the MVP, the break-fast culture.
|
| > I personally think they are utter trash.
|
| In this case, even criminal, given the extortion attempt.
| jacquesm wrote:
| > there's enough tech and know-how to keep script-kiddies out
|
| I think that this is true for the parties that have the
| budget for a proper security team. But your average
| programmer is not able to keep up with the very rapid
| evolution of the security scene, a bunch of tooling to build
| their application with (backend, frontend), and the burden of
| writing the application itself all at once. Even with an
| interest, and without the urge for an MVP or the subscription
| to 'break-fast' you are essentially a sitting duck given
| enough time. After all there is an army of attackers out
| there and you are just you.
|
| I have a ton of ideas of how I could make my current project
| much better by adding a service. But I'm not going down that
| route because I know I will inevitably end up in an arms race
| that will stop me from being able to focus on the application
| and that will shift more and more resources to the security
| domain. So instead the app is limited in functionality and
| only works using local storage in the browser, everything
| else is utterly static. It's a dumb and very much brute force
| solution but it works. It is also severely limiting.
| Mogzol wrote:
| > NewsBlur is useful, destroying it serves no purpose at all.
| And make no mistake: the hacker clearly aimed to simply destroy
| it and pretend they have copied the data, so they were more
| than willing to do just that, wanton destruction for a miserly
| amount of money. Whoever did this may think they're l33t and
| cool but I personally think they are utter trash.
|
| I doubt this was even specifically targeted at NewsBlur.
| Reading the article it sounds like it's just an automated
| attack that scans for any open mongo servers on the internet
| and does the same thing to any of them. Not that that's any
| better, really.
| xwdv wrote:
| It's not better, but at least it wasn't personal, which would
| be worse.
| jacquesm wrote:
| Well, someone controls those tools and someone is on the
| receiving end of that bitcoin address.
| Kwpolska wrote:
| That someone very likely doesn't care whose databases they
| break. The operation is probably automated and they try to
| ransom anyone with a broken server.
| nubinetwork wrote:
| They'll just run the money through one of those mixing
| services and you'll never know where the money went.
| duxup wrote:
| It was amazing how open things were back in the day.
| Whitehouse.gov had an open mail relay that I enjoyed sending
| mail through.
| [deleted]
| [deleted]
| mannykannot wrote:
| "When I containerized MongoDB, Docker helpfully inserted an allow
| rule into iptables, opening up MongoDB to the world."
|
| Yet another reminder that the most important ability in systems
| engineering is good judgement.
| c7DJTLrn wrote:
| The poor judgement here is not using an appliance firewall in
| addition to a system firewall. Exposing an instance directly to
| the public Internet is just not a good move.
| ilyt wrote:
| And not having a password. And not even trying to nmap the
| service after configuration to verify that the firewall is
| doing what was intended.
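|
| Something as simple as this, run from an unrelated host, would
| have caught it (the address is a placeholder):
|
|     # "open" here means the firewall is NOT doing its job:
|     nmap -p 27017 203.0.113.10
|     # or, without nmap:
|     nc -vz -w 5 203.0.113.10 27017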
| cpuguy83 wrote:
| To be clear, Docker inserted an allow rule because the user
| asked it to. Docker is relatively simple to set up without
| needing to use port forwarding to allow communication. Docker's
| networking model is explicitly designed to prevent the scenario
| in the post.
|
| Not to negate the fact that many are surprised by how docker
| bypasses ufw rules. This is painful and I would love to see a
| way for people to safely use port forwarding without surprises
| like this.
| nickstinemates wrote:
| Kinda weird to blame Docker. This is pure user error.
| rollcat wrote:
| Yes it's user error, but it's also a nasty trap for users who
| are not careful. And it hurts most where people are most
| likely to make the mistake (developers need to publish ports to
| access local containers on their development machines, but must
| take care not to do so when deploying to production).
| nickstinemates wrote:
| Use `docker network` or the equivalent in the docker-compose
| file. Not taking time to think about how the software works
| is not the fault of the software.
| rollcat wrote:
| > Use `docker network` or the equivalent in the docker-
| compose file.
|
| Everyone knows what the correct solution is. That's not
| what the discussion is about.
|
| > Not taking time to think about how the software works
|
| You're blaming the effect of poor design on alleged
| incompetence of people you know nothing about.
| Badumtss8 wrote:
| nginx listens on port 80. Accessible from machine.
| Inaccessible from outside unless allowed by ufw.
|
| docker listens on port X. Accessible on machine. Also
| accessible from outside regardless of ufw.
|
| No amount of time and experience will make you think that
| configuring a piece of software to listen on a port will
| automagically poke a hole in the firewall.
| nickstinemates wrote:
| The scopes of Docker and nginx are incomparable, so the
| comparison is wrong.
|
| It starts with the simple truth: `docker` doesn't
| `listen` on any port.
|
| Or maybe a simple question: How can I run `docker run -p
| 8080 nginx` over and over without port conflict?
|
| Or - let's expand the scope even more. How is docker supposed
| to know about your choice of firewall? What about
| upstream firewalls? What about multiple firewall
| frontends on a host (ufw vs. ferm vs. ...)?
|
| I can go on and on...
| joombaga wrote:
| Just tried this because I usually use docker with k8s or
| compose, so wasn't sure of the behavior.
|
| > It starts with the simple truth: `docker` doesn't
| `listen` on any port.
|
| If I run the command below, `docker-proxy` starts
| listening on an incrementing port.
|
| > Or maybe a simple question: How can I run `docker run
| -p 8080 nginx` over and over without port conflict?
|
| Because you're not specifying a port on the host; you're
| specifying a port on the container. I've never used the
| single port form of `-p`; I would've guessed it was the
| same as `-p 8080:8080`.
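|
| (For the curious, a quick way to see where the single-port
| form actually lands; the container name is arbitrary:)
|
|     docker run -d --rm --name ptest -p 8080 nginx
|     docker port ptest 8080   # prints e.g. 0.0.0.0:49153
|     docker stop ptest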
| andybak wrote:
| If everyone designed software like you are advocating then
| we'd be even worse off.
|
| What's wrong with "users should be careful" _and_ "software
| shouldn't contain footguns"?
| throwaway744678 wrote:
| Software following the path of least surprise is a good
| rule of thumb.
| nickstinemates wrote:
| I disagree with the premise that in this instance there
| is a footgun. There isn't.
| joaoqalves wrote:
| This happened to me as well. Only that it was on a toy server for
| a side-project.
|
| https://twitter.com/joaoqalves/status/1332638533572550656?s=...
| de6u99er wrote:
| You should consider yourself lucky that you didn't lose your
| users' data.
|
| My 2c:
|
| - NEVER expose a database to the public. Use at least micro-
| service BFEs that do exactly what the app needs and nothing more.
|
| - Use a load balancer/gateway between your servers and the
| outside world (only port 80 and 443 should be open, 80 should
| redirect to 443).
|
| - Use Docker-(Compose) as runtime/orchestrator only for
| development. For production use hardened cloud runtimes (e.g.
| Google Cloud Run, k3s/k8s, ...) with zero trust as default
| setting. Adhere to best practices!
|
| - Before migrating production data, make sure everything works
| and nothing is exposed to the webs.
|
| Tbh. I think your blog post is ridiculous, because IMHO the whole
| thing is your fault.
| revskill wrote:
| I think an even better best practice would be something like
| this:
|
| Even if we accidentally violate some security rules (like
| opening ports, ...), the application should still be safe.
| yjftsjthsd-h wrote:
| > NEVER expose a database to the public
|
| They didn't intentionally expose the database; they thought
| that it was safe and docker bypassed their security.
|
| > because IMHO the whole thing is your fault
|
| How so? They had a working firewall, and then docker bypassed
| it, which they had no way of knowing to expect.
| ThePhysicist wrote:
| Using "ufw" on a production server is an anti-pattern, in my
| opinion. You should use something like ferm instead.
| return_to_monke wrote:
| can you please explain why?
| rollcat wrote:
| ferm is absolutely fantastic, but in my experience, it doesn't
| always cope well with some more obscure iptables features; I've
| inherited a couple systems where retrofitting ferm was not
| possible, because it literally barfed when parsing the legacy
| ruleset. If you can use it, you should use it though.
|
| Also, I do need to play with nftables.
| wkdneidbwf wrote:
| dbs on public subnets are gross.
|
| if it's unavoidable, then monitoring that your db isn't reachable
| from the public internet is important.
| andrewstuart wrote:
| I'm glad this wasn't another "database lost, we have no backup"
| story.
| chrismeller wrote:
| This is why you shouldn't be using Docker in production. It's a
| great tool, but it's simply not designed for that kind of
| environment.
|
| Edit: note I said Docker specifically, nothing about
| containerization.
| wkdneidbwf wrote:
| what. docker isn't the problem here. dbs on public subnets, and
| the lack of monitoring for accidental db exposure are the
| actual issues here.
| Kwpolska wrote:
| And running a database without authentication.
| chrismeller wrote:
| True, but even the author covered that. No one has
| mentioned Docker.
| yjftsjthsd-h wrote:
| > docker isn't the problem here
|
| I mean... if they weren't using docker it would have been
| fine, but because they used docker it wasn't fine. That reads
| like docker is the problem. That further layers could have
| mitigated it doesn't make docker not the problem.
| wkdneidbwf wrote:
| it wouldn't have been fine--they didn't realize the
| configuration mistake until their database had been
| dropped. there are plenty of scenarios that could have led
| to the exact same outcome. i'd argue it would have been a
| matter of time.
|
| the problem is that it was possible to accidentally expose
| their credential-less db to the internet _at all_ and that
| they had no monitoring or tools in place to detect the
| misconfiguration. that's a design flaw, and again, not a
| docker-specific problem.
| DistractionRect wrote:
| Where does the buck stop? This was simply a few layers of
| bad configuration.
|
| They didn't secure their database with auth/access control
| and they misconfigured docker.
|
| They have a few things at their disposal:
|
| - Using the docker-user chain to set firewall rules
|
| - Running docker such that the default bind address for the
| port directive is 127.0.0.1 instead of 0.0.0.0. This puts a
| safety on the footgun.
|
| - Explicitly setting the bind address of the port directive
| when bringing up the container.
|
| Docker didn't come along, install itself, and open up the
| port. The sysadmin did. It's unfortunate they didn't know
| how it interacted with the firewall or how to properly
| configure it, but given that, should they really have been
| rolling it out to production systems?
|
| They tested on prod and learned in trial by fire.
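|
| As a sketch of the second and third options above (the
| daemon.json key is Docker's documented "ip" setting, which
| changes the default publish address):
|
|     # Daemon-wide safety: /etc/docker/daemon.json
|     #   { "ip": "127.0.0.1" }
|     sudo systemctl restart docker
|
|     # Or per container, bind the published port explicitly:
|     docker run -d -p 127.0.0.1:27017:27017 mongo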
| yjftsjthsd-h wrote:
| > They didn't secure their database with auth/access
| control
|
| True, although they had reason to believe that that was
| safe.
|
| > they misconfigured docker.
|
| Ah, no, that's where we disagree. They didn't configure
| it, docker shipped an insane default that bypasses
| existing security measures. There is absolutely no reason
| to expect that running a program in docker magically
| makes it ignore the system firewall.
| xboxnolifes wrote:
| > True, although they had reason to believe that that was
| safe.
|
| That's like the antithesis of layered security. There
| would be no point in layers if you assume a single layer
| will protect you.
| DistractionRect wrote:
| The sysadmin is responsible for what they install and how
| it's configured. Docker didn't specify what ports to
| open, what container to run, volume mounts, image etc.
| That's part of the config the sysadmin supplies when they
| spin it up. No matter how you slice it, it comes down to them
| not understanding what they were pushing to prod.
|
| By the same token having an unsecured database is a bad
| default, so we can keep passing the buck around.
| luckylion wrote:
| If they had understood how docker works, it would've been
| fine, too. But because they didn't and used a software
| firewall as their only line of defense and didn't bother
| with authentication for the DB server, it wasn't fine.
| yjftsjthsd-h wrote:
| > If they had understood how docker works, it would've
| been fine, too.
|
| They shouldn't have needed to, for this.
|
| > But because they didn't and used a software firewall as
| their only line of defense
|
| Okay? That should have been safe. Like, sure, there are
| ways to add more layers, but that layer shouldn't have
| failed them.
|
| > didn't bother with authentication for the DB server, it
| wasn't fine.
|
| They shouldn't have needed to. Like yes, running a DB
| without authentication isn't a great idea, but they had
| good reason to believe that it wouldn't be exposed.
| luckylion wrote:
| Running a server is a lot like doing electrical work. If
| you don't really know what you're doing and rely on vague
| intuition about how things should probably work, you
| might be in for a shock.
| bornfreddy wrote:
| I respectfully disagree. It solves a huge problem -
| reproducible app environment. It comes with its share of
| footguns too (hint: _always_ explicitly specify the pool of IP
| addresses it can use!), yes, but what doesn't?
| chrismeller wrote:
| I didn't say he shouldn't use containerization. He just
| shouldn't have used Docker. Docker has always been very dev
| environment focused.
| bornfreddy wrote:
| Are you aware of another containerization technology that
| would be more suitable?
|
| (genuinely curious - I only have experience with Docker and
| never felt the need to look elsewhere, even with footguns,
| but I'm still curious)
| KronisLV wrote:
| Some will suggest that Podman is a good choice:
| https://podman.io/
|
| Though personally I would miss Docker Swarm which comes
| with Docker by default and, in my opinion, is a nice
| bridge between something like Docker Compose (easy single
| node deployments) and Nomad/Kubernetes (both more complex
| in comparison, need separate setup and lots of admin
| work).
|
| Personally I also think that Docker is sufficient in most
| cases and is pretty boring and stable. Docker Desktop
| might irk some, but seems like they wanted that revenue.
|
| However, for desktop GUI, Podman has Podman Desktop:
| https://podman-desktop.io/
|
| Or one can also consider Rancher Desktop as another GUI
| tool, if desired: https://rancherdesktop.io/
| convolvatron wrote:
| it's probably not impossible to do this without the footguns
| bornfreddy wrote:
| True, it just wasn't done in this particular universe we
| live in. :shrugs: Still worth using it in production, but
| being careful.
| Helithumper wrote:
| Thread from previous discussion
| https://news.ycombinator.com/item?id=27613217 (June 4, 2021)
| nickjj wrote:
| Am I missing something? The article wrote:
|
| > When I containerized MongoDB, Docker helpfully inserted an
| allow rule into iptables, opening up MongoDB to the world
|
| But the blog post doesn't mention how Docker "helpfully inserted
| an allow rule". Is this because NewsBlur ran the container using
| the -p 27017:27017 flag without reading the docs around what
| publishing a port does?
|
| You don't need to publish a port for two containers to talk to
| each other, they can do that over a private Docker network that
| both containers belong to (something Docker Compose does for you
| by default). You can also choose to -p 127.0.0.1:27017:27017
| which will only publish the port so that only localhost can
| access it, something like this is handy if you publish a web port
| to localhost so nginx not running in Docker can connect to it.
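|
| A runnable sketch of that private-network setup (the names and
| the app image are made up):
|
|     # db joins a user-defined network and publishes nothing:
|     docker network create appnet
|     docker run -d --name db --network appnet mongo:4.4
|     # the app reaches it as "db:27017" over that network, and
|     # its own port is published to localhost only:
|     docker run -d --name web --network appnet \
|       -p 127.0.0.1:8000:8000 example/app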
| nubinetwork wrote:
| Oh that's funny, we were just talking about docker and ufw
| mishaps over here: https://news.ycombinator.com/item?id=34858775
| djm_ wrote:
| Fantastic write-up.
|
| This seems to me like a combination of multiple foot-guns, the
| first being the Docker one, followed by the fact that Mongo was
| not configured to authenticate connections.
|
| Heroku by default runs PostgreSQL open to the world (which is
| problematic for other reasons) but they get away with it by
| relying on PG's decent authentication.
|
| My default is to prefer to build systems with multiple layers of
| security, such that there is no reliance on a single issue like
| this.
| ilyt wrote:
| If it is on the same machine you can even get away with using
| PostgreSQL via a Unix socket, eliminating all of the auth
| problems, as now app user = db login.
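|
| A sketch of that setup (role and database names are made up):
|
|     # pg_hba.conf: only socket connections, authenticated by
|     # OS identity:
|     #   local  all  all  peer
|     # postgresql.conf: don't listen on TCP at all:
|     #   listen_addresses = ''
|     # Then the OS user "app" maps directly to the "app" DB role:
|     sudo -u app psql -d appdb -c 'SELECT current_user;'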
| aidenscott2016 wrote:
| >Before the former primary server could be placed into rotation,
| a snapshot of the server was made to ensure the backup would not
| delete itself upon reconnection
|
| I don't understand this. Was the snapshot made of the compromised
| database or of the old, not-in-use one? Why would the backup
| (snapshot?) delete itself on reconnection?
| mackman wrote:
| My guess is they were going to attempt point-in-time recovery
| by replaying the transaction or replication log after the
| snapshot and stopping it before the drop. Unless they made a
| mistake.
| protortyp wrote:
| Great write-up. I remember how shocked I was a few years back
| when we moved our backend to a micro-service architecture and our
| native Postgres installation to a docker container. Like (I
| suppose) almost everyone, we also used ufw to manage the
| firewall.
|
| Security should be the default stance, and any port exposed
| through docker should initially be restricted to local only.
| Going global should be explicit, imo.
| dmazin wrote:
| Just wanted to say that I recently started reading RSS again and
| NewsBlur has been excellent. It's a good aggregator, has a great
| web interface, and a great iOS app. The free tier is all you need
| (so I guess I'll pay purely out of a desire to support it).
| BostonEnginerd wrote:
| I've been a subscriber for a long time now. Newsblur has been
| great. It's been pretty stable, and they haven't succumbed to
| frequent radical redesigns.
|
| I could self-host an RSS reader, but Newsblur works well enough
| that I haven't bothered.
| superlopuh wrote:
| I pay just for the newsletter aggregation support. I actually
| read it through NetNewsWire on iOS and macOS. Very happy with
| the setup.
| spicyusername wrote:
| Great writeup! Writeups like these help everyone learn from each
| other's mistakes.
|
| > We had been relying on the firewall to provide protection
| against threats, but when the firewall silently failed, we were
| left exposed.
|
| > a change needs to be made as to which database users have
| permission to drop the database
|
| I think these two definitely highlight the importance of
| always using the "layered onion" model of security.
|
| The combination of a firewall, a password, and a nuanced
| permissions model would have been sufficient to mitigate the
| attack, even if one or two of the layers failed simultaneously.
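|
| In MongoDB's case the permissions layer could look something
| like this (a sketch; names and passwords are placeholders, and
| notably the built-in readWrite role does not include
| dropDatabase):
|
|     mongo admin -u root -p 'CHANGE_ME' --eval '
|       db.getSiblingDB("newsblur").createUser({
|         user: "app",
|         pwd: "CHANGE_ME_TOO",
|         roles: [{ role: "readWrite", db: "newsblur" }]
|       })'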
| nickstinemates wrote:
| And, also, develop an understanding of what the software you
| chose to run is doing.
| cinbun8 wrote:
| Nice read. It might be a good idea to also lock a user account
| after N failed password attempts. Mongo does not seem to support
| that off the shelf -
| https://www.mongodb.com/community/forums/t/limit-failed-logi...
|
| Neither do other databases like PG, curiously enough. The
| recommendation seems to be to link to LDAP or use authentication
| hooks.
|
| Or perhaps use client and server certificates for increased
| security - https://www.postgresql.org/docs/current/ssl-
| tcp.html#SSL-SER...
| QuinnyPig wrote:
| That becomes a highly effective denial-of-service vector if
| you're not careful.
| bastawhiz wrote:
| > if you're not careful.
|
| I don't think there's a way to avoid a DOS vector even if
| you're careful. If someone can access your database directly,
| they can make enough attempts to lock a user. The only way to
| be careful is to avoid public access to the db. But if you do
| that effectively, you don't have the issue of accounts
| getting locked.
|
| It's a dubious argument to ever lock an account as a safety
| measure. Arguably, downtime is not a good tradeoff,
| especially if you're following best practices and using a
| very long, truly random password. An extremely secure db
| setup shouldn't require you to disable a "safety" feature
| that creates a new DoS vector because you've followed best
| practices.
| throwaway744678 wrote:
| Fail2ban is a good mitigation strategy, blocking _the IP
| address_ after N failed attempts (obviously, it does not
| protect completely from a determined attacker controlling a
| network of bots but it raises the bar significantly)
| SahAssar wrote:
| How does locking an account after failed attempts to access it
| make sense? By failing to access the account I have proved
| nothing besides that I know a username (which is usually not
| secret).
| solidsnack9000 wrote:
| I am not able to follow the reasoning of this part of the write-
| up:
|
| _The most important bit of information the above chart shows us
| is what a full database transfer looks like in terms of
| bandwidth. From 6p to 9:30p, the amount of data was the expected
| amount from a working primary server with multiple secondaries
| syncing to it. At 3a, you'll see an enormous amount of data
| transferred.
|
| This tells us that the hacker was an automated digital vandal
| rather than a concerted hacking attempt. And if we were to pay
| the ransom, it wouldn't do anything because the vandals don't
| have the data and have nothing to release._
| jacquesm wrote:
| If this is what a backup looks like in terms of traffic, and
| around the time of the hack there is no corresponding amount of
| traffic, then the hacker never made the backup that they claim
| they made.
| dmurray wrote:
| I think he's saying, at 3am his own database replication kicked
| off, and you can see a spike in the bandwidth usage. There's no
| similarly-sized unexplained spike that could correspond to the
| hacker exfiltrating the data.
| sthatipamala wrote:
| They know what the bandwidth graph looks like when a full DB
| replication/dump is in progress.
|
| Since that level of data transfer only occurred once (when
| _they_ did it), they know the attacker did not actually
| transfer any data.
| glintik wrote:
| Lol, that's the same footgun I discovered myself when I was
| checking open ports. Who was the wise guy on the Docker team who
| decided to bypass default firewall rules and open container
| ports to the public?
| berkle4455 wrote:
| How would docker know which interface is publicly accessible?
| Binding to localhost isn't useful for actual deployments.
|
| Why not utilize the ingress/egress firewall rules offered by
| nearly all cloud/vps providers instead of relying on iptables
| of an instance?
| mhitza wrote:
| > Why not utilize the ingress/egress firewall rules offered
| by nearly all cloud/vps providers instead of relying on
| iptables of an instance?
|
| Because it's advantageous to use all the facilities your OS
| provides. The fact that docker bypasses the high-level
| firewall on the system and pokes holes through via iptables
| is very unfortunate, and something I myself learned as
| recently as 2019.
|
| Related to your question about docker knowing which interface
| to bind to, it generally sets up its own docker network
| interface. With a high-level firewall it's trivial to attach
| that interface to any firewall zone you've created (e.g. a
| public zone with only specific ports exposed).
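|
| For example, with firewalld (a sketch; the zone and interface
| names are the stock ones):
|
|     # treat container traffic on docker0 as trusted, while the
|     # public zone keeps its usual restrictions:
|     firewall-cmd --zone=trusted --add-interface=docker0 --permanent
|     firewall-cmd --reload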
| MrPowerGamerBR wrote:
| You can tame Docker with iptables by using the DOCKER-USER
| chain
|
| There are a bunch of tutorials about how to tame Docker, but
| this one is the solution that I use and the simplest that I've
| found: https://unrouted.io/2017/08/15/docker-firewall/
|
| This way, even if you mistakenly expose your container ports
| to 0.0.0.0, you won't be able to access them until you expose
| them with iptables.
| nickstinemates wrote:
| > Who was the wise guy on the Docker team who decided to bypass
| default firewall rules and open container ports to the public?
|
| This is pure ignorance, and slandering the Docker team over it
| seems weird.
| yjftsjthsd-h wrote:
| What's ignorant about it? Or wrong, for that matter, since it
| can't be "slander" if it's true. Docker _does_ bypass default
| firewall rules and expose container ports to the public. If I
| run anything else on a server (apache2, say), and tell it to
| bind to 0.0.0.0:80, it doesn 't matter, because the firewall
| will block it. If I tell a docker container to bind to
| 0.0.0.0:80, it magically skips over any other protections and
| immediately exposes itself to the world.
| nickstinemates wrote:
| How is it that you can spawn many containers and have them
| all bind to `0.0.0.0:80` in your example?
|
| Try doing the same with Apache2. Multiple instances
| listening on Port 80.
|
| This is the difference, and the source of ignorance.
| yjftsjthsd-h wrote:
| You... can't? I mean, you can bind it inside the
| container, of course, but you can't bind it on the host
| ("publish"), which seems like the analogous thing to
| running on the host.
|
| Also, AIUI podman manages to not ignore the firewall, so
| it's not like it's some inherent issue.
| convolvatron wrote:
| that's a really weird take. the whole point of developing and
| using infrastructure projects like docker is to package up
| best practices and let other people leverage them without
| becoming experts themselves.
|
| maybe it's fair to say that we shouldn't attempt to find an
| individual to blame. but that doesn't mean docker as an
| organization didn't screw up here
| nickstinemates wrote:
| So in this case the best practice is to make UFW a
| dependency and delegate all routing to it?
|
| And if I don't want to use UFW or I am on a distro with
| another firewall, let alone an upstream firewall or other
| solution, tough luck?
| bob1029 wrote:
| Looking at their self-reported realtime stats makes me wonder
| why they need this architecture. The current numbers imply peak
| database request rates in the hundreds/second.
|
| This appears to me like another case where we could have built a
| simple, 1-process vertical on a single box and called it a day.
| KronisLV wrote:
| When I first read the original post, I felt inclined to shift the
| "blame" on the author, much like they shifted it on Docker,
| though they are aware of how this footgun can also be a benefit
| to some:
|
| > It would make for a much more dramatic read if I was hit
| through a vulnerability in Docker instead of a footgun. By having
| Docker silently override the firewall, Docker has made it easier
| for developers who want to open up ports on their containers at
| the expense of security.
|
| When I want to run my own web server on an arbitrary Linux distro
| inside of a container, I want to bind to 0.0.0.0:80 and
| 0.0.0.0:443. That's it.
|
| When I want to run my own database on an arbitrary Linux distro
| inside of a container, without making it accessible externally, I
| want to just omit exposing ports. That's it.
|
| I don't want to think about the firewall on the system (even what
| kind of a firewall it is running, if any) or configure it
| separately; otherwise it'd be like bind mount target directories
| on the host system not being automatically created, making me run
| "mkdir -p PATH", which already adds unnecessary friction (if I
| want the files to be available in the host OS, as opposed to just
| in abstracted-away Docker volumes). After all, if there were more
| steps, then an awkward dance would inevitably start: "Hey, we
| launched the container on SOME_DISTRO but we can't access the
| UI/API/whatever." and you'd have to write platform-specific
| instructions, as opposed to being able to just give a Docker
| Compose file or an equivalent and be done with it.
|
| Thus, my original thoughts were along the lines of: "It's not at
| the expense of security, as much as it is to the benefit of your
| convenience without any changes to security, if you are aware of
| what you are doing." though that isn't charitable enough.
| Something like this would be more helpful instead:
|
|     # Suppose you want to let people know that your container
|     # wants to listen on the port 2000
|     # You can specify it in the Dockerfile, though this will be
|     # more like documentation for the container, as opposed to
|     # publishing anything
|     # Docs: https://docs.docker.com/engine/reference/builder/#expose
|     # File: Dockerfile
|     EXPOSE 2000/tcp
|     ...
|
|     # If you want your port to be available locally, then Docker
|     # gives you that ability as well
|     # You can even change the port, for example, to 2005 on the host
|     # Docs: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
|     # File: docker-compose.yml
|     ports:
|       - "127.0.0.1:2005:2000/tcp"
|     ...
|
|     # If you want your port to be available remotely, then you
|     # can either omit the IP address or use 0.0.0.0
|     # Let's take the above example and use 2005 as the exposed
|     # port again
|     # File: docker-compose.yml
|     ports:
|       - "2005:2000/tcp"
|     ...
|
|     # For examples of Docker CLI without Docker Compose/Swarm,
|     # have a look at the docs:
|     # https://docs.docker.com/config/containers/container-networking/
|     # There's also good information there about additional
|     # networking options, such as creating custom networks for
|     # limiting what can talk to what.
|
| Actually, here's an example that anyone can run, with Docker CLI
| (using the httpd web server as example, same idea applies to
| DBs):
|
|     # Launch everything (detached, so we can use the same terminal
|     # session; remove containers once stopped)
|     docker run -d --rm --name "net_test_not_exposed" httpd:alpine3.17
|     docker run -d --rm --name "net_test_local_access" -p "127.0.0.1:2005:80/tcp" httpd:alpine3.17
|     docker run -d --rm --name "net_test_public_access" -p "0.0.0.0:2006:80/tcp" httpd:alpine3.17
|     docker run -d --rm --name "net_test_public_access_shorthand" -p "2007:80" httpd:alpine3.17
|
|     # Check that everything is running
|     docker ps
|     # Or different formatting
|     docker ps --format "{{.Names}} is {{.Status}} with ports {{.Ports}}"
|
|     # Then you can use a browser to check them, or do it manually,
|     # either locally or from another node (with actual IP address)
|     # 2005 will be available locally, whereas 2006 and 2007 will
|     # be available remotely
|     curl http://127.0.0.1:2005
|     curl http://127.0.0.1:2006
|     curl http://127.0.0.1:2007
|
|     # Finally, the cleanup (the --rm will get rid of the containers)
|     docker kill net_test_not_exposed net_test_local_access net_test_public_access net_test_public_access_shorthand
|
| I will say that this is a good suggestion:
|
| > Better would be for Docker to issue a warning when it detects
| that the most popular firewall on Linux is active and filtering
| traffic to a port that Docker is about to open.
|
| However, the thing with warnings is that they're easy to miss.
| Either way, I've been there - when I was learning Docker a number
| of years back, I left its socket open and by the morning the tiny
| VPS was mining crypto. Good docs help; maybe exposing things
| publicly by default with a port binding, or even having the
| shorthand in the first place, isn't such a good idea either.
___________________________________________________________________
(page generated 2023-02-19 23:02 UTC)