[HN Gopher] Permissive forwarding rule leads to unintentional ex...
___________________________________________________________________
Permissive forwarding rule leads to unintentional exposure of
containers (2021)
Author : password4321
Score : 190 points
Date : 2022-06-22 18:31 UTC (4 hours ago)
(HTM) web link (gist.github.com)
(TXT) w3m dump (gist.github.com)
| 0xbadcafebee wrote:
| It should, however, restrict the source ip address range to
| 127.0.0.1/8 and the in-interface to the loopback interface:
|     Chain DOCKER (2 references)
|      pkts bytes target  prot opt in  out      source       destination
|         0     0 ACCEPT  tcp  --  lo  docker0  127.0.0.1/8  172.17.0.2   tcp dpt:5432
|
| I think this would break inter-container network communication,
| since containers on docker1 (172.17.1.2) would not be coming from
| source 127.0.0.0/8 or device _lo_. Docker would need to create an
| explicit rule matrix from/to each container in each network
| (does it already do this?)
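One way to answer the "(does it already do this?)" question empirically is to dump the rules Docker actually generated on a given host. A root-required sketch (chain names are Docker's defaults; output varies by host and Docker version):

```shell
# List the rules Docker generated for published ports
iptables -L DOCKER -n -v --line-numbers

# The forwarding path Docker wires up, including the user override chain
iptables -L FORWARD -n -v
iptables -L DOCKER-USER -n -v

# Cross-check against the subnets Docker assigned to each network
docker network inspect bridge
```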
| Spivak wrote:
| Docker's behavior is unintuitive but makes sense given how
| container networking works. If you use UFW, read
| https://github.com/chaifeng/ufw-docker and follow the guide.
|
| Then configuring firewall rules for containers is as easy as:
|
|     - name: Open HTTPS
|       ufw:
|         rule: allow
|         proto: tcp
|         route: true
|         port: 443
| guns wrote:
| Docker users on the Mac are not affected by this issue, but they
| should be aware that the "Automatically allow downloaded signed
| software to receive incoming connections" option in the firewall
| settings must be unchecked in order to block incoming connections
| to container ports published to 0.0.0.0.
|
| This is necessary because Docker Desktop for Mac is presumably
| signed by Apple.
| infotogivenm wrote:
| Setting aside the (rather scary) issue of crafted external
| packets being bridged into the internal docker subnet - the
| "0.0.0.0" fw-bypassing default for published ports in a
| developer-oriented tool is a complete nightmare for infosec teams
| at software engineering companies. I've seen thousands of
| employee hours wasted chasing that down.
| dekhn wrote:
| Docker was definitely a fork in the road that led to unhappiness.
| I was tracking linux containers for several years (specifically,
| with the goal of doing cross-machine container migration) and
| they were doing great. Then docker came along and... pretty much
| every experience I've had with it involves realizing core
| decisions were made by people who had no experience and we're all
| paying for it, long-term.
| TekMol wrote:
| So every time you sit down in an internet cafe and connect to
| the wifi, you are fucked because all the docker containers that
| run on your machine are open to everybody? Or - depending on the
| lan settings - at least to the wifi provider?
|
| Also - shouldn't the web be full of vulnerable database servers
| then?
| bnjms wrote:
| For home-style Wi-Fi, yes. Hopefully you can't reach Wi-Fi
| peers at Starbucks and its ilk.
| TekMol wrote:
| Even if not - the wifi operator could still access your
| docker containers.
| qbasic_forever wrote:
| > So every time you sit down in an internet cafe and connect
| to the wifi, you are fucked because all the docker containers
| that run on your machine are open to everybody?
|
| Yes, if you're running docker on Linux. On Mac and Windows
| docker has to create a little linux VM so it's more isolated
| and not directly accessible without explicitly publishing
| ports.
| guns wrote:
| Publishing ports to '*' is commonly done to allow Mac and
| Windows users to access containers through their browsers.
|
| The macos firewall is able to block connections to these
| exposed sockets but:
|
| 1. The user has to explicitly turn on the firewall since it
| is off by default
|
| 2. The option "Automatically allow downloaded signed software
| to receive incoming connections" must be unchecked because
| Docker Desktop is signed by Apple.
|
| I don't use a Mac, but all of the developers that use Macs at
| my company either did not have their firewall enabled or did
| not realize that connections to Docker Desktop were
| whitelisted.
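On the CLI side, both conditions described above can be checked and changed with Apple's socketfilterfw tool. A macOS-only sketch (the path is the stock one; not runnable elsewhere):

```shell
# Is the application firewall on at all? (off by default)
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate

# Turn it on, and stop auto-allowing signed software
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setallowsigned off
```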
| R0b0t1 wrote:
| > Also - shouldn't the web be full of vulnerable database
| servers then?
|
| It already was, but yes.
| albatruss wrote:
| > Also - shouldn't the web be full of vulnerable database
| servers then?
|
| No, the docker bridge network is not on a routable subnet.
| TekMol wrote:
| Does it have to? The attack looks like it would also work
| over the internet:
|
|     2. [ATTACKER] Route all packets destined for 172.16.0.0/12
|        through the victim's machine.
|
|        ip route add 172.16.0.0/12 via 192.168.0.100
|
| Here, "192.168.0.100" could be exchanged for any IP address, I
| guess?
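The choice of 172.16.0.0/12 in that route is not arbitrary: Docker's default bridge subnets (172.17.0.0/16 and counting upward) sit inside it. A self-contained shell sketch of the containment check (the helper function names are made up for illustration):

```shell
# ip_to_int: dotted-quad IPv4 address -> 32-bit integer
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr ADDR NET/BITS: succeeds if ADDR falls inside the CIDR block
in_cidr() {
  net=${2%/*}; bits=${2#*/}
  mask=$(( (0xffffffff << (32 - bits)) & 0xffffffff ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# Docker's default bridge addresses are inside the routed /12...
in_cidr 172.17.0.2 172.16.0.0/12 && echo "172.17.0.2 is covered"
# ...while an ordinary home-LAN address is not
in_cidr 192.168.0.100 172.16.0.0/12 || echo "192.168.0.100 is not"
```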
| gcbirzan wrote:
| Except you would have to be on the same layer 2 network as
| the "victim" for this to work.
| thedougd wrote:
| If that's true, you can then send packets to it, but not
| receive replies. That's still a problem.
| jandrese wrote:
| That will only work if you are on the same subnet.
|
| When you craft a packet for that address, the stack will
| see that route and send an ARP "who has" request out
| whatever interface you assigned when you did that IP route
| rule (probably your default ethernet). If nobody responds
| then the packet dies in the stack.
| 0xbadcafebee wrote:
| Not if you have a firewall enabled on your machine, which
| everybody should.
|
| _edit:_ based on comments, added the FORWARD table, which
| supersedes Docker's rules and should block new connections
| forwarding from external interfaces.
|
| _edit 2:_ I'm wrong, this doesn't fix the bug. If you run
| _`docker network create foobar`_, it will move its rules up
| above the custom FORWARD rules. Ugh.
|
| _edit 3:_ Modified to add the rules to the DOCKER-USER table,
| which according to Docker[1], will always be evaluated first.
| So now it should actually fix the problem. They explain in the
| docs how to fix this situation, actually, and even how to
| prevent Docker from modifying iptables. Another example of why
| we should _RTFM_...
|
|     for tool in iptables ip6tables ; do
|       for intf in wlan+ eth+ ; do
|         for table in INPUT DOCKER-USER ; do
|           $tool -I $table 1 -i $intf -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
|           $tool -I $table 2 -i $intf -m conntrack --ctstate INVALID -j DROP
|           $tool -I $table 3 -i $intf -m conntrack --ctstate NEW -j DROP
|         done
|       done
|     done
|     iptables-save > /etc/iptables.rules
|     ip6tables-save > /etc/ip6tables.rules
|
| https://help.ubuntu.com/community/IptablesHowTo
| https://wiki.debian.org/iptables
| https://wiki.alpinelinux.org/wiki/Configure_Networking#Firew...
| https://wiki.archlinux.org/title/Iptables#Configuration_and_...
| [1] https://docs.docker.com/network/iptables/
| sigg3 wrote:
| But isn't the point of TFA that regular docker (which runs
| with privileges) negates these rules, effectively punching
| holes?
| [deleted]
| teraflop wrote:
| As discussed extensively in
| https://github.com/moby/moby/issues/22054, which is linked
| from the OP: this doesn't actually help, because Docker (by
| default) bypasses your existing firewall rules.
| 0xbadcafebee wrote:
| Ah, forgot about the forwarding table. These should fix that:
|
|     for tool in iptables ip6tables ; do
|       $tool -I DOCKER-USER 1 -i eth+ -m conntrack --ctstate NEW -j DROP
|       $tool -I DOCKER-USER 1 -i wlan+ -m conntrack --ctstate NEW -j DROP
|     done
|
| If these are saved and loaded with the rest of networking,
| they will appear at the top of the FORWARD table before the
| DOCKER table jumps. Docker won't remove these rules, and
| they come first in the table, so they supersede Docker's
| rules. Any new connections forwarded from an external
| interface should drop.
|
| _edit:_ changed from FORWARD to DOCKER-USER table
| jandrese wrote:
| If you are going that far, why not just:
|
|     iptables -t filter -I FORWARD -j DROP
|
| and while you are at it:
|
|     sysctl -w net.ipv4.conf.all.forwarding=0
|     sysctl -w net.ipv6.conf.all.forwarding=0
| 0xbadcafebee wrote:
| ...because then containers can't network at all to non-local
| networks, e.g. no internet access - for bridge networks at
| least (which is the default).
|
| by specifying the drop for new connections incoming from
| the external interface, you stop connections to listening
| services from external networks, but established and
| related connections can continue implicitly, so
| forwarding still works for outbound connections.
|
| if you really want to block all internet access for
| containers, and stop anything else on your system that
| might need to use forwarding, then your suggestion is
| correct.
| qbasic_forever wrote:
| Docker creates its own iptables rules when its daemon runs,
| so unfortunately whatever firewall you set up will be bypassed
| by the exceptions it creates.
| k8sToGo wrote:
| Have you ever heard of https://www.shodan.io/?
| TekMol wrote:
| It seems to be some kind of search engine. How is it related
| to the topic at hand?
| Matl wrote:
| It exposes/lets you search for unsecured IoT devices, IP cams,
| etc. I assume OP implied searching for unsecured containers
| would be kind of similar.
| k8sToGo wrote:
| Sorry, should have been more clear. It was a reply to this
| part of your comment:
|
| > Also - shouldn't the web be full of vulnerable database
| servers then?
|
| You can search for open database servers there, which
| should answer your question.
| alexchantavy wrote:
| It lets users search for misconfigured database instances
| that it has scraped - very relevant to the topic.
| manigandham wrote:
| > _"shouldn't the web be full of vulnerable database servers
| then?"_
|
| It is. There are millions of servers out there with major
| security issues. Every few months there's another big story of
| a data breach from nothing more than a database left open.
| M0r13n wrote:
| Nice to see that "Docker and Kubernetes are in first and second
| place as the most loved and wanted tools" according to the Stack
| Overflow developer survey.
| Matl wrote:
| Docker is definitely a 'good enough' frontend for containers in
| that it did a lot to make them more accessible, hence why it
| tops that survey (presumably).
|
| This can be quite separate from Docker being a security
| nightmare _at the same time_.
| asimpson wrote:
| Ah yes, this footgun. Sometime last year I discovered our QA
| databases had all been locked with a ransom demand. How were they
| exposed? We had `ufw` blocking the db port! Ah yes, Docker -p.
| smh.
| contingencies wrote:
| Ouch. This is the security posture that half a billion dollars,
| hundreds of engineers, multiple layers of abstraction to a
| firewall ruleset and _each Stream-aligned team with an embedded
| Product Manager and UX Designer_ [0] buys you. I too contributed
| security fixes to Docker in the early stages but soon gave up.
|
| Docker is a fascinating study in hype. Containers already
| existed, docker wrapped them very badly, raised half a billion
| dollars, popularized their wrapper (I understand kubernetes is
| the fad-wrapper now) and then failed to obtain a revenue stream.
| I didn't see them return much to open source, though perhaps they
| contributed to projects I don't see.
|
| Honestly, the whole thing has a plotline like a Shakespearean
| tragedy. Someone could do a film, starting with interviews with
| the guys who wrote the original Linux containers and namespacing
| kernel code at IBM, the maintainers of jails in FreeBSD, and an
| early hacker at VMWare.
|
| [0] https://www.docker.com/blog/building-stronger-happier-
| engine...
| scubbo wrote:
| > Containers already existed
|
| Fascinating. I'm admittedly very new to container-land, but my
| understanding was precisely the opposite - that Docker invented
| this concept and practice of building a runtime environment
| that can then be replicated and deployed to many hosts. Do you
| have any search terms or articles I should pursue to learn more
| about the predecessors?
|
| I've read through this[0], but it mostly suggests that
| approaches prior to Docker provided isolation/jailing, which is
| an important part of containerization but neglects a lot of
| tooling around building, deploying, etc.
|
| [0] https://blog.aquasec.com/a-brief-history-of-containers-
| from-...
|
| EDIT: various other comments ("The Docker UX is what made the
| difference, not the underlying tech. Building containers in an
| easy incremental and reproducible way that could then be stored
| in a repo and pulled over HTTP was the evolution in packaging
| that was needed.", "I think the feature that won the hearts of
| devs was the incremental builds that let people fail quickly in
| a forgiving way", "Containers were painful as hell to set up
| and use; it was like using Git, but worse. Docker made it
| convenient, simple, and useful. By now everybody understands
| that immutable artifacts and simple configuration formats are
| extremely effective") seem to support this perception.
|
| EDIT2: Thanks to all those that replied!
| dekhn wrote:
| Containers absolutely existed before. There was a long-running
| project - I think it was LXC, but there may have been one
| before that - which was doing great work fixing all the
| underlying problems before the technology went exponential.
| turminal wrote:
| > I think it was LXC
|
| Still is :)
| jskrablin wrote:
| OpenVZ has been around since 2005 (see
| https://en.m.wikipedia.org/wiki/OpenVZ) and it had
| checkpointing, live migration between hosts, dump container
| to tar, etc. before Docker even existed.
| sillystuff wrote:
| And Linux-VServer* has been around since 2001.
|
| If you were willing to use out of tree patches, linux
| containers pre-date Solaris containers by years (Debian
| distributed pre-patched kernels pretty early on, so you
| didn't need to deal with compiling your own kernels, and
| Debian also had an option for vserver + grsec patched
| kernels). But FreeBSD jails beat Linux-VServer by a year.
|
| * not to be confused with the similarly named linux virtual
| server (LVS) load balancer.
| tnorgaard wrote:
| I believe that Solaris (OpenSolaris) Zones predates LXC by
| around 3 years. Even when working with k8s and docker every
| day, I still find what OpenSolaris had in 2009 superior.
| Crossbow and zfs tied it all together so neatly. What
| OpenSolaris could have been in another world. :D
| contingencies wrote:
| https://en.wikipedia.org/wiki/LXC
| pjmlp wrote:
| HP-UX Vaults were introduced in the late 90s, then came Solaris
| Zones, BSD jails,...
|
| This in the UNIX world; mainframes had it a couple of decades
| earlier.
| jamal-kumar wrote:
| Containers at their very heart in Linux are three
| ingredients: cgroups, namespaces, and filesystem-level
| separation.
|
| Here's an interesting way to learn more about this - Bocker,
| an implementation of docker in 119 lines of bash:
|
| https://github.com/p8952/bocker/blob/master/bocker
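Those three ingredients map fairly directly onto stock userland tools. A rough, root-required sketch, assuming cgroup v2 and a pre-populated root filesystem (the paths, the memory limit, and $CONTAINER_PID are placeholders, not a hardened setup):

```shell
# namespaces: new PID, mount, UTS and network namespaces for the child
# filesystem separation: chroot into a pre-populated root filesystem
unshare --pid --fork --mount --uts --net \
    chroot /srv/rootfs /bin/sh

# cgroups (v2): cap the container's memory via the unified hierarchy
mkdir /sys/fs/cgroup/demo
echo 256M > /sys/fs/cgroup/demo/memory.max
echo $CONTAINER_PID > /sys/fs/cgroup/demo/cgroup.procs
```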
| doliveira wrote:
| Docker is an imperfect solution to problems we shouldn't have
| after decades of Linux and Unix, and its value is exactly in its
| DX.
|
| But considering the pride in finding the right incantation to
| build C/C++ code, to set up a system service, to set up routing
| rules or any other advanced feature... I guess anything that
| tries to hide it will count as hype
| technonerd wrote:
| Security is an afterthought in docker: deploy first, questions
| later, the Dirty Harry way. I learned a lot about docker's lack of
| defense while trying to harden my instances, and questioned why
| some of the options weren't enabled by default (--cap-drop=all,
| --security-opt=no-new-privileges, and userns remap) other
| than exposing bad Dockerfile practices.
| voidfunc wrote:
| 1. Docker has contributed a lot and provided, if not technology,
| at least ideas others have iterated on.
|
| 2. k8s is not a fad. Nor is it a wrapper, or really even
| equivalent to the container bits of docker.
| jcelerier wrote:
| so, uhm, before Docker, what was the one-liner that allowed me
| to get a ubuntu, fedora, nixos, ... shell automatically ?
| [deleted]
| eddyschai wrote:
| You can do something with lxc - I had to use it instead of
| Docker on a Chromebook a while back. The wrapper isn't as
| nice, but it's not actually that hard to get going.
| neoromantique wrote:
| lxc-create -t ubuntu
| withinboredom wrote:
| tar -zf ubuntu.tar.gz && chroot ubuntu
|
| ish
| vbezhenar wrote:
|     $ tar -zf ubuntu.tar.gz
|     tar: Must specify one of -c, -r, -t, -u, -x
|
| Doesn't work on my shiny macbook. I'll stick with Docker, I
| guess.
| [deleted]
| jcelerier wrote:
|     $ tar -xzf ubuntu.tar.gz && chroot ubuntu
|     tar (child): ubuntu.tar.gz: Cannot open: No such file or directory
|     tar (child): Error is not recoverable: exiting now
|     tar: Child returned status 2
|
| if I have to do wget of something (and find the url to
| wget) it's not worth it
| corrral wrote:
| Docker accidentally(?) created my very favorite cross-distro
| Linux server package & service manager.
|
| That it uses containers is just an implementation detail, to
| me.
|
| All the docker replacements will be useless to me if they don't
| come with a huge repository of pre-built relatively-trustworthy
| up-to-date containers for tons of services.
|
| I love no longer needing to care much about the underlying
| distro. LTS? Fine. I still get up-to-date services. Red hat?
| Debian-based? Something else? Barely matters, once Docker is
| installed.
|
| On my home server it even saves me from ever having to think
| about the distro's init system.
|
| Bonus: dockerized services are often much easier to configure
| than their distro-repo counterparts, and it's usually easier to
| track down everything they write that you need to back up,
| _and_ that's all the same no matter what distro they're
| running on.
| neoromantique wrote:
| >huge repository of pre-built relatively-trustworthy
| >relatively-trustworthy
|
| yeah, that's not really the case with docker either I'm
| afraid
| heywoodlh wrote:
| > yeah, that's not really the case with docker either I'm
| afraid
|
| I feel there are two places where I disagree with this statement:
|
| - Docker's official image library
|
| - When a project you are looking to run has an official
| image they maintain
|
| When those constraints are not met, I try to build my own
| image (based on the official images).
|
| I try not to trust any other external images (with some
| exceptions from reputable sources such as linuxserver.io or
| Bitnami).
| 0xbadcafebee wrote:
| Docker's hype train was absolutely absurd, but it was
| instrumental in making containers ubiquitous, and the wrapper
| is the reason why. Containers were painful as hell to set up
| and use; it was like using Git, but worse. Docker made it
| convenient, simple, and useful. By now everybody understands
| that immutable artifacts and simple configuration formats are
| extremely effective. Docker is one of the only tools I can
| think of that was designed with _a human being_ in mind rather
| than a machine.
| toomuchtodo wrote:
| Docker is like brew in that they are both huge productivity
| levers, except Docker found a way to pay a bunch of
| technologists with VC dollars to evangelize the model.
| Beltalowda wrote:
| I mean, these are the same people who released an entire new
| daemon in 2013 that ... _can only run as root_. Party like
| it's 1983. I believe they added some experimental caveated
| "non-root" mode last year, but how this was not even supported,
| much less the default, from day one is just ... ugh.
|
| That being said, Docker is more than just "Linux containers" or
| FreeBSD jails; the key thing it brought was the push/pull
| commands, which made setting up and distributing containers _a
| lot_ easier, as well as some other UI/UX things. I think these
| ideas are implemented _badly_, but they do add value.
|
| Compare this with FreeBSD jails, where the instructions were
| "download the base set from the FreeBSD ftp and extract it to
| the destination dir, manually set up /dev, twiddle with
| /etc/rc.conf, muck about with ifconfig for networking, etc."
|
| Looking at the current handbook[1], things are better now:
| there's bsdinstall and /etc/jail.conf, but e.g. jail.conf was
| added only a few months before Docker was released, and the key
| value proposition of Docker (push/pull) is still missing. Right
| now, setting up nginx or PostgreSQL in a FreeBSD jail is _more
| work_ than setting it up system-wide. People don't use Docker
| because it runs in containers per se, they use it so they can
| run something in one easy step.
|
| IMHO Docker is more or less the Sendmail of our age[2]; we just
| have to wait for "Docker Postfix" or "Docker qmail" :-) I'd
| take a punt, but I don't really have that much time.
|
| [1]: https://docs.freebsd.org/en/books/handbook/jails/#jails-
| buil...
|
| [2]: People have forgotten now, but Sendmail solved _a lot_ of
| issues in the early 80s and dealt with the wild growth of
| different email systems better than anything else; while crusty
| and ugly, it added a lot of value when it was first released.
| staticassertion wrote:
| I'd rather run Docker as root than enable unprivileged user
| namespaces, security-wise.
| deckard1 wrote:
| Docker non-root is some kind of sad twisted joke. The person
| that jumps through pages of hoops to get that mess working
| and the person that reaches for Docker in the first place are
| not the same person. When they say they recommend Ubuntu
| only, they aren't fucking around. I tried last year on
| Debian. Never again. I nuked it all and started over with
| regular root Docker and restored whatever was left of my
| sanity.
|
| Docker compose is also a bewildering mess of conflicting
| versions and baffling decisions, with incredibly poor
| documentation. Try to figure out network_mode without
| spending hours on trial and error, github issues,
| stackoverflow, and head-pounding.
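For what it's worth, the short version of network_mode, sketched as a compose fragment (the service name and image are made up for illustration):

```yaml
services:
  app:
    image: nginx
    # one of: "bridge" (the default), "host" (share the host's
    # network stack, so port publishing doesn't apply), "none",
    # or "service:<name>" (share another service's network
    # namespace)
    network_mode: host
```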
|
| I think the UX is fine _if_ you ignore Dockerfile and
| docker-compose.yml. But those files are rather atrocious. The
| Faustian bargain of Docker is you fetch images from rather
| dubious sources and of dubious origin, run them with a root
| daemon, and let Docker molest your iptables. In return, you
| get the illusion of having accomplished some form of
| meaningful isolation. No security whatsoever. But hey, I get
| to run two versions of node on one Linux and pretend I didn't
| just sweep all these serious issues under my, now, rather
| large bed.
| halostatue wrote:
| Wouldn't "Docker Postfix" or "Docker qmail" be podman?
| Beltalowda wrote:
| Maybe; I haven't used it. Whenever I needed containers I
| used runc or just set up some cgroups manually, but it's
| been a while since I had to do so.
| sjtgraham wrote:
| > failed to obtain a revenue stream
|
| Docker is throwing off incredible amounts of cash with Docker
| Desktop licensing alone.
| willjp wrote:
| I'm not sure that is entirely fair.
|
| Sure, containers like LXC and BSD jails existed, but I think the
| feature that won the hearts of devs was the incremental builds
| that let people fail quickly in a forgiving way.
|
| With jails, the best way I know of achieving this is creating a
| base jail filesystem (a thin jail) and creating new jails from
| it. (Apologies in advance if another tool addresses this.)
| pjmlp wrote:
| HP-UX Vaults date back to the late 90s, as one of the first
| container models for UNIX systems.
| nonameiguess wrote:
| I get that this comment is currently being pummeled from all
| sides, but just to be clear on what Docker offered, and the
| value I think they created, it wasn't containers, but:
|
| * The public registry. Obviously, there are a lot of downsides
| to this with respect to security, but the convenience of "just
| install docker and then you can run basically anything that has
| already been uploaded to Docker Hub" can't really be
| overstated.
|
| * OCI. They were kind of forced into this much later on, but
| it's great to have a standard for how to package a container
| image and a bundle that can be loaded into a runtime.
| Containers may have already existed, but there was no standard
| way to package, deliver, and run them.
|
| * The Dockerfile. For better or worse, it isn't perfect and is
| full of footguns, but as the "hello world" equivalent of build
| systems go, it's pretty easy to get started and understand what
| it's doing.
|
| * I _think_, but am not certain, that Docker was the first to
| combine containers with union filesystems, saving quite a bit
| of disk space when you're running many containers that share
| layers. This also helped usher in the era of read-only root
| filesystems and immutable infrastructure.
|
| It's also important to remember that developers are not the
| only people involved in delivering and running software. I see
| a lot of complaining here that containers add friction to their
| workflow, but look at it from the perspective of systems
| administrators, IT operators, and SREs that have to install,
| configure, run, monitor, and troubleshoot potentially hundreds
| of applications, some internally developed, some third-party,
| all using whatever language, whatever runtime, and whatever
| dependency chain they feel like without regard to what the
| others are doing. Having a standardized way to package,
| deliver, and run applications that is totally agnostic to
| anything other than "you need Linux" was pretty powerful.
| [deleted]
| xvector wrote:
| How is K8S a fad?
| ipaddr wrote:
| The shortening of the name to what you have above is your
| first clue.
| hn_throwaway_99 wrote:
| Uh-oh, guess we have to tell everyone internationalization
| is just a fad because everyone writes it i18n.
| lmm wrote:
| It absolutely is. Approximately no-one gets real business
| value out of internationalization, they just use it as a
| stick to beat technologies they dislike.
| iasay wrote:
| It's not a fad, but a fashion. People wear it because it
| looks cool on your CV or consultancy offering, not because it
| keeps you warm and dry. They don't even understand it
| properly. Even AWS, who I work with regularly on EKS issues,
| seem not to understand their own offering properly.
|
| Ultimately in a lot of companies it ends up an expensive piss
| soaked nightmare wearing a jacket with "success" printed
| across it.
|
| I _like_ the idea but the implementation sucks and echoing
| the root poster, containers have some unique problems which
| can indeed cause you some real trouble and friction. It makes
| sense for really horizontal simple things, which isn't a lot
| of stuff on the market.
| dekhn wrote:
| I used borg for over a decade, I know what k8s is for.
| blowski wrote:
| Ask me in 10 years, I'll tell you whether it was a fad or
| not.
| languageserver wrote:
| a 20 year run in tech is not a "fad".
| 867-5309 wrote:
| "tech" was a fad to your future AI overlords
| natpalmer1776 wrote:
| On a long enough timeline, everything can be considered a
| fad!
| manigandham wrote:
| The Docker UX is what made the difference, not the underlying
| tech. Building containers in an easy incremental and
| reproducible way that could then be stored in a repo and pulled
| over HTTP was the evolution in packaging that was needed.
|
| UX alone can be a multi-billion dollar business, for example
| Dropbox was just a folder that syncs. I'll leave you to find
| that classic HN comment about it. Docker just executed poorly
| and didn't have much worth paying for.
| eptcyka wrote:
| How does one build containers reproducibly? I can make parts
| of my stack reproducible if I use existing, pinned
| containers, but due to all the `apt-get`ing in my
| dockerfiles, I'd call them anything but reproducible.
| tjoff wrote:
| But that's the good thing about them. It isn't just a VM
| snapshot; it is a recipe to build an image.
| compsciphd wrote:
| That's where docker failed, in only copying part of what I
| built (they simplified it, and to an extent that was good
| for making a product, but I had proposed the need for (and
| prototyped) a container/layer-aware linux distribution).
|
| see https://www.usenix.org/conference/lisa11/improving-
| virtual-a....
|
| the "problem" is building that layer-aware linux
| distribution, so Docker just skipped it and relied on
| existing things and building each image up linearly.
|
| This was actually an advantage of CoreOS's rkt (and the ACI
| format), where it enabled images to be constructed from
| "random" layers instead of linear layers (though the
| primary tooling didn't really provide it and were just
| docker analogues, there were other tools that enabled you
| to manually select which layer IDs should be part of an
| image - but docker "won" and we were left with its
| linearized layer model).
| AtNightWeCode wrote:
| The Docker images are not even versioned.
| endgame wrote:
| I use Nix, which has a `dockerTools.streamLayeredImage`
| function: https://nixos.org/manual/nixpkgs/stable/#sec-
| pkgs-dockerTool...
| heywoodlh wrote:
| > but due to all the `apt-get`ing in my dockerfiles, I'd
| call them anything but reproducible.
|
| Giving this some thought, I agree: if you use package
| managers and base images, I don't think it's possible to
| build images in a perfectly reproducible fashion.
|
| But I would ask: don't image tagging and registries, by
| providing a reproducible _output_ (binary, packages, etc.),
| make reproducible builds unnecessary? Why does it matter if
| the build is perfectly reproducible if the actual output of
| the image can be reproduced? I.e. if I can run the software
| from an image built two years ago, does it matter if I can't
| build the exact Dockerfile from two years ago?
|
| EDIT: fixed some grammar issues
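Tag pinning is the usual halfway house here: a tag like `ubuntu:22.04` can be repointed, but a digest always names the same image bytes. A sketch (the digest is a placeholder to fill in, e.g. from `docker images --digests`, not a real one):

```dockerfile
# Pin the base image by digest rather than tag; <digest> is a
# placeholder, not a real digest
FROM ubuntu@sha256:<digest>
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl
```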
| kwentin360 wrote:
| I've seen success building them on centralized server(s)
| and 'apt-get'ing specific dependency versions instead of
| the latest ones.
| pxc wrote:
| You use Docker as a distribution mechanism for the output
| of a reproducibility-oriented build system like Guix or
| Nix. Pin your dependencies inside the build system, then
| the build system can generate a container.
| eptcyka wrote:
| I use NixOS, but I see no reason to dabble with
| containers, except for when I can't coax some software to
| work reliably on NixOS, or when I want to produce binaries
| that work on distros that are not NixOS, in which case I
| can't use NixOS-based containers anyway.
| password4321 wrote:
| > _I can't use NixOS-based containers anyway_
|
| I'm no expert, but I believe the purpose of containers is
| to include all user space dependencies... so this doesn't
| make sense to me.
|
| I personally am surprised NixOS hasn't leaned into
| marketing itself as ideal for creating reproducible
| containers.
| wmanley wrote:
| With apt2ostree[1] we use lockfiles to allow us to version
| control the exact versions that were used to build a
| container. This makes updating the versions explicit and
| controlled, and building the containers functionally
| reproducible - albeit not byte-for-byte reproducible.
|
| [1]: https://github.com/stb-tester/apt2ostree#lockfiles
| manigandham wrote:
| How reproducible they are is up to you.
|
| You can build everything elsewhere and only copy in the
| artifacts. Or only install pinned versions of things. Or
| only use certain base tags. Or do the entire CI thing
| inside a container. Or use multi-stage containers for
| different steps.
|
| My previous startups all used pinned base images with
| multi-stage containers to build each language/framework
| part in its own step, then assembled all the artifacts into
| a single deployable unit.
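That multi-stage pattern can be sketched like this (the image names, Go toolchain, and paths are illustrative, not from the comment):

```dockerfile
# Stage 1: build in a pinned toolchain image
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/server .

# Stage 2: copy only the built artifact into a minimal runtime image
FROM gcr.io/distroless/static
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```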
| iasay wrote:
| I don't think it was even that. It was massive marketing and
| hype which drove docker adoption. The engineering concerns
| and the user experience were both negatives because they
| actually added a considerable cost and friction to the
| experience.
|
| I will obviously need to back that comment up but
| fundamentally the security posture of docker (and most
| package repositories) is scary and the ability to debug
| containers and manage them adds considerable complexity. Not
| to mention the half baked me-too offerings that circle around
| the product (windows containers anyone?)
|
| The repeatability argument is dead when you have no idea what
| or who the hell your base layers came from.
|
| What it did was build a new market for ancillary products
| which is why the marketing was accelerated. It made money.
| Quickly.
| blueflow wrote:
| Is this still the realm of "mistakes can happen sometimes" or is
| it negligence? Where is the limit?
| k8sToGo wrote:
| Is it negligence if it is a documented feature? Bad UX does not
| imply negligence.
| Beltalowda wrote:
| There was some site that got pwned because of this last year
| or so; I forgot the name but the owner did a nice write-up of
| it and there's a long HN thread on it. There were many
| experienced Docker users - often using it daily in a business
| setting - who were not aware of this "feature".
|
| Yes, you can document it somewhere, but 1) not everyone reads
| everything from cover-to-cover, and 2) even if you do, the
| real-world implications may not be immediately obvious (the
| way it was phrased, at the time, didn't make it obvious).
| k8sToGo wrote:
| That is why you usually secure your stuff by a hardware (or
| separate) firewall.
|
| But I too found this feature weird when I found out about
| it.
| JamesSwift wrote:
| It's documented, but still extremely surprising even for
| experienced docker users. The only place you will read about
| this is if you actually go to the docs page for the `-p`
| flag. But as I've said before, why would I do that? I already
| know what `-p` does (spoiler: I didn't know what it did).
|
| It was multiple years before I realized I was exposing my
| services like this, after it came up on HN a while back.
| k8sToGo wrote:
| I am not saying that this does not suck. I was a victim of
| this as well. I am just saying that it isn't really
| negligence, just bad architecture.
| JamesSwift wrote:
| Right, but there's bad architecture and then there's "this
| is a security risk and every tutorial in the wild + every
| app in production uses this in an insecure way and we
| haven't done anything about it".
|
| I just realized I posted my thoughts on this github issue
| [1] which is now _six_ years old. There have been no
| updates / changes made as far as I can tell.
|
| [1] - https://github.com/moby/moby/issues/22054
| k8sToGo wrote:
| Mhh good point. I guess if you keep bad architecture long
| enough it becomes negligence.
| aspyct wrote:
| I am (was) using portainer on my host to manage my docker
| containers.
|
| Turns out everything was, indeed, publicly accessible. I could
| fix it for my custom containers by binding on 127.0.0.1:<port>,
| but I can't edit portainer's config.
|
| Does anyone have a solution for this?
| DavideNL wrote:
| You can't fix this issue by binding to 127.0.0.1
|
| My solution has always been to just use some external firewall
| (outside of the docker host machine.) Often cloud providers
| have this as a feature, to configure a firewall for your
| vps/network.
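| (For a host-local layer on top of that, Docker's documented
| hook is the DOCKER-USER chain, which dockerd creates but does
| not overwrite. A sketch - the interface name and LAN range are
| assumptions:)

```shell
# Drop anything reaching forwarded container ports from outside
# the LAN. eth0 and 192.168.1.0/24 are example values; adjust
# for your external interface and trusted range.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```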
| rootsudo wrote:
| Nice, that's interesting indeed!
| maxboone wrote:
| Related thread on the GitHub repo of Moby: -
| https://github.com/moby/moby/issues/22054#issuecomment-10997...
| password4321 wrote:
| In context,
| https://github.com/moby/moby/issues/22054#issuecomment-96220...
| and following:
|
| > _Docker containers in the 172.16.0.0/12 ip range are
| definitely reachable from external hosts. All it requires is a
| custom route on the attacker's machine that directs traffic bound
| for 172.16.0.0/12 through the victim's machine._
|
| > _Notice that the common advice of binding the published port to
| 127.0.0.1 does not protect the container from external access._
|
| ...
|
| > _You don't even have to --publish._
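| As a concrete sketch of the attacker side (all addresses are
| made up - 203.0.113.10 stands in for the victim Docker host):

```shell
# On an attacker machine on the same LAN as the Docker host.
# 203.0.113.10 is the victim host, 172.17.0.2 a container on
# its default bridge (both example addresses).
ip route add 172.16.0.0/12 via 203.0.113.10

# The victim's FORWARD chain now bridges these packets onto
# docker0, whether or not the port was ever published:
curl http://172.17.0.2:80/
```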
| koolba wrote:
| Would this attack require having control of a machine on the
| same local network? Otherwise how would you route that packet
| to the target machine?
| infotogivenm wrote:
| Yes, I believe so, although you need "privileged network
| access" on the box - meaning you can craft arbitrary IP
| packets, not just "control" of the box. So you'd be on the
| LAN, or otherwise in-path for that subnet's return traffic.
| jandrese wrote:
| Aren't you just crafting a packet to a 172.17.0.0/16
| address on port 80? That requires no special permission
| whatsoever.
| infotogivenm wrote:
| If your compromised box already has a route for that
| subnet and machine, yes. That would be odd though, I think.
| jandrese wrote:
| Right, you aren't going to send that packet over the
| Internet, and you'll only get it through local routers that
| have the routes you need, which should not be common.
| sigg3 wrote:
| Might be common with IoT devices, cloud network equipment
| etc. :|
| ranger_danger wrote:
| yes
| infotogivenm wrote:
| I _believe_ this is only true when the IPv4 forwarding
| kernel tunable is enabled, but am not sure. Does anyone know?
| ranger_danger wrote:
| If you disable it at least, your containers will no longer
| have internet access.
| infotogivenm wrote:
| Had to look it up, you're right:
|
| https://github.com/moby/moby/issues/490
|
| It's required by docker, ugh.
| metadat wrote:
| See sibling comment:
| https://news.ycombinator.com/item?id=31841487
| tinco wrote:
| Eh, isn't the real story then that Linux will route random
| incoming traffic on any interface? Even if you haven't
| configured it as a router?
|
| I know whoever was in charge of configuring Docker's iptables
| rules should have known this and messed up, but that is fucked
| up.
| lmm wrote:
| > Linux will route random incoming traffic on any interface?
| Even if you haven't configured it as a router?
|
| "configuring Linux as a router" is exactly what "adding an
| iptables rule that routes" is. That's what it's for, that's
| how you do it.
| jeroenhd wrote:
| I'm a bit confused; as far as I know Linux only forwards
| packets on interfaces where you've enabled routing, doesn't
| it?
|
| If you set up: net.ipv4.ip_forward=0
| net.ipv4.conf.eth0.forwarding=0
| net.ipv4.conf.docker.forwarding=1
|
| I'm pretty sure this isn't an issue, right?
|
| My guess is that the real real story is probably that every
| guide on the internet says to just set net.ipv4.ip_forward=1
| and that nobody bothers to stop and read up on the sysctl
| parameters they're copy/pasting from the internet.
|
| For this attack to succeed, the attacker also needs to be on
| the same network or have their upstream ISPs accept plain
| external traffic towards internal networks. Executed from
| outside the same subnet, the PoC's packets won't even be
| accepted (though raw sockets may still send traffic towards
| the host that will probably get filtered somewhere along the
| way).
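| A quick way to check what a given host actually has enabled
| (interface names are examples; docker0 only exists once Docker
| has created its bridge):

```shell
# Global switch plus per-interface overrides.
sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.eth0.forwarding
sysctl net.ipv4.conf.docker0.forwarding
```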
| Spivak wrote:
| Well yeah, you turned on ip forwarding and created a network on
| your host that can be reached. Imagine this was a VM host and
| those networks were meant to be routable, how else could it
| work?
|
| Are people just now discovering the FORWARD chain?
| cxcorp wrote:
| This is even worse than the false safety of binding to
| 127.0.0.1. If I don't publish a port in a container, I expect
| to *not publish any ports*.
| wiredfool wrote:
| The other fun thing here, at least with docker-compose, is that
| it behaves really badly with IPv6: if you don't manually set an
| IPv6 address, then all IPv6 traffic to the host address is
| masqueraded to IPv4, with the visible source address being the
| router address for the network.
|
| So, any external IPv6 connections come in looking like they're
| from the local network.
| foresto wrote:
| This conversation makes me thankful that I spent time on the
| extra hoops to get Docker working in rootless mode. (That mode
| wasn't officially released yet; I imagine it's easier by now.)
|
| It also reminds me of Podman as a promising alternative. Now that
| it's in Debian Stable, I think it's about time I give it a try.
| geenat wrote:
| Rootless Docker is currently the only version that does not touch
| iptables.
|
| Modifying iptables was a massive oversight by Docker; they
| really should deprecate that.
| k8sToGo wrote:
| Or you just pass --iptables=false to dockerd. Then it will also
| not do anything.
| jck wrote:
| Securing docker, with its weird iptables shenanigans, is a
| nightmare. I prefer to use rootless podman, and also avoid
| exposing ports (by using internal routing and, if needed, a
| reverse proxy with only one published port).
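| A sketch of that layout with rootless podman (network,
| container, and image names are all made up):

```shell
# App containers live on an internal network with no published
# ports and no outside connectivity.
podman network create --internal appnet
podman run -d --name app --network appnet my-app-image

# Only the reverse proxy publishes a port, and only on
# localhost; it joins both networks to reach the app.
podman network create edge
podman run -d --name proxy --network edge \
    -p 127.0.0.1:8443:443 my-proxy-image
podman network connect appnet proxy
```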
| Spivak wrote:
| Podman and Docker require the exact same shenanigans, except
| in the one specific case of slirp4netns (rootless) in Podman
| or RootlessKit in Docker, which requires a tap device and an
| entire usermode networking stack.
|
| It's a neat trick for development environments but for real
| traffic you'll still have to actually do the iptables BS.
| jck wrote:
| Podman 4 rootless uses a different network stack:
| https://www.redhat.com/sysadmin/podman-new-network-stack
|
| It is performant enough for my usecase: services used by me
| and a few friends.
|
| I don't use the root mode, but I was under the impression it
| doesn't have the same well known docker issue where it
| exposes everything on the public interface(and using a
| firewall on top of it is complicated)
| 2OEH8eoCRo0 wrote:
| Same here. Are there even legit reasons to not run all
| containers rootless? I run rootless everything, as well as
| things that need privileged ports. The design choices by the
| Docker team are just baffling.
| withinboredom wrote:
| I believe, at the time, setting up a container required root.
| As in the kernel didn't expose the right pieces for a non-
| root user to ever dream of starting a container.
| BeefySwain wrote:
| Yeah it's nuts. The best solution I've found is some kind of
| cloud firewall, whether that's an off-the-shelf service from
| whatever cloud provider you are using, or rolling your own by
| routing all traffic through another host that doesn't have
| Docker nuking all your firewall rules every time it restarts.
| notimetorelax wrote:
| To be fair either in the cloud or on premise, firewall is a
| must. It's just one of the layers of security.
| [deleted]
| iasay wrote:
| This.
|
| You need a last resort security control against stuff like
| this anyway. Even an automation failure or misunderstanding
| of a ruleset can leave you exposed.
|
| Security _must_ be layered.
| jwitthuhn wrote:
| Yeah this got me the first time I was using docker as well. I
| wanted my app server to only listen locally and configured it
| like that, then Docker helpfully punched a hole in my firewall
| so anything could talk to it.
|
| Agreed that podman has been a great experience in comparison.
___________________________________________________________________
(page generated 2022-06-22 23:00 UTC)