[HN Gopher] LXC and LXD: a different container story
___________________________________________________________________
LXC and LXD: a different container story
Author : zekrioca
Score : 132 points
Date : 2022-09-23 15:29 UTC (7 hours ago)
(HTM) web link (lwn.net)
(TXT) w3m dump (lwn.net)
| 4oo4 wrote:
| I use both lxc and docker and have use cases for both. I think
| it really comes down to how stateful something is or how lazy I
| feel about writing a Dockerfile. I had a really hard time with
| learning lxd and really only got into using lxc without the
| daemon.
|
| One trend with docker that I personally don't like is that a lot
| of projects prefer docker-compose over regular Dockerfiles
| (though some of them support both), and this leads to a lot of
| bloat with how many containers it takes to run one application
| and duplication where each app will need its own database, redis,
| webserver, etc. containers.
|
| That's not a problem when you are at the scale to need that kind
| of orchestration and have the resources to run that many
| containers, but personally I would rather have one database host
| that all my containers can talk to and one reverse
| proxy/webserver to make them accessible to make better use of
| resources and get better density on one host.
|
| One downside with lxc is that there's been some fragmentation
| with the image creation side of it, I had been used to using lxc-
| templates for years, which are just shell scripts for
| bootstrapping and configuring your container and pretty common
| between distros. I found a bug with creating containers for the
| latest version of Alpine that I fixed, and only then did I find
| out that lxc-templates are deprecated and essentially
| unmaintained, and that now distrobuilder is the preferred way to
| build lxc containers, at least per Canonical. That seems to be
| another Canonical-ism, since the distrobuilder docs say that snap
| is the only way to install it unless you build from source, so
| that's a huge no from me, and I also didn't feel like learning
| the yaml config for it.
|
| I was actually considering moving my docker daemon into an
| unprivileged lxc container like the article mentions, but haven't
| gotten around to it.
| Helmut10001 wrote:
| I use Docker in unprivileged LXC* for the mere benefit of
| separation of concerns [1]. I don't use VMs for this because I
| am limited memory-wise, and LXC allows me to share all host
| resources. I use docker because I don't want to mess with the
| way authors expect their dependencies to be set up - which is
| easy to circumvent by settling on a docker-compose.yml. Nix [2]
| appears to be the best future successor for my setup, but it is
| not used as widely yet.
|
| * Afaik, this only works on Proxmox, not LXC on bare Debian,
| because Proxmox uses a modified Debian kernel with some things
| taken from Ubuntu.
|
| [1]: https://du.nkel.dev/blog/2021-03-25_proxmox_docker/
| [2]: https://nixos.org/
| weinzierl wrote:
| > _" [..] which are just shell scripts for bootstrapping and
| configuring your container and pretty common between distros.
| "_
|
| I find it a big advantage that this builds the containers
| basically from scratch and you only have to trust the distro
| and not any other parties. I always feel uneasy about images
| which are hard to inspect and whose provenance is opaque and
| potentially involves multiple parties.
|
| That, combined with running them unprivileged, should make a
| fairly secure system. Unfortunately I had great difficulty
| creating an unprivileged Debian LXC container with this method.
| If I remember correctly you have to create the privileged
| container first and then use a tool to fix-up uid and gid. If
| anyone knows an easier way to do it, I would be grateful to
| know.
|
| EDIT: I think I used the following to create the container:
| lxc-create -n <name> -t debian -- -r stretch
|
| It uses debootstrap for the build.
| foresto wrote:
| I don't recall having to do any uid/gid fixup last time I
| made an unprivileged container. I did have to prepare the
| unprivileged host account, of course, by setting its
| subordinate uids/gids (/etc/sub?id) and virtual network
| interface limit (/etc/lxc/lxc-usernet).
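| (For concreteness, that prep amounts to a couple of config
| entries; the username, id range, and bridge name below are
| placeholders you'd adjust:)

```shell
# /etc/subuid and /etc/subgid: give the host account a range of
# subordinate uids/gids to map into the unprivileged container
echo "myuser:100000:65536" | sudo tee -a /etc/subuid /etc/subgid

# /etc/lxc/lxc-usernet: let that account create up to 10 veth
# interfaces attached to the lxcbr0 bridge
echo "myuser veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet
```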
|
| To create the container, I did this:
|
| lxc-create -t download -n <name> -- -d debian -r bullseye -a
| amd64
|
| Note that this runs the 'download' template, which (IIRC) is
| better suited to unprivileged containers than the 'debian'
| template is. The 'download' template will list its available
| distros if you do this:
|
| lxc-create -t download -n <name> -- --list
|
| Note that some versions of the 'download' template may fail
| with a keyserver error because sks-keyservers.net died
| somewhat recently. Workaround:
| DOWNLOAD_KEYSERVER=hkp://keyserver.ubuntu.com lxc-create ...
|
| https://github.com/lxc/lxc/issues/3894
| UnlockedSecrets wrote:
| I love using LXC. Finding documentation online, however, is an
| absolute pain: it's hard to tell whether the LXC commands being
| discussed are older deprecated syntax, LXD (which uses 'lxc'
| for its command line), or the modern version of LXC you are
| actually trying to use....
| 0xbadcafebee wrote:
| Docker's easier and does all the same stuff. It's integrated with
| everything else, has an interface everyone is familiar with, uses
| OCI images, is extensible, is supported on basically every platform,
| has a simpler config file, tons of community support, and now
| runs rootless.
|
| Does anyone have a use case where they _couldn't_ use Docker?
| I'm sure they exist but the list must be tiny. Listed in the
| article is:
| - run systemd in a container
|   - why?? does your system not have systemd or equivalent?
|   - bad design; how will you monitor what's running and what's
|     not, or upgrade an individual service in this container?
|     restart everything?
|   - apparently systemd can run in docker??
|     https://medium.com/swlh/docker-and-systemd-381dfd7e4628
| - run lxc/docker in a container
|   - docker-in-docker
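| (For the record, the usual recipe for systemd inside Docker is
| a sketch like this; the image name is a placeholder and the
| exact flags vary with distro and cgroup version:)

```shell
# Run an image whose init is systemd; on a cgroup v2 host this
# typically needs the host cgroup namespace and tmpfs mounts.
docker run -d --name systemd-test \
  --tmpfs /run --tmpfs /run/lock \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  --cgroupns=host \
  some-systemd-enabled-image
```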
| p4bl0 wrote:
| I teach an introduction to computer security course where I
| show my students how to write some basic rootkits for Linux. I
| need them to be able to do that on the computer room machines
| (where they are neither root nor sudoers), or on their own
| computer, but in both cases I prefer that it doesn't cause a
| kernel panic if their code is wrong at some point.
|
| As far as I know you cannot load kernel modules in a container.
| You need an actual VM for that.
| pram wrote:
| I use LXD + ansible for my personal projects. IMO the main
| benefit I see is it requires very low cognitive overhead. It's
| insanely easy to set up a container, I only need to remember like
| 3 commands ever. These programs will never be clustered either
| so there's no point in having a scheduler.
| divbzero wrote:
| What would those 3 commands be? As a developer who has not used
| LXD, I'm curious for insight into the developer experience.
| pram wrote:
| lxc launch and lxc start/stop
|
| For the first one you supply the image you want (like
| ubuntu:20.04) and the rest are self-explanatory.
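| A whole session, concretely (the container name is arbitrary):

```shell
lxc launch ubuntu:20.04 mybox   # create and start from an image
lxc exec mybox -- bash          # shell into it
lxc stop mybox                  # stop...
lxc start mybox                 # ...and start again
```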
| ncphil wrote:
| My shop went from VMware to OpenShift around the time I
| transitioned from systems to whatever it is I do now. In my
| private life I've used kvm since it was first rolled out, but
| over the last few years have redeployed all my home services to
| docker. I did try lxc for a while, and think it could replace kvm
| for lab work. On the other hand, lxd never seemed worth the
| effort: especially with its snapd dependency. Podman seems like a
| good alternative to docker, but I'm not sure migrating to it
| would be worth the effort as long as docker sticks around.
| sreevisakh wrote:
| I use LXD these days and I love it. It's easy enough to deploy
| without knowing too many internal details. Its CLI is also
| intuitive and well designed. It's true that LXD doesn't support
| installation outside snap. However, many distros like Alpine,
| Arch and Gentoo support its installation on bare metal from
| their software repos.
| locusofself wrote:
| My last company had hundreds possibly thousands of LXC
| containers, and we orchestrated everything via saltstack (which
| is similar to ansible or puppet if you aren't familiar).
|
| The justification was that we needed our SaaS to also work on-
| prem for financial companies and government entities, and thus we
| could not count on kubernetes or any specific cloud vendor to be
| available, so we rolled our own orchestration built on ubuntu
| hosts and (for some stupid reasons) centos LXC containers running
| on top of those.
|
| LXC is a good system and all, but what we were doing with it was
| a total nightmare, we were basically trying to reinvent
| kubernetes using traditional configuration management and LXC
| with two flavors of linux and a million dependencies.
| thedougd wrote:
| I can't speak to supporting these, but there are a few vendors
| that offer k8s in a box for this exact use case. Replicated is
| the first that comes to my mind. I've used this one as a
| customer. It worked fine but I felt it necessary to do a bit of
| poking under the hood to help it understand things like our AZs
| in a private cloud:
|
| https://www.replicated.com/kubernetes/
| 0xbadcafebee wrote:
| > we needed our SaaS to also work on-prem for financial
| companies and government entities, and thus we could not count
| on kubernetes or any specific cloud vendor
|
| ....? kubernetes is not a cloud vendor, it runs on any Linux
| distribution, and it's FOSS.... ?!
| hobs wrote:
| And that's why the sentence contains an or - because they are
| not a cloud provider but you can't guarantee a local corp is
| gonna run kube.
| tonnydourado wrote:
| A previous employer had the same issue, and solved it by
| writing the application to kubernetes and deploying
| minikube to the clients that didn't have their own
| kubernetes cluster.
|
| To be fair, the situation ended up being: bunch of clients
| running minikube, 1 major client with their own kubernetes
| cluster, and a lot of other clients using the multi tenant,
| cloud hosted, offering. Made it a bit of a pain to support
| that one client with a cluster we could not control, but it
| was one of our biggest clients, so, them's the breaks.
| tonnydourado wrote:
| I tried to invent docker and kubernetes with LXC back when docker
| was super alpha and kubernetes was barely a fetus. I failed,
| obviously, but learned a lot. It's kinda the only reason I
| managed to pick up kubernetes later on, honestly. Sometimes, not
| knowing what you're doing is an asset.
| mmsnberbar66 wrote:
| No layer caching mechanisms
| divbzero wrote:
| Do LXC and LXD support the functional equivalent of Dockerfile
| or docker-compose.yml? I appreciate how Docker has become widely
| accepted for defining reproducible environments but have always
| found it to be rather heavy on resources.
| tinus_hn wrote:
| And, newly, a licensing hassle.
| depingus wrote:
| Not really. But LXD containers are built by layering commands.
| I use a bash script with the commands to create reproducible
| containers.
|
| That said, LXD is good for containerizing OS's, not single
| applications. So when you say "defining reproducible
| environments" you can get reproducible containers, but not so
| much environments. In that sense, it behaves more like a VM.
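| One of those scripts is shaped roughly like this (the image,
| names, and packages are just examples):

```shell
#!/bin/sh
set -e
# Layer commands to build a reproducible container, Dockerfile-style
lxc launch images:alpine/3.16 files
lxc config device add files share disk \
    source=/srv/share path=/mnt/share
lxc exec files -- apk add --no-cache openssh
lxc exec files -- rc-update add sshd default
lxc restart files
```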
| dvdkon wrote:
| I run everything in LXD containers on my home server(s), with
| Debian as the host (manually compiled, not the snap). Most of the
| setup is automated with Ansible that I hacked to read Python
| scripts instead of YAML.
|
| LXD is a treat to work with, and I feel its container model is
| perfect for home/small business servers that need to run lots of
| third-party software, much of which would work poorly in a
| "proper" stateless container.
| thanatos519 wrote:
| I have been running a handful of Debian systems in containers
| since OpenVZ was state-of-the-art. As the hardware expired, I
| moved to linux-vserver, then LXC, then LXD ... at which point it
| all shifted from straightforward text configuration files to
| SQLite mediated by CLI programs. When I could not figure out the
| incantation required for some gnarly configuration rotations
| ("computer says no"), I was forced to shut everything down, alter
| the SQLite files directly, and spin it all back up again. On the
| next hardware refresh, I switched back to LXC.
|
| This sort of mediation makes sense and works great for ephemeral
| containers in Kubernetes. For full virtual servers, LXD is the
| worst of both worlds.
| jordemort wrote:
| Hi, author of the article, pleased to see it here! I'd really
| like to hear more from folks about how they're using LXC and/or
| LXD, and what they think their greatest strengths are compared to
| Docker or Kubernetes.
| leetnewb wrote:
| I use lxd for running core services at home, such as
| samba/sftp/webdav or home automation. I find it easier to
| attach a vpn to the container, which combined with a nat
| traversing sdn/mesh gives me easy, private, encrypted access to
| data or services. Also easy to pass a usb device (say zwave
| controller) to a container. I run it over a btrfs array and
| have a workflow to snapshot containers and send to a backup
| host. Using unique namespaces for each unprivileged container,
| I am reasonably confident that a security oops in one container
| should be isolated.
|
| I also use lxd on vps hosts, often nesting docker to give more
| granular control over networking.
| youainti wrote:
| I've been considering building a system to spin up containers
| to run repeatable statistical analyses. LXC was the direction I
| decided to go.
| Dwedit wrote:
| I'm using LXC just as a painless way to run a second SSH
| server (for SFTP) on the same computer, with different
| usernames and passwords.
|
| Literally all that is installed is Alpine Linux and the SSH
| server. There's a symbolic link to get to the shared files.
| depingus wrote:
| Similar use case for me. But instead of SSH/SFTP I was
| running Caddy with the file_server directive!
|
| I've since switched to podman because my installation on
| Alpine Edge kept breaking on updates and I don't want to use
| Snap. Someone mentioned OpenSuse had it in their package
| manager and its been stable. Maybe I'll check that out in the
| future.
| [deleted]
| nullwarp wrote:
| We use LXC on physically distributed Proxmox nodes. It lets us
| easily launch new VMs and migrate around when we need to.
| Nothing we have is very complicated but it works well. We use
| SaltStack to spin up special debug containers with a bunch of
| integrated tools. To update, SaltStack just downloads the new
| image and creates a new container in Proxmox.
|
| I guess you could technically replace what we built with pure
| Docker but the Proxmox UI is a godsend for our on the ground
| support engineers who aren't the most technically savvy types.
|
| We started going down the Kubernetes route initially but it
| quickly turned into an absolute nightmare and we gave up, even
| though we're all pretty strong with Kubernetes. I guess
| though when all you need is a few stateful services running the
| "dynamic scaling" of kubernetes is kind of pointless.
| sreevisakh wrote:
| I use Kubernetes within an LXD container with Btrfs backing
| storage. This isn't anything special. But it has two
| advantages. The first is that you could try out multi-node K8s
| clusters on a single system. The second is that the containers
| can be deleted and rebuilt easily without affecting the host
| system. Both are very useful when you're learning multi-node
| K8s. I plan to expand my homelab to a true multi-node setup.
| However, I will probably retain K8s inside LXD. It is useful to
| run certain applications that demand full control of the system
| - like PiHole or MailInABox. There are probably better ways of
| hosting them. However, LXD gives me a lot of flexibility to
| experiment and make mistakes.
| MuffinFlavored wrote:
| > The first is that you could try out multi-node K8s clusters
| on a single system.
|
| Why can't you do this without BTRFS/LXD and just with
| Docker/OCI?
| ipnon wrote:
| I used LXC to virtualize Mastodon instances for a while. It works
| as intended. The illusion is that of separate kernels, all
| running on the same machine.
| fuzzy2 wrote:
| I fucking love LXC! It allowed me to "virtualize" several
| physical servers on a single newer server, keeping all of the
| original setup (except some networking changes). Combined with
| _ipvlan_, I could even work around the MAC address restriction
| of my server provider, with both IPv4 and IPv6.
|
| I also use it to host a Jitsi instance. Jitsi is rather picky
| about the underlying distribution.
|
| All this without the overhead of an actual virtual machine of
| course.
|
| The only pain I have with LXC is its behavior when errors
| occur. It's not easy to tell what's wrong; the error messages
| are worthless. Sometimes the debug log can help, sometimes not
| so much.
| fariszr wrote:
| You are using Hetzner, right?
|
| I'm interested to see how you bypassed the MAC address
| problem. Any blogs or guides?
| dspillett wrote:
| I'm not sure what the problem is, but if it is stopping you
| from bridging the physical interface effectively, could you not
| define a virtual bridge with local-only addresses and have the
| host NAT to/from that and the physical interface? It adds a
| little extra latency, to be sure, more so if you have two or
| more boxes on the same network with containers that would
| otherwise talk directly to each other on a private vlan, but
| if your physical box is acting as a firewall/router for the
| tasks in the containers anyway, it is already practically
| doing the job.
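| Concretely (the interface name and addresses are made up):

```shell
# lxcbr0, the default LXC bridge, is already a local-only NATed
# bridge: containers get 10.0.3.x addresses and the host
# masquerades outbound traffic for them. To expose a container
# service, add a DNAT rule on the host:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.3.10:80
```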
| taskforcegemini wrote:
| lxc/lxd was the reason I went with ubuntu, but snap is why I
| migrated my containers to docker (and will soon move to debian)
| jonatron wrote:
| Back in 2016, I thought it'd be nice if LXC/D had something like
| a Dockerfile. It would just be a small layer or tool to do it,
| proved with a little python script:
| https://github.com/jonatron/lxf
| depingus wrote:
| LXD containers are built up using commands. I have one bash
| script for every container.
| gigel82 wrote:
| I use an LXC to run Docker and a bunch of containers on my
| Proxmox VE machine :). LXC is interesting, but you can't beat the
| ecosystem of Docker images available (and maintained, updated,
| etc.)
|
| Have a few apps running in plain LXC containers (like
| AdguardHome) but maintenance is non-free (unlike a docker-compose
| stack with Watchtower keeping everything nice and fresh).
| kasbah wrote:
| What do you mean by Watchguard? I did a search but didn't find
| anything. Did you mean Watchtower maybe?
|
| https://github.com/containrrr/watchtower
| gigel82 wrote:
| Yes, watchtower, sorry.
| chubot wrote:
| Do either of them support the concept of "layers" like Docker
| does?
|
| I think that feature combined with overlay2 is quite useful,
| despite my many criticisms of Docker.
|
| It is sort of a middle ground between Nix like granularity which
| requires rewriting upstream, and big LXC blobs created with shell
| scripts.
|
| Although I also think we need some kind of middle ground between
| docker and nix :)
| ansible wrote:
| > _Do either of them support the concept of "layers" like
| Docker does?_
|
| Never used LXD, but LXC does not have layers per se.
|
| I usually run LXC containers on a btrfs filesystem, which
| easily supports snapshots and sending containers to other
| container hosts via "btrfs send | ssh otherhost btrfs receive".
|
| If you are treating your servers like pets (and not cattle) LXC
| is a very convenient means to consolidate servers onto fewer
| hardware systems.
| kwk1 wrote:
| > Never used LXD, but LXC does not have layers per-se.
|
| It's kind of a similar situation in LXD, where that sort of
| functionality being available depends on the storage
| backend/driver in use, for example LVM thin provisioning,
| various options via ZFS, or btrfs like you mentioned.
| chubot wrote:
| Hm yeah, in some sense I like the things to be more
| orthogonal, i.e. container images vs. the file system.
|
| On the other hand I think storage and networking should be
| more unified, and the differential compression and caching
| you get from layers is important.
|
| I suspect that doing the layers at the logical level
| (files), like overlayfs, rather than the physical level
| (blocks), is also better for containers, but I'd be
| interested to read any comparisons.
| depingus wrote:
| I ran LXD for about a year in my home lab. Unfortunately, the
| easiest way to get it running is by installing Snap; which I
| didn't do. Instead I ran it on Alpine Edge (one of the few
| distros that actually has it in their package manager). LXD kept
| breaking after system updates and I got tired of troubleshooting.
| I suppose that's just part of the perils of running bleeding
| edge.
|
| When LXD was running, I found that it feels like something
| between a VM and OCI container. In the whole pets vs cattle
| analogy, LXD containers definitely feel more like pets.
|
| On the host, concepts are similar to OCI containers (networking,
| macvlan, port mapping, etc). I like its approach to building up
| containers with several commands instead of one long command. You
| can add and remove ports, volumes, and even GPUs to running
| containers. It doesn't have compose files, I just used bash
| scripts to create reproducible containers.
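| For example (container and device names are arbitrary):

```shell
# Add a port proxy, a volume, and a GPU to a *running* container
lxc config device add web http proxy \
    listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
lxc config device add web data disk source=/srv/data path=/data
lxc config device add web gpu0 gpu
# ...and remove them just as easily
lxc config device remove web gpu0
```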
|
| Inside the LXD container is where things get different. You're
| containerizing a whole OS, not an application. So you end up
| administering your container like a VM. You jump inside it,
| update its packages, install whatever application and
| dependencies you need. It doesn't work like a glorified package
| manager (which is how most homelabbers use Docker). As a long
| time VM user I actually prefer installing things myself, but I
| can see this turning away many people in the selfhosting
| community.
|
| I liked LXD and would've kept using it if it weren't so
| intimately linked to Snap. Last month it failed to start my
| containers after a system update...again. So I moved that box
| over to Rocky with Podman containers running as SystemD units. I
| do kinda miss having fast, throwaway OSes on that server for
| experiments (sometimes it's nice to have pets).
| pxc wrote:
| > Unfortunately, the easiest way to get it running is by
| installing Snap; which I didn't do. Instead I ran it on Alpine
| Edge (one of the few distros that actually has it in their
| package manager).
|
| Enabling LXD is a one-liner on NixOS:
| virtualisation.lxd.enable = true;
|
| LXC and LXD are a somewhat common approach for NixOS users who
| want to quickly try something out in the environment of a
| traditional distro. :)
| kwk1 wrote:
| > Unfortunately, the easiest way to get it running is by
| installing Snap;
|
| The Debian package for LXD just entered the archive about 2
| weeks ago:
|
| https://tracker.debian.org/news/1361535/accepted-lxd-500-1-s...
|
| This means the next Debian release will make for stable LXD
| hosts without needing snap.
| leetnewb wrote:
| Someone packages it for opensuse. It has been stable for me on
| tumbleweed for a couple of years.
| depingus wrote:
| Thanks for the info. Never used OpenSuse, but next time my
| curiosity swings back around to LXD I'll check it out!
| leetnewb wrote:
| In the off chance you remember this conversation, the
| package does not require attr (even though lxd does), so
| you need to zypper in attr to get lxd running if another
| dependency didn't pull it.
| vladvasiliu wrote:
| I've been using it in my homelab for a while on Arch with ZFS.
| Never had any issues with upgrades.
|
| However, I once wanted to rename my zpool. Boy was I up for a
| world of hurt. In the end, I just reinitialized everything (the
| VMs themselves were easy to get back up).
| kiririn wrote:
| systemd-nspawn is worth a look - I found it more intuitive, well
| designed and security conscious (within reason for containers)
| than the other container options. It must get some serious use
| behind closed doors somewhere, because the quality outstrips the
| amount of publicity it gets
| tempest_ wrote:
| These seem like cool tech and I know things like proxmox can run
| them pretty easily.
|
| Still, Dockerfiles and all the magic that goes with them were
| just so much easier to pick up and convince others on the team
| to try.
___________________________________________________________________
(page generated 2022-09-23 23:00 UTC)