[HN Gopher] Production-Grade Container Deployment with Podman Quadlets
___________________________________________________________________
Production-Grade Container Deployment with Podman Quadlets -
Larvitz Blog
Author : todsacerdoti
Score : 48 points
Date : 2025-11-16 13:59 UTC (9 hours ago)
(HTM) web link (blog.hofstede.it)
(TXT) w3m dump (blog.hofstede.it)
| silasb wrote:
| I'm not trying to take a shot at the OP, but I keep seeing posts
| labeled "Production-Grade" that still look more like pets than
| cattle. I'm struggling to understand how something like this can
| be reproduced consistently across environments. How would you
| package it inside a Git repo? Can it be managed through GitOps?
| And if we're calling something production-grade, high
| availability should be a baseline requirement; it's table stakes
| for modern production applications.
|
| What I'd really love is a middle ground between k8s and Docker
| Swarm that gives operators and developers what they need while
| still providing an escape hatch to k8s when required. k8s is
| immensely powerful but often feels like overkill for teams that
| just need simple orchestration, predictable deployments, and
| basic resiliency. On the other hand, Swarm is easy to use but
| doesn't offer the extensibility, ecosystem, or long-term
| viability that many organizations now expect. It feels like
| there's a missing layer in between: something lightweight enough
| to operate without a dedicated platform team, but structured
| enough to support best practices such as declarative config,
| GitOps workflows, and repeatable environments.
|
| As I write this, I'm realizing that part of the issue is the
| increasing complexity of our services. Every team wants a clean,
| Unix-like architecture made up of small components that each do
| one job really well. Philosophically that sounds great, but in
| practice it leads to a huge amount of integration work. Each
| "small tool" comes with its own configuration, lifecycle, upgrade
| path, and operational concerns. When you stack enough of those
| together, the end result is a system that is actually more
| complex than the monoliths we moved away from. A simple
| deployment quickly becomes a tower of YAML, sidecars,
| controllers, and operators. So even when we're just trying to run
| a few services reliably, the cumulative complexity of the
| ecosystem pushes us toward heavyweight solutions like k8s, even
| if the problem doesn't truly require it.
| yrxuthst wrote:
| I have not used quadlets in a "real" production environment, but
| deploying systemd services is very easy to automate with
| something like Ansible.
|
| That said, I don't see this as a replacement for k8s as a
| platform for generic applications; it's more for deploying a
| specific set of containers to a fleet of servers with less
| overhead and complexity.
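|
| To make the Ansible route concrete, here is a minimal sketch
| (the quadlet name, paths, and host group are illustrative, not
| from the article):
|
|     # deploy-quadlet.yml (hypothetical playbook)
|     - hosts: all
|       become: true
|       tasks:
|         - name: Install the quadlet unit file
|           ansible.builtin.copy:
|             src: myapp.container  # a [Container] unit kept in git
|             dest: /etc/containers/systemd/myapp.container
|             mode: "0644"
|         - name: Reload systemd so the quadlet generator runs
|           ansible.builtin.systemd:
|             daemon_reload: true
|         - name: (Re)start the generated service
|           ansible.builtin.systemd:
|             name: myapp.service
|             state: restarted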
| figmert wrote:
| > Ansible
|
| OP asked for something consistent that sits between K8s and
| Swarm. Ansible is just a mistake that people refuse to stop
| using.
| the_alchemist wrote:
| Please elaborate
| figmert wrote:
| Ansible is a procedural mess. It's like Helm had a baby
| with a very bad procedural language. It works, but it's
| such a mess to work with. Half of the time it breaks
| because you haven't thought about some if statement that
| covers a single node or some bs.
|
| Comparing that to docker swarm and/or k8s manifests (I
| guess even Helm if you're not the one developing charts),
| Ansible is a complete mess. You're better off managing
| things with Puppet or Salt, as that gives you an actual
| declarative mechanism (i.e. desired state like K8s
| manifests).
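|
| To illustrate the desired-state style: in Puppet you declare
| the end state and let the tool converge to it (a minimal
| sketch; the resources are illustrative):
|
|     package { 'nginx': ensure => installed }
|     service { 'nginx':
|       ensure => running,
|       enable => true,
|     }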
| retroflexzy wrote:
| > Ansible is a complete mess. You're better off managing
| things with Puppet or Salt, as that gives you an actual
| declarative mechanism
|
| We thought this, too, when choosing Salt over Ansible,
| but that was a complete disaster.
|
| Ansible is definitely designed to operate at a lower
| abstraction level, but modules that behave like desired
| state declarations actually work very well. And creating
| your own modules turned out to be at least an order of
| magnitude easier than in Salt.
|
| We do use Ansible to manage containers via podman-
| systemd, though we're slightly hampered by Ubuntu not
| shipping podman 5. It's... fine?
|
| Our mixed Windows, Linux VM and Linux bare metal
| deployment scenario is likely fairly niche, but Ansible
| is really the only tenable solution.
| mono442 wrote:
| All of them are trying to create something that seems
| declarative on top of a mutable system.
|
| In my experience, it only works decently well when special
| care is taken when writing playbooks.
| betaby wrote:
| > Ansible is just a mistake that people refuse to stop
| using.
|
| So is Helm! Helm is just a mistake that people refuse to
| stop using.
| zrail wrote:
| Nobody who has used Helm in anger will debate this with
| you.
| exceptione wrote:
| > What I'd really love is a middle ground between k8s and
| Docker Swarm
|
| Maybe this is what you mean:
|
| https://docs.podman.io/en/latest/markdown/podman-kube.1.html
| > that gives operators and developers what they need while
| still providing an escape hatch to k8s when required.
|
| Here you go, linked from the first page
|
| https://docs.podman.io/en/latest/markdown/podman-kube-genera...
|
| Podman has an option to play your containers on CRI-O as well,
| which is a minimal but K8s-compliant runtime.
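|
| Concretely, the round trip looks something like this (assuming
| an existing container named mycontainer):
|
|     $ podman kube generate mycontainer > app.yaml
|     $ podman kube play app.yaml
|     $ podman kube down app.yaml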
| xienze wrote:
| > I'm struggling to understand how something like this can be
| reproduced consistently across environments. How would you
| package this inside a Git repo?
|
| Very easily. At the end of the day, quadlets (which are just
| systemd services) are plain text files. You can use something
| like cloud-init to define all these quadlets and enable them in
| a single YAML file, doing a completely unattended install. I do
| something similar to cloud-init using Flatcar Linux.
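|
| A minimal sketch of that approach (the image, unit name, and
| port are illustrative):
|
|     #cloud-config
|     write_files:
|       - path: /etc/containers/systemd/web.container
|         content: |
|           [Container]
|           Image=docker.io/library/nginx:latest
|           PublishPort=8080:80
|
|           [Install]
|           WantedBy=multi-user.target
|     runcmd:
|       - systemctl daemon-reload
|       - systemctl start web.service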
| xomodo wrote:
| > How would you package this inside a Git repo?
|
| There are many ways to do that. Start with a simple repo and
| spin up a VM instance from the cloud provider of your choice.
| Then integrate the commands from this article into a cloud-init
| configuration. Hope you get the idea.
| MikeKusold wrote:
| > I'm struggling to understand how something like this can be
| reproduced consistently across environments. How would you
| package this inside a Git repo? Can it be managed through
| GitOps?
|
| I manage my podman containers the way the article describes
| using NixOS. I have a tmpfs root that gets blown away on every
| reboot. Deploys happen automatically when I push a commit.
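|
| For reference, a sketch of what that looks like in NixOS (the
| options are real, the container itself is illustrative):
|
|     # configuration.nix
|     virtualisation.oci-containers = {
|       backend = "podman";
|       containers.web = {
|         image = "docker.io/library/nginx:latest";
|         ports = [ "8080:80" ];
|       };
|     };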
| smjburton wrote:
| This is a great resource, OP. Hopefully, with more guides like
| this available, it will become easier for people to explore
| Podman, and adoption will increase.
| ivolimmen wrote:
| Kubernetes is sometimes just overkill for deploying a simple
| application, while "zip, unpack, and start a script" is
| sometimes too fragile and crappy. This is something I would like
| to try on my Pine64 when I run some simple utility (online)
| software.
| betaby wrote:
| This setup uses user-space networking, as I understand it.
| dilyevsky wrote:
| The default podman network driver is just a standard Linux
| bridge.
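|
| Easy to verify on a stock install (assuming the default
| network, which is named "podman"):
|
|     $ podman network inspect podman --format '{{.Driver}}'
|     bridge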
| lewis1028282 wrote:
| I use socket activation instead of running a reverse proxy with
| `CAP_NET_BIND_SERVICE`. Caddy supports socket binding and can
| handle SSL certs. I now just log in every week or so and run
| `journalctl --user -f --since "2025-11-09" --grep "error"` to
| check for anything going on.
|
| https://github.com/eriksjolund/podman-caddy-socket-activatio...
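|
| For anyone curious, the shape of that setup is roughly this (a
| sketch; unit names and ports are illustrative, and rootless
| binds below 1024 need e.g. a lowered
| net.ipv4.ip_unprivileged_port_start):
|
|     # ~/.config/systemd/user/caddy.socket
|     # (name must match the generated caddy.service)
|     [Socket]
|     ListenStream=80
|     ListenStream=443
|
|     [Install]
|     WantedBy=sockets.target
|
|     # ~/.config/containers/systemd/caddy.container
|     [Container]
|     Image=docker.io/library/caddy:latest
|     Volume=%h/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
|
| systemd passes the listening sockets down to the generated
| caddy.service, and the linked repo shows how the Caddyfile is
| written to bind the inherited file descriptors.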
___________________________________________________________________
(page generated 2025-11-16 23:01 UTC)