[HN Gopher] Replacing Kubernetes with systemd (2024)
___________________________________________________________________
Replacing Kubernetes with systemd (2024)
Author : birdculture
Score : 66 points
Date : 2025-05-05 20:40 UTC (2 hours ago)
(HTM) web link (blog.yaakov.online)
(TXT) w3m dump (blog.yaakov.online)
| klooney wrote:
| https://github.com/coreos/fleet I feel like fleet deserved more
| of a shot than it ultimately got.
| pa7ch wrote:
 | Agreed. CoreOS pivoted to k8s almost immediately after
 | releasing fleet and didn't really get the chance to use and
 | develop it much.
| byrnedo wrote:
 | I created skate (https://github.com/skateco/skate) to be
 | basically this, but multihost and with support for k8s
 | manifests. Under the hood it's podman and systemd.
| zokier wrote:
| Why on earth would you run services on a server with --user and
| then fiddle with lingering logins instead of just using the
| system service manager?
| phoronixrly wrote:
 | I'll answer this for you. You want rootless podman because
 | docker is the de facto standard way of packaging non-legacy
 | software now, including autoupdates. I know, sad... Podman
 | still does not offer a convenient and mature way for systemd
 | to run it as an unprivileged user. It is the only gripe I've
 | had with this approach...
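A sketch of the rootless setup being discussed: enable lingering so the user manager (and its services) survives logout, then run the container as a systemd user unit. The user and container names are illustrative:

```
# Let appuser's user manager keep running without an active login
sudo loginctl enable-linger appuser

# As appuser: generate a user unit from an existing container
podman generate systemd --new --files --name mycontainer
mv container-mycontainer.service ~/.config/systemd/user/

# Enable it under the user manager
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service
```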
| maxclark wrote:
| I know some large AWS environments that run a variation of this
|
| Autoscaling fleet - image starts, downloads container from
| registry and starts on instance
|
| 1:1 relationship between instance and container - and they're
| running 4XLs
|
| When you get past the initial horror it's actually beautiful
| masneyb wrote:
| The next step to simplify this even further is to use Quadlet
| within systemd to manage the containers. More details are at
| https://www.redhat.com/en/blog/quadlet-podman
| rsolva wrote:
 | This is the way! Quadlets are such a nice way to run
 | containers, really a set-and-forget experience. No need to
 | install extra packages, at least on Fedora or Rocky Linux. I
 | should do a write-up of this some time...
| overtone1000 wrote:
| I came to the comments to make sure someone mentioned quadlets.
| Just last week, I migrated my home server from docker compose
| to rootless podman quadlets. The transition was challenging,
| but I am very happy with the result.
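For reference, a minimal Quadlet unit of the kind discussed above might look like this (image and port are illustrative). Dropped into `~/.config/containers/systemd/`, podman's systemd generator turns it into a user service:

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```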
| VWWHFSfQ wrote:
 | We went back to just packaging debs and running them directly
 | on ec2 instances with systemd. No more containers. Put the
 | instances in an autoscaling group with an ALB. A simple
 | ansible-pull installs the debs on boot.
|
| really raw-dogging it here but I got tired of endless json-
| inside-yaml-inside-hcl. ansible yaml is about all I want to deal
| with at this point.
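The on-boot ansible-pull step described above could be wired up as a oneshot unit along these lines (the repository URL and playbook name are placeholders):

```ini
# /etc/systemd/system/ansible-pull.service
[Unit]
Description=Apply configuration from git on boot
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/ansible-pull -U https://git.example.com/infra.git site.yml

[Install]
WantedBy=multi-user.target
```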
| SvenL wrote:
| Yes, I also have gone this route for a very simple application.
 | Systemd was actually delightful; using a system-assigned user
 | account to run the service with the least amount of privileges
 | is pretty cool. Also, cgroup support really makes it nice to
 | run many different services on one VPS.
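A sketch of the least-privilege, cgroup-limited service being described, assuming a binary installed from a deb (directive values are illustrative):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=Example app with least privilege

[Service]
ExecStart=/usr/local/bin/myapp
# Transient unprivileged user; no account to manage
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
# Per-service cgroup limits
MemoryMax=512M
CPUQuota=50%
Restart=on-failure

[Install]
WantedBy=multi-user.target
```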
| secabeen wrote:
| I also really like in this approach that if there is a bug in a
| common library that I use, all I have to do is `apt full-
| upgrade` and restart my running processes, and I am protected.
 | No rebuilding anything, or figuring out how to update some
 | library buried deep in a container that I may (or may not)
 | have created.
| drivenextfunc wrote:
| I share the author's sentiment completely. At my day job, I
| manage multiple Kubernetes clusters running dozens of
| microservices with relative ease. However, for my hobby projects
| --which generate no revenue and thus have minimal budgets--I find
| myself in a frustrating position: desperately wanting to use
| Kubernetes but unable to due to its resource requirements.
| Kubernetes is simply too resource-intensive to run on a $10/month
| VPS with just 1 shared vCPU and 2GB of RAM.
|
| This limitation creates numerous headaches. Instead of
| Deployments, I'm stuck with manual docker compose up/down
| commands over SSH. Rather than using Ingress, I have to rely on
| Traefik's container discovery functionality. Recently, I even
| wrote a small script to manage crontab idempotently because I
| can't use CronJobs. I'm constantly reinventing solutions to
| problems that Kubernetes already solves--just less efficiently.
|
| What I really wish for is a lightweight alternative offering a
| Kubernetes-compatible API that runs well on inexpensive VPS
| instances. The gap between enterprise-grade container
| orchestration and affordable hobby hosting remains frustratingly
| wide.
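The idempotent crontab management mentioned above can be sketched with a marker block: everything between BEGIN/END markers is owned by the script and replaced wholesale, while lines outside the markers are left alone. Entry contents and paths are illustrative:

```shell
#!/bin/sh
# merge_cron: print $1 (current crontab text) with the managed block
# removed, then append a fresh managed block built from $2.
merge_cron() {
  printf '%s\n' "$1" | sed '/^# BEGIN managed$/,/^# END managed$/d'
  printf '# BEGIN managed\n%s\n# END managed\n' "$2"
}

current='0 1 * * * /usr/local/bin/backup'
desired='*/5 * * * * /usr/local/bin/healthcheck'

merged=$(merge_cron "$current" "$desired")
# Running the merge again changes nothing (idempotent):
merged_twice=$(merge_cron "$merged" "$desired")
```

Applied to the real crontab this would be something like `merge_cron "$(crontab -l)" "$desired" | crontab -`.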
| thenewwazoo wrote:
| > I'm constantly reinventing solutions to problems that
| Kubernetes already solves--just less efficiently.
|
| But you've already said yourself that the cost of using K8s is
 | too high. In one sense, you're solving those problems _more_
 | efficiently; it just depends on which axis you use to measure
 | things.
| sweettea wrote:
| Have you seen k0s or k3s? Lots of stories about folks using
| these to great success on a tiny scale, e.g.
| https://news.ycombinator.com/item?id=43593269
| mikepurvis wrote:
| Or microk8s. I'm curious what it is about k8s that is sucking
| up all these resources. Surely the control plane is mostly
| idle when you aren't doing things with it?
| nvarsj wrote:
| Just do it like the olden days, use ansible or similar.
|
| I have a couple dedicated servers I fully manage with ansible.
| It's docker compose on steroids. Use traefik and labeling to
| handle reverse proxy and tls certs in a generic way, with
| authelia as simple auth provider. There's a lot of example
| projects on github.
|
| A weekend of setup and you have a pretty easy to manage system.
| nicce wrote:
| What is the advantage of traefik over oldschool Nginx?
| 404mm wrote:
| I found k3s to be a happy medium. It feels very lean and works
| well even on a Pi, and scales ok to a few node cluster if
| needed. You can even host the database on a remote mysql
| server, if local sqlite is too much IO.
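For reference, k3s's install script accepts an external datastore endpoint for exactly this; a sketch with an illustrative MySQL DSN:

```
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"
```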
| hkon wrote:
| I've used caprover a bunch
| Alupis wrote:
| > Kubernetes is simply too resource-intensive to run on a
| $10/month VPS with just 1 shared vCPU and 2GB of RAM
|
| I hate sounding like an Oracle shill, but Oracle Cloud's Free
| Tier is hands-down the most generous. It can support running
| quite a bit, including a small k8s cluster[1]. Their k8s
| backplane service is also free.
|
| They'll give you 4 x ARM64 cores and 24GB of ram for free. You
| can split this into 1-4 nodes, depending on what you want.
|
| [1] https://www.oracle.com/cloud/free/
| waveringana wrote:
 | the catch is: no commercial usage, and half the time you try
 | to spin up an instance it'll tell you there's no room left
| SOLAR_FIELDS wrote:
| That limitation (spinning up an instance) only exists if
| you don't put a payment card in. If you put a payment card
| in, it goes away immediately. You don't have to actually
| pay anything, you can provision the always free resources,
| but obviously in this regard you have to ensure that you
| don't accidentally provision something with cost. I used
| terraform to make my little kube cluster on there and have
| not had a cost event at all in over 1.5 years. I think at
| one point I accidentally provisioned a volume or something
| and it cost me like one cent.
| Alupis wrote:
| > no commercial usage
|
| I think that's if you are literally on their free tier, vs.
| having a billable account which doesn't accumulate enough
| charges to be billed.
|
| Similar to the sibling comment - you add a credit card and
| set yourself up to be billed (which removes you from the
| "free tier"), but you are still granted the resources
| monthly for free. If you exceed your allocation, they bill
| the difference.
| investa wrote:
| SSH up/down can be scripted.
|
| Or maybe look into Kamal?
|
 | Or use Digital Ocean's app service. Git integration, cheap,
 | just run a container. But get your Postgres from a cheaper
 | VC-funded shop :)
| kartikarti wrote:
 | Virtual Kubelet is one step toward Kubernetes as an API
|
| https://github.com/virtual-kubelet/virtual-kubelet
| czhu12 wrote:
| This is exactly why I built https://canine.sh -- basically for
| indie hackers to have the full experience of Heroku with the
| power and portability of Kubernetes.
|
 | For single server setups, it uses k3s, which takes up ~200MB
 | of memory on your host machine. It's not ideal, but the pain
 | of trying to wrangle docker deployments, and the cheapness of
 | Hetzner, made it worth it.
| dullcrisp wrote:
| I'm here just doing docker compose pull && docker compose down &&
| docker compose up -d and it's basically fine.
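Worth noting: `docker compose up -d` already recreates any container whose image or configuration changed, so the explicit `down` step can usually be dropped, shortening the outage window:

```
docker compose pull && docker compose up -d --remove-orphans
```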
| arjie wrote:
| I also use systemd+podman. I manage the access into the machine
| via an nginx that reverse proxies the services. With quadlets
| things will probably be even better but right now I have a manual
| flow with `podman run` etc. because sometimes I just want to run
| on the host instead and this allows for me to incrementally move
| in.
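A minimal fragment of the kind of nginx reverse-proxy setup described, assuming a container published on localhost port 8080 (server name and ports are illustrative):

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```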
| iAm25626 wrote:
 | On the resource part: try running k8s on Talos Linux. Much
 | less overhead.
| lrvick wrote:
 | Or drop all the brittle C code entirely and replace systemd
 | with Kubernetes: https://www.talos.dev/
| weikju wrote:
 | Just replace the battle-tested brittle C code with a brittle
 | bespoke yaml mess!
___________________________________________________________________
(page generated 2025-05-05 23:00 UTC)